Commit
Remove unnecessary commands.
tmanik committed Aug 29, 2024
1 parent 592d71b commit 4fddbcb
Showing 4 changed files with 201 additions and 31 deletions.
20 changes: 8 additions & 12 deletions gcp-deployment/gcp-cleanup-guide.md
@@ -10,8 +10,15 @@ kubectl delete secret db-credentials

## 2. Delete GKE Cluster

Deletion command:
```bash
gcloud container clusters delete weather-cluster --zone=us-central1-a
gcloud container clusters delete weather-cluster --zone=us-central1-a --quiet > /dev/null 2>&1 &
```

Status check:

```bash
gcloud container clusters describe weather-cluster --zone=us-central1-a
```
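Because the delete command above runs in the background, you may want to poll until it actually completes. A small sketch, assuming `gcloud` is installed and authenticated (the helper function name is ours):

```shell
# Poll until the cluster no longer exists; `describe` exits non-zero
# once deletion has finished.
wait_for_cluster_deletion() {
  while gcloud container clusters describe "$1" --zone="$2" > /dev/null 2>&1; do
    echo "Cluster $1 still deleting; checking again in 30s..."
    sleep 30
  done
  echo "Cluster $1 deleted."
}

# Example usage:
# wait_for_cluster_deletion weather-cluster us-central1-a
```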

## 3. Delete Container Images
@@ -38,18 +45,7 @@ gcloud compute forwarding-rules delete RULE_NAME --global

# Delete static IPs
gcloud compute addresses delete ADDRESS_NAME --region=REGION

# Delete storage buckets
gsutil rm -r gs://BUCKET_NAME

# Delete service accounts
gcloud iam service-accounts delete SERVICE_ACCOUNT_EMAIL
```

## 5. Delete Project (Optional, Use with Caution)

```bash
gcloud projects delete YOUR_PROJECT_ID
```
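Since project deletion is irreversible, a confirmation guard can help prevent accidents. A hypothetical wrapper (the function name and prompt are ours, not a gcloud feature):

```shell
# Require the project ID to be typed back before running the
# irreversible delete.
delete_project_safely() {
  project_id="$1"
  printf 'Type the project ID (%s) to confirm deletion: ' "$project_id" >&2
  read -r confirm
  if [ "$confirm" = "$project_id" ]; then
    gcloud projects delete "$project_id" --quiet
  else
    echo "Confirmation did not match; aborting." >&2
    return 1
  fi
}

# Example usage:
# delete_project_safely YOUR_PROJECT_ID
```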

Replace placeholders with actual resource names. Double-check the Google Cloud Console to ensure all resources are removed.
Binary file added gcp-deployment/image.png
105 changes: 105 additions & 0 deletions gcp-deployment/production-workflow-explanation.md
@@ -0,0 +1,105 @@
# Production Workflow and Best Practices

## Architecture Overview

```mermaid
graph TD
A[GitHub Private Repo] -->|Trigger| B[GitHub Actions CI/CD]
B -->|Build & Test| C[Container Registry]
B -->|Deploy| D[GKE Cluster]
D -->|Read/Write| E[Cloud SQL]
D -->|Log| F[Cloud Logging]
D -->|Monitor| G[Cloud Monitoring]
H[Cloud IAM] -->|Manage Access| D
H -->|Manage Access| E
I[Secret Manager] -->|Provide Secrets| D
J[Cloud NAT] -->|Outbound Traffic| D
K[Load Balancer] -->|Inbound Traffic| D
L[Terraform] -->|Manage| D
L -->|Manage| E
L -->|Manage| J
L -->|Manage| K
```

## Workflow Explanation

Our production workflow follows a GitOps approach, leveraging Infrastructure as Code (IaC) and Continuous Integration/Continuous Deployment (CI/CD) principles. Here's a step-by-step breakdown of the workflow:

1. **Code Changes**:
- Developers make changes to the application code or infrastructure definitions in the private GitHub repository.
- Changes are proposed via pull requests for review.

2. **CI/CD Pipeline Trigger**:
- GitHub Actions CI/CD pipeline is triggered on pull requests and pushes to the main branch.

3. **Infrastructure Provisioning**:
- Terraform job in the CI/CD pipeline manages infrastructure:
- Initializes Terraform
- Plans infrastructure changes
- Applies changes (only on merge to main)

4. **Build and Test**:
- Application code is built and unit tests are run.
- Container images are built and pushed to Container Registry.

5. **Deployment**:
- Updated container images are deployed to the GKE cluster.
- Kubernetes manifests are applied to update deployments.

6. **Monitoring and Logging**:
- Deployed applications send logs to Cloud Logging.
- Cloud Monitoring tracks system and application metrics.
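The pipeline stages above could be sketched as a GitHub Actions workflow along these lines (job names, file paths, and auth setup are illustrative assumptions, not the actual workflow file):

```yaml
name: ci-cd
on:
  pull_request:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Plan on every run; apply only on merge to main.
      - run: terraform init && terraform plan
      - if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve

  build-and-deploy:
    needs: terraform
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes GCP credentials and PROJECT_ID are configured as secrets.
      - run: docker build -t gcr.io/$PROJECT_ID/flask-app:$GITHUB_SHA .
      - run: docker push gcr.io/$PROJECT_ID/flask-app:$GITHUB_SHA
      - if: github.ref == 'refs/heads/main'
        run: kubectl apply -f k8s-artifacts/
```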

## Best Practices Implemented

### 1. Version Control and Code Review
- All code (application and infrastructure) is version-controlled in GitHub.
- Pull request reviews ensure code quality and security.

### 2. Infrastructure as Code (IaC)
- Terraform is used to define and manage GCP resources.
- Infrastructure changes go through the same review process as application code.

### 3. CI/CD Automation
- GitHub Actions automates testing, building, and deployment processes.
- Reduces manual errors and ensures consistent deployments.

### 4. Containerization
- Applications are containerized for consistency across environments.
- Container images are versioned and stored in Container Registry.

### 5. Orchestration
- Google Kubernetes Engine (GKE) is used for container orchestration.
- Ensures high availability and scalability of applications.

### 6. Secret Management
- Sensitive information is stored in Secret Manager.
- Secrets are securely injected into applications at runtime.
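As a sketch of that pattern, a secret can be read at deploy time with `gcloud secrets versions access` (the secret name `db-password` and the helper function are hypothetical examples):

```shell
# Read the latest version of a secret from Secret Manager
# without writing it to disk.
fetch_db_password() {
  gcloud secrets versions access latest --secret="$1"
}

# Hypothetical usage at deploy time:
# DB_PASSWORD=$(fetch_db_password db-password)
```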

### 7. Network Security
- Cloud NAT is used for secure outbound traffic from private GKE nodes.
- Load Balancer manages inbound traffic, improving security and performance.

### 8. Database Management
- Cloud SQL provides a managed database solution.
- Regular backups and update management are handled by GCP.

### 9. Monitoring and Logging
- Comprehensive monitoring with Cloud Monitoring.
- Centralized logging with Cloud Logging for easier troubleshooting and auditing.

### 10. Access Control
- Cloud IAM manages access to GCP resources.
- Principle of least privilege is enforced.

### 11. Scalability
- GKE and Cloud SQL provide built-in scalability options.
- Infrastructure can be easily scaled using Terraform.

### 12. Disaster Recovery
- Multi-region deployments can be implemented for critical services.
- Regular backups and documented recovery procedures ensure business continuity.

## Conclusion

This workflow embraces DevOps principles, ensuring a secure, scalable, and maintainable production environment. By leveraging GCP services and following these best practices, we create a robust system that can reliably deliver and scale our applications.
107 changes: 88 additions & 19 deletions gcp-deployment/weather-data-pipeline-deployment-guide.md
@@ -10,37 +10,30 @@
1. List and set your GCP project:
```bash
gcloud projects list
gcloud config set project ${PROJECT_ID}
export PROJECT_ID=$(gcloud config get-value project)
```
Identify the name of the appropriate project by looking at the PROJECT_ID field.

2. Create a GKE cluster:
```bash
gcloud container clusters create weather-cluster --num-nodes=2 --zone=us-central1-a
gcloud config set project <insert_name_of_project>
export PROJECT_ID=$(gcloud config get-value project)
```
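Before continuing, it can help to confirm that `PROJECT_ID` is actually set. A minimal sketch (the helper function is ours):

```shell
# Fail fast if the project ID was not picked up from gcloud config.
check_project_set() {
  if [ -z "$1" ]; then
    echo "PROJECT_ID is empty; run 'gcloud config set project <project>' first." >&2
    return 1
  fi
  echo "Using project: $1"
}

# Example usage:
# check_project_set "$PROJECT_ID"
```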

3. Get credentials for the cluster:
2. Create a GKE cluster:
```bash
gcloud container clusters get-credentials weather-cluster --zone=us-central1-a
gcloud container clusters create weather-cluster-2 --num-nodes=2 --zone=us-central1-a --quiet > /dev/null 2>&1 &
```

4. Create Kubernetes secret for database credentials:
The cluster creation above should take about 5-8 minutes. In the meantime, we will run other steps that can be done in parallel and check on the cluster's status from time to time.

To check the status of the cluster deployment, run the command below:

```bash
kubectl create secret generic db-credentials \
--from-literal=DB_NAME=your_db_name \
--from-literal=DB_USER=your_db_user \
--from-literal=DB_PASSWORD=your_db_password \
--from-literal=DB_HOST=postgres \
--from-literal=DB_PORT=5432
gcloud container clusters describe weather-cluster-2 --zone=us-central1-a
```

## Build and Push Docker Images

Navigate to the project root directory:

```bash
cd ../gcp-deployment/k8s-artifacts-v2
```
## Build and Push Docker Images

Build and push data pipeline images:

@@ -63,29 +56,83 @@ cd ../flask-app
# Build and push Flask app image
docker build -t gcr.io/${PROJECT_ID}/flask-app:latest .
docker push gcr.io/${PROJECT_ID}/flask-app:latest

```


## Get Credentials for the Cluster

Before we get credentials for the cluster, we'll need to check if the cluster has finished being created by running the command:

```bash
gcloud container clusters describe weather-cluster-2 --zone=us-central1-a
```


Once the cluster has finished being created, you can proceed to fetch credentials for it.

The cluster has finished creating when the output of the describe command above includes:
```text
status: RUNNING
```

Any status other than `RUNNING` means the cluster is still being created. In that case, wait a few minutes and do not proceed to the next steps until the creation process has finished.
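Rather than re-running the describe command by hand, the wait can be scripted. A sketch, assuming `gcloud` is installed and the cluster was created as shown above (the helper function is ours):

```shell
# Poll the cluster status until it reports RUNNING.
wait_for_cluster_running() {
  while true; do
    status=$(gcloud container clusters describe "$1" --zone="$2" \
      --format="value(status)" 2>/dev/null)
    if [ "$status" = "RUNNING" ]; then
      echo "Cluster $1 is RUNNING."
      return 0
    fi
    echo "Cluster $1 status: ${status:-unknown}; retrying in 30s..."
    sleep 30
  done
}

# Example usage:
# wait_for_cluster_running weather-cluster-2 us-central1-a
```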

First, get credentials for the cluster so that `kubectl` can connect to it:
```bash
gcloud container clusters get-credentials weather-cluster-2 --zone=us-central1-a
```

Then create a Kubernetes secret for the database credentials, replacing the placeholder values with your own:
```bash
kubectl create secret generic db-credentials \
  --from-literal=DB_NAME=your_db_name \
  --from-literal=DB_USER=your_db_user \
  --from-literal=DB_PASSWORD=your_db_password \
  --from-literal=DB_HOST=postgres \
  --from-literal=DB_PORT=5432
```

## Deploy to Kubernetes

Change directory to the folder with the Kubernetes artifacts:

```bash
cd ../gcp-deployment/k8s-artifacts
```

Deploy PostgreSQL:

```bash
envsubst < postgres-deployment.yaml | kubectl apply -f -
envsubst < postgres-service.yaml | kubectl apply -f -
```

### Wait for PostgreSQL to be ready

Run the command below to check whether Postgres is ready; do not move on to the next step until it is:
```bash
kubectl wait --for=condition=ready pod -l app=postgres --timeout=300s
```

If Postgres is ready, the output of the command above should say:

```text
pod/postgres-<arbitrary-characters> condition met
```
Once Postgres has finished being deployed, proceed to the next step.

Deploy data pipeline job:

```bash
envsubst < data-pipeline-job.yaml | kubectl apply -f -
kubectl create job --from=cronjob/data-pipeline-sequence data-pipeline-manual-trigger-new

# Wait for data pipeline job to complete
kubectl wait --for=condition=complete job/data-pipeline --timeout=600s
kubectl wait --for=condition=complete job/data-pipeline-manual-trigger-new --timeout=600s
```
Once completed, proceed to the next step.

Deploy Flask app:

@@ -94,6 +141,28 @@ envsubst < flask-app-deployment.yaml | kubectl apply -f -
envsubst < flask-app-service.yaml | kubectl apply -f -
```


## Monitoring and Waiting

List all pods:
```bash
kubectl get pods
```

![Output of kubectl get pods listing the running pods](image.png)

View logs for all containers in a pod:
```bash
kubectl logs <pod-name> --all-containers=true
```
View logs for the data-pipeline pod. You should see logs similar to those from the docker-compose deployment on your local machine. The final log line should say:
```text
Data transformation and loading to final table completed.
```
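One way to check for that final line programmatically (a sketch; the helper function is ours, and the pod name in the usage comment is a placeholder):

```shell
# Succeeds with "Pipeline finished." if the expected completion
# message appears anywhere in the pod's logs.
check_pipeline_done() {
  if kubectl logs "$1" --all-containers=true \
      | grep -q "Data transformation and loading to final table completed."; then
    echo "Pipeline finished."
  else
    echo "Pipeline not finished yet."
  fi
}

# Example usage:
# check_pipeline_done data-pipeline-manual-trigger-new-abc12
```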

View logs for the flask-app pod.


## Useful Commands for Monitoring and Debugging

1. List all pods:
