This project leverages Terraform to provision AWS resources, including VPC components, EC2 instances, an Elastic Container Registry (ECR) and IAM roles. Additionally, it orchestrates the deployment of an Amazon Elastic Kubernetes Service (EKS) cluster.
Once the EKS cluster is operational, it hosts a Flask application served by NGINX in Kubernetes pods. To expose the web application, an NGINX ingress controller is installed. The setup also incorporates Kubernetes add-ons such as external-dns and cert-manager, which automate the management of SSL certificates and the dynamic updating of DNS records in Route 53 for a specified domain.
To automate the process further, both the infrastructure deployment and the application deployment are handled by GitHub Actions pipelines.
Please note that deploying this project may incur costs that are not covered by the AWS free tier. We advise using the AWS pricing calculator to estimate potential expenses before proceeding.
This GitHub repository utilizes GitHub Actions for triggering Terraform deployments to AWS. Once Terraform successfully creates all the necessary underlying infrastructure, a Flask application is deployed via a CI/CD Pipeline.
To enhance security and maintain separation, the application is deployed within a dedicated Kubernetes namespace ("Environment") derived from the Git branch to which changes are pushed. For instance, commits made to the dev branch trigger the creation of Kubernetes pods within the dev namespace.
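The branch-to-namespace mapping described above can be sketched in shell. This is an illustrative sketch, not the repository's actual pipeline code; the main-to-prod mapping reflects the prod deployment described later in this README:

```shell
# Illustrative sketch: derive the target namespace ("Environment") from the
# Git branch. GITHUB_REF_NAME holds the branch name inside GitHub Actions.
branch="${GITHUB_REF_NAME:-dev}"
if [ "$branch" = "main" ]; then
  environment="prod"      # merges to main deploy to the prod environment
else
  environment="$branch"   # e.g. the dev branch deploys to the dev namespace
fi
echo "Deploying to namespace: $environment"
```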
The Flask app deployment process involves the creation of essential components:
- NGINX Servers: Two NGINX server pods are deployed to host the Flask application.
- NGINX Ingress Controller: Responsible for handling incoming HTTP requests, redirecting them to HTTPS, and routing requests to the appropriate ClusterIP service based on the URL path.
- LoadBalancer Service: Creates a classic AWS Load Balancer, a vital part of the NGINX ingress controller setup.
- ClusterIP Service: Exposes port 80 and forwards incoming connections to the respective pods located in specific namespaces based on the `Environment` variable.
- External-DNS Addon: Creates a DNS A record in Route 53 based on the `Environment` variable: `<Environment>.<your-domain.name>`
- Certificate Manager: Manages the creation and installation of SSL certificates via the third-party SSL provider Let's Encrypt for `<Environment>.<your-domain.name>`.
For example, if changes are committed to the dev branch, the pipeline will create the pods, the NGINX ingress controller and the ClusterIP service in the dev namespace of the Kubernetes cluster. The External-DNS addon will create an A record for dev.<your-domain.name>, and an SSL certificate will be installed for dev.<your-domain.name>.
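The `<Environment>.<your-domain.name>` naming convention amounts to simple string composition; a minimal sketch, where the domain value is a placeholder for your registered domain:

```shell
environment="dev"
domain="example.com"              # placeholder for your registered domain
host="${environment}.${domain}"  # the record external-dns creates
echo "$host"                     # dev.example.com
```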
The Flask application displays the Environment variable on the page. For instance, if changes are committed to the dev branch, opening dev.<your-domain.name> in a browser will confirm that you are connected to the Dev environment.
- Git and Github account
- Docker
- Terraform
- Pre-commit hooks - tflint, helm lint
- AWS CLI
- Registered domain name
- SonarCloud and Snyk accounts
- AWS user with write privileges to create S3 bucket, DynamoDB table, IAM role, OIDC provider
- Fork this GitHub repo.
- Set repository secrets. On the "Secrets and Variables" settings page of your repository, add the following repository secrets, which the pipeline will need later:
  - `SONAR_TOKEN` - the token used to authenticate to SonarCloud. You can generate a token on your Security page in SonarCloud.
  - `SNYK_TOKEN` - the token needed to authenticate to the Snyk vulnerability scanner. You can obtain it by following the instructions in the Snyk documentation.
- Set repository variables. On the "Secrets and Variables" settings page of your repository, add the following repository variable, which the pipeline will need later:
  - `AWS_REGION` - the AWS region where the infrastructure will be deployed, for example `us-east-1` or `eu-central-1`.
- Authenticate to AWS by adding your Access and Secret keys:

  ```shell
  aws configure
  ```
- Clone the repo locally:

  ```shell
  git clone <forked repo>
  ```
- Create a new branch, for example `dev`:

  ```shell
  git branch dev
  git checkout dev
  ```
- Edit the file `terraform/pre-deploy/github.tf` and replace `"repo:eevlogiev/telerik-flask-project:*"` with your repo.
- Deploy the initial infrastructure components - S3 bucket, DynamoDB table and GitHub OIDC provider:

  ```shell
  cd terraform/pre-deploy
  terraform init
  terraform plan
  terraform apply
  ```
- After a successful Terraform run, the following files will be updated with the current AWS account id:
  - `helm/values.yaml`
  - `terraform/pre-deploy/assume_role.sh`
  - `terraform/pre-deploy/role-arn.txt`

  These files will be needed later during the deployment process.
The AWS infrastructure needed for the Flask application is provisioned with Terraform via the GitHub workflow described in .github/workflows/terraform.yaml. This workflow creates the VPC, subnets, route tables, Internet Gateway, NAT Gateway, security groups, ECR, IAM roles, EKS cluster, Helm charts and EKS addons that the application will need later.
This GitHub action is triggered by any change in the terraform folder.
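In GitHub Actions terms, such a trigger is typically a `paths` filter, and the region comes from the repository variable set earlier via the `vars` context. The excerpt below is a hypothetical sketch and may not match the repository's terraform.yaml exactly:

```yaml
# Hypothetical workflow excerpt: trigger and region handling are illustrative.
on:
  push:
    paths:
      - 'terraform/**'   # run only when something under terraform/ changes
env:
  AWS_REGION: ${{ vars.AWS_REGION }}   # the repository variable set earlier
```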
- Update the file `terraform/locals.tf` and replace `ev4o.com` with your registered domain: `domain = "<your domain>"`
- Commit and push your changes to the remote GitHub repo:

  ```shell
  git add <changed files>
  git commit
  git push --set-upstream origin dev
  ```
- Create a Pull Request to merge the changes from the `dev` branch into the `main` branch.
- The Pull Request will trigger the Terraform Infrastructure Change Management Pipeline GitHub action. This action will add a comment in the PR with the `terraform plan` output but will NOT execute `terraform apply`.
- Review the Pull Request - check the commits, the suggested file changes and the `terraform plan` output in the Conversation section of the PR.
- Deploy the infrastructure in AWS - after peer review, click the Merge button in the Pull Request section.
- Pushing changes to the `main` branch (or merging into it) triggers the Terraform Infrastructure Change Management Pipeline again, this time with the `terraform apply` job.
- Go to Actions and check the status of the Terraform Infrastructure Change Management Pipeline. If successful, this workflow will create an EKS cluster with 2 nodes and all the underlying infrastructure in AWS.
Once all the underlying infrastructure is provisioned in AWS, you can deploy the Flask application. The application is likewise installed via a GitHub Actions workflow, described in .github/workflows/deployment.yaml.
The CI/CD Pipeline runs the following jobs:
- Code style checks - EditorConfig, Pylint and Black Python linters
- Unit testing - runs a simple unit test via `test_web.py`
- Static Application Security Testing (SAST) - SonarCloud and Snyk vulnerability scanners (make sure you have `SONAR_TOKEN` and `SNYK_TOKEN` set in your repository secrets first!)
- Build Docker image and tag it with the current Commit hash
- Push Docker image to AWS ECR
- Deploy application by using helm chart
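The build-and-tag steps above boil down to using the commit hash as the Docker tag. Below is a minimal sketch with illustrative names; the image and registry names are assumptions, not taken from the pipeline:

```shell
# Illustrative: tag the image with the current commit hash.
commit_sha="abc1234"             # in CI: commit_sha=$(git rev-parse --short HEAD)
image="flask-app:${commit_sha}"  # hypothetical repository name
echo "$image"
# The pipeline would then roughly do:
#   docker build -t "$image" .
#   docker tag "$image" "<account-id>.dkr.ecr.<region>.amazonaws.com/$image"
#   docker push "<account-id>.dkr.ecr.<region>.amazonaws.com/$image"
```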
The CI/CD Pipeline is triggered by any change in the following locations:
- `/app` folder
- `/helm` folder
- `Dockerfile`
- Update the file `/helm/values.yaml` and replace the domain and email variables with your registered domain name and your email address:

  ```
  domain = <your domain>
  email = <your email>
  ```
- Update the file `/.github/workflows/development` and configure the SonarCloud Organization and Project key as follows:

  ```
  -Dsonar.organization=<your organization>
  -Dsonar.projectKey=<your project>
  ```
- Point the NS servers for your domain to the Route 53 DNS servers. Get the NS records for your AWS hosted zone:

  ```shell
  aws route53 list-hosted-zones-by-name
  aws route53 get-hosted-zone --id <hosted zone id>
  ```
- Go to the domain registrar for your domain (such as GoDaddy.com) and configure the name servers listed in the previous step as the nameservers for your domain.
From now on all DNS queries for your registered domain will be handled by AWS Route 53.
- Commit and push your changes to the remote GitHub repo:

  ```shell
  git add <changed files>
  git commit
  git push
  ```
- Pushing to the `dev` branch will trigger the CI/CD Pipeline. If all previous steps completed successfully, the application will be deployed in the `dev` environment.
- Verify the application deployment:

  ```shell
  aws eks update-kubeconfig --name flask-cluster --region <AWS region>
  source terraform/pre-deploy/assume_role.sh
  kubectl get pods -n dev
  kubectl get ingress -n dev
  kubectl get svc -n kube-system
  ```
- A successful deployment will install a Helm chart which creates 2 pods running the NGINX server in the `dev` namespace, an NGINX ingress controller in the `dev` namespace, and a LoadBalancer service in the `dev` namespace. It will also request and install an SSL certificate for dev.<your-domain.name> and add a DNS record for dev.<your-domain.name> in Route 53.
- Open dev.<your-domain.name> in your browser and you should land on the Dev environment page.
- Create a Pull Request to merge the changes from the `dev` branch into the `main` branch.
- If you are happy with the result, review the PR and click Merge to merge `dev` into the `main` branch.
- Merging into the `main` branch will trigger the CI/CD Pipeline again. If all previous steps completed successfully, the application will be deployed in the `prod` environment.
- Verify the application deployment:

  ```shell
  kubectl get pods -n prod
  kubectl get ingress -n prod
  kubectl get svc -n kube-system
  ```
- A successful deployment will install a Helm chart which creates 2 pods running the NGINX server in the `prod` namespace, an NGINX ingress controller in the `prod` namespace, and a LoadBalancer service in the `prod` namespace. It will also request and install an SSL certificate and add a DNS record for prod.<your-domain.name> in Route 53.
- Open prod.<your-domain.name> in your browser and you should land on the Prod environment page.
- 0.1
- Initial Release
Distributed under the GPL-3.0 License. Further information in the LICENSE file.


