A simple Python app deployed to Azure Container Apps that accesses a Postgres DB.
MIT License
This repository contains solutions to two distinct tasks:
The codebase can be visualized as below:
In this section, we will cover a practical solution for setting up a deployable production environment for a simplified application. This environment consists of an API service deployed on Azure Cloud. The goal is to automate the setup of this production environment using "infrastructure as code" principles. Below are the steps to achieve this:
Note: The solution development was conducted on a MacBook M1. Therefore, the instructions are tailored for use in a macOS environment or a similar development environment.
Before we get started, ensure you have the below tools set up:
Terratest
for testing our Terraform code; it is required only if you wish to write or run tests.
All the tools we have used so far are free for personal use.
Start Docker Desktop
From the root directory of this repository, execute the below command:
make start-app-db
The above command uses Docker Compose to run containerized instances of our API and a postgres-13.5
database, and then loads the mock data into the Postgres database.
Test the application by making API requests. For example:
GET - Greeting API (Health check)
curl http://127.0.0.1:3000
{"message":"Hello world!"}
GET - Rates API
curl "http://127.0.0.1:3000/rates?date_from=2021-01-01&date_to=2021-01-31&orig_code=CNGGZ&dest_code=EETLL"
The output should be something like this:
{
  "rates" : [
    {
      "count" : 3,
      "day" : "2021-01-31",
      "price" : 1154.33333333333
    },
    {
      "count" : 3,
      "day" : "2021-01-30",
      "price" : 1154.33333333333
    },
    ...
  ]
}
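For context, the handler behind this endpoint could look roughly like the sketch below. This is a minimal illustration rather than the actual implementation in the rates folder; the prices table and its day, price, orig_code, and dest_code columns are assumptions inferred from the API output above.

import os

import psycopg2
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/rates")
def rates():
    # Query parameters, e.g. ?date_from=2021-01-01&date_to=2021-01-31
    date_from = request.args["date_from"]
    date_to = request.args["date_to"]
    orig_code = request.args["orig_code"]
    dest_code = request.args["dest_code"]

    # Connection details come from the environment; see the deployment
    # configuration later in this README for the variable names.
    conn = psycopg2.connect(
        dbname=os.environ["name"],
        user=os.environ["user"],
        host=os.environ["host"],
        password=os.environ.get("password", ""),
    )
    with conn, conn.cursor() as cur:
        # Average price per day between the two ports (hypothetical schema).
        cur.execute(
            """
            SELECT day, count(*), avg(price)
            FROM prices
            WHERE day BETWEEN %s AND %s
              AND orig_code = %s AND dest_code = %s
            GROUP BY day
            ORDER BY day DESC
            """,
            (date_from, date_to, orig_code, dest_code),
        )
        result = [
            {"count": count, "day": str(day), "price": float(price)}
            for day, count, price in cur.fetchall()
        ]
    conn.close()
    return jsonify({"rates": result})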
Stop the application
make stop-app-db
Check all available options
make help
There’s an SQL dump in db/rates.sql that needs to be loaded into a PostgreSQL 13.5 database.
After installing the database, the data can be imported through:
createdb rates
psql -h localhost -U postgres < db/rates.sql
You can verify that the database is running through:
psql -h localhost -U postgres -c "SELECT 'alive'"
The output should be something like:
?column?
----------
alive
(1 row)
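If you prefer to verify connectivity from Python instead, the equivalent of the psql check above is a few lines of psycopg2. This sketch assumes psycopg2 is installed and that the postgres user can connect without a password, as in the commands above:

import psycopg2

# Mirrors: psql -h localhost -U postgres -c "SELECT 'alive'"
conn = psycopg2.connect(host="localhost", user="postgres", dbname="postgres")
with conn.cursor() as cur:
    cur.execute("SELECT 'alive'")
    print(cur.fetchone()[0])  # prints: alive
conn.close()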
Start from the rates folder.
DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y python3-pip
pip install -U gunicorn
pip3 install -Ur requirements.txt
gunicorn -b :3000 wsgi
The API should now be running on http://localhost:3000.
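For reference, gunicorn -b :3000 wsgi loads the wsgi module and, by default, looks for a callable named application inside it. A minimal sketch of such a module follows; the actual file in the rates folder may differ, and the app import is an assumption:

# wsgi.py: the module gunicorn loads; it must expose "application".
from app import app as application  # hypothetical module/attribute names

if __name__ == "__main__":
    # Fallback: run with the Flask development server directly.
    application.run(host="0.0.0.0", port=3000)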
Get average rates between ports:
curl "http://127.0.0.1:3000/rates?date_from=2021-01-01&date_to=2021-01-31&orig_code=CNGGZ&dest_code=EETLL"
The output should be something like this:
{
  "rates" : [
    {
      "count" : 3,
      "day" : "2021-01-31",
      "price" : 1154.33333333333
    },
    {
      "count" : 3,
      "day" : "2021-01-30",
      "price" : 1154.33333333333
    },
    ...
  ]
}
The solution uses Terraform (infrastructure-as-code components) to deploy this environment on cloud providers such as Azure.
The Terraform code is logically segregated into two parts, as below:
/infrastructure/bootstrap
: Bootstrap infrastructure refers to the essential infrastructure resources that are necessary during the initial provisioning phase and ideally remain unchanged or require infrequent modifications.
Contains Terraform code for provisioning the Resource Group, Storage Account, Service Principal, etc.
This kind of infrastructure requires a higher level of privileges for provisioning, and in some organizations it is maintained by a separate team (Cloud Engineering, SRE, etc.) for various purposes.
Building infrastructure from scratch poses a "chicken and egg" challenge: for Terraform to store its state, a storage container is required. We could provision this storage container manually, but that would break our commitment to provisioning all resources through Infrastructure as Code (IaC). We deal with this problem as below:
Follow the below steps to provision the bootstrap infrastructure:
Ensure you have Terraform 1.5.7 installed
terraform --version
Authenticate the Azure CLI by running the below command from the terminal (or by other appropriate means)
az login
From the terminal, change the directory
cd infrastructure/bootstrap
Ensure the Terraform backend config backend "azurerm" {}
is commented out in infrastructure/bootstrap/main.tf
Initialize Terraform for dev environment
terraform init -var-file=./dev/terraform.tfvars
Plan the Terraform changes and review
terraform plan -var-file=./dev/terraform.tfvars
Apply the changes after the plan is reviewed
terraform apply -var-file=./dev/terraform.tfvars
Re-initialize Terraform to use a remote backend
Uncomment # backend "azurerm" {} in infrastructure/bootstrap/main.tf
Then execute the below command and respond yes when prompted:
terraform init -backend-config=./dev/backend-config.hcl -var-file=./dev/terraform.tfvars
Once successfully executed, the local terraform.tfstate
file is securely stored in the Azure Storage Account.
Repeat the steps for other environments.
We have configured a bootstrap-infrastructure
GitHub Actions pipeline to automate the bootstrap infrastructure provisioning and avoid any local execution apart from the initial setup here
/infrastructure/app-infra
: Contains Terraform code for provisioning the Azure VNet, VNet-associated infrastructure components, and Azure Container Apps for deploying a simple containerized application.
Application infrastructure refers to any infrastructure that we use for deploying and running the application. It is intended for the team that owns the application and generally requires very specific privileges for infrastructure deployment.
The application infrastructure primarily contains the Terraform code for deploying the Python Flask API; the database connection configuration can be managed through the terraform.tfvars
file in the respective environment directory.
Override the default values set in variables.tf
for each environment in the respective [ENVIRONMENT_NAME]/terraform.tfvars
file, as shown below:
app_container_config = {
  name          = "[ENVIRONMENT_NAME]-python-postgres-azure-app"
  revision_mode = "Single"
  memory        = "0.5Gi"
  cpu           = 0.25
  ingress = {
    allow_insecure_connections = false
    external_enabled           = true
    target_port                = 5000
  }
  environment_variables = [
    {
      name  = "name"
      value = "[ENVIRONMENT_NAME]postgres"
    },
    {
      name  = "user"
      value = "[ENVIRONMENT_NAME]abhishek"
    },
    {
      name  = "host"
      value = "[POSTGRES_HOST_URL_FOR_RESPECTIVE_ENVIRONMENT]"
    }
  ]
}
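For clarity on the contract here: Azure Container Apps injects each environment_variables entry as an ordinary environment variable inside the container, so the application reads them at startup. A minimal sketch of the consuming side (the password is not present in the tfvars above and is assumed to arrive via a separate secret):

import os

import psycopg2

# Keys mirror the environment_variables block in terraform.tfvars.
conn = psycopg2.connect(
    dbname=os.environ["name"],
    user=os.environ["user"],
    host=os.environ["host"],
    # Assumption: the password is supplied as a secret, not plain config.
    password=os.environ.get("password", ""),
)
print("connected to", conn.get_dsn_parameters()["host"])
conn.close()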
These infrastructure changes are deployed as part of the python-application-deployment
GitHub Actions pipeline configured here
We use GitHub Actions for automating the below operations:
We are using Github-OIDC
for securing the connectivity between GitHub Actions and Azure Cloud, thus reducing the risk of compromising the credentials. Once the service principal is provisioned by the bootstrap infrastructure, you must configure the corresponding secrets in the GitHub repository under Settings > Secrets and Variables > Actions > New Repository Secret
Python Application Pipeline:
Infrastructure Pipeline:
Secure access to the Postgres database deployed on Azure Cloud requires:
We would like to use the below components of Azure Cloud to implement the solution:
Azure Active Directory: Integrate Azure AD with the PostgreSQL database to manage users, authentication, and access control. This allows for centralized user management and provides a basis for approval workflows.
Azure Key Vault: Store and manage secrets and passwords securely within Azure Key Vault. Key Vault provides features for secret rotation and auditing, which align with our compliance requirements.
Azure Logic Apps: Use Logic Apps to automate user creation, approval workflows, and password rotation. Logic Apps can integrate with Azure AD and Key Vault to perform these tasks.
Azure Monitor and Azure Security Center: Use these Azure services to monitor and audit activities on our Azure PostgreSQL database. They provide insights and compliance checks for our database operations.
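To make the Key Vault piece concrete, an application or automation job could fetch the database password as below. This is a sketch only; the vault URL and secret name are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate with whatever identity is ambient (managed identity,
# az login locally, or OIDC in a pipeline).
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # hypothetical vault
    credential=credential,
)

# Read the current version of the secret; because rotation happens in
# Key Vault, callers always receive the latest password.
secret = client.get_secret("postgres-password")  # hypothetical secret name
print(secret.name)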
The solution can be visualized with the help of the below request flow diagram:
Below is the list of things we must do to make the implementation production-ready.
Use Pipenv or Poetry for better dependency management.
Adopt the Test Pyramid strategy, thus maturing the testing strategy.
Use semver for versioning the images and API.
Use Terratest for integration tests.
Add Smoke/E2E testing for the IaC once the infrastructure is provisioned; execute it on a scheduled event to detect any drift from the desired state defined as IaC (a minimal sketch follows this list).
Pair Prod deployment with the creation of a Change Request for auditing purposes.
Chaos Testing using Simian Army for testing the Disaster Recovery strategy.
Restrict the Dev and Pre environments: configure the VNet to allow access only within the organization's private network, for example once users are connected to the VPN.
Set up a tunnel for use with the CI/CD pipeline, thus allowing access to the Dev/Pre environment API for executing tests once deployed.
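As mentioned in the Smoke/E2E item above, a scheduled smoke test against a deployed environment can stay very small. Below is a sketch using pytest and requests, exercising the two endpoints documented earlier; the API_BASE_URL variable is a placeholder supplied by the pipeline:

import os

import requests

# Base URL of the environment under test, injected by the CI/CD pipeline.
BASE_URL = os.environ.get("API_BASE_URL", "http://127.0.0.1:3000")

def test_health():
    # The greeting endpoint doubles as a health check.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200
    assert resp.json() == {"message": "Hello world!"}

def test_rates():
    resp = requests.get(
        BASE_URL + "/rates",
        params={
            "date_from": "2021-01-01",
            "date_to": "2021-01-31",
            "orig_code": "CNGGZ",
            "dest_code": "EETLL",
        },
        timeout=10,
    )
    assert resp.status_code == 200
    assert "rates" in resp.json()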