Single and Multi-Region Deployment of Consul Clusters on Azure
MPL-2.0 License
The objective of this project is to provide examples of single and multi-region Consul cluster deployments in Azure using Terraform. This is a high-level overview of the environments that are created:
In order to perform the steps in this guide, you will need an Azure subscription in which you can create Service Principals as well as network and compute resources. You can create a free Azure account here.
Certain steps will require entering commands through the Azure CLI. You can find out more about installing it here.
Create Azure API Credentials - set up the main Service Principal that will be used by Terraform:
Export environment variables for the main Terraform Service Principal. For example, create an `env.sh` file with the following values (obtained from step 1 above):

```
export ARM_SUBSCRIPTION_ID="xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
export ARM_CLIENT_ID="xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
export ARM_CLIENT_SECRET="xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
export ARM_TENANT_ID="xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
```

You can then source these environment variables:

```
$ source env.sh
```
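As a quick sanity check before invoking Terraform, you can verify that all four `ARM_*` variables are actually set in your shell. This is a small helper sketch, not part of the guide; the function name is ours:

```shell
#!/usr/bin/env bash
# check_arm_env: report any unset credential variables that Terraform's
# azurerm provider reads from the environment
check_arm_env() {
  local var missing=0
  for var in ARM_SUBSCRIPTION_ID ARM_CLIENT_ID ARM_CLIENT_SECRET ARM_TENANT_ID; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var
    if [ -z "${!var}" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```

Running `check_arm_env` after `source env.sh` prints nothing and returns 0 when all four variables are present.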
Create a read-only Azure Service Principal (using the Azure CLI) that will be used to perform the Consul auto-join (make note of these values, as you will use them later in this guide):

```
$ az ad sp create-for-rbac --role="Reader" --scopes="/subscriptions/[YOUR_SUBSCRIPTION_ID]"
```
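This command prints JSON credentials; for recent Azure CLI versions the `appId`, `password`, and `tenant` fields map to the auto-join client ID, client secret, and tenant ID used later in this guide. As an illustration (the `sp.json` file name is hypothetical), `jq` can render them in `terraform.tfvars` form:

```shell
# sp.json holds the JSON printed by `az ad sp create-for-rbac` (hypothetical file name)
# appId -> auto_join_client_id, password -> auto_join_client_secret, tenant -> auto_join_tenant_id
jq -r '"auto_join_client_id     = \"\(.appId)\"",
       "auto_join_client_secret = \"\(.password)\"",
       "auto_join_tenant_id     = \"\(.tenant)\""' sp.json
```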
Run `git clone` on the hashicorp-guides/azure-consul repository, then `cd` into the desired Terraform subdirectory: `azure-consul/terraform/[single-region | multi-region]`.
At this point, you will need to create a `terraform.tfvars` file containing the Azure read-only credentials for the Consul auto-join. NOTE: We explicitly add this file to our `.gitignore` file to avoid inadvertently committing sensitive information. A `terraform.tfvars.example` file is provided that you can copy and update with your specific values: `auto_join_subscription_id`, `auto_join_client_id`, `auto_join_client_secret`, and `auto_join_tenant_id` take the values obtained from the read-only auto-join Service Principal created in step #5 of the Deployment Prerequisites earlier.
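For illustration, a filled-in `terraform.tfvars` might look like the following (placeholder values; the variable names come from `terraform.tfvars.example`):

```hcl
auto_join_subscription_id = "xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
auto_join_client_id       = "xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
auto_join_client_secret   = "xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
auto_join_tenant_id       = "xxxxxxxx-yyyy-zzzz-xxxx-yyyyyyyyyyyy"
```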
Run `terraform init` to initialize the working directory and download the appropriate providers.
Run `terraform plan` to verify the deployment steps and validate all modules.
Finally, run `terraform apply` to deploy the Consul cluster.
Once the apply completes, Terraform outputs connection information for the cluster:

```
jumphost_ssh_connection_strings = [
    ssh-add private_key.pem && ssh -A -i private_key.pem [email protected]
]

consul_private_ips = [
    ssh [email protected],
    ssh [email protected],
    ssh [email protected]
]
```
Since we are installing and configuring Consul at runtime, you will need to wait several minutes for everything to complete. You can view the progress of the installation with `tail -f /var/log/user-data.log`.
Once you see the message `"Completed Configuration of Consul Node. Run 'consul members' to view cluster information."`, you can perform the following:

Run `consul members` to view the status of the local cluster:

```
$ consul members
Node             Address         Status  Type    Build  Protocol  DC   Segment
consul-eastus-0  10.1.48.4:8301  alive   server  1.0.0  2         dc1  <all>
consul-eastus-1  10.1.64.4:8301  alive   server  1.0.0  2         dc1  <all>
consul-eastus-2  10.1.80.4:8301  alive   server  1.0.0  2         dc1  <all>
```
In a multi-region deployment, run `consul members -wan` to view the status of the WAN-joined servers across both regions:

```
$ consul members -wan
Node                 Address         Status  Type    Build  Protocol  DC   Segment
consul-eastus-0.dc1  10.1.48.4:8302  alive   server  1.0.0  2         dc1  <all>
consul-eastus-1.dc1  10.1.64.4:8302  alive   server  1.0.0  2         dc1  <all>
consul-eastus-2.dc1  10.1.80.4:8302  alive   server  1.0.0  2         dc1  <all>
consul-westus-0.dc1  10.0.48.4:8302  alive   server  1.0.0  2         dc1  <all>
consul-westus-1.dc1  10.0.64.4:8302  alive   server  1.0.0  2         dc1  <all>
consul-westus-2.dc1  10.0.80.4:8302  alive   server  1.0.0  2         dc1  <all>
```