This sample shows how to deploy a private AKS cluster with a public DNS zone for the name resolution of the API server name to the private IP address of its private endpoint.
A companion project can be used to deploy a private AKS cluster with a private DNS zone, along with dynamic allocation of IPs and enhanced subnet support, Azure Active Directory Pod Identity, and more. Instead, this sample shows how to deploy a private AKS cluster with a public DNS zone.
This sample provides an ARM template to deploy the following topology with two node pools.
The ARM template deploys:
As a best practice, you should always consider using a private AKS cluster in your production environment, or at least secure access to the API server, by using authorized IP address ranges in Azure Kubernetes Service. When using a private AKS cluster, the API Server is only accessible from your virtual network, any peered virtual network, or on-premises network connected via S2S VPN or ExpressRoute to the virtual network hosting your AKS cluster. Any request to the API Server goes over the virtual network and does not traverse the internet. The API server endpoint has no public IP address. To manage the API server, you'll need to use a virtual machine that has access to the AKS cluster's virtual network. There are several options for establishing network connectivity to the private cluster.
Creating a virtual machine in the same virtual network as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges. For more information, see Create a private Azure Kubernetes Service cluster. For more information on Azure Private Links, see What is Azure Private Link?.
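If you opt for authorized IP address ranges rather than a fully private cluster, they can be configured on an existing cluster with a single Azure CLI call. The following is a sketch: the resource group, cluster name, and CIDR ranges are placeholders you must replace with your own values.

```shell
# Restrict access to the API server to the given CIDR ranges.
# <resourceGroup>, <clusterName>, and the CIDR ranges are placeholders.
az aks update \
  --resource-group <resourceGroup> \
  --name <clusterName> \
  --api-server-authorized-ip-ranges 73.140.245.0/24,193.168.1.0/24
```

After this call, requests to the API server from addresses outside the listed ranges are rejected, while the endpoint itself remains reachable over the internet.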
Today, when you need to access a private AKS cluster, for example to use the kubectl command-line tool, you have to use a virtual machine located in the same virtual network as the AKS cluster or in a peered network. Likewise, if you use Azure DevOps or GitHub Actions to deploy workloads to your private AKS cluster, you need to use an Azure DevOps Linux or Windows self-hosted agent or a GitHub Actions self-hosted runner located in the same virtual network as the AKS cluster or in a peered network. This usually requires your virtual machine to be connected via VPN or ExpressRoute to the cluster virtual network, or a jumpbox virtual machine to be created in the cluster virtual network. The AKS run command feature allows you to remotely invoke commands in an AKS cluster through the AKS API. This feature provides an API that allows you, for example, to execute just-in-time commands from a remote laptop against a private cluster. This can greatly assist with quick just-in-time access to a private cluster when the client machine is not on the cluster's private network, while still retaining and enforcing the same RBAC controls and private API server.
Here are some samples that show how to use the az aks command invoke command to run commands against a private AKS cluster.
Simple command
az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl get pods -n kube-system"
Deploy a manifest by attaching the specific file
az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl apply -f deployment.yaml -n default" -f deployment.yaml
Deploy a manifest by attaching a whole folder
az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl apply -f deployment.yaml -n default" -f .
Perform a Helm install and pass the specific values manifest
az aks command invoke -g <resourceGroup> -n <clusterName> \
-c "helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update && helm install my-release -f values.yaml bitnami/nginx" \
-f values.yaml
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new Dynamic IP Allocation capability in Azure CNI solves this problem by allotting pod IP addresses from a subnet separate from the subnet hosting the AKS cluster nodes. This feature offers the following benefits:
- Better IP utilization: private IP addresses are dynamically allocated to cluster pods from the pod subnet. This leads to better utilization of private IP addresses in the cluster compared to the traditional CNI solution, which statically allocates the same number of private IP addresses from the subnet to each worker node.
- Scalable and flexible: when using separate subnets for nodes and pods, the two subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same virtual network. You can also configure a separate pod subnet for a node pool. Likewise, you can deploy node pools in the same subnet or in separate subnets. You can define the subnet for the worker nodes and pods of a node pool at provisioning time.
- High performance: since pods are assigned private IP addresses from a subnet, they have direct connectivity to other cluster pods and to resources in the same virtual network or any peered virtual network. The solution supports very large clusters without any degradation in performance.
- Separate VNet policies for pods: since pods have a separate subnet, you can configure virtual network policies for them that are different from node policies. This enables many useful scenarios, such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for the pods in a node pool using a NAT Gateway associated to the pod subnet, and using NSGs to filter traffic between node pools.

You can use the deploy.sh Bash script to deploy the topology. Make sure to change the name of the AKS cluster in the deploy.sh Bash script and substitute the placeholders in the azuredeploy.parameters.json file with meaningful values. Also, make sure to enable the following public preview features before deploying the ARM template:
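As a sketch of how the dynamic IP allocation model is wired up at the CLI level (all names and subnet resource IDs below are placeholders, and this is an alternative to the ARM template rather than part of this sample's deploy.sh script), a cluster and a node pool can reference distinct node and pod subnets at creation time:

```shell
# Create a cluster whose nodes and pods draw IPs from separate subnets.
# <resourceGroup>, <clusterName>, and the subnet resource IDs are placeholders.
az aks create \
  --resource-group <resourceGroup> \
  --name <clusterName> \
  --network-plugin azure \
  --vnet-subnet-id <nodeSubnetResourceId> \
  --pod-subnet-id <podSubnetResourceId>

# Add a node pool that draws pod IPs from its own pod subnet.
az aks nodepool add \
  --resource-group <resourceGroup> \
  --cluster-name <clusterName> \
  --name mynodepool \
  --vnet-subnet-id <nodeSubnetResourceId> \
  --pod-subnet-id <otherPodSubnetResourceId>
```

Because the pod subnet is a first-class subnet in the virtual network, it can later be scaled or given its own NSG and NAT Gateway without touching the node subnet.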
The following picture shows the resources deployed by the ARM template in the target resource group.
The following picture shows the resources deployed by the ARM template in the node resource group associated to the AKS cluster:
In the visio folder, you can find the Visio document that contains the above diagrams.
If you open an SSH session to the Linux virtual machine and manually run the nslookup command using the FQDN of the API server as a parameter, you should see an output like the following:
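The exact FQDN and addresses vary per deployment; the values below are illustrative placeholders (168.63.129.16 is Azure's virtual public DNS server IP). The key point is that the name resolves to the private IP address of the API server's private endpoint, not to a public IP:

```shell
nslookup <clusterName>-<randomSuffix>.hcp.<region>.azmk8s.io

# Illustrative output (placeholder FQDN and private IP):
# Server:         168.63.129.16
# Address:        168.63.129.16#53
#
# Non-authoritative answer:
# Name:   <clusterName>-<randomSuffix>.hcp.<region>.azmk8s.io
# Address: 10.3.0.5
```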
In order to connect to the AKS cluster, you can run the following Bash script on the jumpbox virtual machine:
#!/bin/bash
name="<name of the AKS cluster>"
resourceGroup="<name of the AKS resource group>"
# Install Azure CLI on Ubuntu
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Login with your Azure account
az login
# Install Kubectl
sudo az aks install-cli
# Use the following command to configure kubectl to connect to the new Kubernetes cluster
echo "Getting access credentials to configure kubectl to connect to the [$name] AKS cluster..."
az aks get-credentials --name $name --resource-group $resourceGroup
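Once the credentials have been merged into the kubeconfig, connectivity to the private API server can be verified from the jumpbox with standard kubectl commands, for example:

```shell
# Both calls traverse the virtual network to the API server's private endpoint.
kubectl cluster-info
kubectl get nodes -o wide
```

If these commands time out, check that the jumpbox is in the cluster virtual network (or a peered network) and that name resolution returns the private endpoint IP as shown earlier.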