Local Kubernetes automation with Terraform, Ansible, and VirtualBox.
MIT License
This project automates the deployment of a Kubernetes cluster using Ansible, Terraform and VirtualBox. It provides a simple and reproducible way to set up a local Kubernetes environment for development and testing purposes.
NOTE: This project is inspired by the Kraven Security blog post "How to Create a Local Kubernetes Cluster: Terraform and Ansible", which is linked at the end of this README.
Open a terminal on your local machine.
Generate a new SSH key pair:
ssh-keygen -t rsa -b 4096 -C "[email protected]"
When prompted, press Enter to accept the default file location (~/.ssh/id_rsa).
Enter a secure passphrase when prompted (or press Enter for no passphrase).
Your new SSH key pair is now generated and saved in the ~/.ssh directory.
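Once the VMs are up, the public key must reach each node so Ansible can authenticate without a password. A minimal sketch, assuming a "vagrant" user and three placeholder addresses (substitute the IPs your VMs actually receive):

```shell
# Copy the public key to every node. The IPs and the "vagrant" user are
# placeholders -- replace them with the values from your own setup.
for ip in 192.168.1.101 192.168.1.102 192.168.1.103; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "vagrant@${ip}"
done
```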
Create a main.tf file with the following content:
terraform {
required_providers {
virtualbox = {
source = "terra-farm/virtualbox"
version = "0.2.2-alpha.1"
}
}
}
provider "virtualbox" {
# Configuration options
}
Define your VirtualBox resources in the main.tf file. For example:
resource "virtualbox_vm" "node" {
count = 3
name = "node-${count.index + 1}"
image = "path/to/your/ubuntu.box" # the terra-farm provider expects a Vagrant .box image, not an ISO
cpus = 2
memory = "2048 mib"
network_adapter {
type = "bridged"
host_interface = "en0"
}
}
Initialize and apply the Terraform configuration:
terraform init
terraform apply
This setup uses the terra-farm/virtualbox provider to create and manage VirtualBox VMs through Terraform. Adjust the VM specifications and network settings as needed for your Kubernetes cluster.
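This minimal configuration defines no Terraform outputs, so one way to discover each VM's address is to ask VirtualBox directly. A sketch using VBoxManage, assuming the VM names from the resource above and that the first network adapter (the /Net/0/ property path) holds the address you need:

```shell
# Print the IPv4 address VirtualBox reports for each VM's first NIC.
for vm in node-1 node-2 node-3; do
  VBoxManage guestproperty get "$vm" "/VirtualBox/GuestInfo/Net/0/V4/IP"
done
```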
Create an ansible.cfg file in your project directory with the following content:
[defaults]
inventory = ./inventory
host_key_checking = False
Create an inventory file named inventory to store the host information from Terraform outputs:
[nodes]
node1 ansible_host=<IP_ADDRESS_1>
node2 ansible_host=<IP_ADDRESS_2>
node3 ansible_host=<IP_ADDRESS_3>
Replace <IP_ADDRESS_X> with the actual IP addresses obtained from Terraform outputs.
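Rather than editing the file by hand, the inventory can be generated from a list of addresses. A small sketch with placeholder IPs:

```shell
# Write an Ansible inventory from a space-separated list of node IPs.
# The addresses below are placeholders -- substitute your real ones.
IPS="192.168.1.101 192.168.1.102 192.168.1.103"
{
  echo "[nodes]"
  i=1
  for ip in $IPS; do
    echo "node${i} ansible_host=${ip}"
    i=$((i + 1))
  done
} > inventory
```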
Create a vars.yaml file to store variables for your Ansible playbook:
---
ansible_user: vagrant
ansible_ssh_private_key_file: ~/.ssh/id_rsa
kind_version: "v0.20.0"
kubectl_version: "v1.27.3"
Create an Ansible playbook named k8s_deploy.yaml for setting up the Kubernetes cluster:
---
- hosts: nodes
become: true
vars_files:
- vars.yaml
tasks:
# Include tasks for installing Docker, Kind, and kubectl
# (see the blog post linked at the end of this README for the full task list)
Run the Ansible playbook:
ansible-playbook -i inventory k8s_deploy.yaml --extra-vars "@vars.yaml"
This setup uses a static inventory file to store the host information from Terraform, configures Ansible to use that inventory, and prepares the files needed to run the playbook that deploys the Kubernetes cluster. The --extra-vars option loads variables from the vars.yaml file; it is strictly optional here, since the playbook already includes the same file via vars_files.
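Before running the playbook, it can help to confirm that Ansible can actually reach every node. A quick connectivity check, assuming the inventory, user, and key path from the files above:

```shell
# Ping every host in the [nodes] group over SSH via Ansible's ping module.
ansible -i inventory nodes -m ping -u vagrant --private-key ~/.ssh/id_rsa
```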
SSH into the first node:
ssh vagrant@<IP_ADDRESS_1>
Check if Docker is installed:
docker --version
Check if Kind is installed:
kind --version
Test the cluster:
kubectl get nodes
If everything is set up correctly, you should see the nodes in the cluster. Now you can start deploying your applications to the cluster.
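As a final smoke test, you could schedule a throwaway pod and wait for it to become Ready. A sketch (the pod name and nginx image are arbitrary choices):

```shell
# Launch a single nginx pod, wait for it to start, then clean up.
kubectl run smoke-test --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/smoke-test --timeout=120s
kubectl delete pod smoke-test
```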