Building a multi-master multi-node Kubernetes homelab with kubeadm, Ansible, Helm and Terraform.
BSD-3-CLAUSE License
A repository to keep resources and configuration files used with my Kubernetes homelab.
ansible - Ansible playbooks to deploy the Kubernetes homelab.
cka - CKA study notes.
ckad - CKAD study notes.
kubernetes - Kubernetes resources that are defined in YAML and deployed using kubectl.
kubernetes/helm - Kubernetes resources to be deployed using helm charts.
packer - configuration files to build Qemu/KVM images with Packer.
pxe - configuration files for PXE boot and Kickstart.
regcred - docker registry credentials.
terraform - configuration files to manage Kubernetes with Terraform.

The network is configured as follows:
10.11.1.0/24 - homelab network.
10.11.1.1 - gateway (Mikrotik router).
10.11.1.2 and 10.11.1.3 - DNS servers.
10.11.1.4 - managed switch; currently no special config but a couple of VLANs to separate homelab devices from the rest of the home network.
10.11.1.20 - PXE boot server.
hl.test - homelab domain (a reserved top-level DNS name .test, see RFC 2606).
10.11.1.140-10.11.1.149 - IP address range reserved for MetalLB.

Hostnames and their IP addresses:
Hostname | IP Address | Information | OS |
---|---|---|---|
mikrotik.hl.test | 10.11.1.1 | Mikrotik L009UiGS-2HaxD router | RouterOS 7 |
admin1.hl.test | 10.11.1.2 | DNS/DHCP master, NTP, SMTP, HAProxy master, Keepalived | Rocky 8 |
admin2.hl.test | 10.11.1.3 | DNS/DHCP slave, NTP, SMTP, HAProxy backup, Keepalived | Rocky 8 |
switch.hl.test | 10.11.1.4 | Netgear GS308E managed switch | V1.00.11EN |
truenas.hl.test | 10.11.1.5 | TrueNAS Core shared storage server for Kubernetes | TrueNAS Core 13 |
pi.hl.test | 10.11.1.7 | Raspberry Pi 1 Model B Pi-hole DNS ad blocker | Raspbian 12 |
mikrotik-lte.hl.test | 10.11.1.11 | Mikrotik RBwAPR-2nD with LTE antennas | RouterOS 6 |
pxe.hl.test | 10.11.1.20 | PXE boot server | Rocky 8 |
kvm1.hl.test | 10.11.1.21 | KVM hypervisor | Rocky 8 |
kvm2.hl.test | 10.11.1.22 | KVM hypervisor | Rocky 8 |
kvm3.hl.test | 10.11.1.23 | KVM hypervisor | Rocky 8 |
kubelb.hl.test | 10.11.1.30 | Virtual IP address for HAProxy/keepalived | N/A |
srv31.hl.test | 10.11.1.31 | Kubernetes control plane | Rocky 9 |
srv32.hl.test | 10.11.1.32 | Kubernetes control plane | Rocky 9 |
srv33.hl.test | 10.11.1.33 | Kubernetes control plane | Rocky 9 |
srv34.hl.test | 10.11.1.34 | Kubernetes worker node | Rocky 9 |
srv35.hl.test | 10.11.1.35 | Kubernetes worker node | Rocky 9 |
srv36.hl.test | 10.11.1.36 | Kubernetes worker node | Rocky 9 |
The Kubernetes environment runs on three KVM hypervisors. The goal is to maintain service in the event of the loss of a single host. This blog post explains how to build a multi-master Kubernetes homelab cluster by hand using KVM, PXE boot and kubeadm.
Commodity hardware is used to keep costs to a minimum.
Hostname | CPU Cores | RAM (MB) | Storage | OS | Vendor |
---|---|---|---|---|---|
mikrotik.hl.test | 1 | 128 | 128MB | RouterOS 7 | Mikrotik |
mikrotik-lte.hl.test | 1 | 64 | 16MB | RouterOS 6 | Mikrotik |
pxe.hl.test | 4 | 8192 | 120GB SSD | Rocky 8 | Dell |
kvm1.hl.test | 8 | 24576 | 240GB SSD | Rocky 8 | Dell |
kvm2.hl.test | 8 | 24576 | 240GB SSD | Rocky 8 | Dell |
kvm3.hl.test | 8 | 24576 | 240GB SSD | Rocky 8 | Dell |
truenas.hl.test | 4 | 8192 | 240GB SSD, 2x 320GB HDDs in RAID 1 for storage pool | TrueNAS Core 13 | Dell |
pi.hl.test | 1 | 512 | 8GB SD card | Raspbian 12 | Raspberry Pi 1 Model B |
Previously, provisioning of KVM guests was done using a PXE boot server (CentOS 7, Rocky 8, Rocky 9) with Kickstart templates.
I have since migrated to Packer to make the VM deployment process faster. PXE boot is still used to provision physical hosts (hypervisors).
A TrueNAS NFS server is used to create persistent volume claims using democratic-csi.
The homelab provides other services to Kubernetes that aren't covered here.
Velero is used to safely back up and restore Kubernetes cluster resources and persistent volumes.
Component | Software |
---|---|
CNI | Calico |
CRI | Containerd |
CSI | Democratic CSI |
DNS | CoreDNS |
Load Balancer | MetalLB |
Service Mesh | Istio |
SSL certificates are signed by the homelab CA.
Create your own Certificate Authority (CA) for the homelab environment. Run the following on Linux:
openssl req -newkey rsa:2048 -keyout homelab-ca.key -nodes -x509 -days 3650 -out homelab-ca.crt
DOMAIN=wildcard.apps.hl.test
openssl genrsa -out "${DOMAIN}".key 2048 && chmod 0600 "${DOMAIN}".key
openssl req -new -sha256 -key "${DOMAIN}".key -out "${DOMAIN}".csr
openssl x509 -req -in "${DOMAIN}".csr -CA homelab-ca.crt -CAkey homelab-ca.key -CAcreateserial -out "${DOMAIN}".crt -days 1825 -sha256
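To sanity-check the result, the signed certificate can be verified against the CA. The sketch below re-runs the steps above non-interactively (the `-subj` values are illustrative placeholders, not the homelab's real subject names) and then verifies the chain:

```shell
# Non-interactive re-run of the CA and wildcard certificate steps
# (the -subj values are illustrative placeholders).
openssl req -newkey rsa:2048 -keyout homelab-ca.key -nodes -x509 -days 3650 \
  -subj "/CN=Homelab CA" -out homelab-ca.crt
DOMAIN=wildcard.apps.hl.test
openssl genrsa -out "${DOMAIN}".key 2048 && chmod 0600 "${DOMAIN}".key
openssl req -new -sha256 -key "${DOMAIN}".key -subj "/CN=*.apps.hl.test" -out "${DOMAIN}".csr
openssl x509 -req -in "${DOMAIN}".csr -CA homelab-ca.crt -CAkey homelab-ca.key \
  -CAcreateserial -out "${DOMAIN}".crt -days 1825 -sha256
# Confirm the signed certificate chains back to the homelab CA.
openssl verify -CAfile homelab-ca.crt "${DOMAIN}".crt
# prints "wildcard.apps.hl.test.crt: OK"
```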
The homelab draws ~170W.
Monthly, the homelab costs (((170W * 24h) / 1000) * £0.33/kWh * 365 days) / 12 months = £40.95 (~$47).
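The same arithmetic as a one-liner, using the figures above (170W draw, £0.33/kWh):

```shell
# watts -> kWh per day -> GBP per day -> GBP per year -> GBP per month
awk 'BEGIN { printf "%.2f\n", (170 * 24 / 1000) * 0.33 * 365 / 12 }'
# prints 40.95
```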
The deployment section assumes that the homelab environment has been provisioned.
See ansible/README.md. Use this to deploy the Kubernetes cluster with Ansible.
See terraform/README.md. Use this to deploy various Kubernetes resources with Terraform.
Democratic CSI implements the container storage interface spec providing storage for Kubernetes.
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
helm upgrade --install zfs-nfs \
democratic-csi/democratic-csi \
--namespace democratic-csi \
--create-namespace \
--version "0.11.1" \
--values ./kubernetes/helm/truenas-nfs/freenas-nfs.yaml
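Once the chart is up, NFS-backed volumes can be requested through the storage class it creates. A minimal PVC sketch, assuming the storage class is named freenas-nfs-csi (the actual name depends on the values file):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-nfs
  namespace: default
spec:
  storageClassName: freenas-nfs-csi  # assumed name; check the chart values
  accessModes:
    - ReadWriteMany                  # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
```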
Update the config map kubernetes/metallb/metallb-config-map.yml
and specify the IP address range. Deploy MetalLB network load-balancer:
kubectl apply -f ./kubernetes/metallb
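For reference, a legacy (pre-0.13, ConfigMap-based) MetalLB layer 2 pool covering the 10.11.1.140-10.11.1.149 range might look like this; the pool name and namespace are the upstream defaults and may differ from the repo's config map:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.11.1.140-10.11.1.149
```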
The Istio namespace must be created manually.
kubectl create ns istio-system
The kubectl apply command may show transient errors when resources are created before the resources they depend on exist in the cluster. If that happens, simply run the command again.
kubectl apply -f ./kubernetes/istio/istio-kubernetes.yml
Install httpd-healthcheck:
kubectl apply -f ./kubernetes/httpd-healthcheck
Install Istio add-on Prometheus:
kubectl apply -f ./kubernetes/istio-addons/prometheus
Install Istio add-on Kiali:
kubectl apply -f ./kubernetes/istio-addons/kiali
kubectl apply -f ./kubernetes/monitoring-ns-istio-injection-enabled.yml
kubectl apply -f ./kubernetes/monitoring-ns-with-istio
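Based on the file names, these manifests create the monitoring namespace with Istio sidecar injection enabled. The namespace definition is presumably along these lines:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    istio-injection: enabled  # Istio injects sidecars into pods in this namespace
```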
Deploy kube-state-metrics:
kubectl apply -f ./kubernetes/kube-state-metrics
Create a secret called prometheus-cluster-name that contains the cluster name the Prometheus instance is running in:
kubectl -n monitoring create secret generic \
prometheus-cluster-name --from-literal=CLUSTER_NAME=kubernetes-homelab
Deploy prometheus:
kubectl apply -f ./kubernetes/prometheus
Deploy grafana:
kubectl apply -f ./kubernetes/grafana
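Grafana needs Prometheus as a data source; with Grafana's provisioning mechanism that is a small YAML file. A sketch, assuming Prometheus is reachable in-cluster at a service called prometheus-service on port 9090 (both are assumptions, check your manifests):

```yaml
# Grafana datasource provisioning file
# (the service name and port are assumptions)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-service.monitoring.svc:9090
    isDefault: true
```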
Alertmanager uses the Incoming Webhooks feature of Slack, therefore you need to set it up if you want to receive Slack alerts.
Update the config map kubernetes/alertmanager/alertmanager-config-map.yml
and specify your incoming webhook URL. Deploy alertmanager:
kubectl apply -f ./kubernetes/alertmanager
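The Slack side of the config map is standard Alertmanager configuration; a sketch with a placeholder webhook URL and channel:

```yaml
route:
  receiver: slack-notifications
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/XXX/XXX  # placeholder webhook URL
        channel: '#alerts'        # placeholder channel
        send_resolved: true       # also notify when an alert clears
```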
Update the secret file kubernetes/mikrotik-exporter/mikrotik-exporter-secret.yml
and specify your password for the Mikrotik API user. Deploy mikrotik-exporter:
kubectl apply -f ./kubernetes/mikrotik-exporter
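Kubernetes Secret manifests store values base64-encoded, so the password goes into the secret file encoded, e.g. (with a placeholder password):

```shell
# base64-encode a placeholder password for the Secret manifest's data field
printf '%s' 'changeme' | base64
# prints Y2hhbmdlbWU=
```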
Deploy pihole-exporter:
kubectl apply -f ./kubernetes/pihole-exporter
Deploy the x509-certificate-exporter Helm chart:
helm repo add enix https://charts.enix.io
helm install x509-certificate-exporter \
enix/x509-certificate-exporter \
--namespace monitoring \
--version "1.20.0" \
--values ./kubernetes/helm/x509-certificate-exporter/values.yml
Deploy Kubecost:
kubectl create namespace kubecost
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm upgrade --install kubecost \
kubecost/cost-analyzer \
--namespace kubecost \
--version "1.91.2" \
--values ./kubernetes/helm/kubecost/values.yaml
kubectl apply -f ./kubernetes/helm/kubecost/kubecost-service.yaml
Deploy Loki and Promtail for logging:
kubectl create namespace logging
kubectl apply -f ./kubernetes/logging/loki-pvc.yml
kubectl apply -f ./kubernetes/logging/loki-deployment.yml
kubectl apply -f ./kubernetes/logging/promtail-deployment.yml