GitOps principles define Kubernetes cluster state via code. The k8s@home community is on Discord: https://discord.gg/RGvKzVg
Apache-2.0 License
This repo is my Kubernetes cluster in a declarative state. Flux and the Helm Controller watch my clusters folder and apply changes to my cluster based on the YAML manifests. Renovate automatically updates images and Helm charts based on upstream changes.
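As a sketch of how these pieces fit together (the chart name, namespace, and version below are illustrative, not taken from this repo): Flux reconciles manifests like this HelmRelease, and Renovate opens a PR bumping the pinned chart version when upstream publishes a new release.

```yaml
# Hypothetical example; names and versions are placeholders
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 30m
  chart:
    spec:
      chart: podinfo
      version: 6.5.4   # Renovate bumps this pin when a new chart version ships
      sourceRef:
        kind: HelmRepository
        name: podinfo
```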
Feel free to join our Discord if you have any questions.
Currently using k0s by way of k0sctl. These configurations can be viewed in provision/k0s.
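For reference, a k0sctl config follows roughly this shape; the hosts, addresses, and version below are placeholders, not the contents of the actual production.yaml.

```yaml
# Illustrative sketch only; see provision/k0s/production.yaml for the real config
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: production
spec:
  hosts:
    - role: controller
      ssh:
        address: 192.168.1.10   # placeholder address
        user: ubuntu
    - role: worker
      ssh:
        address: 192.168.1.20   # placeholder address
        user: ubuntu
  k0s:
    version: 1.28.4+k0s.0       # placeholder version
```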
nix-shell --command 'k0sctl apply -f ./provision/k0s/production.yaml'
nix-shell --command 'k0sctl kubeconfig -c ./provision/k0s/production.yaml > ./hack/main'
sops -d provision/cilium/production.yaml | helm install cilium cilium/cilium -f -
Have flux installed (need to nixify this).
export GITHUB_TOKEN=ghp_Qk5eLNaaaaaaaaaaaaaaaaaaaaaaaaaaa
To bootstrap the cluster:
flux bootstrap github \
--components=source-controller,kustomize-controller,helm-controller,notification-controller \
--path=clusters/env/production \
--version=latest \
--owner=crutonjohn \
--repository=gitops
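flux bootstrap github reads the token from the GITHUB_TOKEN environment variable; a small guard like this (my own sketch, not part of the repo) catches a missing token before the bootstrap run:

```shell
# Guard to run before flux bootstrap: fail fast if GITHUB_TOKEN is missing
check_token() {
  if [ -z "${GITHUB_TOKEN:-}" ]; then
    echo "GITHUB_TOKEN is not set; export it before running flux bootstrap" >&2
    return 1
  fi
  echo "GITHUB_TOKEN present"
}
```

Call `check_token && flux bootstrap github ...` so the bootstrap only runs when the token is available.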
sops -d sops-secret.enc.yaml | kubectl apply -f -
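Encrypted manifests like sops-secret.enc.yaml are decrypted with sops at apply time. The repo's actual key setup isn't shown here, but a .sops.yaml typically pins creation rules along these lines (the age recipient below is a placeholder, and the field names are standard sops config keys):

```yaml
# Hypothetical .sops.yaml; the age public key is a placeholder
creation_rules:
  - path_regex: .*\.enc\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: age1examplepublickeyplaceholder
```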
k label nodes horus lion magnus dorn guilliman sanguinius lorgar ${FAMILY_DOMAIN}/bgp=worker
k label nodes lorgar ${FAMILY_DOMAIN}/role=nas
k label nodes dorn guilliman sanguinius ${FAMILY_DOMAIN}/rook=distributed
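The label keys above expand from a FAMILY_DOMAIN environment variable, which is assumed to be exported elsewhere; as an illustration (example.com stands in for the real domain):

```shell
# FAMILY_DOMAIN is assumed exported elsewhere; example.com is a placeholder default
FAMILY_DOMAIN="${FAMILY_DOMAIN:-example.com}"
bgp_label="${FAMILY_DOMAIN}/bgp=worker"
echo "$bgp_label"
```

The resulting key/value pair (e.g. a domain-prefixed `bgp=worker`) is what `k label nodes ...` attaches to each node.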
All of my nodes below run bare-metal Ubuntu or Debian.
| Device | Count | OS Disk Size | Data Disk Size | RAM | Purpose |
|---|---|---|---|---|---|
| HP 800 G3 Mini | 3 | 120GB SSD | N/A | 32GB | k8s Control Plane |
| Minisforum MS-01 (12600H) | 3 | 1x 1TB NVMe | 1x 2TB NVMe (rook-ceph) | 64GB | k8s Workers |
| Ryzen 3900X Custom | 1 | 2x 1TB SSD | N/A | 128GB | k8s Rook-Ceph NAS |
| Supermicro 216BE1C-R741JBOD | 1 | N/A | 24x 1TB SSD | N/A | Disk Shelf |
Cluster-wide settings can generally be viewed in settings.yaml.
A lot of inspiration for my cluster came from the people who have shared their cluster configurations with me. Thanks to all the people who donate their time to the Home Operations community. Join us on Discord!