This tutorial walks you through setting up Kubernetes the way the Kubernetes documentation suggests you might like to do it - using `kubeadm`. It is a continuation of Kelsey's KTHW tutorial.
This tutorial is written for people who have followed the tutorial Kubernetes The Hard Way (KTHW) by Kelsey Hightower.
As such, this tutorial is called Kubernetes the Kubernetes Way (KTKW).
Similar to KTHW, this tutorial is optimized for learning. Once again, prepare to take the long route to ensure you understand each task required to bootstrap small or large Kubernetes clusters.
If you are preparing for the Linux Foundation/CNCF CKA exam, then this tutorial will be valuable to you. The Kubernetes clusters that we will build together will look similar to those you are asked to investigate and repair in the exam.
The results of this tutorial are not how I run Kubernetes in production, nor anyone else at Stark & Wayne (a popular consultancy that helps your team stand up, run, and live with Kubernetes or Cloud Foundry). Nonetheless, please participate in raising issues or helping others resolve their issues and questions. Also, please feel welcome to submit PRs to fix grammar (this is an official British English and Oxford comma zone), regressions from newer versions of Kubernetes, or bugs you experience and resolve. Please. I care greatly for your contributions.
This KTKW tutorial differs in some noticeable ways from Kelsey's KTHW labs:

- `kubeadm init` is used to initialize the first controller instance, and `kubeadm join` to add worker nodes, and then more controller nodes.
- `--config kubeadm.yaml` configuration files are used.
- Certificates are generated by `kubeadm`.
- Pod networking uses `flannel`, rather than GCE routing.

Each of these differences is explained in more detail in the lab chapter Differences from Kubernetes The Hard Way.
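In outline, the `kubeadm` flow described above looks roughly like this. The address, token, hash, and key values below are illustrative placeholders, not the exact values the labs use; each `kubeadm join` line is printed for you by `kubeadm init`:

```shell
# On the first controller: initialize the control plane from a config file.
sudo kubeadm init --config kubeadm.yaml

# On each worker: join using the token and CA cert hash from `kubeadm init`.
sudo kubeadm join 10.240.0.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# On each additional controller: the same join, plus control-plane flags.
sudo kubeadm join 10.240.0.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>
```

The labs walk through each of these commands in detail, including how the `kubeadm.yaml` configuration files are constructed.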
So that I can more easily verify that the commands in this tutorial create working environments, especially across multiple major versions of Kubernetes, I maintain a bootstrap script. See the section below for examples of small and large clusters.
The examples below assume you have installed the `gcloud` CLI, have authenticated, have created a GCE project, and have targeted that project and a region/zone.
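A minimal version of that prerequisite setup might look like the following. The project ID, region, and zone are placeholders; substitute your own:

```shell
# Authenticate the gcloud CLI against your Google account.
gcloud auth login

# Create a project and make it the default for subsequent commands.
gcloud projects create my-ktkw-project
gcloud config set project my-ktkw-project

# Set a default region and zone for compute resources.
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-c
```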
You can provide flags to specify the number of master/controllers, workers, and the version of Kubernetes to install:
```
bootstrap-ktkw --kube 1.15.5 --masters 1 --workers 3
```
The default `bootstrap-ktkw` will deploy 1 controller, 2 workers, with the latest Kubernetes:
```
$ bootstrap-ktkw
...
$ kubectl get nodes,pods --all-namespaces
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/controller-0 Ready master 4m21s v1.16.3 10.240.0.10 35.230.85.80 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/worker-0 Ready <none> 2m v1.16.3 10.240.0.20 35.227.139.140 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/worker-1 Ready <none> 7s v1.16.3 10.240.0.21 34.82.228.203 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-5644d7b6d9-8h7kg 1/1 Running 0 4m3s 10.22.0.2 controller-0 <none> <none>
kube-system pod/coredns-5644d7b6d9-9hm5t 1/1 Running 0 4m3s 10.22.0.3 controller-0 <none> <none>
kube-system pod/etcd-controller-0 1/1 Running 0 3m11s 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-apiserver-controller-0 1/1 Running 0 3m21s 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-controller-manager-controller-0 1/1 Running 0 3m7s 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-flannel-ds-amd64-dvrs6 1/1 Running 0 7s 10.240.0.21 worker-1 <none> <none>
kube-system pod/kube-flannel-ds-amd64-n64mm 1/1 Running 0 2m 10.240.0.20 worker-0 <none> <none>
kube-system pod/kube-flannel-ds-amd64-rz7f2 1/1 Running 0 4m3s 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-proxy-kz8g4 1/1 Running 0 4m3s 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-proxy-t4mvk 1/1 Running 0 2m 10.240.0.20 worker-0 <none> <none>
kube-system pod/kube-proxy-xqpwp 1/1 Running 0 7s 10.240.0.21 worker-1 <none> <none>
```
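This step is not part of the bootstrap script, but once all the pods above are `Running`, a quick smoke test is to confirm that cluster DNS (CoreDNS) resolves service names. The pod name and image below are illustrative:

```shell
# Run a throwaway busybox pod and resolve the kubernetes service name.
# busybox:1.28 is used because its nslookup behaves well inside clusters.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup kubernetes.default
```

If DNS is healthy, the lookup returns the cluster IP of the `kubernetes` service; if it times out, start by inspecting the `coredns` and `kube-flannel` pods in `kube-system`.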
3 controllers, 3 workers:
```
$ bootstrap-ktkw --masters 3 --workers 3
...
$ kubectl get nodes,pods --all-namespaces
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/controller-0 Ready master 26m v1.16.3 10.240.0.10 35.227.139.140 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/controller-1 Ready master 23m v1.16.3 10.240.0.11 35.230.85.80 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/controller-2 Ready master 21m v1.16.3 10.240.0.12 104.196.241.199 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/worker-0 Ready <none> 19m v1.16.3 10.240.0.20 34.82.228.203 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/worker-1 Ready <none> 17m v1.16.3 10.240.0.21 34.82.91.107 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
node/worker-2 Ready <none> 15m v1.16.3 10.240.0.22 35.233.134.26 Ubuntu 18.04.3 LTS 5.0.0-1025-gcp containerd://1.2.10
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-5644d7b6d9-cmk9k 1/1 Running 0 25m 10.244.0.3 controller-0 <none> <none>
kube-system pod/coredns-5644d7b6d9-xbql2 1/1 Running 0 25m 10.244.0.2 controller-0 <none> <none>
kube-system pod/etcd-controller-0 1/1 Running 0 25m 10.240.0.10 controller-0 <none> <none>
kube-system pod/etcd-controller-1 1/1 Running 0 23m 10.240.0.11 controller-1 <none> <none>
kube-system pod/etcd-controller-2 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
kube-system pod/kube-apiserver-controller-0 1/1 Running 0 25m 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-apiserver-controller-1 1/1 Running 1 23m 10.240.0.11 controller-1 <none> <none>
kube-system pod/kube-apiserver-controller-2 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
kube-system pod/kube-controller-manager-controller-0 1/1 Running 1 24m 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-controller-manager-controller-1 1/1 Running 0 22m 10.240.0.11 controller-1 <none> <none>
kube-system pod/kube-controller-manager-controller-2 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-5xt7c 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-8qfg6 1/1 Running 0 19m 10.240.0.20 worker-0 <none> <none>
kube-system pod/kube-flannel-ds-amd64-dlccw 1/1 Running 0 15m 10.240.0.22 worker-2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-jxhst 1/1 Running 1 23m 10.240.0.11 controller-1 <none> <none>
kube-system pod/kube-flannel-ds-amd64-k5jfs 1/1 Running 0 17m 10.240.0.21 worker-1 <none> <none>
kube-system pod/kube-flannel-ds-amd64-sxvbk 1/1 Running 0 25m 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-proxy-6b2pn 1/1 Running 0 17m 10.240.0.21 worker-1 <none> <none>
kube-system pod/kube-proxy-8r6ft 1/1 Running 0 15m 10.240.0.22 worker-2 <none> <none>
kube-system pod/kube-proxy-bhn4g 1/1 Running 0 25m 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-proxy-cnmmh 1/1 Running 0 23m 10.240.0.11 controller-1 <none> <none>
kube-system pod/kube-proxy-gwwcv 1/1 Running 0 19m 10.240.0.20 worker-0 <none> <none>
kube-system pod/kube-proxy-hwzbl 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
kube-system pod/kube-scheduler-controller-0 1/1 Running 1 25m 10.240.0.10 controller-0 <none> <none>
kube-system pod/kube-scheduler-controller-1 1/1 Running 0 22m 10.240.0.11 controller-1 <none> <none>
kube-system pod/kube-scheduler-controller-2 1/1 Running 0 21m 10.240.0.12 controller-2 <none> <none>
```
To delete the instances and all GCE networking:

```
destroy-ktkw
```