Learn how to set up a Kubernetes cluster in 30 minutes and deploy an application inside it.
MIT License
This is not a comprehensive guide to learning Kubernetes from scratch; rather, it is a small guide/cheat sheet to quickly set up and run applications with Kubernetes, deploying a very simple application on a single workload VM. This repo can serve as a quick learning manual for understanding Kubernetes.
# In the same directory where you have downloaded Vagrantfile, run
vagrant up
vagrant ssh
This will download the Ubuntu box image and do the entire setup for you with the help of VirtualBox. It just needs VirtualBox installed.

root@vagrant:/home/vagrant# kubectl version -o json
{
"clientVersion": {
"major": "1",
"minor": "19",
"gitVersion": "v1.19.2",
"gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
"gitTreeState": "clean",
"buildDate": "2020-09-16T13:41:02Z",
"goVersion": "go1.15",
"compiler": "gc",
"platform": "linux/amd64"
},
"serverVersion": {
"major": "1",
"minor": "19",
"gitVersion": "v1.19.2",
"gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
"gitTreeState": "clean",
"buildDate": "2020-09-16T13:32:58Z",
"goVersion": "go1.15",
"compiler": "gc",
"platform": "linux/amd64"
}
}
# This will spin up Kubernetes cluster with CIDR: 10.244.0.0/16
root@vagrant:/home/vagrant# kubeadm init --pod-network-cidr=10.244.0.0/16
kubeadm join 10.0.2.15:6443 --token 3m5dsc.toup1iv7670ya7wc --discovery-token-ca-cert-hash sha256:73f4983d43f9618522eaccf014205f969e3bacd76c98dd0c
root@vagrant:/home/vagrant# mkdir -p $HOME/.kube
root@vagrant:/home/vagrant# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@vagrant:/home/vagrant# sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Remove the NoSchedule taint so Pods can be scheduled on the master node (this is a single-node cluster)
root@vagrant:/home/vagrant# kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
# If everything goes well, you will see something like this.
root@vagrant:/home/vagrant# kubectl get node
NAME STATUS ROLES AGE VERSION
vagrant Ready master 3m40s v1.19.2
Run all the commands from a root shell.
Kubernetes runs in a client-server model, similar to the way Docker runs. The Kubernetes API server exposes the Kubernetes API, and kubeadm, kubelet, and kubectl each connect to this API to get their work done. In the master-worker model, there are two entities:
- Control Plane: connects with worker nodes and schedules workloads onto them.
- Worker nodes: cluster entities that actually run the Pods.
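To see the client-server split in action, note that kubectl is just an API client; the commands below (a sketch, assuming the single-node cluster from this guide is running) hit the same kube-apiserver endpoint that kubeadm and kubelet use:

```shell
# Print the API server URL (https://10.0.2.15:6443 in this setup)
kubectl cluster-info
# Make a raw REST call to the API through kubectl's credentials
kubectl get --raw /api/v1/namespaces | head -c 200
```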
You can create a simple nginx Pod with the following YAML spec. Save this in a file named pod.yml:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
Key name | Key Description |
---|---|
apiVersion | Kubernetes server API version |
kind | Kubernetes resource type: Pod |
metadata.name | Name of the Kubernetes Pod |
spec.containers.name | Name of the container which will run in the Pod |
spec.containers.image | Name of the docker image to run |
Run this Pod spec with kubectl apply -f pod.yml
root@vagrant:/home/vagrant/kubedata# kubectl apply -f pod.yml
pod/nginx created
# If everything goes OK, you will see something like this.
root@vagrant:/home/vagrant/kubedata# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 43s
root@vagrant:/home/vagrant/kubedata#
Use kubectl get pods to get the list of all Pods.
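A few handy variations of that command (a sketch; the pod name nginx comes from the spec above):

```shell
kubectl get pods -o wide                                    # adds Pod IP and node columns
kubectl get pods -o jsonpath='{.items[*].metadata.name}'    # print just the pod names
kubectl logs nginx                                          # container logs for the nginx Pod
```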
kubectl exec -it <pod_name> -c <container_name> -- <command>
root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- whoami
root
root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- /bin/sh
# cat /etc/*-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
A Pod can run more than one container. The following spec runs nginx alongside a curl container:

apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
- name: curl
image: appropriate/curl
stdin: true
tty: true
command: ["/bin/sh"]
Save this into pod-with-two-containers.yml and apply it with kubectl apply -f pod-with-two-containers.yml
kubectl delete -f pod-with-two-containers.yml will remove the Pod mentioned in the spec file. Use -c with a container name from spec.containers.name to pick which container to exec into.
root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c curl -- /bin/sh
# curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
A Deployment, defined in a yml file, specifies how Pods will run in the cluster. For example, it can run the deployment in replicas.
Create a file (deployment-replica.yml) with the following specification:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx-app
template:
metadata:
labels:
app: nginx-app
spec:
containers:
- name: nginx
image: nginx
Notice the difference.
-- kind: Pod
++ kind: Deployment
++ spec:
++ replicas: 3
++ selector:
++ matchLabels:
++ app: nginx-app
Remove existing pods (if any) with kubectl delete pods --all, and create the deployment.
root@vagrant:/home/vagrant/kubedata# kubectl apply -f deployment-replica.yml
deployment.apps/nginx created
root@vagrant:/home/vagrant/kubedata# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/3 3 0 7s
root@vagrant:/home/vagrant/kubedata# kubectl get deployments -w
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/3 3 1 14s
nginx 2/3 3 2 20s
Get the list of all deployments: kubectl get deployments or kubectl get deploy
Get the list of all replicasets: kubectl get replicaset or kubectl get rs
root@vagrant:/home/vagrant/kubedata# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-d6ff45774-f84l8 1/1 Running 0 4m59s
nginx-d6ff45774-gzxfz 1/1 Running 0 4m59s
nginx-d6ff45774-t69mw 1/1 Running 0 4m59s
root@vagrant:/home/vagrant/kubedata# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 162m
root@vagrant:/home/vagrant/kubedata# kubectl get replicaset
NAME DESIRED CURRENT READY AGE
nginx-d6ff45774 3 3 3 162m
root@vagrant:/home/vagrant/kubedata#
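One of the main benefits of a Deployment is that the replica count can be changed on the fly. A sketch, using the nginx deployment created above:

```shell
# Scale the deployment up and watch the ReplicaSet converge to 5 replicas
kubectl scale deployment nginx --replicas=5
kubectl get rs -w      # watch DESIRED/CURRENT/READY columns change (Ctrl-C to stop)
# Scale back down
kubectl scale deployment nginx --replicas=3
```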
Print a detailed description of the selected resources, including related resources such as events or controllers: kubectl describe <resource_type> <resource_name>
Get the deployment configuration in YAML format: kubectl get deployment nginx -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx-app
template:
metadata:
labels:
app: nginx-app
spec:
containers:
- name: nginx
image: nginx
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx-app
ports:
- protocol: TCP
port: 80
targetPort: 80
Save this combined spec as nginx-service.yml. The Service maps .spec.ports.port (port 80 on the cluster IP) to the Pod port specified by .spec.ports.targetPort
root@vagrant:/home/vagrant/kubedata# kubectl apply -f nginx-service.yml
deployment.apps/nginx unchanged
service/nginx created
root@vagrant:/home/vagrant/kubedata#
Use kubectl get services to get the list of services.
root@vagrant:/home/vagrant/kubedata# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d5h
nginx ClusterIP 10.104.178.240 <none> 80/TCP 49s
The cluster IP is the IP interface of the Pod abstraction on the host. curl-ing the cluster IP will connect us to the Pod.
root@vagrant:/home/vagrant/kubedata# curl 10.104.178.240
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
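curl-ing the cluster IP works from the node itself; another way to reach the service during development is kubectl port-forward, which tunnels a local port to the service. A sketch, using the nginx service created above:

```shell
# Forward local port 8080 to port 80 of the nginx service, in the background
kubectl port-forward svc/nginx 8080:80 &
PF_PID=$!
# Request goes through the tunnel to one of the nginx Pods
curl http://127.0.0.1:8080
# Stop the forward
kill $PF_PID
```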
Use kubectl get endpoints or kubectl get ep to get the list of exposed endpoints.
root@vagrant:/home/vagrant/kubedata# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:6443 2d5h
nginx 10.244.0.10:80,10.244.0.8:80,10.244.0.9:80 2m
Since I am running 3 different replicas, we are seeing 3 different Pod IPs.

A ClusterIP service is reachable only from inside the cluster. To expose it outside, change the service type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer
selector:
app: nginx-app
ports:
- protocol: TCP
port: 80
targetPort: 80
Notice:
spec:
++ type: LoadBalancer
Save this as nginx-service-lb.yml and apply it: kubectl apply -f nginx-service-lb.yml
root@vagrant:/home/vagrant/kubedata# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d5h
nginx LoadBalancer 10.104.178.240 <pending> 80:32643/TCP 17m
Now the EXTERNAL-IP state is pending, since there is no cloud load balancer behind this bare-metal cluster :) Run netstat -nltp and notice kube-proxy listening on the allocated node port:
++ tcp 0 0 0.0.0.0:32643 0.0.0.0:* LISTEN 13095/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 7024/kubelet
++ tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 13095/kube-proxy
See the magic.
root@vagrant:/home/vagrant/kubedata# curl 0.0.0.0:32643
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The LoadBalancer service type exposed the service endpoints outside the Kubernetes cluster IP interface, so on our vagrant host we can now access it directly :) kube-proxy binds the allocated node port (32643 here) on the host/control-plane node's interface, which lets us reach the service running in the Pods (the replica-set deployment) straight from the host.
Kubernetes Cluster
+---------------------------------------------+
| POD |
| +---------+ |
| +------> NGINX | |
| | +---------+ |
| LB | |
+--------------+ | +---------------+ POD |
0.0.0.0:32643| Kube Proxy |80 | | | +---------+ |
<------------------>----------->+ SERVICE +------> NGINX | |
| | | 80| | +---------+ |
+--------------+ | +---------------+ |
HOST | | POD |
| | +---------+ |
| +------> NGINX | |
| +---------+ |
+---------------------------------------------+
A PersistentVolume (PV) is a piece of storage in the cluster. Save the following spec as pv.yml:

kind: PersistentVolume
apiVersion: v1
metadata:
name: pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
This spec specifies that the volume is at /data on the cluster's node. Apply it with kubectl apply -f pv.yml
root@vagrant:/home/vagrant/kubedata# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv 5Gi RWO Retain Available manual 62s
A PV supports three access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
Access Mode | Meaning |
---|---|
ReadWriteOnce | volume can be mounted as read-write by a single node |
ReadOnlyMany | volume can be mounted read-only by many nodes |
ReadWriteMany | volume can be mounted as read-write by many nodes |
Create a PVC spec and save it as pv-claim.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Apply it.
root@vagrant:/home/vagrant/kubedata# kubectl apply -f pv-claim.yml
persistentvolumeclaim/pv-claim created
root@vagrant:/home/vagrant/kubedata# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-claim Bound pv 5Gi RWO manual 8s
root@vagrant:/home/vagrant/kubedata# kubectl describe pvc pv-claim
Name: pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>
root@vagrant:/home/vagrant/kubedata#
Let's create a Pod which will use the PV as a volume through the PVC. Save it as nginx-pod-with-pvc.yml:
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod-with-pvc
spec:
volumes:
- name: nginx-pv-storage
persistentVolumeClaim:
claimName: pv-claim
containers:
- name: nginx-with-pv
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: nginx-pv-storage
root@vagrant:/home/vagrant/kubedata# kubectl get pods nginx-pod-with-pvc
NAME READY STATUS RESTARTS AGE
nginx-pod-with-pvc 1/1 Running 0 16s
Create an index file on the node's volume path (e.g. echo 'Hi PV' > /data/index.html) so nginx has something to serve from the mounted volume:
root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx-pod-with-pvc -c nginx-with-pv -- /bin/bash
root@nginx-pod-with-pvc:/# curl localhost
Hi PV
+--------------------------------------+
| +------------+ |
| | POD | +--------------->
| +-----+------+ | | |
| | | | |
| | +-----+------+ | v
| | | PV | | /data
| | +------+-----+ |
| +-----v------+ ^ |
| | PVC +---------+ |
| +------------+ |
| |
+--------------------------------------+
Now that we have created the specs in bits, we will combine them into one big spec to show our Infrastructure as Code, and deploy that 😄.
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data/mysql"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
name: dbserver
labels:
app: dbserver
spec:
selector:
matchLabels:
app: dbserver
template:
metadata:
labels:
app: dbserver
spec:
containers:
- image: mysql
name: mysql
imagePullPolicy: Never
env:
- name: MYSQL_ROOT_PASSWORD
value: mysecretpassword
ports:
- containerPort: 3306
name: dbserver
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pvc
Create a database named peopledb for the Spring Boot application to access: exec into the mysql pod with kubectl exec -it <mysql_pod_name> -- mysql -u root -pmysecretpassword and run CREATE DATABASE peopledb;
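The same step can be done non-interactively. A sketch, assuming the dbserver deployment above is running (the pod name is looked up by its app=dbserver label):

```shell
# Find the mysql pod created by the dbserver deployment
DB_POD=$(kubectl get pods -l app=dbserver -o jsonpath='{.items[0].metadata.name}')
# Create the database and confirm it exists
kubectl exec "$DB_POD" -- mysql -u root -pmysecretpassword -e 'CREATE DATABASE IF NOT EXISTS peopledb;'
kubectl exec "$DB_POD" -- mysql -u root -pmysecretpassword -e 'SHOW DATABASES;'
```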
apiVersion: v1
kind: Service
metadata:
name: dbservice
spec:
selector:
app: dbserver
ports:
- protocol: TCP
port: 3306
targetPort: 3306
Build the appserver image from the application's Dockerfile:
docker build -t appserver .
apiVersion: apps/v1
kind: Deployment
metadata:
name: appserver
spec:
replicas: 1
selector:
matchLabels:
app: appserver
template:
metadata:
labels:
app: appserver
spec:
containers:
- name: appserver
image: appserver
imagePullPolicy: Never
env:
- name: DB_HOST
value: dbservice
apiVersion: v1
kind: Service
metadata:
name: contacts
spec:
type: LoadBalancer
selector:
app: appserver
ports:
- protocol: TCP
port: 80
targetPort: 8080
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data/mysql"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dbserver
labels:
app: dbserver
spec:
selector:
matchLabels:
app: dbserver
template:
metadata:
labels:
app: dbserver
spec:
containers:
- image: mysql
name: mysql
imagePullPolicy: Never
env:
- name: MYSQL_ROOT_PASSWORD
value: mysecretpassword
ports:
- containerPort: 3306
name: dbserver
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
name: dbservice
spec:
selector:
app: dbserver
ports:
- protocol: TCP
port: 3306
targetPort: 3306
Quickly apply it with kubectl apply -f mysql-spec.yml 😄
apiVersion: apps/v1
kind: Deployment
metadata:
name: appserver
spec:
replicas: 1
selector:
matchLabels:
app: appserver
template:
metadata:
labels:
app: appserver
spec:
containers:
- name: appserver
image: appserver
imagePullPolicy: Never
env:
- name: DB_HOST
value: dbservice
---
apiVersion: v1
kind: Service
metadata:
name: contacts
spec:
type: LoadBalancer
selector:
app: appserver
ports:
- protocol: TCP
port: 80
targetPort: 8080
Quickly apply it with kubectl apply -f appserver-spec.yml
Namespaces are software-level cluster virtualization over the same physical k8s cluster.
root@vagrant:/home/vagrant# kubectl get ns
NAME STATUS AGE
default Active 19d
kube-node-lease Active 19d
kube-public Active 19d
kube-system Active 19d
Kubernetes starts with 4 namespaces, shown above: default, kube-node-lease, kube-public, and kube-system.
Get Pods from a specific namespace: kubectl get pods --namespace=default or kubectl get pods -n default
root@vagrant:/home/vagrant# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-g9wxg 1/1 Running 5 19d
coredns-f9fd979d6-zrdvs 1/1 Running 5 19d
etcd-vagrant 1/1 Running 5 19d
kube-apiserver-vagrant 1/1 Running 5 19d
kube-controller-manager-vagrant 1/1 Running 7 19d
kube-flannel-ds-64l2p 1/1 Running 6 19d
kube-proxy-4j4kw 1/1 Running 5 19d
kube-scheduler-vagrant 1/1 Running 7 19d
Create a new namespace with kubectl create namespace qa. To run a Pod inside it, add namespace: qa to the Pod's metadata:
apiVersion: v1
kind: Pod
metadata:
name: nginx
++ namespace: qa
spec:
containers:
- name: nginx
image: nginx
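The diff above can be written out as a complete manifest and sanity-checked before applying. A sketch (the file path /tmp/nginx-qa.yml is arbitrary):

```shell
# Write the namespaced Pod manifest to a file
cat > /tmp/nginx-qa.yml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: qa
spec:
  containers:
  - name: nginx
    image: nginx
EOF
# Sanity check that the namespace field landed in the file
grep -c 'namespace: qa' /tmp/nginx-qa.yml   # prints 1
# Then: kubectl apply -f /tmp/nginx-qa.yml && kubectl get pods -n qa
```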
List the resources that are not in any namespace: kubectl api-resources --namespaced=false
kubectl config get-contexts
root@vagrant:/home/vagrant/kubedata# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin
kubectl config set-context dev-env --cluster=kubernetes --user=new-admin --namespace=dev-env
root@vagrant:/home/vagrant/kubedata# kubectl config set-context dev-env --cluster=kubernetes --user=new-admin --namespace=dev-env
Context "dev-env" created.
root@vagrant:/home/vagrant/kubedata# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
dev-env kubernetes new-admin dev-env
* kubernetes-admin@kubernetes kubernetes kubernetes-admin
Switch to the new context: kubectl config use-context dev-env
The new context uses new-admin for authentication; this is the user created during context creation. Before switching contexts, create credentials for new-admin and bind it to a role. Create the credentials with kubectl config set-credentials new-admin --username=adm --password=changeme, then apply a ClusterRoleBinding:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: new-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: [email protected]
EOF
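Once the binding is applied, it can be verified with kubectl auth can-i, impersonating the bound user (a sketch; the subject name comes from the ClusterRoleBinding above):

```shell
# Check a specific permission as the bound user
kubectl auth can-i create pods --as [email protected]
# cluster-admin grants everything, so this should answer yes
kubectl auth can-i '*' '*' --as [email protected]
```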