VM setup
Install Multipass on Linux
Ref: https://multipass.run/install
https://multipass.run/docs/configure-multipass-storage
https://microk8s.io/docs/install-multipass
multipass launch --name foo
multipass launch --name k8smaster-vm --cpus 4 --disk 20G --memory 4G
multipass list
multipass stop foo bar
multipass start foo
multipass shell foo
multipass delete bar
multipass stop foo
multipass set local.foo.cpus=4
multipass set local.foo.disk=20G
multipass set local.foo.memory=4G
multipass get local.foo.cpus
multipass get local.foo.disk
multipass get local.foo.memory
How to Install Kubernetes Cluster on Ubuntu 22.04
https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/
Prerequisites
- Minimal install Ubuntu 22.04
- Minimum 2GB RAM or more
- Minimum 2 CPU cores / or 2 vCPU
- 20 GB free disk space on /var or more
- Sudo user with admin rights
- Internet connectivity on each node
Lab Setup
- Master Node: 10.21.172.44 – k8smaster
- First Worker Node: 10.21.172.143 – node1
1) Set hostname on Each Node
- Login to the master node and set the hostname using the hostnamectl command
$ multipass shell foo
$ sudo hostnamectl set-hostname "k8smaster"
$ exec bash
$ sudo hostnamectl set-hostname "node1" // 1st worker node
$ exec bash
Add the following entries to the /etc/hosts file on each node
10.21.172.44 k8smaster
10.21.172.143 node1
2) Disable Swap and Set Kernel Parameters
Run the following swapoff and sed commands to disable swap. Make sure to run them on all the nodes.
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
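To confirm swap is really off (a quick optional sanity check; swapon prints nothing when no swap is active):
$ free -h          # the Swap line should show 0B total
$ swapon --show    # no output means no active swap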
Load the following kernel modules on all the nodes
$ sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
Set the following kernel parameters for Kubernetes by running the tee command below
$ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Reload the above changes, run
$ sudo sysctl --system
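To verify that the modules are loaded and the sysctl settings took effect (optional check):
$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables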
3) Install Containerd Runtime
In this guide, we are using containerd runtime for our Kubernetes cluster. So, to install containerd, first install its dependencies.
$ sudo apt-get update
$ sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Enable docker repository
$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Now, run following apt command to install containerd
$ sudo apt update
$ sudo apt install -y containerd.io
Configure containerd so that it uses systemd as the cgroup driver.
$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
$ sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
Restart and enable containerd service
$ sudo systemctl restart containerd
$ sudo systemctl enable containerd
4) Add Apt Repository for Kubernetes
Kubernetes packages are not available in the default Ubuntu 22.04 repositories, so add the Kubernetes apt repository by running the following commands.
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/kubernetes-xenial.gpg
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Note: At the time of writing, xenial is the latest Kubernetes apt repository; once a repository for Ubuntu 22.04 (Jammy Jellyfish) becomes available, replace 'xenial' with 'jammy' in the apt-add-repository command.
5) Install Kubectl, Kubeadm and Kubelet
After adding the repository, install the Kubernetes components kubectl, kubelet and kubeadm on all the nodes. Execute the following set of commands:
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
6) Initialize Kubernetes Cluster with Kubeadm
Now we are all set to initialize the Kubernetes cluster. Run the following kubeadm command on the master node only.
$ sudo kubeadm init
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.101.165.236:6443 --token ydq2ph.rwkw543ongql5ubu \
--discovery-token-ca-cert-hash sha256:71cd1d99e7421976af20f43d981d7629a8db0f8096f8395f5a3380342187df18
Get cluster and node info
$ kubectl cluster-info
Kubernetes control plane is running at https://10.21.172.44:6443
CoreDNS is running at https://10.21.172.44:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready control-plane 10h v1.28.2
node1 Ready <none> 5h24m v1.28.2
7) Join Worker Nodes to the Cluster
On each worker node, run the kubeadm join command that was printed when the control plane was initialized in step 6. It should look something like this:
$ kubeadm join 10.21.172.44:6443 --token 0p4mgk.rxn2a6m4c7t1atdz \
--discovery-token-ca-cert-hash sha256:57fc4625deadc0085f6160997419d7f95e8fd5265e857e5d45ac970c9be58f63
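If the join command was not saved or the token has expired (tokens are valid for 24 hours by default), a new join command can be printed on the master node:
$ sudo kubeadm token create --print-join-command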
8) Install Calico Network Plugin
A network plugin is required to enable communication between pods in the cluster. Run the following kubectl command on the master node to install the Calico network plugin.
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
Verify the status of pods in kube-system namespace
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-jmxr4 1/1 Running 0 5h29m
calico-node-2txqx 1/1 Running 0 5h29m
calico-node-zkszp 1/1 Running 0 5h29m
coredns-5dd5756b68-k5s68 1/1 Running 0 10h
coredns-5dd5756b68-wd958 1/1 Running 0 10h
etcd-k8smaster 1/1 Running 1 (6h3m ago) 10h
kube-apiserver-k8smaster 1/1 Running 1 (6h3m ago) 10h
kube-controller-manager-k8smaster 1/1 Running 4 (45m ago) 10h
kube-proxy-ksh8j 1/1 Running 1 (6h3m ago) 10h
kube-proxy-p6kns 1/1 Running 0 5h30m
kube-scheduler-k8smaster 1/1 Running 4 (45m ago) 10h
9) Test Your Kubernetes Cluster Installation
To test the Kubernetes installation, deploy an nginx-based application and access it.
$ kubectl create deployment nginx-app --image=nginx --replicas=2
Check the status of nginx-app deployment
$ kubectl get deployment nginx-app
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-app 2/2 2 2 68s
Expose the deployment as NodePort,
$ kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed
Run following commands to view service status
$ kubectl get svc nginx-app
$ kubectl describe svc nginx-app
Name: nginx-app
Namespace: default
Labels: app=nginx-app
Annotations: <none>
Selector: app=nginx-app
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.110.182.144
IPs: 10.110.182.144
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30309/TCP
Endpoints: 192.168.166.130:80,192.168.166.131:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Delete a service or deployment or pod
$ kubectl delete services hello-world
$ kubectl delete deployment hello-world
$ kubectl delete pod hello-world
$ kubectl delete -f pod6.yml
Use the following curl command (worker node IP plus NodePort) to access the nginx application
$ curl http://10.21.172.143:30309/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Create a file named “pod5.yml”
kind: Pod
apiVersion: v1
metadata:
  name: delhipod
  labels:
    env: development
    class: pods
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5 ; done"]
Create the pod by applying the manifest
kubectl apply -f pod5.yml
Get labels of pods or nodes
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
delhipod 1/1 Terminating 0 10h class=pods,env=development
$ kubectl get nodes --show-labels
Get full details of pods
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
delhipod 1/1 Running 0 11m 192.168.166.136 node1 <none> <none>
Add label at runtime
$ kubectl label pods delhipod myname=soundarya
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
delhipod 1/1 Terminating 0 10h class=pods,env=development,myname=soundarya
Search pod by label
$ kubectl get pods -l myname=soundarya
NAME READY STATUS RESTARTS AGE
delhipod 1/1 Terminating 0 22h
Search pods with a set-based selector
kubectl get pods -l 'env in (development)'
NAME READY STATUS RESTARTS AGE
delhipod 1/1 Running 0 16m
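A few more selector queries that can be useful (equality-based and set-based forms; the label values assume the pods created above):
$ kubectl get pods -l env=development
$ kubectl get pods -l 'env notin (production)'
$ kubectl get pods -l 'class in (pods),env=development'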
ReplicationController
kind: ReplicationController
apiVersion: v1
metadata:
  name: myreplica
spec:
  replicas: 5
  selector:
    myname: Bhupinder
  template:
    metadata:
      name: testpod6
      labels:
        myname: Bhupinder
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5 ; done"]
Get ReplicationControllers
$ kubectl get rc
NAME DESIRED CURRENT READY AGE
myreplica 4 4 2 6h49m
Describe the ReplicationController
$ kubectl describe rc myreplica
Name: myreplica
Namespace: default
Selector: myname=Bhupinder
Labels: myname=Bhupinder
Annotations: <none>
Replicas: 4 current / 4 desired
Pods Status: 2 Running / 2 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: myname=Bhupinder
Containers:
c00:
Image: ubuntu
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
while true; do echo Hello-Bhupinder; sleep 5 ; done
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m58s replication-controller Created pod: myreplica-r42qd
Normal SuccessfulCreate 2m58s replication-controller Created pod: myreplica-5k4lb
Scale the ReplicationController
$ kubectl scale --replicas=3 rc -l myname=Bhupinder
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
delhipod 1/1 Running 0 9h
myreplica-5k4lb 0/1 ImagePullBackOff 0 6m9s
myreplica-wbfpf 1/1 Running 0 6h53m
myreplica-wrs9n 1/1 Running 0 6h53m
ReplicaSet
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: myrs
spec:
  replicas: 2
  selector:
    matchExpressions:
      - {key: myname, operator: In, values: [Bhupinder, Bupinder, Bhopendra]}
      - {key: env, operator: NotIn, values: [production]}
  template:
    metadata:
      name: testpod7
      labels:
        myname: Bhupinder
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Technical-Guftgu; sleep 5 ; done"]
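To create and scale the ReplicaSet (the file name myrs.yml is an assumption; use whatever name the manifest above was saved as):
$ kubectl apply -f myrs.yml
$ kubectl get rs myrs
$ kubectl scale --replicas=4 rs/myrs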
Delete replicaset
$ kubectl delete rs/myrs
replicaset.apps "myrs" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
delhipod 1/1 Running 1 (13m ago) 27h
myrs-m5wzd 1/1 Terminating 1 (13m ago) 17h
myrs-mmt85 1/1 Terminating 1 (13m ago) 17h
myrs-mxks4 1/1 Terminating 1 (13m ago) 17h
myrs-qfcg2 1/1 Terminating 0 47s
myrs-sk4qk 1/1 Terminating 1 (13m ago) 17h
Jobs
A Job runs a pod to completion: the pod runs once and is not restarted after it terminates successfully, unlike a normal long-running pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  template:
    metadata:
      name: testjob
    spec:
      containers:
        - name: counter
          image: centos:7
          command: ["/bin/bash", "-c", "echo Technical-Guftgu; sleep 5"]
      restartPolicy: Never
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
delhipod 0/1 Error 1 30h
testjob-n7gjh 0/1 Completed 0 131m
Jobs in parallel
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob1
spec:
  parallelism: 5
  activeDeadlineSeconds: 10
  template:
    metadata:
      name: testjob1
    spec:
      containers:
        - name: counter
          image: centos:7
          command: ["/bin/bash", "-c", "echo Technical-Guftgu; sleep 20"]
      restartPolicy: Never
$ kubectl apply -f myjobparallel.yml
job.batch/testjob1 created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
delhipod 0/1 Error 1 30h
testjob-n7gjh 0/1 Completed 0 144m
testjob1-6vdnw 1/1 Running 0 3s
testjob1-76gzl 1/1 Running 0 3s
testjob1-s5ttc 1/1 Running 0 3s
testjob1-vxn66 1/1 Running 0 3s
testjob1-w9ssf 1/1 Running 0 3s
Watch the changing status of the pods
$ watch kubectl get pods
Cron Jobs
Jobs that run on a schedule. The example below runs every minute.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: bhupi
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - image: ubuntu
              name: bhupi
              command: ["/bin/bash", "-c", "echo Technical-Guftgu; sleep 5"]
          restartPolicy: Never
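After applying the manifest, the CronJob and the Jobs it spawns can be inspected like this (the file name mycronjob.yml is an assumption):
$ kubectl apply -f mycronjob.yml
$ kubectl get cronjob bhupi
$ kubectl get jobs --watch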
The main container is only started after the init container has run to completion.
apiVersion: v1
kind: Pod
metadata:
  name: initcontainer
spec:
  initContainers:
    - name: c1
      image: centos
      command: ["/bin/sh", "-c", "echo LIKE AND SUBSCRIBE TECHNICAL GUFTGU > /tmp/xchange/testfile; sleep 30"]
      volumeMounts:
        - name: xchange
          mountPath: "/tmp/xchange"
  containers:
    - name: c2
      image: centos
      command: ["/bin/bash", "-c", "while true; do echo `cat /tmp/data/testfile`; sleep 5; done"]
      volumeMounts:
        - name: xchange
          mountPath: "/tmp/data"
  volumes:
    - name: xchange
      emptyDir: {}
Check logs
$ kubectl logs -f pod/initcontainer
Defaulted container "c2" out of: c2, c1 (init)
LIKE AND SUBSCRIBE TECHNICAL GUFTGU
LIKE AND SUBSCRIBE TECHNICAL GUFTGU
LIKE AND SUBSCRIBE TECHNICAL GUFTGU
Check the status of the pod's conditions
$ kubectl describe pod/initcontainer | grep -A 5 Conditions
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Pod Life Cycle
Lifecycle phases of a pod (check the current phase with the command shown after this list)
- Pending – the pod has been accepted by Kubernetes, but its containers are not running yet
- Running – the pod has been bound to a node, all containers are created and at least one container is running
- Succeeded / Completed – all containers in the pod have terminated successfully and will not be restarted
- Failed – all containers have terminated and at least one of them terminated in failure
- Unknown – the state of the pod could not be obtained, typically because of a network error or the node being unreachable from the control plane
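To check which phase a pod is currently in (delhipod is the pod created earlier):
$ kubectl get pod delhipod -o jsonpath='{.status.phase}'
$ kubectl get pod delhipod -o wide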
Deployment
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 2
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Technical-Guftgu; sleep 5; done"]
Check status of deployments
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
mydeployments 2/2 2 2 72s
Describe deployments
$ kubectl describe deploy mydeployments
Name: mydeployments
Namespace: default
CreationTimestamp: Tue, 26 Sep 2023 18:43:28 +0530
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: name=deployment
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: name=deployment
Containers:
c00:
Image: ubuntu
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
while true; do echo Technical-Guftgu; sleep 5; done
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: mydeployments-64c9b975b7 (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m25s deployment-controller Scaled up replica set mydeployments-64c9b975b7 to 2
Get replicaset information
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
mydeployments-64c9b975b7 2 2 2 3m41s
Scale the deployment's replicas up or down
$ kubectl scale --replicas=1 deploy mydeployments
deployment.apps/mydeployments scaled
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
mydeployments-64c9b975b7 1 1 1 4m57s
Status of deployment
$ kubectl rollout status deploy mydeployments
deployment "mydeployments" successfully rolled out
History of versions
$ kubectl rollout history deploy mydeployments
deployment.apps/mydeployments
REVISION CHANGE-CAUSE
1 <none>
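CHANGE-CAUSE shows <none> because no change cause was recorded. One way to record it is the kubernetes.io/change-cause annotation, which kubectl rollout history then displays (a sketch; the message text is arbitrary):
$ kubectl annotate deployment/mydeployments kubernetes.io/change-cause="initial ubuntu image"
$ kubectl rollout history deploy mydeployments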
Update the manifest file and apply it again. Here the image and the echoed message are changed in the yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 2
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: centos
          command: ["/bin/bash", "-c", "while true; do echo love you; sleep 5; done"]
$ kubectl apply -f deployment.yml
deployment.apps/mydeployments configured
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
mydeployments 2/2 1 2 28m
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
mydeployments-64c9b975b7 0 0 0 29m
mydeployments-6bcf54f76f 2 2 2 24s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
delhipod 0/1 Error 1 2d5h
mydeployments-6bcf54f76f-j9hfs 1/1 Running 0 68s
mydeployments-6bcf54f76f-rkvj6 1/1 Running 0 50s
$ kubectl logs -f mydeployments-6bcf54f76f-j9hfs
love you
love you
$ kubectl exec mydeployments-6bcf54f76f-j9hfs -- cat /etc/os-release
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
History after changes in the yml file
$ kubectl rollout history deploy mydeployments
deployment.apps/mydeployments
REVISION CHANGE-CAUSE
1 <none>
2 <none>
Undo to the previous deployment revision
$ kubectl rollout undo deploy/mydeployments
deployment.apps/mydeployments rolled back
$ kubectl get pods | grep Running
mydeployments-64c9b975b7-bjn27 1/1 Running 0 3h21m
mydeployments-64c9b975b7-zlb5f 1/1 Running 0 3h20m
$ kubectl exec mydeployments-64c9b975b7-bjn27 -- cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ kubectl logs -f mydeployments-64c9b975b7-bjn27
Technical-Guftgu
Technical-Guftgu
Network call between two containers in the same Pod on the same node
kind: Pod
apiVersion: v1
metadata:
  name: testpod
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5 ; done"]
    - name: c01
      image: httpd
      ports:
        - containerPort: 80
Create the pod
$ kubectl apply -f pod1.yml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod3 2/2 Running 0 4m6s
Call one container from the other using localhost and the container port
$ kubectl exec testpod -it -c c00 -- /bin/bash
root@testpod:/
root@testpod:/ apt update && apt install curl
root@testpod3:/ curl localhost:80
<html><body><h1>It works!</h1></body></html>
Network call between two containers in different Pods on the same node
kind: Pod
apiVersion: v1
metadata:
  name: testpod1
spec:
  containers:
    - name: c01
      image: nginx
      ports:
        - containerPort: 80
kind: Pod
apiVersion: v1
metadata:
  name: testpod4
spec:
  containers:
    - name: c03
      image: httpd
      ports:
        - containerPort: 80
$ kubectl apply -f pod2.yml
pod/testpod1 created
$ kubectl apply -f pod3.yml
pod/testpod4 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod1 1/1 Running 0 5s 192.168.166.153 node1 <none> <none>
testpod4 1/1 Running 0 14m 192.168.166.150 node1 <none> <none>
$ curl 192.168.166.153:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>
$ curl 192.168.166.150:80
<html><body><h1>It works!</h1></body></html>
ClusterIP
Network call between containers in different Pods, possibly on different nodes.
These calls work only from within the cluster.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 1
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod1
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: httpd
          ports:
            - containerPort: 80
kind: Service
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: deployment
  type: ClusterIP
Apply the deployment and the service
$ kubectl apply -f deployhttpd.yml
$ kubectl apply -f service.yml
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mydeployments-685f595fb8-vgd52 1/1 Running 0 37s 192.168.166.138 node1 <none> <none>
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoservice ClusterIP 10.111.72.42 <none> 80/TCP 8s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d2h
$ curl 10.111.72.42:80
<html><body><h1>It works!</h1></body></html>
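Inside the cluster the service is also reachable by its DNS name through CoreDNS, not only by the ClusterIP. A quick test from a temporary pod (the busybox image tag is an assumption):
$ kubectl run tmp --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://demoservice.default.svc.cluster.local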
NodePort
Expose nginx outside the cluster, i.e. make the service accessible from outside the cluster.
A NodePort service exposes the service on the same port on every node in the cluster (using NAT); the port is allocated from the 30000-32767 range unless pinned explicitly (see the example at the end of this section).
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      name: deployment-nginx
  template:
    metadata:
      name: testpod1
      labels:
        name: deployment-nginx
    spec:
      containers:
        - name: c00
          image: nginx
          ports:
            - containerPort: 80
kind: Service
apiVersion: v1
metadata:
  name: nginxservice
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: deployment-nginx
  type: NodePort
$ kubectl apply -f deploynginx.yml
deployment.apps/mydeployments-nginx created
$ kubectl apply -f svcnginx.yml
service/nginxservice created
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
demoservice NodePort 10.110.10.157 <none> 80:32245/TCP 104m name=deployment
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d2h <none>
nginxservice NodePort 10.100.147.56 <none> 80:30775/TCP 27s name=deployment-nginx
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8smaster Ready control-plane 7d2h v1.28.2 10.21.172.44 <none> Ubuntu 22.04.3 LTS 5.15.0-84-generic containerd://1.6.24
node1 Ready <none> 6d21h v1.28.2 10.21.172.143 <none> Ubuntu 22.04.3 LTS 5.15.0-84-generic containerd://1.6.24
Call the service using any node's IP (master or worker) and the NodePort
NGINX service
curl http://10.21.172.143:30775
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
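The node port above (30775) was auto-assigned. If a fixed port is preferred for the lab, it can be pinned in the Service spec (a sketch; 30080 is an arbitrary choice within the 30000-32767 range):
kind: Service
apiVersion: v1
metadata:
  name: nginxservice
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    name: deployment-nginx
  type: NodePort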
Volumes
emptyDir: the volume lives with the Pod and is used to share data between containers within the same Pod.
apiVersion: v1
kind: Pod
metadata:
  name: myvolemptydir
spec:
  containers:
    - name: c1
      image: centos
      command: ["/bin/bash", "-c", "sleep 15000"]
      volumeMounts:
        - name: xchange
          mountPath: "/tmp/xchange"
    - name: c2
      image: centos
      command: ["/bin/bash", "-c", "sleep 10000"]
      volumeMounts:
        - name: xchange
          mountPath: "/tmp/data"
  volumes:
    - name: xchange
      emptyDir: {}
kubectl apply -f myjob.yml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myvolemptydir 2/2 Running 0 6m37s
$ kubectl exec myvolemptydir -c c1 -it -- /bin/bash
[root@myvolemptydir /]# cd /tmp/xchange
[root@myvolemptydir xchange]# ls
a1 ab1.txt
$ kubectl exec myvolemptydir -c c2 -it -- /bin/bash
[root@myvolemptydir /]# cd /tmp/data
[root@myvolemptydir data]# ls
a1 ab1.txt
hostPath: the volume is mounted from the host (the node running the pod)
apiVersion: v1
kind: Pod
metadata:
  name: myvolhostpath
spec:
  containers:
    - image: centos
      name: testc
      command: ["/bin/bash", "-c", "sleep 15000"]
      volumeMounts:
        - mountPath: /tmp/hostpath
          name: testvolume
  volumes:
    - name: testvolume
      hostPath:
        path: /tmp/data
kubectl apply -f hostpath.yml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myvolhostpath 1/1 Running 0 4m26s
$ kubectl exec myvolhostpath -- ls /tmp/
hostpath
ks-script-4luisyla
ks-script-o23i7rc2
ks-script-x6ei4wuu
Check the data folder created on the host; in this case the host is node1, not the master
ubuntu@node1:~$ cd /tmp/
ubuntu@node1:/tmp$ ls
data
snap-private-tmp
systemd-private-9b930295dc42473f9a6c7b7ec60205cc-systemd-logind.service-4UOSb2
systemd-private-9b930295dc42473f9a6c7b7ec60205cc-systemd-resolved.service-6Ucg4d
systemd-private-9b930295dc42473f9a6c7b7ec60205cc-systemd-timesyncd.service-rStvZL
Persistent Volume
Persistent Volumes decouple storage from the pod lifecycle: the data is kept on a separate volume rather than inside the pod. Here OpenEBS Local PV (hostpath) dynamically provisions the volume for a PersistentVolumeClaim. https://openebs.io/docs/user-guides/localpv-hostpath
Install OpenEBS
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
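Before creating the PVC, confirm that the OpenEBS components are running and that the openebs-hostpath StorageClass exists:
$ kubectl get pods -n openebs
$ kubectl get sc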
Create a configuration file for PersistentVolumeClaim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
Create the PersistentVolumeClaim
$ kubectl apply -f local-hostpath-pvc.yaml
persistentvolumeclaim/local-hostpath-pvc created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-053537cb-ad82-4a23-a522-bab3c9fcccd4 5G RWO Delete Bound default/local-hostpath-pvc openebs-hostpath 17m
$ kubectl get pvc local-hostpath-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-hostpath-pvc Pending openebs-hostpath 4s
Create a configuration file for a Deployment that uses the PV (persistent volume) via the PVC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-local-hostpath-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: local-pvc
  template:
    metadata:
      name: testpod2
      labels:
        name: local-pvc
    spec:
      containers:
        - name: hello-container
          image: busybox
          command:
            - sh
            - -c
            - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
          volumeMounts:
            - mountPath: /mnt/store
              name: local-storage
      volumes:
        - name: local-storage
          persistentVolumeClaim:
            claimName: local-hostpath-pvc
Create the Deployment
$ kubectl apply -f local-hostpath-pod1.yaml
deployment.apps/hello-local-hostpath-deploy created
Verify that the container in the Pod is running
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-local-hostpath-deploy-57bc4fd889-kx227 1/1 Running 0 67s
$ kubectl exec hello-local-hostpath-deploy-57bc4fd889-kx227 -- cat /mnt/store/greet.txt
Sun Oct 1 16:01:50 UTC 2023 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
$ kubectl describe pod hello-local-hostpath-deploy-57bc4fd889-sqhdv
Name: hello-local-hostpath-deploy-57bc4fd889-sqhdv
Namespace: default
Priority: 0
Service Account: default
Node: node1/10.21.172.143
Start Time: Sun, 01 Oct 2023 22:37:56 +0530
Details about the dynamically provisioned Local PersistentVolume
$ kubectl get pvc local-hostpath-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-hostpath-pvc Bound pvc-053537cb-ad82-4a23-a522-bab3c9fcccd4 5G RWO openebs-hostpath 2m15s
Look at the PersistentVolume details to see where the data is stored
$ kubectl get pv pvc-053537cb-ad82-4a23-a522-bab3c9fcccd4 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openebs.io/local
  creationTimestamp: "2023-10-01T16:01:42Z"
  finalizers:
    - kubernetes.io/pv-protection
  labels:
    openebs.io/cas-type: local-hostpath
  name: pvc-053537cb-ad82-4a23-a522-bab3c9fcccd4
  resourceVersion: "435983"
  uid: 60882a22-60f0-4e7d-a339-3c33a9fe6828
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5G
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: local-hostpath-pvc
    namespace: default
    resourceVersion: "435916"
    uid: 053537cb-ad82-4a23-a522-bab3c9fcccd4
  local:
    fsType: ""
    path: /var/openebs/local/pvc-053537cb-ad82-4a23-a522-bab3c9fcccd4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-hostpath
  volumeMode: Filesystem
status:
  phase: Bound
Available pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-local-hostpath-deploy-57bc4fd889-sqhdv 1/1 Running 0 48s
myvolhostpath 1/1 Running 2 (94m ago) 46h
Check the content via the mount point
$ kubectl exec hello-local-hostpath-deploy-57bc4fd889-sqhdv -- cat /mnt/store/greet.txt
Sun Oct 1 16:39:00 UTC 2023 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sun Oct 1 16:44:00 UTC 2023 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sun Oct 1 16:49:02 UTC 2023 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sun Oct 1 17:06:40 UTC 2023 [hello-local-hostpath-deploy-57bc4fd889-kx227] Hello from OpenEBS Local PV.
Sun Oct 1 17:07:59 UTC 2023 [hello-local-hostpath-deploy-57bc4fd889-sqhdv] Hello from OpenEBS Local PV.
Check the content written to the persistent volume; in this case the path is on node1
ubuntu@node1:/var/openebs$ cd local/
ubuntu@node1:/var/openebs/local$ ls
pvc-750a755b-9bd9-47bf-8caf-f8907f741354
ubuntu@node1:/var/openebs/local$ cd pvc-750a755b-9bd9-47bf-8caf-f8907f741354/
ubuntu@node1:/var/openebs/local/pvc-750a755b-9bd9-47bf-8caf-f8907f741354$ ls
greet.txt
ubuntu@node1:/var/openebs/local/pvc-750a755b-9bd9-47bf-8caf-f8907f741354$ cat greet.txt
Sun Oct 1 16:39:00 UTC 2023 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Liveness Probe
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: mylivenessprobe
spec:
  containers:
    - name: liveness
      image: ubuntu
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 1000
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 30
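To see the probe in action, remove the file it checks; after a few failed probes the kubelet restarts the container and the RESTARTS counter increases (a sketch, assuming the pod above is running; the manifest file name is an assumption):
$ kubectl apply -f livenessprobe.yml
$ kubectl exec mylivenessprobe -- rm /tmp/healthy
$ kubectl get pod mylivenessprobe --watch
$ kubectl describe pod mylivenessprobe | grep -A 5 Events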
Namespaces
Create a new namespace
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
$ kubectl apply -f devns.yml
namespace/dev created
$ kubectl get ns
NAME STATUS AGE
default Active 10d
dev Active 2s
kube-node-lease Active 10d
kube-public Active 10d
kube-system Active 10d
openebs Active 37h
Configuration file for new pod
kind: Pod
apiVersion: v1
metadata:
  name: testpod
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Technical Guftgu; sleep 5 ; done"]
  restartPolicy: Never
Create Pod in the namespace dev
$ kubectl apply -f devpod.yml -n dev
pod/testpod created
$ kubectl get pods
No resources found in default namespace.
$ kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 7s
Delete pod of the namespace dev
$ kubectl delete -f devpod.yml -n dev
pod "testpod" delete
Set dev namespace as default namespace
$ kubectl config set-context $(kubectl config current-context) --namespace=dev
Context "kubernetes-admin@kubernetes" modified.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 24s
Find current namespace
$ kubectl config view | grep namespace
namespace: dev
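To switch the current context back to the default namespace:
$ kubectl config set-context --current --namespace=default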
Manage Resource Quota
apiVersion: v1
kind: Pod
metadata:
  name: resources
spec:
  containers:
    - name: resource
      image: centos
      command: ["/bin/bash", "-c", "while true; do echo Technical-Guftgu; sleep 5 ; done"]
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "128Mi"
          cpu: "200m"
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
spec:
  hard:
    limits.cpu: "400m"
    limits.memory: "400Mi"
    requests.cpu: "200m"
    requests.memory: "200Mi"
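A ResourceQuota applies per namespace, so it is created in a namespace (dev here) and can then be inspected there (the file name myquota.yml is an assumption):
$ kubectl apply -f myquota.yml -n dev
$ kubectl describe resourcequota myquota -n dev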
kind: Deployment
apiVersion: apps/v1
metadata:
  name: resource-quota-deployments
spec:
  replicas: 3
  selector:
    matchLabels:
      objtype: deployment
  template:
    metadata:
      name: testpod8
      labels:
        objtype: deployment
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Technical-Guftgu; sleep 5 ; done"]
          resources:
            requests:
              cpu: "200m"
Set the default value for CPU Limit and CPU Request for a pod
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - default:
        cpu: 1
      defaultRequest:
        cpu: 0.5
      type: Container
Request undefined, limit defined: the request defaults to the value of the limit.
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-2
spec:
  containers:
    - name: default-cpu-demo-2-ctr
      image: nginx
      resources:
        limits:
          cpu: "1"
Limits:
  cpu: 1
Requests:
  cpu: 1
Request defined, limit undefined: the limit comes from the LimitRange default, while the request keeps its specified value.
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-3
spec:
  containers:
    - name: default-cpu-demo-3-ctr
      image: nginx
      resources:
        requests:
          cpu: "0.75"
Limits:
  cpu: 1
Requests:
  cpu: 750m
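For completeness (not captured from the cluster, just the expected behaviour of the cpu-limit-range object above): if a container defines neither a request nor a limit, both LimitRange defaults apply.
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo
spec:
  containers:
    - name: default-cpu-demo-ctr
      image: nginx
Expected result:
Limits:
  cpu: 1
Requests:
  cpu: 500m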
Horizontal Pod Autoscaler (HPA)
The HPA needs the metrics-server, so download its manifest first:
wget -O metricserver.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
In metricserver.yml, add the following argument to the metrics-server container args (the kubelets in this lab use self-signed certificates):
- --kubelet-insecure-tls
$ kubectl apply -f metricserver.yml
serviceaccount/metrics-server created
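Once the metrics-server pod in kube-system is running, resource metrics should become available; this is what the HPA relies on:
$ kubectl get pods -n kube-system | grep metrics-server
$ kubectl top nodes
$ kubectl top pods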
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod8
      labels:
        name: deployment
    spec:
      containers:
        - name: c00
          image: httpd
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
$ kubectl autoscale deployment mydeploy --cpu-percent=20 --min=1 --max=10
Exec into a pod and run apt update so that CPU usage increases
ubuntu@k8smaster:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mydeploy-5c49c88d9f-68zls 1/1 Running 0 2m18s
mydeploy-5c49c88d9f-6g6m6 1/1 Running 0 3m39s
ubuntu@k8smaster:~$ kubectl exec mydeploy-5c49c88d9f-6g6m6 -it -- /bin/bash
root@mydeploy-5c49c88d9f-6g6m6:/usr/local/apache2
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mydeploy-5c49c88d9f-4l8nl 1/1 Running 0 8m23s
pod/mydeploy-5c49c88d9f-cbqp2 1/1 Running 0 3m19s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mydeploy 5/5 5 5 8m23s
NAME DESIRED CURRENT READY AGE
replicaset.apps/mydeploy-5c49c88d9f 5 5 5 8m23s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/mydeploy Deployment/mydeploy 0%/20% 1 10 5 7m20s
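The scaling activity can also be watched directly on the HPA object while load is applied or removed:
$ kubectl get hpa mydeploy --watch
$ kubectl describe hpa mydeploy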
Helm and Helm Chart
https://www.youtube.com/watch?v=FpJpHWe3Va4
To download helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
$ helm version
version.BuildInfo{Version:"v3.13.0", GitCommit:"825e86f6a7a38cef1112bfa606e4127a706749b1", GitTreeState:"clean", GoVersion:"go1.20.8"}
Helm repo list
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$ helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
$ helm repo list
NAME URL
stable https://charts.helm.sh/stable
$ helm search repo jenkins
NAME CHART VERSION APP VERSION DESCRIPTION
stable/jenkins 2.5.4 lts DEPRECATED - Open source continuous integration...
$ helm repo remove stable
"stable" has been removed from your repositories
Create helm chart
ubuntu@k8smaster:~/helm_files$ helm create helloworld
Creating helloworld
ubuntu@k8smaster:~/helm_files$ ls
helloworld
ubuntu@k8smaster:~/helm_files$ cd helloworld/
ubuntu@k8smaster:~/helm_files/helloworld$ ls
Chart.yaml charts templates values.yaml
ubuntu@k8smaster:~/helm_files/helloworld$ tree
├── Chart.yaml
├── charts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── service.yaml
│ ├── serviceaccount.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
3 directories, 10 files
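The generated chart can be rendered and installed directly from the local directory (the release name myhello is an arbitrary choice):
$ helm template myhello ./helloworld     # render the manifests without installing
$ helm install myhello ./helloworld
$ helm uninstall myhello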
ubuntu@k8smaster:~/helm_files$ helm install testjenkins stable/jenkins
WARNING: This chart is deprecated
....
ubuntu@k8smaster:~/helm_files$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mydeploy-5c49c88d9f-lq5zb 1/1 Running 1 (36m ago) 31h
pod/testjenkins-86f7fc84df-mpsts 0/2 Pending 0 62s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 42h
service/testjenkins ClusterIP 10.101.64.238 <none> 8080/TCP 62s
service/testjenkins-agent ClusterIP 10.100.0.104 <none> 50000/TCP 62s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mydeploy 1/1 1 1 31h
deployment.apps/testjenkins 0/1 1 0 62s
NAME DESIRED CURRENT READY AGE
replicaset.apps/mydeploy-5c49c88d9f 1 1 1 31h
replicaset.apps/testjenkins-86f7fc84df 1 1 0 62s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/mydeploy Deployment/mydeploy 0%/5% 1 10 1 42h
$ helm delete testjenkins
release "testjenkins" uninstalled
ubuntu@k8smaster:~/helm_files$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mydeploy-5c49c88d9f-lq5zb 1/1 Running 1 (41m ago) 31h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 42h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mydeploy 1/1 1 1 31h
NAME DESIRED CURRENT READY AGE
replicaset.apps/mydeploy-5c49c88d9f 1 1 1 31h
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/mydeploy Deployment/mydeploy 0%/5% 1 10 1 42h
List all the named releases
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
testtomcat default 1 2023-10-05 20:20:37.533286776 +0530 IST deployed tomcat-0.4.3 7.0
Helm upgrade and rollback
$ helm install testchart stable/tomcat --version 0.4.0
NAME: testchart
LAST DEPLOYED: Fri Oct 6 00:02:20 2023
...
$ helm history testchart
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Fri Oct 6 00:02:20 2023 deployed tomcat-0.4.0 7.0 Install complete
$ helm upgrade testchart stable/tomcat
WARNING: This chart is deprecated
Release "testchart" has been upgraded. Happy Helming!
...
$ helm history testchart
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Fri Oct 6 00:02:20 2023 superseded tomcat-0.4.0 7.0 Install complete
2 Fri Oct 6 00:03:26 2023 deployed tomcat-0.4.3 7.0 Upgrade complete
$ helm rollback testchart
Rollback was a success! Happy Helming!
$ helm history testchart
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Fri Oct 6 00:02:20 2023 superseded tomcat-0.4.0 7.0 Install complete
2 Fri Oct 6 00:03:26 2023 superseded tomcat-0.4.3 7.0 Upgrade complete
3 Fri Oct 6 00:12:08 2023 deployed tomcat-0.4.0 7.0 Rollback to 1
Download helm package
$ helm pull stable/tomcat
$ ls
helloworld tomcat-0.4.3.tgz
$ helm pull stable/tomcat --untar
$ ls
helloworld tomcat
$ tree tomcat/
tomcat/
├── Chart.yaml
├── README.md
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── appsrv-svc.yaml
│ └── appsrv.yaml
└── values.yaml
1 directory, 7 files