===================
Deployment
===================
1.) Deployment without any nodeName or nodeSelector: the pods are spread across all available worker nodes, because by default
the scheduler uses a spread policy for pod placement.
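//* (Side note, not part of the original run) A quick way to see how the scheduler spread the replicas is to count pods per node; NODE is the 7th column of the wide output: *//
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c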
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
anuj@maste01:~$
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
//* Now you can see the nginx pods placed on both worker nodes *//
anuj@maste01:~$
anuj@maste01:~$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-6qbg5 1/1 Running 0 6s 10.32.0.7 worker01 <none> <none>
nginx-deployment-66b6c48dd5-6wl7j 1/1 Running 0 6s 10.40.0.9 worker02 <none> <none>
nginx-deployment-66b6c48dd5-99xnc 1/1 Running 0 6s 10.40.0.8 worker02 <none> <none>
nginx-deployment-66b6c48dd5-hprz5 1/1 Running 0 6s 10.40.0.7 worker02 <none> <none>
nginx-deployment-66b6c48dd5-j9s72 1/1 Running 0 6s 10.32.0.9 worker01 <none> <none>
nginx-deployment-66b6c48dd5-r7lmf 1/1 Running 0 6s 10.32.0.8 worker01 <none> <none>
nginx-deployment-66b6c48dd5-rcqd5 1/1 Running 0 6s 10.40.0.6 worker02 <none> <none>
nginx-deployment-66b6c48dd5-xsjpq 1/1 Running 0 6s 10.40.0.5 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 16h 10.32.0.4 worker01 <none> <none>
============== pod placement using nodeName ============
The nodeName field pins a pod to a specific node: set it to the name of the target node and the pod will run on that node only.
In the following output you can see the nginx pods are currently running on both worker nodes.
Now let's update the deployment config file used above so that all nginx-deployment pods run on worker02 only.
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-6qbg5 1/1 Running 0 5m27s 10.32.0.7 worker01 <none> <none> <===== worker01
nginx-deployment-66b6c48dd5-6wl7j 1/1 Running 0 5m27s 10.40.0.9 worker02 <none> <none> <===== worker02
nginx-deployment-66b6c48dd5-99xnc 1/1 Running 0 5m27s 10.40.0.8 worker02 <none> <none> <===== worker02
nginx-deployment-66b6c48dd5-hprz5 1/1 Running 0 5m27s 10.40.0.7 worker02 <none> <none> <===== worker02
nginx-deployment-66b6c48dd5-j9s72 1/1 Running 0 5m27s 10.32.0.9 worker01 <none> <none> <===== worker01
nginx-deployment-66b6c48dd5-r7lmf 1/1 Running 0 5m27s 10.32.0.8 worker01 <none> <none> <===== worker01
nginx-deployment-66b6c48dd5-rcqd5 1/1 Running 0 5m27s 10.40.0.6 worker02 <none> <none> <===== worker02
nginx-deployment-66b6c48dd5-xsjpq 1/1 Running 0 5m27s 10.40.0.5 worker02 <none> <none> <===== worker02
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 16h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      nodeName: worker02          <================ the scheduler is bypassed and the pods are placed on the specified node
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-6qbg5 1/1 Running 0 8m25s 10.32.0.7 worker01 <none> <none>
nginx-deployment-66b6c48dd5-6wl7j 1/1 Running 0 8m25s 10.40.0.9 worker02 <none> <none>
nginx-deployment-66b6c48dd5-99xnc 1/1 Running 0 8m25s 10.40.0.8 worker02 <none> <none>
nginx-deployment-66b6c48dd5-hprz5 1/1 Running 0 8m25s 10.40.0.7 worker02 <none> <none>
nginx-deployment-66b6c48dd5-j9s72 1/1 Running 0 8m25s 10.32.0.9 worker01 <none> <none>
nginx-deployment-66b6c48dd5-r7lmf 1/1 Running 0 8m25s 10.32.0.8 worker01 <none> <none>
nginx-deployment-66b6c48dd5-rcqd5 1/1 Running 0 8m25s 10.40.0.6 worker02 <none> <none>
nginx-deployment-66b6c48dd5-xsjpq 1/1 Running 0 8m25s 10.40.0.5 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 16h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment configured
anuj@maste01:~$
//* The old pods are deleted and redeployed on worker02 *//
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-6qbg5 1/1 Running 0 9m16s 10.32.0.7 worker01 <none> <none>
nginx-deployment-66b6c48dd5-6wl7j 1/1 Terminating 0 9m16s 10.40.0.9 worker02 <none> <none>
nginx-deployment-66b6c48dd5-hprz5 1/1 Terminating 0 9m16s 10.40.0.7 worker02 <none> <none>
nginx-deployment-66b6c48dd5-j9s72 1/1 Running 0 9m16s 10.32.0.9 worker01 <none> <none>
nginx-deployment-66b6c48dd5-r7lmf 1/1 Running 0 9m16s 10.32.0.8 worker01 <none> <none>
nginx-deployment-66b6c48dd5-xsjpq 1/1 Running 0 9m16s 10.40.0.5 worker02 <none> <none>
nginx-deployment-845f7554fc-57xkk 0/1 ContainerCreating 0 1s <none> worker02 <none> <none>
nginx-deployment-845f7554fc-fztw8 1/1 Running 0 24s 10.40.0.8 worker02 <none> <none>
nginx-deployment-845f7554fc-gpgms 0/1 ContainerCreating 0 1s <none> worker02 <none> <none>
nginx-deployment-845f7554fc-mp2rj 1/1 Running 0 24s 10.40.0.6 worker02 <none> <none>
nginx-deployment-845f7554fc-np8d4 1/1 Running 0 46s 10.40.0.11 worker02 <none> <none>
nginx-deployment-845f7554fc-zt7lk 1/1 Running 0 46s 10.40.0.10 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 16h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-6qbg5 1/1 Running 0 9m19s 10.32.0.7 worker01 <none> <none>
nginx-deployment-66b6c48dd5-6wl7j 0/1 Terminating 0 9m19s 10.40.0.9 worker02 <none> <none>
nginx-deployment-66b6c48dd5-hprz5 0/1 Terminating 0 9m19s 10.40.0.7 worker02 <none> <none>
nginx-deployment-66b6c48dd5-j9s72 1/1 Running 0 9m19s 10.32.0.9 worker01 <none> <none>
nginx-deployment-66b6c48dd5-r7lmf 1/1 Running 0 9m19s 10.32.0.8 worker01 <none> <none>
nginx-deployment-66b6c48dd5-xsjpq 1/1 Running 0 9m19s 10.40.0.5 worker02 <none> <none>
nginx-deployment-845f7554fc-57xkk 1/1 Running 0 4s 10.40.0.9 worker02 <none> <none>
nginx-deployment-845f7554fc-fztw8 1/1 Running 0 27s 10.40.0.8 worker02 <none> <none>
nginx-deployment-845f7554fc-gpgms 1/1 Running 0 4s 10.40.0.7 worker02 <none> <none>
nginx-deployment-845f7554fc-mp2rj 1/1 Running 0 27s 10.40.0.6 worker02 <none> <none>
nginx-deployment-845f7554fc-np8d4 1/1 Running 0 49s 10.40.0.11 worker02 <none> <none>
nginx-deployment-845f7554fc-zt7lk 1/1 Running 0 49s 10.40.0.10 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running
//* You can see that all nginx-deployment pods are now running on worker02 and no pod is present on worker01,
because the deployment config names a dedicated node to host all pods of nginx-deployment;
hence the scheduler never comes into the picture to decide where to place the pods *//
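//* (Optional check, not from the original session) Because nodeName is set directly in the pod spec, you can read it back with jsonpath; with one of the pod names from the output above it would look like this: *//
kubectl get pod nginx-deployment-845f7554fc-57xkk -o jsonpath='{.spec.nodeName}{"\n"}'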
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-845f7554fc-57xkk 1/1 Running 0 2m57s 10.40.0.9 worker02 <none> <none>
nginx-deployment-845f7554fc-bj6mf 1/1 Running 0 2m17s 10.40.0.5 worker02 <none> <none>
nginx-deployment-845f7554fc-fztw8 1/1 Running 0 3m20s 10.40.0.8 worker02 <none> <none>
nginx-deployment-845f7554fc-gpgms 1/1 Running 0 2m57s 10.40.0.7 worker02 <none> <none>
nginx-deployment-845f7554fc-mp2rj 1/1 Running 0 3m20s 10.40.0.6 worker02 <none> <none>
nginx-deployment-845f7554fc-np8d4 1/1 Running 0 3m42s 10.40.0.11 worker02 <none> <none>
nginx-deployment-845f7554fc-wffjg 1/1 Running 0 2m17s 10.40.0.12 worker02 <none> <none>
nginx-deployment-845f7554fc-zt7lk 1/1 Running 0 3m42s 10.40.0.10 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 44h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 44h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 44h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 44h 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 16h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 16h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 16h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$
===========================
nodeSelector/node affinity
===========================
//* Now let's say you have 6 worker nodes grouped into types such as UAT, DEV and PROD, with 2 worker nodes per type.
You want to run your application only on the nodes in the UAT group and not on any other nodes.
For that we need the following: *//
1.) label the nodes (an example command is shown after this list)
2.) under the deployment config, add a nodeSelector such as:
    env: uat
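//* The labeling step is not shown in the original transcript; assuming worker01 is one of the UAT nodes (the later output confirms it carries env=uat), the commands would be: *//
kubectl label node worker01 env=uat
kubectl get nodes -l env=uat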
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      nodeSelector:
        env: uat
anuj@maste01:~$
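//* If you are ever unsure where nodeSelector sits in the spec, kubectl explain can confirm the field path (not run in the original session): *//
kubectl explain deployment.spec.template.spec.nodeSelector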
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment configured
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-59bdc8fccc-bjs2g 1/1 Running 0 45s 10.32.0.9 worker01 <none> <none>
nginx-deployment-59bdc8fccc-bn4r9 1/1 Running 0 66s 10.32.0.7 worker01 <none> <none>
nginx-deployment-59bdc8fccc-cvm2h 0/1 ContainerCreating 0 0s <none> worker01 <none> <none>
nginx-deployment-59bdc8fccc-df4fl 1/1 Running 0 66s 10.32.0.8 worker01 <none> <none>
nginx-deployment-59bdc8fccc-gtwgd 1/1 Running 0 22s 10.32.0.12 worker01 <none> <none>
nginx-deployment-59bdc8fccc-nc8xd 1/1 Running 0 22s 10.32.0.11 worker01 <none> <none>
nginx-deployment-59bdc8fccc-nrcbj 0/1 ContainerCreating 0 0s <none> worker01 <none> <none>
nginx-deployment-59bdc8fccc-rqxbj 1/1 Running 0 45s 10.32.0.10 worker01 <none> <none>
nginx-deployment-845f7554fc-5fqtl 1/1 Running 0 2m54s 10.40.0.6 worker02 <none> <none>
nginx-deployment-845f7554fc-h4kfz 1/1 Terminating 0 2m32s 10.40.0.8 worker02 <none> <none>
nginx-deployment-845f7554fc-mklzx 1/1 Terminating 0 2m32s 10.40.0.7 worker02 <none> <none>
nginx-deployment-845f7554fc-nzkvp 1/1 Running 0 2m54s 10.40.0.5 worker02 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 2d 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 2d 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 2d 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 2d 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 20h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 20h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 20h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-59bdc8fccc-bjs2g 1/1 Running 0 2m30s 10.32.0.9 worker01 <none> <none>
nginx-deployment-59bdc8fccc-bn4r9 1/1 Running 0 2m51s 10.32.0.7 worker01 <none> <none>
nginx-deployment-59bdc8fccc-cvm2h 1/1 Running 0 105s 10.32.0.13 worker01 <none> <none>
nginx-deployment-59bdc8fccc-df4fl 1/1 Running 0 2m51s 10.32.0.8 worker01 <none> <none>
nginx-deployment-59bdc8fccc-gtwgd 1/1 Running 0 2m7s 10.32.0.12 worker01 <none> <none>
nginx-deployment-59bdc8fccc-nc8xd 1/1 Running 0 2m7s 10.32.0.11 worker01 <none> <none>
nginx-deployment-59bdc8fccc-nrcbj 1/1 Running 0 105s 10.32.0.14 worker01 <none> <none>
nginx-deployment-59bdc8fccc-rqxbj 1/1 Running 0 2m30s 10.32.0.10 worker01 <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 2d 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 2d 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 2d 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 2d 10.40.0.2 worker02 <none> <none>
web-699689dbd-8lhkr 1/1 Running 0 20h 10.32.0.2 worker01 <none> <none>
web-699689dbd-j6s87 1/1 Running 0 20h 10.32.0.3 worker01 <none> <none>
web-699689dbd-mt22j 1/1 Running 0 20h 10.32.0.4 worker01 <none> <none>
anuj@maste01:~$
//* Notice that all nginx pods are running on worker01, which has the label env=uat set *//
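//* To double-check which node carries that label (not part of the original transcript): *//
kubectl get nodes -l env=uat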
anuj@maste01:~$ kubectl describe pod/nginx-deployment-59bdc8fccc-cvm2h
Name: nginx-deployment-59bdc8fccc-cvm2h
Namespace: default
Priority: 0
Node: worker01/192.168.93.129
Start Time: Fri, 29 Apr 2022 02:35:42 -0700
Labels: app=nginx
pod-template-hash=59bdc8fccc
Annotations: <none>
Status: Running
IP: 10.32.0.13
IPs:
IP: 10.32.0.13
Controlled By: ReplicaSet/nginx-deployment-59bdc8fccc
Containers:
nginx:
Container ID: docker://ee430bfbc07a199214bbc0692c89d498cf996441d45e2bb9ea0c7a725ec74fc3
Image: nginx:1.14.2
Image ID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 29 Apr 2022 02:35:44 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7tzjk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-7tzjk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: env=uat <=========================== the pod's nodeSelector is env=uat, so it will be scheduled only on a node
that has the env=uat label set.
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m55s default-scheduler Successfully assigned default/nginx-deployment-59bdc8fccc-cvm2h to worker01 <=== Scheduler
scheduled pod on worker01
Normal Pulled 8m55s kubelet Container image "nginx:1.14.2" already present on machine
Normal Created 8m55s kubelet Created container nginx
Normal Started 8m54s kubelet Started container nginx
anuj@maste01:~$
//* Refer to the following output and you can see how the ReplicaSet creates pods with the app=nginx label using the pod template *//
anuj@maste01:~$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-59bdc8fccc 8 8 8 17m
nginx-deployment-66b6c48dd5 0 0 0 4h5m
nginx-deployment-845f7554fc 0 0 0 3h56m
vote-5d548444c5 4 4 4 2d
vote-7fdc744cbf 0 0 0 2d
web-55d678bd48 0 0 0 20h
web-699689dbd 3 3 3 20h
anuj@maste01:~$
anuj@maste01:~$ kubectl describe rs/nginx-deployment-59bdc8fccc
Name: nginx-deployment-59bdc8fccc
Namespace: default
Selector: app=nginx,pod-template-hash=59bdc8fccc
Labels: app=nginx
pod-template-hash=59bdc8fccc
Annotations: deployment.kubernetes.io/desired-replicas: 8
deployment.kubernetes.io/max-replicas: 10
deployment.kubernetes.io/revision: 5
deployment.kubernetes.io/revision-history: 3
Controlled By: Deployment/nginx-deployment
Replicas: 8 current / 8 desired
Pods Status: 8 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx
pod-template-hash=59bdc8fccc
Containers:
nginx:
Image: nginx:1.14.2
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 17m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-x499j
Normal SuccessfulCreate 17m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-ln9wf
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-zgpzw
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-97kpt
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-kjpjb
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-j62qz
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-5zqcg
Normal SuccessfulCreate 16m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-mm4nl
Normal SuccessfulDelete 14m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-5zqcg
Normal SuccessfulDelete 14m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-mm4nl
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-kjpjb
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-j62qz
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-97kpt
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-zgpzw
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-x499j
Normal SuccessfulDelete 13m replicaset-controller Deleted pod: nginx-deployment-59bdc8fccc-ln9wf
Normal SuccessfulCreate 12m replicaset-controller Created pod: nginx-deployment-59bdc8fccc-bn4r9
Normal SuccessfulCreate 11m (x7 over 12m) replicaset-controller (combined from similar events): Created pod: nginx-deployment-59bdc8fccc-cvm2h
anuj@maste01:~$ kubectl get pod -l app=nginx
NAME READY STATUS RESTARTS AGE
nginx-deployment-59bdc8fccc-bjs2g 1/1 Running 0 12m
nginx-deployment-59bdc8fccc-bn4r9 1/1 Running 0 13m
nginx-deployment-59bdc8fccc-cvm2h 1/1 Running 0 12m
nginx-deployment-59bdc8fccc-df4fl 1/1 Running 0 13m
nginx-deployment-59bdc8fccc-gtwgd 1/1 Running 0 12m
nginx-deployment-59bdc8fccc-nc8xd 1/1 Running 0 12m
nginx-deployment-59bdc8fccc-nrcbj 1/1 Running 0 12m
nginx-deployment-59bdc8fccc-rqxbj 1/1 Running 0 12m
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-59bdc8fccc-bjs2g 1/1 Running 0 13m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-bn4r9 1/1 Running 0 13m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-cvm2h 1/1 Running 0 12m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-df4fl 1/1 Running 0 13m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-gtwgd 1/1 Running 0 12m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-nc8xd 1/1 Running 0 12m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-nrcbj 1/1 Running 0 12m app=nginx,pod-template-hash=59bdc8fccc
nginx-deployment-59bdc8fccc-rqxbj 1/1 Running 0 13m app=nginx,pod-template-hash=59bdc8fccc
vote-5d548444c5-5tgbf 1/1 Running 0 2d app=python,pod-template-hash=5d548444c5,role=vote,version=v3
vote-5d548444c5-rb6rz 1/1 Running 0 2d app=python,pod-template-hash=5d548444c5,role=vote,version=v3
vote-5d548444c5-rsg4x 1/1 Running 0 2d app=python,pod-template-hash=5d548444c5,role=vote,version=v3
vote-5d548444c5-s5kps 1/1 Running 0 2d app=python,pod-template-hash=5d548444c5,role=vote,version=v3
web-699689dbd-8lhkr 1/1 Running 0 20h app=web,pod-template-hash=699689dbd
web-699689dbd-j6s87 1/1 Running 0 20h app=web,pod-template-hash=699689dbd
web-699689dbd-mt22j 1/1 Running 0 20h app=web,pod-template-hash=699689dbd
anuj@maste01:~$
//* Now try to drain worker01: all pods that are meant to run on env=uat nodes should go into Pending state,
as they will not find any other node that has the env=uat label set *//
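//* For reference, kubectl drain cordons the node and then evicts its pods; the "already cordoned" line in the output below indicates worker01 had been cordoned beforehand. Cordoning alone only blocks new scheduling without evicting existing pods (this standalone command was not run as part of the transcript): *//
kubectl cordon worker01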
anuj@maste01:~$ kubectl drain worker01 --ignore-daemonsets
node/worker01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-7dlzd, kube-system/weave-net-qmkzn
evicting pod kube-system/coredns-558bd4d5db-zz7lx
evicting pod default/nginx-deployment-59bdc8fccc-nc8xd
evicting pod default/nginx-deployment-59bdc8fccc-bjs2g
evicting pod default/nginx-deployment-59bdc8fccc-bn4r9
evicting pod default/nginx-deployment-59bdc8fccc-cvm2h
evicting pod default/nginx-deployment-59bdc8fccc-df4fl
evicting pod default/nginx-deployment-59bdc8fccc-gtwgd
evicting pod default/web-699689dbd-8lhkr
evicting pod default/nginx-deployment-59bdc8fccc-nrcbj
evicting pod default/nginx-deployment-59bdc8fccc-rqxbj
evicting pod default/web-699689dbd-mt22j
evicting pod default/web-699689dbd-j6s87
evicting pod kube-system/coredns-558bd4d5db-jhxck
I0429 03:29:22.578066 1093416 request.go:668] Waited for 1.0612795s due to client-side throttling, not priority and fairness, request: GET:https://192.168.93.128:6443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-jhxck
pod/nginx-deployment-59bdc8fccc-cvm2h evicted
pod/web-699689dbd-j6s87 evicted
pod/nginx-deployment-59bdc8fccc-bjs2g evicted
pod/nginx-deployment-59bdc8fccc-nc8xd evicted
pod/web-699689dbd-mt22j evicted
pod/web-699689dbd-8lhkr evicted
pod/nginx-deployment-59bdc8fccc-gtwgd evicted
pod/nginx-deployment-59bdc8fccc-bn4r9 evicted
pod/nginx-deployment-59bdc8fccc-nrcbj evicted
pod/nginx-deployment-59bdc8fccc-rqxbj evicted
pod/nginx-deployment-59bdc8fccc-df4fl evicted
pod/coredns-558bd4d5db-zz7lx evicted
pod/coredns-558bd4d5db-jhxck evicted
node/worker01 evicted
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-59bdc8fccc-4pncp 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-6qb6p 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-8nscb 0/1 Pending 0 13s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-9cgwt 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-f7mfz 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-l6q7q 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-ljmfn 0/1 Pending 0 14s <none> <none> <none> <none>
nginx-deployment-59bdc8fccc-rpj9v 0/1 Pending 0 14s <none> <none> <none> <none>
vote-5d548444c5-5tgbf 1/1 Running 0 2d1h 10.40.0.4 worker02 <none> <none>
vote-5d548444c5-rb6rz 1/1 Running 0 2d1h 10.40.0.1 worker02 <none> <none>
vote-5d548444c5-rsg4x 1/1 Running 0 2d1h 10.40.0.3 worker02 <none> <none>
vote-5d548444c5-s5kps 1/1 Running 0 2d1h 10.40.0.2 worker02 <none> <none>
web-699689dbd-85kqx 0/1 Pending 0 14s <none> <none> <none> <none>
web-699689dbd-9p6w5 0/1 Pending 0 14s <none> <none> <none> <none>
web-699689dbd-k8nnz 0/1 Pending 0 14s <none> <none> <none> <none>
anuj@maste01:~$
//* Now let's look at the events of any Pending pod and you will find: node(s) didn't match Pod's node affinity/selector *//
anuj@maste01:~$ kubectl describe pod/nginx-deployment-59bdc8fccc-l6q7q | tail
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: env=uat
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m22s default-scheduler 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) were unschedulable.
Warning FailedScheduling 2m21s default-scheduler 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) were unschedulable.
anuj@maste01:~$
//* Now let's look at the worker01 node status: it is marked SchedulingDisabled, hence no pod can run on it. *//
Let's uncordon worker01; all pods that are meant to run on it should then come up on their own, as they will again find a node that has
env=uat set.
anuj@maste01:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
maste01 Ready control-plane,master 24d v1.21.0
worker01 Ready,SchedulingDisabled <none> 23d v1.21.0
worker02 Ready <none> 23d v1.21.0
anuj@maste01:~$ kubectl uncordon worker01
node/worker01 uncordoned
anuj@maste01:~$
//** See the containers start coming up on worker01 on their own; this proves that a pod whose deployment sets nodeSelector env=uat
will run only on a node carrying that label and on no other node **//
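//* A quick filter to confirm the recreated nginx pods landed back on worker01 (not run in the original session): *//
kubectl get pod -l app=nginx -o wide | grep worker01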
anuj@maste01:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-59bdc8fccc-4pncp 0/1 Pending 0 8m10s
nginx-deployment-59bdc8fccc-6qb6p 0/1 Pending 0 8m10s
nginx-deployment-59bdc8fccc-8nscb 0/1 ContainerCreating 0 8m9s
nginx-deployment-59bdc8fccc-9cgwt 0/1 ContainerCreating 0 8m10s
nginx-deployment-59bdc8fccc-f7mfz 0/1 ContainerCreating 0 8m10s
nginx-deployment-59bdc8fccc-l6q7q 0/1 Pending 0 8m10s
nginx-deployment-59bdc8fccc-ljmfn 0/1 ContainerCreating 0 8m10s
nginx-deployment-59bdc8fccc-rpj9v 0/1 ContainerCreating 0 8m10s
vote-5d548444c5-5tgbf 1/1 Running 0 2d1h
vote-5d548444c5-rb6rz 1/1 Running 0 2d1h
vote-5d548444c5-rsg4x 1/1 Running 0 2d1h
vote-5d548444c5-s5kps 1/1 Running 0 2d1h
web-699689dbd-85kqx 0/1 ContainerCreating 0 8m10s
web-699689dbd-9p6w5 0/1 Pending 0 8m10s
web-699689dbd-k8nnz 0/1 Pending 0 8m10s
anuj@maste01:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-59bdc8fccc-4pncp 0/1 Pending 0 8m16s
nginx-deployment-59bdc8fccc-6qb6p 1/1 Running 0 8m16s
nginx-deployment-59bdc8fccc-8nscb 1/1 Running 0 8m15s
nginx-deployment-59bdc8fccc-9cgwt 1/1 Running 0 8m16s
nginx-deployment-59bdc8fccc-f7mfz 1/1 Running 0 8m16s
nginx-deployment-59bdc8fccc-l6q7q 1/1 Running 0 8m16s
nginx-deployment-59bdc8fccc-ljmfn 1/1 Running 0 8m16s
nginx-deployment-59bdc8fccc-rpj9v 1/1 Running 0 8m16s
vote-5d548444c5-5tgbf 1/1 Running 0 2d1h
vote-5d548444c5-rb6rz 1/1 Running 0 2d1h
vote-5d548444c5-rsg4x 1/1 Running 0 2d1h
vote-5d548444c5-s5kps 1/1 Running 0 2d1h
web-699689dbd-85kqx 0/1 ContainerCreating 0 8m16s
web-699689dbd-9p6w5 0/1 ContainerCreating 0 8m16s
web-699689dbd-k8nnz 1/1 Running 0 8m16s
anuj@maste01:~$
//* So with the above type of scheduling we can control pod placement, provided each and every application developer uses nodeSelector or nodeName.
But if any developer does not use such controls, that application's pods can land on any node.
To overcome this we can use taints and tolerations *//
====================================================
Taints and Tolerations
====================================================
Now, just for this lab, let's remove the taint from the master node and put taints on the worker nodes.
anuj@maste01:~$ kubectl describe node/worker01 | grep -i taint
Taints: <none>
anuj@maste01:~$ kubectl taint node maste01 node-role.kubernetes.io/master-
node/maste01 untainted
anuj@maste01:~$ kubectl describe node/maste01 | grep -i taint
Taints: <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl taint node worker01 team=uat:NoSchedule
node/worker01 tainted
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$ kubectl taint node worker02 team=sit:NoSchedule
node/worker02 tainted
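//* (Optional, not part of the original session) To list the taints of every node in one go, a jsonpath query works: *//
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'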
//* Now let's create one deployment and see that its pods do not run on the nodes that have taints applied *//
anuj@maste01:~$ cat apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apache
  name: apache
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: apache
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: apache
    spec:
      containers:
      - image: httpd
        imagePullPolicy: Always
        name: httpd
anuj@maste01:~$
anuj@maste01:~$ kubectl apply -f apache.yaml
deployment.apps/apache created
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
apache-5667776978-97dqv 0/1 ContainerCreating 0 5s <none> maste01 <none> <none>
apache-5667776978-swpwh 0/1 ContainerCreating 0 5s <none> maste01 <none> <none>
apache-5667776978-z9r54 0/1 ContainerCreating 0 5s <none> maste01 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
apache-5667776978-97dqv 0/1 ContainerCreating 0 8s <none> maste01 <none> <none>
apache-5667776978-swpwh 0/1 ContainerCreating 0 8s <none> maste01 <none> <none>
apache-5667776978-z9r54 1/1 Running 0 8s 10.46.0.1 maste01 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
apache-5667776978-97dqv 1/1 Running 0 11s 10.46.0.2 maste01 <none> <none>
apache-5667776978-swpwh 0/1 ContainerCreating 0 11s <none> maste01 <none> <none>
apache-5667776978-z9r54 1/1 Running 0 11s 10.46.0.1 maste01 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
apache-5667776978-97dqv 1/1 Running 0 12s 10.46.0.2 maste01 <none> <none>
apache-5667776978-swpwh 1/1 Running 0 12s 10.46.0.3 maste01 <none> <none>
apache-5667776978-z9r54 1/1 Running 0 12s 10.46.0.1 maste01 <none> <none>
//* All the pods are scheduled on the master node because it has no taint, while the worker nodes have taints applied, so no pod is scheduled on them
until we add a toleration that allows the pods to be scheduled on a tainted node *//
Now let's add a pod toleration so that the pods can be scheduled on a tainted node.
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      tolerations:            <=========== under this section we are telling the scheduler that this pod tolerates the taint team=sit:NoSchedule
      - key: "team"
        operator: "Equal"
        value: "sit"
        effect: "NoSchedule"
anuj@maste01:~$
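//* For reference (not used in this lab), a toleration can also match any value of the key by using the Exists operator instead of Equal, in which case no value field is given: *//
      tolerations:
      - key: "team"
        operator: "Exists"
        effect: "NoSchedule"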
anuj@maste01:~$ kubectl describe node/worker01 | grep -i taint
Taints: team=uat:NoSchedule
anuj@maste01:~$
anuj@maste01:~$ kubectl describe node/worker02 | grep -i taint <======= worker02 has the taint that the deployment's toleration matches,
Taints: team=sit:NoSchedule                                             so its pods are allowed to be scheduled on worker02.
anuj@maste01:~$
anuj@maste01:~$ kubectl describe node/maste01 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
anuj@maste01:~$
anuj@maste01:~$
//* Now let's apply the deployment; the pods should be scheduled on worker02, since we have added a toleration for its taint *//
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
anuj@maste01:~$
//* Successfully assigned default/nginx-deployment to worker02 *//
anuj@maste01:~$ kubectl describe pod/nginx-deployment-5d458f89b5-j7dc2 | tail -5
---- ------ ---- ---- -------
Normal Scheduled 6m40s default-scheduler Successfully assigned default/nginx-deployment-5d458f89b5-j7dc2 to worker02
Normal Pulled 6m27s kubelet Container image "nginx:1.14.2" already present on machine
Normal Created 6m27s kubelet Created container nginx
Normal Started 6m26s kubelet Started container nginx
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-47t2z 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-5vvgq 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-b9lc2 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-fjjgw 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-ghl8g 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-h5r5z 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-j7dc2 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-tjlll 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-47t2z 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-5vvgq 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-b9lc2 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-fjjgw 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-ghl8g 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-h5r5z 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-j7dc2 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-tjlll 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-47t2z 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-5vvgq 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-b9lc2 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-fjjgw 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-ghl8g 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-h5r5z 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-j7dc2 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-tjlll 0/1 ContainerCreating 0 13s <none> worker02 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-47t2z 0/1 ContainerCreating 0 16s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-5vvgq 0/1 ContainerCreating 0 16s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-b9lc2 1/1 Running 0 16s 10.40.0.1 worker02 <none> <none>
nginx-deployment-5d458f89b5-fjjgw 1/1 Running 0 16s 10.40.0.4 worker02 <none> <none>
nginx-deployment-5d458f89b5-ghl8g 1/1 Running 0 16s 10.40.0.2 worker02 <none> <none>
nginx-deployment-5d458f89b5-h5r5z 0/1 ContainerCreating 0 16s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-j7dc2 0/1 ContainerCreating 0 16s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-tjlll 0/1 ContainerCreating 0 16s <none> worker02 <none> <none>
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-47t2z 1/1 Running 0 22s 10.40.0.3 worker02 <none> <none>
nginx-deployment-5d458f89b5-5vvgq 1/1 Running 0 22s 10.40.0.7 worker02 <none> <none>
nginx-deployment-5d458f89b5-b9lc2 1/1 Running 0 22s 10.40.0.1 worker02 <none> <none>
nginx-deployment-5d458f89b5-fjjgw 1/1 Running 0 22s 10.40.0.4 worker02 <none> <none>
nginx-deployment-5d458f89b5-ghl8g 1/1 Running 0 22s 10.40.0.2 worker02 <none> <none>
nginx-deployment-5d458f89b5-h5r5z 1/1 Running 0 22s 10.40.0.10 worker02 <none> <none>
nginx-deployment-5d458f89b5-j7dc2 1/1 Running 0 22s 10.40.0.9 worker02 <none> <none>
nginx-deployment-5d458f89b5-tjlll 1/1 Running 0 22s 10.40.0.8 worker02 <none> <none>
anuj@maste01:~$
//* By adding a toleration we make the tainted node a candidate for scheduling, but note that a toleration does not pin the pod there: any node
without a matching taint is also a candidate. Now let's walk through a practical example to prove this.
As of now:
-> we have taints on the master and on both worker nodes,
-> for worker02's taint we have added a toleration, so that node is a candidate for scheduling the pods,
-> worker01 has the taint team=uat:NoSchedule,
-> maste01 has the node-role.kubernetes.io/master:NoSchedule taint, so at this moment there is no untainted node. Let's remove the taint from the
master node and you will see pods getting scheduled on the master node as well, since removing its taint makes it a candidate for scheduling too. *//
anuj@maste01:~$ kubectl describe node/maste01 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
anuj@maste01:~$
anuj@maste01:~$ kubectl describe node/worker01 | grep -i taint
Taints: team=uat:NoSchedule
anuj@maste01:~$
anuj@maste01:~$ kubectl describe node/worker02 | grep -i taint
Taints: team=sit:NoSchedule
//* Now let's remove the taint from the master node *//
anuj@maste01:~$ kubectl taint node maste01 node-role.kubernetes.io/master-
node/maste01 untainted
anuj@maste01:~$
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      tolerations:            <=========== the pod toleration that matches worker02's taint
      - key: "team"
        operator: "Equal"
        value: "sit"
        effect: "NoSchedule"
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
anuj@maste01:~$
anuj@maste01:~$
//* Please refer to the following output: you can see pods are scheduled on maste01 as well as worker02, since there is no longer any taint on the master node.
This is a problem, because we want our pods to run only on the node that carries the team=sit taint (worker02) *//
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-7tb65 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-hsw2q 0/1 ContainerCreating 0 4s <none> maste01 <none> <none>
nginx-deployment-5d458f89b5-lfdhx 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-n8qw7 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-qqnn9 0/1 ContainerCreating 0 4s <none> maste01 <none> <none>
nginx-deployment-5d458f89b5-s8tww 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-sg9qf 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-zprm9 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5d458f89b5-7tb65 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-hsw2q 1/1 Running 0 9s 10.46.0.2 maste01 <none> <none>
nginx-deployment-5d458f89b5-lfdhx 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-n8qw7 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-5d458f89b5-qqnn9 1/1 Running 0 9s 10.46.0.1 maste01 <none> <none>
nginx-deployment-5d458f89b5-s8tww 1/1 Running 0 9s 10.40.0.2 worker02 <none> <none>
nginx-deployment-5d458f89b5-sg9qf 1/1 Running 0 9s 10.40.0.1 worker02 <none> <none>
nginx-deployment-5d458f89b5-zprm9 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
anuj@maste01:~$
//* Now, to overcome the above situation, we have to add a nodeSelector along with the pod toleration in the deployment config.
With all of the node taints and labels kept the same, the nodeSelector should make the pods schedule on worker02 only and on no
other node. *//
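//* The nodeSelector below matches on a team=sit node label. The transcript does not show that label being added, so, as an assumption, worker02 would need to be labelled to match: *//
kubectl label node worker02 team=sit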
anuj@maste01:~$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  revisionHistoryLimit: 4
  replicas: 8
  minReadySeconds: 20
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      tolerations:
      - key: "team"
        operator: "Equal"
        value: "sit"
        effect: "NoSchedule"
      nodeSelector:
        team: sit
anuj@maste01:~$
anuj@maste01:~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
anuj@maste01:~$
anuj@maste01:~$
anuj@maste01:~$
//* Just refer to the following output: this time the pods are scheduled on worker02 only and on no other node, hence our problem is solved *//
anuj@maste01:~$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-fbcb7d9d9-4lj9m 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-j8pgr 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-lzrn7 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-mqljj 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-qcps4 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-rjfz5 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-zn4nl 0/1 ContainerCreating 0 4s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-znzbw 0/1 ContainerCreating 0 5s <none> worker02 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-fbcb7d9d9-4lj9m 0/1 ContainerCreating 0 10s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-j8pgr 0/1 ContainerCreating 0 10s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-lzrn7 0/1 ContainerCreating 0 10s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-mqljj 0/1 ContainerCreating 0 10s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-qcps4 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-rjfz5 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-zn4nl 0/1 ContainerCreating 0 9s <none> worker02 <none> <none>
nginx-deployment-fbcb7d9d9-znzbw 1/1 Running 0 10s 10.40.0.2 worker02 <none> <none>
anuj@maste01:~$
anuj@maste01:~$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-fbcb7d9d9-4lj9m 1/1 Running 0 13s 10.40.0.9 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-j8pgr 1/1 Running 0 13s 10.40.0.8 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-lzrn7 1/1 Running 0 13s 10.40.0.7 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-mqljj 1/1 Running 0 13s 10.40.0.1 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-qcps4 1/1 Running 0 12s 10.40.0.4 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-rjfz5 1/1 Running 0 12s 10.40.0.3 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-zn4nl 1/1 Running 0 12s 10.40.0.10 worker02 <none> <none>
nginx-deployment-fbcb7d9d9-znzbw 1/1 Running 0 13s 10.40.0.2 worker02 <none> <none>
anuj@maste01:~$
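//* (Cleanup, not part of the original transcript) To put the lab back in its original state, the worker taints and labels added above can be removed with the trailing '-' syntax and the control-plane taint restored: *//
kubectl taint node worker01 team=uat:NoSchedule-
kubectl taint node worker02 team=sit:NoSchedule-
kubectl label node worker01 env-
kubectl label node worker02 team-
kubectl taint node maste01 node-role.kubernetes.io/master=:NoSchedule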