Exercise
This exercise covers the node maintenance procedures critical for the CKA exam: cordon, drain, and uncordon.
Scenario 1: Scheduled Maintenance
Create a Deployment named web-app with 6 replicas using nginx:1.26 image
Create a DaemonSet named monitoring using nginx:alpine image
Check how Pods are distributed across all Nodes
You need to perform maintenance on worker1. First, cordon this Node to prevent new Pods from being scheduled on it
Verify that worker1 is cordoned, but existing Pods are still running
Drain worker1 to move all Pods to other nodes
Verify all web-app Pods have been moved out of worker1, while DaemonSet Pods are still running
Make worker1 available for scheduling again
Force the restart of the web-app deployment and verify some Pods are scheduled on worker1
Clean up all resources
Documentation
- https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
- https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration
Solution
Scenario 1: Scheduled Maintenance
- Create a Deployment named web-app with 6 replicas using nginx:1.26 image
kubectl create deployment web-app --image=nginx:1.26 --replicas=6
- Create a DaemonSet named monitoring using nginx:alpine image
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      containers:
      - name: monitoring
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF
- Check how Pods are distributed across all Nodes
kubectl get pods -o wide
You should see Pods distributed across the worker Nodes, and one monitoring Pod per Node.
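If the cluster runs many Pods, the per-Node distribution can be summarized rather than read line by line. A small pipeline sketch, assuming the default wide output where NODE is the 7th column:

```shell
# Count Pods per Node; $7 is the NODE column in `kubectl get pods -o wide`
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c
```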
- You need to perform maintenance on worker1. First, cordon this Node to prevent new Pods from being scheduled on it
kubectl cordon worker1
- Verify that worker1 is cordoned, but existing Pods are still running
- worker1 should show as Ready,SchedulingDisabled
kubectl get nodes
- existing Pods on worker1 should still be running
kubectl get pods -o wide
- Drain worker1 to move all Pods to other nodes
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data --force
- the --ignore-daemonsets flag is needed because DaemonSet Pods cannot be moved; without it, drain refuses to proceed
- the --delete-emptydir-data flag allows deleting Pods that use emptyDir volumes (their data is lost)
- the --force flag allows deleting Pods that are not managed by a controller
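Not required by this exercise, but worth knowing for the exam: drain removes Pods through the eviction API, so it will wait (and eventually time out) if an eviction would violate a PodDisruptionBudget. A sketch of a PDB for this Deployment; the name web-app-pdb and the minAvailable value are illustrative, while the app=web-app label is the one `kubectl create deployment web-app` sets:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-app-pdb
spec:
  minAvailable: 4          # illustrative: keep at least 4 of the 6 replicas running during a drain
  selector:
    matchLabels:
      app: web-app         # default label applied by `kubectl create deployment web-app`
```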
- Verify all web-app Pods have been moved out of worker1, while DaemonSet Pods are still running
kubectl get pods -o wide
- Make worker1 available for scheduling again
kubectl uncordon worker1
- Force the restart of the web-app deployment and verify some Pods are scheduled on worker1
kubectl rollout restart deployment/web-app
kubectl get pods -o wide
You should notice that some Pods are scheduled on worker1.
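Instead of scanning the wide output by eye, a field selector can list only the Pods on a specific Node (assuming the Node is named worker1, as in this exercise):

```shell
# List only the Pods currently scheduled on worker1
kubectl get pods -o wide --field-selector spec.nodeName=worker1
```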
- Clean up all resources
kubectl delete deployment web-app
kubectl delete daemonset monitoring