Exercise
Add the label disktype=ssd on the worker1 Node
Create the specification of a Pod named nginx based on the nginx:1.26 image.
Modify the specification making sure the Pod is scheduled on the Node worker1 and requests 1.5Gi of memory. Then create the resource and verify the Pod is running.
Note: if your Pod stays in Pending, use a lower value for the memory request to make sure it gets deployed (you can check the Node’s allocatable memory with kubectl describe node worker1).
Get the Pod’s priority and priorityClassName
Create a new PriorityClass named high with value 1000000
Create a new Pod named apache based on the httpd:2.4 image, making sure it is scheduled on the Node worker1, has the same memory request as the nginx Pod, and uses the PriorityClass high. Once created, get the Pod’s priority.
What happened?
Delete the Pods, the PriorityClass, and remove the disktype label from worker1
Documentation
https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
Solution
- Add the label disktype=ssd on the worker1 Node
kubectl label node worker1 disktype=ssd
- Create the specification of a Pod named nginx based on the nginx:1.26 image.
kubectl run nginx --image=nginx:1.26 --dry-run=client -o yaml > pod.yaml
- Modify the specification making sure the Pod is scheduled on the Node worker1 and requests 1.5Gi of memory. Then create the resource and verify the Pod is running.
Modify pod.yaml to add the nodeSelector and the memory request:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: nginx:1.26
    name: nginx
    resources:
      requests:
        memory: 1.5Gi
Creation of the resource:
kubectl apply -f pod.yaml
Make sure the Pod is running:
kubectl get po/nginx
- Get the Pod’s priority and priorityClassName
Pod’s priority is 0:
kubectl get po/nginx -o jsonpath='{.spec.priority}'
0
Pod’s priorityClassName is not defined:
kubectl get po/nginx -o jsonpath='{.spec.priorityClassName}'
- Create a new PriorityClass named high with value 1000000
cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high
value: 1000000
globalDefault: false
EOF
- Create a new Pod named apache based on the httpd:2.4 image, making sure it is scheduled on the Node worker1, has the same memory request as the nginx Pod, and uses the PriorityClass high. Once created, get the Pod’s priority.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: apache
  name: apache
spec:
  priorityClassName: high
  nodeSelector:
    disktype: ssd
  containers:
  - image: httpd:2.4
    name: apache
    resources:
      requests:
        memory: 1.5Gi
EOF
Get the Pod’s priority:
kubectl get po/apache -o jsonpath='{.spec.priority}'
1000000
- What happened?
Listing the Pods, we can see the nginx one is no longer present:
kubectl get po
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          14s
As the apache Pod has a higher priority than the nginx one, and because the Node worker1 does not have enough resources to run both of them, the nginx Pod was preempted to make room for the apache one.
We can see the preemption in the events as well:
kubectl get events
...
27s   Normal   Preempted   pod/nginx   Preempted by default/apache on node worker1
- Delete the Pods, the PriorityClass, and remove the disktype label from worker1
The nginx Pod has already been removed:
kubectl delete po/apache priorityclass/high
kubectl label node worker1 disktype-
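Aside (not part of the exercise): the preemption observed above can be avoided. A PriorityClass can set preemptionPolicy: Never, in which case Pods using it are placed ahead of lower-priority Pods in the scheduling queue but never evict already running Pods. A sketch of such a class (the name high-nonpreempting is only illustrative):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
Had the apache Pod used such a class, it would have stayed in Pending instead of preempting the nginx Pod.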