Exercise
In this exercise, you’ll learn how to check the validity of the cluster’s certificates and renew them using kubeadm.
Check the current certificate expiration dates
Renew the certificates
Restart the control plane components
Verify the cluster is working correctly
Documentation
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Solution
- Check the current certificate expiration dates
Run the following command from the control plane node.
sudo kubeadm certs check-expiration
This command shows all certificates managed by kubeadm and their expiration dates:
- admin.conf (client certificate for kubectl)
- apiserver (API server serving certificate)
- apiserver-etcd-client (API server to etcd client certificate)
- apiserver-kubelet-client (API server to kubelet client certificate)
- controller-manager.conf (controller manager client certificate)
- etcd-healthcheck-client (etcd health check client certificate)
- etcd-peer (etcd peer certificate for cluster communication)
- etcd-server (etcd server certificate)
- front-proxy-client (front proxy client certificate)
- scheduler.conf (scheduler client certificate)
- super-admin.conf (client certificate bypassing authorization layer)
- By default, kubeadm certificates are valid for 1 year. You should renew them before they expire to avoid cluster outages.
- Certificates are automatically renewed during each control plane upgrade.
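If you want to read an expiry date straight off a certificate file, openssl works without kubeadm. The helper below is a hypothetical sketch (the function name is invented, and it relies on GNU `date -d`); it prints the number of whole days until a PEM certificate expires:

```shell
# Hypothetical helper (not part of kubeadm): print the number of whole days
# until a PEM certificate expires. Relies on GNU date's -d flag.
days_until_expiry() {
  local end epoch_end epoch_now
  end=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
  epoch_end=$(date -d "$end" +%s)
  epoch_now=$(date +%s)
  echo $(( (epoch_end - epoch_now) / 86400 ))
}

# Example against the default kubeadm path (usually needs root to read):
# days_until_expiry /etc/kubernetes/pki/apiserver.crt
```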
- Renew the certificates using kubeadm
You can renew all certificates at once.
sudo kubeadm certs renew all
Or, you can renew specific certificates, for example:
sudo kubeadm certs renew apiserver
sudo kubeadm certs renew apiserver-kubelet-client
sudo kubeadm certs renew controller-manager.conf
sudo kubeadm certs renew scheduler.conf
sudo kubeadm certs renew admin.conf
sudo kubeadm certs renew super-admin.conf
Once you have renewed the certificates, you can verify that the expiration dates have been updated.
sudo kubeadm certs check-expiration
- Restart the control plane components
First, restart kubelet.
sudo systemctl restart kubelet
Next, temporarily move the static Pod manifests out of /etc/kubernetes/manifests to stop the control plane processes.
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sudo mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/
Next, wait a few seconds to make sure the control plane Pods have terminated.
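Rather than guessing at a delay, you can poll until the process is actually gone; a minimal sketch, assuming pgrep is available on the node (the other components can be checked the same way):

```shell
# Poll until the kube-apiserver process has exited.
# Exits immediately if the process is already gone.
while pgrep -x kube-apiserver >/dev/null; do
  sleep 2
done
```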
Next, move the manifests back to restart the control plane Pods.
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
Then, wait up to a minute for the Pods to restart.
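Instead of waiting blindly, you can poll the API server's health endpoint; a rough sketch, assuming the kubeadm default bind address and port (127.0.0.1:6443) and that curl is installed on the node:

```shell
# Poll /healthz until the API server answers, giving up after 10 attempts.
# -k skips TLS verification, which is acceptable for a local liveness probe.
for i in $(seq 1 10); do
  if curl -sk --max-time 2 https://127.0.0.1:6443/healthz >/dev/null; then
    echo "API server is back"
    break
  fi
  sleep 1
done
```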
- Verify the cluster is working correctly
First, update your local kubeconfig with the renewed admin certificate.
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
Next, check the cluster’s status.
kubectl cluster-info
You can also verify the Nodes are ready and run a couple of basic commands.
kubectl get nodes
kubectl get pods -n kube-system
kubectl get namespaces
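As a final check, openssl can assert that a certificate will stay valid for a given window, which is handy in a monitoring script. A hypothetical sketch (the function name is invented; the path in the comment is the kubeadm default):

```shell
# Hypothetical helper: succeed only if the certificate remains valid for at
# least the given number of days (default 30). openssl's -checkend takes a
# number of seconds and exits non-zero if the cert expires within that window.
cert_valid_for_days() {
  openssl x509 -noout -checkend $(( ${2:-30} * 86400 )) -in "$1"
}

# Example against the default kubeadm path (usually needs root to read):
# cert_valid_for_days /etc/kubernetes/pki/apiserver.crt 30
```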