Exercise
In this exercise, you’ll create the simplest possible StorageClass, based on hostPath, for your kubeadm cluster and use it to provision storage.
- Check the existing StorageClasses
- Create a hostPath-based StorageClass
- Create a PersistentVolume manually
- Create a PersistentVolumeClaim using your StorageClass
- Create a Pod that uses the PVC and write some data to the volume
- Verify the data persists by deleting and recreating the Pod
- Clean up all resources
Documentation
https://kubernetes.io/docs/concepts/storage/storage-classes/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Solution
- Check the existing StorageClasses
kubectl get storageclass
kubectl get sc -o wide

You should see no StorageClasses:

No resources found

- Create a hostPath-based StorageClass
Since we don’t have a dynamic provisioner, we’ll create a simple StorageClass that we can use with manually created PersistentVolumes:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual-hostpath
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
reclaimPolicy: Delete
EOF

Verify the StorageClass was created:
kubectl get storageclass manual-hostpath

We use kubernetes.io/no-provisioner since we don’t have a dynamic provisioner, which means we need to create PersistentVolumes manually. In a real-world scenario, a StorageClass with a dynamic provisioner triggers the creation of PersistentVolumes automatically.

- Create a PersistentVolume manually
Let’s create the following PersistentVolume:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv-001
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: manual-hostpath
  hostPath:
    path: /tmp/k8s-storage
    type: DirectoryOrCreate
EOF

Check the PV status:

kubectl get pv manual-pv-001

The PV should be in “Available” status.
- Create a PersistentVolumeClaim using your StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-hostpath
  resources:
    requests:
      storage: 1Gi
EOF

Check the PVC status:
kubectl get pvc manual-pvc

The PVC stays in “Pending” status because of the WaitForFirstConsumer volume binding mode: binding is delayed until a Pod that uses the PVC is scheduled.

- Create a Pod that uses the PVC and write some data to the volume
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: demo
      image: alpine:3.22
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: storage-volume
          mountPath: /data
  volumes:
    - name: storage-volume
      persistentVolumeClaim:
        claimName: manual-pvc
EOF

Wait for the Pod to be running:

kubectl wait --for=condition=Ready pod/demo --timeout=60s

Check that the PVC and the PV are now bound:
kubectl get pvc manual-pvc
kubectl get pv manual-pv-001

Write some data to the volume:
kubectl exec demo -- sh -c "echo Hello from manual hostPath storage > /data/test.txt"
kubectl exec demo -- sh -c "date >> /data/test.txt"
kubectl exec demo -- sh -c "hostname >> /data/test.txt"
kubectl exec demo -- cat /data/test.txt

Check which Node the Pod is running on:

kubectl get pod demo -o wide

- Verify the data persists by deleting and recreating the Pod
First, delete the demo Pod.
kubectl delete pod demo

Next, recreate the Pod using the same PVC.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo2
spec:
  containers:
    - name: demo
      image: alpine:3.22
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: storage-volume
          mountPath: /data
  volumes:
    - name: storage-volume
      persistentVolumeClaim:
        claimName: manual-pvc
EOF

Wait for the new Pod to be ready and verify the data persists:

kubectl wait --for=condition=Ready pod/demo2 --timeout=60s
kubectl exec demo2 -- cat /data/test.txt

You should see the same data that was written by the first Pod.
- Clean up all resources
kubectl delete pod demo2
kubectl delete pvc manual-pvc
kubectl delete pv manual-pv-001
kubectl delete storageclass manual-hostpath

This exercise demonstrates the basic concepts of StorageClass, PV, and PVC without requiring a complex storage provisioner. In a production kubeadm cluster, you would typically use:
- Dynamic provisioners like local-path-provisioner, NFS, …
- Cloud provider CSI drivers (if running on cloud infrastructure)
- Network-attached storage solutions
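For comparison, the `local` volume type is the production-leaning cousin of hostPath: it records the node binding explicitly through required nodeAffinity, so the scheduler only places consuming Pods on that node. A sketch — the node name `worker-1` and the disk path are placeholders, not values from this exercise:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-hostpath
  local:
    path: /mnt/disks/vol1        # placeholder path on the node
  nodeAffinity:                  # mandatory for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1       # placeholder node name
```

Like hostPath, this still ties data to one node, but the binding is declared rather than implicit, and it pairs naturally with the WaitForFirstConsumer mode used above.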
hostPath storage limitations:
- Data tied to a specific Node
- No replication or high availability
- Manual PV creation required
- Node failure means data loss
- Not suitable for production workloads