This section gives an overview of the essential tools for managing complex Kubernetes configurations: creating custom resources with CRDs, deploying applications with Helm, and customizing manifests with Kustomize.
These three tools address different aspects of Kubernetes configuration management:
- CRDs extend Kubernetes by creating new resource types for custom applications and operators
- Helm packages and deploys complete applications with templating and lifecycle management
- Kustomize customizes existing manifests through overlays without templates
Together, they provide a comprehensive toolkit for managing configurations from simple deployments to complex, multi-environment applications.
CustomResourceDefinitions (CRDs)
Kubernetes can be extended in many ways. A CustomResourceDefinition (a.k.a. CRD) is a specific Kubernetes resource which allows creating new resource types. For example, CRDs are widely used by operators, which are controllers acting on their own resources.
If we try to list the resources of type Database, we’ll get an error as this type is not known.
$ k get database
error: the server doesn't have a resource type "database"

Using a CRD we can define a Database resource and manipulate it with kubectl like any other resource. The following specification is an example of a CRD defining a Database resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.learning.cka
spec:
  group: learning.cka
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
    shortNames:
    - db
    - dbs
  versions:
  - name: v1
    served: true  # This version can be served via the API
    storage: true # This version is used for storing objects in etcd
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              type:
                type: string

The served and storage fields are required for each version:
- served: true enables the API version for client requests (it can be disabled for deprecated versions)
- storage: true designates which version is used for persistence in etcd (exactly one version must have this set to true)
These fields cannot be omitted and are essential for version management when evolving your CRD schema.
Basically, it defines a new resource of type Database in the learning.cka group, which contains a simple type property of type string under its spec field.
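To make the version-management point concrete, here is a hedged sketch of how a hypothetical v2 could later be introduced in the versions list of this CRD: v2 takes over storage, while v1 stays served for existing clients. The v2 version and its schema are illustrative assumptions, not part of the example above; the walkthrough below continues with the single v1 version.

```yaml
# versions list of the CRD (sketch): a v2 added, v1 kept for compatibility
versions:
- name: v1
  served: true     # old clients can still read and write v1 objects
  storage: false   # no longer the version persisted in etcd
  schema:
    openAPIV3Schema:
      type: object
- name: v2
  served: true
  storage: true    # exactly one version must persist objects in etcd
  schema:
    openAPIV3Schema:
      type: object
```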
We create it like any other Kubernetes resource.
$ k apply -f db-crd.yaml
customresourcedefinition.apiextensions.k8s.io/databases.learning.cka created

Next, we verify Kubernetes recognizes the Database resource type.
$ kubectl api-resources | grep database
databases   db,dbs   learning.cka/v1   true   Database

Next, we create the specification of a Database resource, as follows.
apiVersion: learning.cka/v1
kind: Database
metadata:
  name: db
spec:
  type: mongo

Then, we create it.
$ kubectl apply -f db.yaml
database.learning.cka/db created

A project can use CRDs to define and manipulate its own structures (similar to a Java class, to some extent), but also to provide a simple interface to the user, hiding some internal complexity.
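As a hedged illustration of that last idea, OpenAPI validation in the CRD schema can expose a small, safe surface to users. The fragment below is an assumption, not part of the example above: it would make the type property mandatory and restrict it to a fixed set of values, which the API server then enforces on every Database object.

```yaml
# spec schema fragment (sketch): validation enforced by the API server
spec:
  type: object
  required:
  - type           # a Database without a type is rejected
  properties:
    type:
      type: string
      enum:        # only these values are accepted
      - mongo
      - postgres
```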
Helm
Main concepts
Helm is a Package Manager for Kubernetes: it allows packaging, installing, upgrading, and deleting applications.
A Helm Chart is an application packaged with Helm; it has a strict folder structure similar to the following one.
$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

This structure is mainly composed of:
- YAML templates: Kubernetes manifests containing instructions, like conditional logic, written in the Go templating language
- value files: containing configuration properties
The Helm Client is a binary used to manipulate applications packaged with Helm; it can be installed following these instructions. The schema below is an overview showing that helm creates Kubernetes manifests from YAML templates (containing instructions in the templating language) and value files containing configuration properties.
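To make the template/values relationship concrete, here is a minimal hedged sketch (the names and values are illustrative, not taken from a real Chart). Rendering this template with the value file below would produce a Deployment named web-nginx with 2 replicas.

```yaml
# templates/deployment.yaml (excerpt): Go templating placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx   # .Release.Name is the release name given at install time
spec:
  replicas: {{ .Values.replicaCount }}  # resolved from values.yaml (or a -f override)
```

```yaml
# values.yaml (excerpt): configuration consumed by the template above
replicaCount: 2
```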

ArtifactHub
The ArtifactHub is the place where many applications are distributed and ready to be deployed, using Helm, in a Kubernetes cluster.

Searching repository
The helm client allows searching for existing Charts, within the ArtifactHub or in local repositories. The command below searches for Charts containing the grafana keyword in the ArtifactHub.
helm search hub grafana

Installing a chart
An application installed from a Chart is called a release. In this example, we’ll install the official Grafana Chart. Before installing a Chart, we need to add the repository it lives in. First, we add the grafana repo.
helm repo add grafana https://grafana.github.io/helm-charts

Next, we install the Chart.
helm install my-grafana grafana/grafana --version 9.3.0

Many applications can also be installed directly from an OCI registry, removing the need to add a Helm repository. The example below installs the redis Chart that way.
helm install redis oci://registry-1.docker.io/bitnamicharts/redis

Base commands
- Creating (or upgrading) a release

helm upgrade --install my-release CHART_URL [-f values.test.yaml]

- Getting the status of a release

helm status my-release

- Getting the rollout history of a release

helm history my-release

- Rolling back a release

helm rollback my-release REVISION

- Getting information about a given release

helm get all/hooks/manifest/notes/values my-release

- Listing existing releases

helm list

- Removing a release

helm delete my-release

Creating a Helm Chart to package an application
The following command creates a sample Chart containing resources to deploy a nginx server:
helm create my-app

The folder structure created is as follows.
$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Let’s describe the files/folders some more:
- Chart.yaml contains the application metadata and the dependencies list
- charts is a folder used to store the dependent Charts
- NOTES.txt provides the post install instructions to the user
- _helpers.tpl contains functions and variables to simplify the templating
- the YAML files in the templates folder contain specifications with templating code
- values.yaml defines the configuration values for the application
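As a hedged sketch of what that values.yaml contains (trimmed and slightly simplified; the exact scaffold varies across Helm versions), the defaults below are what the generated templates read through expressions like {{ .Values.replicaCount }}:

```yaml
# values.yaml (trimmed sketch): defaults consumed by the scaffold templates
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""            # empty means the templates fall back to the Chart appVersion
service:
  type: ClusterIP
  port: 80
```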
To package your application, you can start from this scaffold, replacing the manifests within the templates folder with the ones of your application.
Kustomize
Kustomize allows you to customize application configuration without using templates. It lets you define a base layer and apply overlays on top of it.
kubectl can manage Kustomize applications using kubectl apply -k, which is very handy.

In the next sections, we’ll see how to package a simple application with Kustomize and to configure it for various environments.
Creating a base layer
We will consider a Deployment managing one nginx Pod and a Service exposing it. The specifications of these resources are defined below.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.28
        name: nginx

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx

First, we create a base folder and add both specifications inside it.
$ tree .
.
└── base
    ├── deploy.yaml
    └── service.yaml

Next, we create a file named kustomization.yaml, which defines the resources Kustomize should take into account.
resources:
- deploy.yaml
- service.yaml

$ tree .
.
└── base
    ├── deploy.yaml
    ├── kustomization.yaml
    └── service.yaml

Then, we use kubectl to deploy this content, pointing to the folder containing the kustomization.yaml file.
$ kubectl apply -k base
service/nginx created
deployment.apps/nginx created

Kustomize really shines when it comes to applying changes to the configuration without modifying the content in the base layer.
Creating an overlay per environment
Let’s consider 2 environments, dev and test, with the following requirements:
- dev should use nginx:1.29
- test should run 3 replicas of the nginx Pod
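As an aside, changes this simple don't necessarily need patch files: Kustomize has built-in fields for common tweaks. The fragment below is a hedged sketch of a kustomization.yaml using the images and replicas fields, equivalent in spirit to the patch-based overlays built in the rest of this section:

```yaml
# kustomization.yaml (sketch): built-in transformers instead of patches
resources:
- ../base
images:
- name: nginx      # image name to rewrite wherever it appears
  newTag: "1.29"
replicas:
- name: nginx      # name of the Deployment to scale
  count: 3
```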
For each environment, we’ll create a new folder with the changes we need to apply on top of the base layer.
Dev
First, we create the dev folder next to the base one. In this folder we will define:
- kustomization.yaml, which specifies the base layer to use and the patches to apply
- deploy.yaml, which contains the new image to use
resources:
- ../base
patches:
- path: deploy.yaml
  target:
    kind: Deployment
    name: nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      containers:
      - image: nginx:1.29
        name: nginx

The folder structure is now:
$ tree .
.
├── base
│   ├── deploy.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── dev
    ├── deploy.yaml
    └── kustomization.yaml

Before applying, you can preview the changes using kubectl kustomize dev to see the final manifests.
Next, we apply the dev overlay on the base resources.
$ k apply -k dev
service/nginx unchanged
deployment.apps/nginx configured

Then, we can verify the Deployment is now based on the nginx:1.29 image.
$ k get deploy nginx -o jsonpath='{.spec.template.spec.containers[0].image}'
nginx:1.29

Test
We’ll follow the same approach for the test environment.
First, we create the test folder next to the base one. In this folder we will define:
- kustomization.yaml, which specifies the base layer to use and the patches to apply
- deploy.yaml, which contains the new number of replicas
resources:
- ../base
patches:
- path: deploy.yaml
  target:
    kind: Deployment
    name: nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3

The folder structure is now:
$ tree .
.
├── base
│   ├── deploy.yaml
│   ├── kustomization.yaml
│   └── service.yaml
├── dev
│   ├── deploy.yaml
│   └── kustomization.yaml
└── test
    ├── deploy.yaml
    └── kustomization.yaml

Next, we apply the test overlay on the base resources.
$ k apply -k test
service/nginx unchanged
deployment.apps/nginx configured

Then, we can verify the Deployment now manages 3 Pods.
$ k get po -l app=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7b95dc7d4d-cskm4   1/1     Running   0          15s
nginx-7b95dc7d4d-g6q4m   1/1     Running   0          18s
nginx-7b95dc7d4d-zpwld   1/1     Running   0          16s

— Practice —
You can now jump to the Exercises part to learn and practice the concepts above.