In this Operator pattern scenario, we expand on the Controller pattern and introduce a Custom Resource Definition for our ConfigMap watch controller. So if you haven't run the Controller scenario yet, we recommend doing so now.
So, instead of annotating a ConfigMap with a Pod selector, we use a dedicated custom resource
ConfigWatcher, which contains a reference to the ConfigMap to watch and a selector for the Pods to restart in case the ConfigMap has changed.
The functionality remains the same, but you get a clean decoupling of the ConfigMap and the applications using it, and you can have multiple different sets of Pods restarted by creating multiple ConfigWatcher resources. That way, you can support hot-pickup of ConfigMap changes for an application that doesn't support hot reloading of its configuration or that uses the ConfigMap's values as environment variables.
In this scenario, you will learn:
- how to create a CustomResourceDefinition
- how to implement an operator as a shell script that watches multiple resources
In this scenario, we give a brief example of how the Operator pattern works by introducing CRDs and periodically watching the current status of the cluster. The benefit of using a dedicated Operator instead of a ConfigMap-watching controller is that the Operator is less intrusive and nicely decouples the components being managed.
We have learned ...
- ... how to define a custom resource with a validation schema and other meta information.
- ... that you can also write operators in a shell script to watch multiple resources.
- ... how to access the Kubernetes API server from within a Pod, including the security setup
- ... how to run an HTTP server from a shell script
More background information about the Operator pattern can be found in our Kubernetes Patterns book.
But be aware that for a real operator, you need more than this simple example script:
- Improve the resilience of the Operator by reconnecting to the event stream when the connection breaks and by fetching the full state periodically. If you are writing your applications in Golang or Java, the supporting libraries provide so-called Informers, which can do this transparently for you.
- We could update the status: field on the custom resource to indicate whether it is active, how many Pods are matching, etc.
- Also, we could introduce a validating admission webhook which verifies that the ConfigMap referenced in the ConfigWatcher indeed exists when the ConfigWatcher is created. This kind of validation goes beyond simple OpenAPI validation and is recommended for advanced use cases.
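To illustrate the first point, such a resilient event loop could be sketched in shell roughly like this. This is only an illustration, not part of the scenario's script; the namespace and resource path are assumptions, and a real Informer additionally tracks resourceVersion so it can resume a watch without missing events:

```shell
#!/bin/sh
# Sketch: resilient watch loop (paths and namespace are illustrative).
# Reconnects whenever the watch connection drops, and re-lists the full
# state on every iteration -- similar to what an Informer does for you.
base=https://kubernetes.default.svc
ns=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

while true; do
  # Full resync: fetch the current list to recover any missed events
  curl -s --cacert "$ca" -H "Authorization: Bearer $token" \
       "$base/apis/k8spatterns.io/v1/namespaces/$ns/configwatchers"

  # Watch until the connection breaks, then loop around and reconnect
  curl -s -N --cacert "$ca" -H "Authorization: Bearer $token" \
       "$base/apis/k8spatterns.io/v1/namespaces/$ns/configwatchers?watch=true" |
  while read -r event; do
    echo "Event: $event"   # react to the event here
  done

  sleep 5                  # back off a bit before reconnecting
done
```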
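For the second point, updating the status could look like the following kubectl call. The field names here are hypothetical, and this also assumes the status subresource is enabled in the CRD and that you are using kubectl 1.24 or later (which introduced the --subresource flag):

```shell
# Hypothetical status update; requires the status subresource in the CRD
kubectl patch configwatcher webapp-config-watcher \
  --type=merge --subresource=status \
  -p '{"status":{"active":true,"matchingPods":2}}'
```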
ConfigWatcher custom resource
To expand on our Controller, let's introduce a dedicated CRD for describing the relationship between the ConfigMap to watch and the Pods to restart when that ConfigMap changes.
For this, we will introduce a custom resource that looks like this:

```yaml
kind: ConfigWatcher
apiVersion: k8spatterns.io/v1
metadata:
  name: webapp-config-watcher
spec:
  # The ConfigMap's name which should be watched
  configMap: webapp-config
  # A label selector for the Pods to delete in case the ConfigMap changes
  podSelector:
    app: webapp
```
The first step is to register a CustomResourceDefinition (CRD), which can be found in crd.yml.
This CRD contains the kind, group, and version, as well as some extra information like an OpenAPI schema for validation and additional columns to print with kubectl get.
Let's apply it to our cluster:
kubectl apply -f crd.yml
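To give you a mental model of what crd.yml roughly contains, a minimal version could look like the following. This is a sketch, not necessarily identical to the file used in the scenario:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: configwatchers.k8spatterns.io
spec:
  group: k8spatterns.io
  scope: Namespaced
  names:
    kind: ConfigWatcher
    singular: configwatcher
    plural: configwatchers
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              configMap:
                type: string
                description: Name of the ConfigMap to watch
              podSelector:
                type: object
                additionalProperties:
                  type: string
                description: Label selector for the Pods to restart
    additionalPrinterColumns:
    - name: configmap
      type: string
      description: Watched ConfigMap
      jsonPath: .spec.configMap
```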
Also, we create a role
config-watcher-crd which grants access to these
ConfigWatcher custom resources, so that later our operator can monitor changes on these resources:
kubectl apply -f crd-role.yml
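crd-role.yml presumably contains an RBAC rule along these lines (again a sketch; the actual file would also need a RoleBinding or ClusterRoleBinding to attach the role to the operator's ServiceAccount):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: config-watcher-crd
rules:
- apiGroups: ["k8spatterns.io"]
  resources: ["configwatchers"]
  verbs: ["get", "list", "watch"]
```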
In the next step, let's check out the logic of our operator, which uses instances of the
ConfigWatcher CRD as input.
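In essence, the operator's reconciliation can be sketched as the following shell loop. This is illustrative only, not the scenario's actual script; it assumes curl, jq, and kubectl are available in the operator Pod and that RBAC rules like the ones above are bound to its ServiceAccount:

```shell
#!/bin/sh
# Sketch of the operator's main loop (illustrative, not the real script).
base=https://kubernetes.default.svc
ns=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Watch ConfigMap events; whenever a ConfigMap is modified, find every
# ConfigWatcher that references it and delete the Pods matching its
# podSelector, so they get restarted with the new configuration.
curl -s -N --cacert "$ca" -H "Authorization: Bearer $token" \
     "$base/api/v1/namespaces/$ns/configmaps?watch=true" |
while read -r event; do
  type=$(echo "$event" | jq -r .type)
  cm=$(echo "$event" | jq -r .object.metadata.name)
  [ "$type" = "MODIFIED" ] || continue

  # Turn each matching ConfigWatcher's podSelector map into a
  # "key=value,key=value" label selector string
  curl -s --cacert "$ca" -H "Authorization: Bearer $token" \
       "$base/apis/k8spatterns.io/v1/namespaces/$ns/configwatchers" |
  jq -r --arg cm "$cm" \
     '.items[] | select(.spec.configMap == $cm)
      | .spec.podSelector | to_entries
      | map("\(.key)=\(.value)") | join(",")' |
  while read -r selector; do
    kubectl delete pods -n "$ns" -l "$selector"
  done
done
```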