In this scenario we see how the Health Probe pattern from Kubernetes Patterns is implemented by Kubernetes.
In this example we create a Deployment that exposes a liveness and a readiness probe, allowing Kubernetes to detect the health of your application: whether it is ready to serve its business functionality, and whether it needs a restart to recover from a failure.
After this scenario you will know:
- how a failing liveness probe triggers Kubernetes to restart your application.
- how a readiness probe enables and disables access to your service.
In this scenario you learned about two ways to implement the health probes that are meaningful to Kubernetes:
- A readiness probe, which indicates whether your application is ready to serve traffic or should be excluded from routing.
- A liveness probe, which causes a container to be restarted if it fails.
Much more information, like how to tune the check ramp-up time or the check intervals, can be found in the Kubernetes Patterns book. Also, don't forget to check out the examples in the book's example GitHub repository.
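As a pointer, those timings are configured directly on the probe itself. A sketch of the tunable fields on a liveness probe (the port and all values here are illustrative, not taken from this scenario's manifests):

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080                # assumed container port
  initialDelaySeconds: 30     # ramp-up time before the first check
  periodSeconds: 10           # interval between checks
  timeoutSeconds: 1           # how long a single check may take
  failureThreshold: 3         # failed checks in a row before the container is restarted
```

The same fields are available on `readinessProbe` declarations as well.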
Deployment with liveness and readiness checks
While you are reading this we are starting a simple single node Kubernetes cluster for you. Please be patient and wait until the
launch.sh script has finished.
Now let's create a simple Deployment which defines a liveness and readiness probe. The application itself is a simple REST service which just returns a freshly generated random number each time it is called.
In deployment.yml you find the definition for this Deployment.
Check the content of this declaration. Please look specifically at the
livenessProbe and readinessProbe declarations, which perform an HTTP-based and a file-based health check, respectively.
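The exact manifest is in deployment.yml itself; a Deployment of roughly this shape would match the description above (the name, image, port, and readiness file path are assumptions for illustration — only the /actuator/health path appears in this scenario):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator            # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - name: random-generator
        image: k8spatterns/random-generator   # assumed image
        livenessProbe:
          httpGet:                  # HTTP-based health check
            path: /actuator/health
            port: 8080              # assumed container port
        readinessProbe:
          exec:                     # file-based health check
            command: [ "stat", "/tmp/random-generator-ready" ]   # assumed file path
```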
Now it's time to create that Deployment with
kubectl create -f deployment.yml
and watch how it starts up the application pods:
kubectl get pods -w
(you can stop this with CTRL-C).
When the pod is up and running (status is
1/1 Running), let's create a Service to access the application.
We are using a
NodePort Service here, which exposes our application on a fixed port on every node of our cluster:
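The actual definition is in service.yml; a NodePort Service matching the fixed port used in the curl commands below might look like this (the name, selector label, and container port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: random-generator     # assumed name
spec:
  type: NodePort
  selector:
    app: random-generator    # assumed Pod label
  ports:
  - port: 8080               # assumed container port
    targetPort: 8080
    nodePort: 31667          # fixed port on every node, as used in the curl commands below
```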
kubectl create -f service.yml
The random number service can now be accessed in Katacoda with
curl -s http://[[HOST_IP]]:31667/ | jq .
or you can also reach it externally via http://[[HOST_SUBDOMAIN]]-31667-[[KATACODA_HOST]].environments.katacoda.com/
To access the health check which is used in the liveness probe, try
curl -s http://[[HOST_IP]]:31667/actuator/health | jq .