Warning: Under Construction
Simplify your ops-life by using the Cassandra Kubernetes Operator to run your Cassandra cluster!
In this scenario, we'll learn how to:
- Create a Kubernetes cluster
- Deploy an ingress controller
- Install the Cassandra Kubernetes operator
- Create a single node Cassandra cluster
- Deploy the example Pet Clinic app
- Explore metrics with Prometheus and Grafana
You're going to love Prometheus and Grafana!
In this scenario, we learned how to:
- Create a Kubernetes cluster
- Deploy an ingress controller
- Install the Cassandra Kubernetes operator
- Create a single node Cassandra cluster
- Deploy the example Pet Clinic app
We've only scratched the surface of the Cassandra Kubernetes operator in this scenario. You can use the operator for so many other things.
Check out the docs here for more info.
Deploying and maintaining a Cassandra cluster never looked easier!

Steps
Monitoring - Prometheus and Grafana
Create a Kubernetes Cluster
In this step we'll deploy and configure the following:
- KinD
- Helm
- Four-node Kubernetes cluster using KinD
- Nginx ingress
- Cassandra operator
- One-node Cassandra cluster
- Java Spring Pet Clinic example app
- A connection from the Pet Clinic app to the Cassandra cluster
We'll do all the work; you can sit back and relax. However, if you want to understand what all these pieces are, check out the following.
What is KinD?
KinD is a Kubernetes cluster running inside a Docker container. As you know, most people use Kubernetes to manage systems of Docker containers. So, KinD is a Docker container that runs Kubernetes to manage other Docker containers - it's a bit recursive. We use KinD so we can create a many-node Kubernetes cluster on a single machine. KinD is great because it's relatively lightweight, easy to install, and easy to use.
We've already installed KinD for you. For your reference, here are the commands we used to install KinD.
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
What is Helm?
Helm is a package manager (like apt or yum) for Kubernetes. Helm allows you to install and update Kubernetes applications.
In a helm install command, the sub-command install is followed by the name we give the installed release and then the chart used to specify the installation.
A chart is a specification file we use to tell Helm how to do the installation.
Helm downloads the charts from a Helm repo.
We'll see more about this later in the course, but you can read more here.
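To make the install syntax concrete, the commands for installing an operator via Helm might look like the sketch below. The repo URL and chart name here are illustrative assumptions, not the actual values used in this scenario; we write them to a script file rather than running them.

```shell
# Sketch of a Helm-based install (repo URL and chart name are illustrative
# assumptions; consult the operator docs for the real ones).
cat > install-operator-example.sh <<'EOF'
#!/bin/sh
# Register the repo that hosts the chart, then refresh the local chart index.
helm repo add example-repo https://example.github.io/charts
helm repo update
# "install" is followed by the release name we choose (cass-operator)
# and the chart that specifies the installation (example-repo/cass-operator).
helm install cass-operator example-repo/cass-operator
EOF
chmod +x install-operator-example.sh
```

Note the two-part argument to install: the release name is ours to pick, while the chart reference (repo/chart) tells Helm what to deploy.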
Why do we need a four-node Kubernetes cluster?
First, it is important to understand we are working with two types of clusters: a Kubernetes cluster and a Cassandra cluster. The Kubernetes cluster is a set of machines (in this case, pseudo-virtual machines inside a Docker container) called Kubernetes nodes. The Cassandra cluster is a set of those Kubernetes nodes that host and run the Cassandra software.

From the Kubernetes website: "A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node."

We are setting up a four-node Kubernetes cluster so that we have one admin node and three worker nodes. We are using KinD. You can review the KinD cluster configuration in this file.
kind-config.yaml
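As an illustration, a four-node KinD configuration (one control-plane node plus three workers) could look like the following. This is a sketch, not necessarily the exact contents of the scenario's kind-config.yaml, which may add extras such as port mappings for the ingress.

```shell
# Write an example KinD config: one control-plane node and three workers.
# (A sketch; the scenario's actual kind-config.yaml may differ.)
cat > kind-config-example.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
EOF
# We would then create the cluster with:
#   kind create cluster --name cassandra-kub-cluster --config kind-config-example.yaml
```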
What is an ingress controller and how does it fit into the Kubernetes architecture?
An ingress controller provides access from outside a Kubernetes cluster to the components inside the cluster. We think of a pod as a private network where the containers within the pod are the machines on the pod's network. A common architecture uses a Kubernetes service as a load balancer for one or more pods, where the service has access to the pods' private networks.
Kubernetes maps services' ports to the pods' ports. However, the services' ports are not accessible outside the Kubernetes cluster (with the exception of NodePorts and LoadBalancers, which we will not discuss here). An ingress maps a port on the host machine to a service and its port, thus exposing the service to the outside world.
In our situation, we are running on a Katacoda VM, and Katacoda uses a proxy, which gives us a single machine endpoint. So, we will also use the ingress to create URL paths for our services. For example, in this scenario we will use the root path (i.e., /) for the frontend Pet Clinic UI, and /petclinic/api for the Pet Clinic backend microservice.
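To make the path mapping concrete, an Ingress resource along these lines could route / to the UI service and /petclinic/api to the backend. The service names and ports below are hypothetical placeholders, not the scenario's actual values.

```shell
# Example Ingress manifest routing the two URL paths described above.
# Service names and ports are hypothetical placeholders.
cat > petclinic-ingress-example.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: petclinic-ingress
spec:
  rules:
    - http:
        paths:
          - path: /petclinic/api   # backend microservice
            pathType: Prefix
            backend:
              service:
                name: petclinic-backend
                port:
                  number: 8080
          - path: /                # frontend Pet Clinic UI
            pathType: Prefix
            backend:
              service:
                name: petclinic-frontend
                port:
                  number: 80
EOF
# kubectl apply -f petclinic-ingress-example.yaml   # would create the Ingress
```

The more specific /petclinic/api path is listed first; the root path acts as the catch-all for everything else.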
What is the Cassandra Operator?
The Cassandra Operator simplifies managing your Cassandra cluster in a couple of ways. First, the operator introduces higher-level constructs such as datacenters that don't occur in native Kubernetes. Second, the operator monitors your Cassandra cluster to keep it running in a state you specify. If a Cassandra node were to fail, the operator would create a replacement node. If you decide to change the number of nodes in your Cassandra cluster (either increasing or decreasing), the operator manages the change for you. Read more about the Cassandra operator here.

Kubernetes operators use two Kubernetes constructs: Custom Resource Definitions (CRDs) and Operators. Kubernetes uses CRDs to introduce higher-level abstractions (e.g., datacenters) into the Kubernetes system. Operators use these CRDs with domain-specific logic to allow you to perform domain-specific tasks. Additionally, operators have a control loop which monitors the operators' resources to keep them running in the specified state.
Is the Cassandra Operator the same thing as a Cassandra cluster?
The Kubernetes Cassandra operator is NOT a Cassandra cluster - it's a Kubernetes construct that controls Cassandra clusters. The operator performs tasks like monitoring the Cassandra nodes so that if one fails, it can replace it. So, once you have the Kubernetes Cassandra operator installed, you still need to deploy a Cassandra cluster for the operator to manage. In this example we are deploying a one-node Cassandra cluster. Normally, you would have at least three nodes in your Cassandra cluster, but for educational purposes (and because of the limited resources within the Katacoda environment) we will only provision a single node.
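For illustration, asking the operator for a one-node Cassandra cluster typically means applying a custom resource. The manifest below follows the general shape of a CassandraDatacenter resource, but the exact apiVersion, field names, and version string are assumptions here; check the operator's documentation for the real schema.

```shell
# Example custom resource asking the operator for a one-node Cassandra
# datacenter. Field names follow the general cass-operator shape but are
# assumptions; verify against the operator's documentation.
cat > cassandra-dc-example.yaml <<'EOF'
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cassandra-kub-cluster
  serverType: cassandra
  serverVersion: "3.11.6"
  size: 1          # one Cassandra node; production would use three or more
EOF
# kubectl apply -f cassandra-dc-example.yaml   # operator then creates the node
```

Scaling the Cassandra cluster later would be a matter of changing size and re-applying; the operator's control loop reconciles the difference.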
What is the Pet Clinic app?
The Pet Clinic app is the reference app for the Java Spring framework. This app manages data for a simple pet clinic business. You can read more here.
What is connecting the Pet Clinic app to Cassandra?
Within the Kubernetes environment, the Cassandra nodes run within their own pods, which are separate from the app pods. So, as the Pet Clinic app initializes, it must establish a connection between its pods and the Cassandra pod. This can take a little bit of time when the Pet Clinic app first starts talking to Cassandra, but after that, the connection is fast.
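Concretely, app pods usually reach Cassandra through a Kubernetes service DNS name rather than a pod IP. Below is a sketch of how contact-point settings might be injected into the Pet Clinic deployment; the environment variable names and the service DNS name are hypothetical, not the scenario's actual configuration.

```shell
# Sketch of environment variables an app deployment might use to locate
# Cassandra. The service DNS name and variable names are hypothetical.
cat > petclinic-env-example.yaml <<'EOF'
env:
  - name: CASSANDRA_CONTACT_POINTS
    # Kubernetes DNS name of the Cassandra service: <service>.<namespace>.svc...
    value: cassandra-kub-cluster-dc1-service.default.svc.cluster.local
  - name: CASSANDRA_PORT
    value: "9042"   # Cassandra's native protocol port
EOF
```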
It's a fair amount of work to configure and deploy all the resources necessary for this scenario, so please be patient as it completes.
Woot! We have the complete Pet Clinic app running.
Click the following to create the KinD cluster.
kind create cluster --name cassandra-kub-cluster --config kind-config.yaml
We use kubectl to interact with the Kubernetes cluster. Let's try it out by inspecting the cluster nodes. Click the following.
watch kubectl get nodes
Once all nodes are Ready, click the following to send a Ctrl-C to exit the watch loop.
^C