Difficulty: Intermediate
Estimated Time: 20 minutes

Warning: Under Construction

For cloud native apps, you want API access to Cassandra. Stargate gives you secure REST and GraphQL APIs.

This scenario picks up where the scenario named Managing Cassandra Clusters in Kubernetes Using the Cass-Operator leaves off. If you have not yet worked through that scenario, you may want to check it out here.

In this scenario, we'll:

  • Set up a Cassandra cluster running on Kubernetes and install the example Pet Clinic app
  • Install Stargate and get an auth token for accessing the APIs
  • Use Stargate's REST API to access Cassandra
  • Use Stargate's GraphQL Playground to explore the GraphQL interface
  • Use cURL to access Stargate's GraphQL interface

Note: As shown in previous scenarios, the Pet Clinic example app provides its own backend microservice. To keep this scenario relatively brief, we won't modify the Pet Clinic app to replace that backend with the Stargate APIs. But, given more time, we could!
However, we will use the Pet Clinic table named petclinic_reference_lists. One row of this table holds the list of pet types as a Cassandra set, and the primary key value for this row is pet_type. We'll use this table so you can see how the Stargate APIs could serve as a viable backend microservice for the app.
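
As a quick preview, here's roughly what reading that row through Stargate could look like. This is a minimal sketch: the hostnames, ports, keyspace name (spring_petclinic), and credentials below are assumptions about the deployment, and we'll build the real commands during the scenario.

    # Get an auth token from Stargate's auth service (port 8081 by default)
    curl -s -X POST http://localhost:8081/v1/auth \
      -H 'Content-Type: application/json' \
      -d '{"username": "cassandra", "password": "cassandra"}'
    # => {"authToken": "..."}

    # Use the token to read the pet_type row from petclinic_reference_lists
    # through the REST API (port 8082 by default)
    curl -s http://localhost:8082/v2/keyspaces/spring_petclinic/petclinic_reference_lists/pet_type \
      -H "X-Cassandra-Token: $AUTH_TOKEN"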


You're going to love Stargate and its APIs!

In this scenario, we learned how to:

  • Set up a Cassandra cluster running on Kubernetes and install the example Pet Clinic app
  • Install Stargate and get an auth token for accessing the APIs
  • Use Stargate's REST API to access Cassandra
  • Use Stargate's GraphQL Playground to explore the GraphQL interface
  • Use cURL to access Stargate's GraphQL interface

If you want to know more about Stargate, here's the documentation.
Here are more scenarios on Stargate's APIs.

What could be better than using the Stargate APIs?

Well, how about if Stargate were already installed!
Hold on! Is all this just a setup so we can see how cool K8ssandra really is?

Access Cassandra Via Stargate APIs

Step 1 of 8

Install Cassandra and the PetClinic App

In this step, we'll get set up by creating a Kubernetes cluster, installing a Cassandra cluster (using the Cassandra Operator), and then installing the example Pet Clinic app.

Here are the specific pieces we will set up (FYI - it takes about five minutes to complete all eight steps):

  1. KinD
    What is KinD?
    KinD is a development tool we are using to create a Kubernetes cluster running inside a Docker container. As you know, most people use Kubernetes to manage systems of Docker containers. So, KinD is a Docker container that runs Kubernetes to manage other Docker containers - it's a bit recursive.

    We use KinD so we can create a multi-node Kubernetes cluster on a single machine. KinD is great because it's relatively lightweight, easy to install, and easy to use.

    For your reference, here are the commands we used to install KinD.

    curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
    chmod +x ./kind
    mv ./kind /usr/local/bin/kind

    Read more here.
  2. Helm
    What is Helm?
    Helm is a package manager (like apt or yum) for Kubernetes. Helm allows you to install and update Kubernetes applications. Helm uses charts. A chart is a specification file we use to tell Helm how to do the installation. Helm downloads the charts from a Helm repo. You can read more here.
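    For example, installing a chart typically looks something like this (the repository name, URL, and chart names below are placeholders, not the exact ones this scenario uses):

    # Register a chart repository and refresh the local index
    helm repo add example-repo https://charts.example.com
    helm repo update

    # Install a chart as a named release, optionally overriding values
    helm install my-release example-repo/some-chart --set someKey=someValue

    # List installed releases
    helm list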
  3. Four-node Kubernetes cluster using KinD
    Why do we need a four-node Kubernetes cluster?
    First, it is important to understand we are working with two types of clusters: a Kubernetes cluster and a Cassandra cluster. The Kubernetes cluster is a set of machines called Kubernetes nodes. The Cassandra cluster is a set of those Kubernetes nodes that host and run the Cassandra software.

    From the Kubernetes website: A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

    We are setting up a four-node Kubernetes cluster so that we have one admin node and three worker nodes. We'll use one worker node for the single-node Cassandra cluster, another worker node for the Pet Clinic frontend software and the third node for the Pet Clinic backend software. We are using KinD to create the Kubernetes cluster. You can review the KinD cluster configuration in this file.
    Open kind-config.yaml

    In this file you see that we are creating four nodes: a single control-plane node and three worker nodes. Every Kubernetes cluster needs at least one control-plane node to manage the cluster. The worker nodes are where we deploy our Kubernetes resources. The other details in the file are specific to KinD.
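    For reference, a minimal KinD config with one control-plane node and three workers looks roughly like this sketch (the exact apiVersion depends on the KinD release, and our kind-config.yaml includes a few KinD-specific extras not shown here):

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    - role: worker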
  4. Nginx ingress controller
    What is an ingress and how does it fit into the Kubernetes architecture?
    An ingress provides access from outside a Kubernetes cluster to the components inside the cluster. The controller we are deploying manages instances of ingresses. We'll deploy an instance of an ingress when we install the app.

    An ingress usually sits in front of a Kubernetes service.
    As a brief refresher, the Kubernetes architecture consists of:
    • Containers - usually Docker containers that provide an isolated environment for a program
    • Pods - encapsulate one or more containers
    • Deployments - encapsulate the replication of pods
    • Services - often work as load balancers for a deployment of pods
    • Nodes - machines for hosting Pods

    Here's a diagram of these components that shows the position of the ingress. Note that we left out the nodes because the diagram gets too cluttered, but you can imagine that Kubernetes maps the various components to nodes/machines within the cluster (you can click on the image to enlarge it).

    Kubernetes Architecture
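    As a concrete (hypothetical) illustration, an ingress rule that routes external HTTP traffic to a service might look something like the sketch below; the names and ports are placeholders, the real rule for the app lives in its manifest, and the apiVersion varies with the Kubernetes version:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-frontend-service
                port:
                  number: 80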
  5. Cassandra Operator (and associated StorageClass)
    What is the Cassandra Operator and the StorageClass?
    The Cassandra Operator is a Kubernetes package that simplifies managing your Cassandra cluster on Kubernetes in a couple of ways.
    1. The operator introduces higher level constructs such as datacenters that don't occur in native Kubernetes
    2. The operator monitors your Cassandra cluster to keep it running in a state you specify
    If a Cassandra node were to fail, the Cassandra Operator would create a replacement node. Also, if you decide to change the number of nodes in your Cassandra cluster (either increasing or decreasing), the operator manages the change for you.
    Read more about the Cassandra Operator here.
    Kubernetes operators use two Kubernetes constructs:
    • Custom Resource Definitions (CRD) - these introduce higher level abstractions (e.g., datacenters)
    • Custom control-loop logic - introduces domain-specific logic for managing domain-specific resources (e.g., Cassandra rolling restarts)

    Many Kubernetes resources are stateless and ephemeral, which allows Kubernetes to replace instances of these resources at will. However, this approach cannot support a database. Databases require statefulness/persistence, which is what a StorageClass provides. StorageClasses allow you to specify the quality of service for your stateful storage.
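    As an example, a StorageClass for KinD's bundled local-path provisioner might look roughly like this (the class name and provisioner are assumptions that depend on your environment):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: server-storage
    provisioner: rancher.io/local-path
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete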

  6. One-node Cassandra cluster
    Is the Cassandra Operator the same thing as a Cassandra cluster?
    The Kubernetes Cassandra Operator is NOT a Cassandra cluster - it's a Kubernetes construct that controls Cassandra clusters. The operator performs tasks like monitoring the Cassandra nodes so that if one fails, it can replace it. So, once you have the Kubernetes Cassandra Operator installed, you still need to deploy a Cassandra cluster for the operator to manage.

    In this example we are deploying a one-node Cassandra cluster. Normally, you would have at least three nodes in your Cassandra cluster, but for educational purposes (and because of the limited resources within the Katacoda environment) we will only provision a single node.
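    To give you a feel for the higher-level constructs the operator adds, a single-node datacenter is declared with a CassandraDatacenter resource, roughly like the sketch below (the names, Cassandra version, and storage request are illustrative; the manifest we actually apply may differ):

    apiVersion: cassandra.datastax.com/v1beta1
    kind: CassandraDatacenter
    metadata:
      name: dc1
    spec:
      clusterName: cluster1
      serverType: cassandra
      serverVersion: "3.11.7"
      size: 1                      # a single Cassandra node
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: server-storage
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi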
  7. Java Spring Pet Clinic example app
    What is the Pet Clinic app?
    The Pet Clinic app is the reference app for the Java Spring Framework. This app manages data for a simple pet clinic business. You can read more here.
    The Kubernetes Pet Clinic app consists of five components:
    • Frontend deployment - provides a UI for the app
    • Frontend service - directs traffic to the frontend pods and works as a load balancer
    • Backend deployment - provides a data microservice for the frontend
    • Backend service - directs traffic to the backend pods and can work as a load balancer
    • Ingress instance - Directs traffic from outside the Kubernetes cluster to the services
    You can check out the manifest for deploying these components.
    Open petclinic.yaml
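
    As a rough sketch, each deployment/service pair in that manifest follows the usual Kubernetes pattern, something like this (the names, image, and ports below are placeholders, not the exact values in petclinic.yaml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: petclinic-frontend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: petclinic-frontend
      template:
        metadata:
          labels:
            app: petclinic-frontend
        spec:
          containers:
          - name: frontend
            image: example/petclinic-frontend:latest
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: petclinic-frontend-service
    spec:
      selector:
        app: petclinic-frontend
      ports:
      - port: 80
        targetPort: 8080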

  8. Connect the Pet Clinic app to the Cassandra cluster
    How does the Pet Clinic app connect to Cassandra?
    Within the Kubernetes environment, the Cassandra nodes run in their own pods, which are separate from the app pods. So, as the Pet Clinic app initializes, it must establish a connection between its pods and the Cassandra pod. This can take a little bit of time when the Pet Clinic app first starts talking to Cassandra, but after that, the connection is fast.
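
    For the curious, the Cassandra Operator exposes the cluster through a Kubernetes service and stores the generated superuser credentials in a secret, so the backend typically connects over in-cluster DNS. A rough sketch (the exact service, secret, and namespace names are assumptions that depend on the cluster configuration):

    # Cassandra is reachable inside the cluster at a service the operator creates:
    #   <clusterName>-<datacenterName>-service.<namespace>.svc.cluster.local:9042
    # Credentials live in the <clusterName>-superuser secret, for example:
    kubectl get secret cluster1-superuser -o jsonpath='{.data.username}' | base64 -d
    kubectl get secret cluster1-superuser -o jsonpath='{.data.password}' | base64 -d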

It's a fair amount of work to configure and deploy all the resources necessary for this scenario, so please be patient as it completes.

When all eight installation steps are complete, you can launch the Pet Clinic app to see that it works.


You've seen the Pet Clinic app in previous scenarios, so just click on the Pet Types tab to show that the frontend UI is talking to the backend microservice, and that the backend is connected to the Cassandra database. When you see the list of Pet Types, you know everything is working correctly!

Woot! We have the complete Pet Clinic app running.