This scenario will show how to deploy and connect to Apache Kafka on Kubernetes.
What is Apache Kafka?
Apache Kafka has become the leading platform for building real-time data pipelines. Today, Kafka is heavily used for developing event-driven applications, where it lets services communicate with each other through events. Using Kubernetes for this type of workload requires adding specialized components such as Kubernetes Operators and connectors to bridge the rest of your systems and applications to the Kafka ecosystem.
Apache Kafka is a distributed data streaming platform that is a popular event processing choice. It can handle publishing, subscribing to, storing, and processing event streams in real-time. Apache Kafka supports a range of use cases where high throughput and scalability are vital, and by minimizing the need for point-to-point integrations for data sharing in certain applications, it can reduce latency to milliseconds.
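As a concrete illustration of the publish/subscribe model, Kafka ships with console scripts for producing and consuming messages. A sketch of how they are typically used (the broker address, topic name, and script paths are assumptions, not part of this scenario):

```shell
# Publish messages to a topic (type a line, press Enter to send)
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# In another terminal, subscribe to the same topic and read from the beginning
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
```

Every line typed into the producer appears in the consumer, demonstrating the decoupling between the two: neither needs to know about the other, only about the broker and the topic.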
Strimzi: Kubernetes Operator for Apache Kafka
Strimzi simplifies the process of running Apache Kafka in a Kubernetes cluster. Strimzi is a CNCF Sandbox project which provides the leading community Operators for deploying and managing the components to run an Apache Kafka cluster on Kubernetes in various deployment configurations. This includes the Kafka brokers, Apache ZooKeeper, MirrorMaker and Kafka Connect.
Red Hat Integration
To respond to business demands quickly and efficiently, you need a way to integrate applications and data spread across your enterprise. Red Hat AMQ — based on open source communities like Apache ActiveMQ and Apache Kafka — is a flexible messaging platform that delivers information reliably, enabling real-time integration, and connecting the Internet of Things (IoT).
AMQ Streams is a Red Hat Integration component that supports Apache Kafka on OpenShift. Through AMQ Streams, Kafka operates as an “OpenShift-native” platform through the use of powerful AMQ Streams Operators that simplify the deployment, configuration, management, and use of Apache Kafka on OpenShift.
- Cluster Operator: Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator
- Entity Operator: Comprises the Topic Operator and User Operator
- Topic Operator: Manages Kafka topics
- User Operator: Manages Kafka users
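To give an idea of what the Cluster Operator consumes, a Kafka cluster is declared as a Kafka custom resource that the Operator reconciles. A minimal sketch (the cluster name, replica counts, and storage type are illustrative assumptions, and the apiVersion and listener syntax differ between AMQ Streams/Strimzi versions):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3            # number of Kafka broker pods
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral      # non-persistent storage, for demos only
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}      # enables the Topic Operator
    userOperator: {}       # enables the User Operator
```

Applying a resource like this is all it takes; the Cluster Operator creates and manages the broker and ZooKeeper pods for you.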
In this scenario, you created and started an Apache Kafka cluster using the Red Hat AMQ Streams Operators, and sent and received messages on a Kafka topic using the command-line scripts.
New organizations are adopting Apache Kafka as an event backbone every day. CNCF projects like Strimzi make it easier to access the benefits of Kubernetes, and deploy Apache Kafka workloads in a cloud-native way.
For those who want an open source development model with enterprise support, Red Hat Integration lets you deploy your Kafka-based event-driven architecture on Red Hat OpenShift, the enterprise Kubernetes. Red Hat AMQ Streams, Debezium, and the Apache Camel Kafka Connect connectors are all available with a Red Hat Integration subscription.
To learn more and get started, see:
Apache Kafka Basics
Installing Red Hat AMQ Streams Operators
Red Hat AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This tutorial provides instructions for deploying a working environment of AMQ Streams.
Logging in to the Cluster via OpenShift CLI
Before creating any applications, log in as the admin user.
To log in to the OpenShift cluster from the Terminal run:
oc login -u admin -p admin
This logs you in with the username admin and password admin. Use the same credentials to log in to the web console.
Creating your own namespace
To create a new project (namespace) called kafka for the AMQ Streams Kafka Cluster Operator, run:
oc new-project kafka
Install AMQ Streams Operators
AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka.
Deploy the Operator Lifecycle Manager (OLM) OperatorGroup and Subscription to install the Operator in the previously created namespace:
oc -n kafka apply -f /opt/operator-install.yaml
You will see the following result:
operatorgroup.operators.coreos.com/streams-operatorgroup created
subscription.operators.coreos.com/amq-streams created
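For reference, the file applied above bundles an OperatorGroup and a Subscription. A rough sketch of what such a manifest can look like (the channel and catalog source names are assumptions and vary between clusters and AMQ Streams versions):

```yaml
# OperatorGroup: scopes the Operator to the kafka namespace
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: streams-operatorgroup
  namespace: kafka
spec:
  targetNamespaces:
    - kafka
---
# Subscription: tells OLM which Operator to install and from where
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: kafka
spec:
  name: amq-streams
  channel: stable                      # update channel (assumption)
  source: redhat-operators             # catalog source (assumption)
  sourceNamespace: openshift-marketplace
```

OLM then resolves the Subscription and creates the Cluster Operator Deployment in the kafka namespace.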
You can also deploy the AMQ Streams Operator from OperatorHub in the OpenShift web console.
Check the Operator deployment
Check that the Operator deployment is running.
To watch the status of the pods run the following command:
oc -n kafka get pods -w
You will see the status of the pod for the Cluster Operator change to Running:
NAME                                                   READY   STATUS              RESTARTS   AGE
amq-streams-cluster-operator-v1.5.3-59666d98cb-8ptlz   0/1     ContainerCreating   0          10s
amq-streams-cluster-operator-v1.5.3-59666d98cb-8ptlz   0/1     Running             0          18s
amq-streams-cluster-operator-v1.5.3-59666d98cb-8ptlz   1/1     Running             0          34s
Press Ctrl+C to stop the process.
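If you prefer not to watch interactively, you can also block until the Operator Deployment reports ready. A sketch (the label selector is an assumption and may differ between AMQ Streams versions):

```shell
# Wait up to 5 minutes for the Cluster Operator Deployment to become Available
oc -n kafka wait deployment -l name=amq-streams-cluster-operator \
  --for=condition=Available --timeout=300s
```

This is handy in scripts, where the command exits non-zero if the timeout is reached.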