Difficulty: Intermediate
Estimated Time: 15 minutes

Apache Kafka uses a custom protocol on top of TCP/IP for communication between applications and the Kafka cluster. Clients are available in many programming languages, but in some scenarios using a native client is not possible. In those cases, you can access Kafka over the standard HTTP/1.1 protocol instead.

The Red Hat AMQ Streams Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster running on AMQ Streams. Applications can perform typical operations such as:

  • Sending messages to topics
  • Subscribing to one or more topics
  • Receiving messages from the subscribed topics
  • Committing offsets related to the received messages
  • Seeking to a specific position
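
Each of the operations above maps to an endpoint of the bridge's REST API. As a rough sketch (the bridge host my-bridge:8080, the topic my-topic, and the consumer names are illustrative placeholders, not values from this scenario), a producer and consumer exchange over HTTP looks like this:

# Send two messages to the topic "my-topic"
curl -X POST http://my-bridge:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"k1","value":"hello"},{"value":"world"}]}'

# Create a consumer named "my-consumer" in the consumer group "my-group"
curl -X POST http://my-bridge:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","auto.offset.reset":"earliest"}'

# Subscribe the consumer to the topic
curl -X POST http://my-bridge:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics":["my-topic"]}'

# Poll the subscribed topics for records
curl -X GET http://my-bridge:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'

Committing offsets and seeking follow the same pattern, with POST requests to the consumer instance's offsets and positions endpoints.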

As with AMQ Streams, the Kafka Bridge is deployed into an OpenShift cluster using the AMQ Streams Cluster Operator, or installed on Red Hat Enterprise Linux using downloaded files.

HTTP integration

In the following tutorial, you will deploy the Kafka Bridge and use it to perform several common operations on your Apache Kafka cluster over HTTP.

Exposing the Apache Kafka cluster to clients using HTTP enables scenarios where use of the native clients is not desirable. Such situations include resource constrained devices, network availability and security considerations. Interaction with the bridge is similar to the native Apache Kafka clients but using the semantics of an HTTP REST API.

The general availability of the HTTP Bridge in Red Hat Integration enhances the options available to developers when building applications with Apache Kafka.


Exposing Apache Kafka through the HTTP Bridge

Step 1 of 4

Deploying the HTTP Bridge

Deploying the bridge on OpenShift is straightforward using the KafkaBridge custom resource provided by the Red Hat AMQ Streams Cluster Operator.

Logging in to the Cluster using the OpenShift CLI

Before creating any applications, log in to the OpenShift cluster. To log in from the terminal, run:

oc login -u developer -p developer

This will log you in using the credentials:

  • Username: developer
  • Password: developer

Use the same credentials to log in to the web console.

Switching to the kafka namespace

Switch to the namespace (OpenShift project) called kafka, where the Cluster Operator manages the Kafka resources:

oc project kafka

Deploying Kafka Bridge to your OpenShift cluster

The deployment uses a YAML file to provide the specification to create a KafkaBridge resource.

Click the link below to open the custom resource (CR) definition for the bridge:

  • kafka-bridge.yaml

The bridge must connect to the Apache Kafka cluster; its address is specified in the bootstrapServers property. Internally, the bridge uses a native Apache Kafka consumer and producer to interact with the cluster.
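
For reference, a minimal KafkaBridge resource looks like the following sketch. The exact contents of kafka-bridge.yaml in this scenario may differ; the apiVersion depends on your AMQ Streams version, and the bootstrapServers value assumes a Kafka cluster named my-cluster running in the same namespace:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  # Bootstrap address of the Kafka cluster (assumes a cluster named my-cluster)
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # Port on which the bridge exposes its REST API
  http:
    port: 8080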

For information about configuring the KafkaBridge resource, see Kafka Bridge configuration.

Apply the custom resource to deploy the Kafka Bridge:

oc -n kafka apply -f /root/projects/http-bridge/kafka-bridge.yaml

The Kafka Bridge pod should be deployed after a few moments. To watch the pod status, run the following command:

oc get pods -w -l app.kubernetes.io/name=kafka-bridge

You will see the pod status change to Running. The output should look similar to the following:

NAME                                READY   STATUS              RESTARTS   AGE
my-bridge-bridge-6b6d9f785c-dp6nk   0/1     ContainerCreating   0          5s
my-bridge-bridge-6b6d9f785c-dp6nk   0/1     ContainerCreating   0          12s
my-bridge-bridge-6b6d9f785c-dp6nk   0/1     Running             0          27s
my-bridge-bridge-6b6d9f785c-dp6nk   1/1     Running             0          45s

This step might take a couple of minutes.

Press Ctrl+C to stop the process.

^C
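
Before exposing the bridge, you can confirm that the deployment created an internal service. Because the KafkaBridge resource in this scenario is named my-bridge, the operator names the service my-bridge-bridge-service:

oc -n kafka get svc my-bridge-bridge-service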

Creating an OpenShift route

After deployment, the Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by using one of the following features:

  • Services of types LoadBalancer or NodePort
  • Kubernetes Ingress
  • OpenShift Routes

An OpenShift route is a resource that allows external access over HTTP/HTTPS to internal services such as the Kafka Bridge. We will use this approach in our example.

Run the following command to expose the bridge service as an OpenShift route:

oc expose svc my-bridge-bridge-service

When the route is created, the Kafka Bridge is reachable at https://my-bridge-bridge-service-kafka.[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com. You can now use any HTTP client to interact with the REST API exposed by the bridge, sending and receiving messages without needing the native Kafka protocol.
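
For example, assuming a topic named my-topic exists in the cluster (the topic name here is illustrative), you could send a message through the route with a single request:

curl -X POST https://my-bridge-bridge-service-kafka.[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"value":"message sent over HTTP"}]}'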