Difficulty: Beginner
Estimated Time: 20 minutes

This scenario takes you through the basics of deploying a logging solution on Kubernetes. The premise is that all log streams generated by the containers are aggregated into a central datastore. From that datastore, queries and filters produce views of the aggregated logs.

Containers should only produce logs as event streams and leave the aggregation and routing to other services on Kubernetes. This pattern is emphasized as factor 11, Logs, of The Twelve-Factor App methodology.

Commonly, the three components Elasticsearch, Fluentd, and Kibana (EFK) are combined to form the stack. Some stacks use Fluent Bit instead of Fluentd; Fluent Bit is functionally similar, but lighter in features and size. Other solutions use Logstash (ELK) in place of Fluentd.

In the following steps you will learn:

  • How to deploy Elasticsearch, Fluent Bit, and Kibana
  • How to generate log events and query them in Kibana

Forwarding: Fluent Bit

fluentbit.io

Fluentd is an open source data collector that lets you unify data collection and consumption for better use and understanding of data. In this stack, Fluent Bit runs on each node (as a DaemonSet), collects all the logs from /var/log, and routes them to Elasticsearch.
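To make the DaemonSet pattern concrete, here is a minimal sketch of what such a manifest might look like. The names, namespace, image tag, and environment values are illustrative assumptions, not the scenario's exact manifest; a production deployment (for example, via the official Helm chart) carries considerably more configuration.

```yaml
# Illustrative Fluent Bit DaemonSet sketch; names and values are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9
        env:
        - name: FLUENT_ELASTICSEARCH_HOST   # where collected logs are routed
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log               # node log files are read from here
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because a DaemonSet schedules one pod per node, this is how the collector ends up alongside every container's log files without any per-application setup.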

This scenario uses Fluent Bit, the lighter variation of Fluentd. Perhaps EfK, with a lowercase 'f', is apropos. Alen Komljen covers the reasons why in his blog.

Another variation for logging is the ELK stack, which includes Logstash as a substitute for the Fluent aggregation solution.

Aggregation: ElasticSearch

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
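As an example of that HTTP interface with schema-free JSON documents, a full-text search against an index of aggregated log records might look like the request below. The index name and field are assumptions; the actual index depends on how the forwarder is configured.

```json
POST /fluent-bit/_search
{
  "query": {
    "match": { "log": "error" }
  }
}
```

Elasticsearch returns matching documents ranked by relevance, which is what Kibana builds its views on.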

Viewing: Kibana

Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.
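Behind those visualizations, Kibana filters the indexed documents with a search query. A Lucene-style query in the Discover view might look like the line below; the field names are assumptions that depend on how Fluent Bit tags the records.

```
kubernetes.namespace_name:"default" AND log:*error*
```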

For Kubernetes, there are a wide variety of ways to assemble an EFK stack, especially for production or business-critical clusters. Some solutions leverage an Elasticsearch service outside the cluster, perhaps offered by a cloud provider. For any solution deployed to Kubernetes, it's recommended to use Helm charts. Even with Helm charts, there are a variety of solutions evolving and competing with each other.
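As one hedged example of the Helm route, the Elastic and Fluent projects publish charts in their own repositories. The commands below are a sketch assuming a running cluster with Helm configured; chart names, versions, and required values vary between releases, so consult each chart's documentation before relying on them.

```shell
# Add the commonly published chart repositories (URLs as documented upstream)
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# Install the three components; release names here are illustrative
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
helm install fluent-bit fluent/fluent-bit
```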

However, this scenario aims to show how you can get a working stack up with reasonable ease, so you can see how the components are installed and how they work with each other.

This stack is a good example of how Kubernetes can bring distinct tools together so they work in concert for a larger solution; in this case, log aggregation. Because Fluent Bit is installed as a DaemonSet, it runs on every node, dutifully collecting the log streams and sending them to Elasticsearch, where in turn Kibana offers a viewport into specific data based on your queries.

It's important that your application also logs transaction correlation IDs as a way to gather the log events of a known transaction. The same holds for transaction tracing (a separate Katacoda scenario, Transaction Tracing).
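Following the event-stream principle above, an application only needs to write a structured line to stdout; Fluent Bit picks it up from the container's log stream. A minimal sketch, assuming JSON log lines; the field names such as correlation_id are illustrative conventions, not a required schema.

```shell
# Emit one structured log event to stdout; the collector handles the rest.
# All field names here are illustrative assumptions.
echo '{"level":"info","correlation_id":"9f1c2d34","service":"checkout","msg":"payment authorized"}'
```

With a consistent correlation_id field, a single Kibana filter on that field reassembles every event of one transaction across services.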

Each of the three components is highly configurable, and this scenario provides a starting point for getting this observability pattern ready for production.

Lessons Learned

With these steps you have learned:

  • ✔ how to configure and deploy Elasticsearch, Fluent Bit, and Kibana on Kubernetes
  • ✔ how to generate log events and query them with Kibana


For a deeper understanding of these topics and more join
Jonathan Johnson
at various conferences, symposiums, workshops, and meetups.

Software Architectures ★ Speaker ★ Workshop Hosting ★ Kubernetes & Java Specialist

Logging with EFK

Step 1 of 7

Your Kubernetes Cluster

For this scenario, Katacoda has just started a fresh Kubernetes cluster for you. Verify it's ready for your use.

kubectl version --short && \
kubectl get componentstatus && \
kubectl get nodes && \
kubectl cluster-info

The Helm package manager used for installing applications on Kubernetes is also available.

helm version --short

Kubernetes Dashboard

You can administer your cluster with the kubectl CLI tool or use the visual Kubernetes Dashboard. Use this script to access the protected Dashboard.
