Difficulty: Beginner
Estimated Time: 20 minutes


Welcome to the Digital Academy "Kubernetes CNCF" series. This is Module 4 - Logging with EFK.

This scenario takes you through the basics of deploying a logging solution on Kubernetes. The premise is that all the log streams generated by the containers are aggregated into a central datastore. From that datastore, queries and filters produce views of the aggregated logs.

Containers should produce logs only as event streams and leave the aggregation and routing to other services on Kubernetes. This pattern is emphasized as factor 11, Logs, of the Twelve-Factor App methodology.

Commonly the three components Elasticsearch, Fluentd, and Kibana (EFK) are combined into the stack. Some stacks use Fluent Bit instead of Fluentd; Fluent Bit is largely equivalent in function, but lighter in features and size. Other solutions use Logstash (ELK) instead of Fluentd.

In the following steps you will learn:

  • How to deploy Elasticsearch, Fluent Bit, and Kibana
  • How to generate log events and query them in Kibana

Forwarding: Fluent Bit

Fluent Bit

- fluentbit.io

Fluent Bit is an open source data collector that lets you unify data collection and consumption for better use and understanding of data. In this stack Fluent Bit runs on each node (as a DaemonSet), collects all the logs from /var/log, and routes them to Elasticsearch.
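The node-level collection can be sketched as a DaemonSet that mounts the node's log directory. This is a hypothetical, minimal manifest, not the scenario's actual deployment; the namespace, image tag, and Elasticsearch host name are illustrative assumptions:

```shell
# Hypothetical minimal Fluent Bit DaemonSet sketch. The namespace, image
# tag, and Elasticsearch service name are illustrative assumptions.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9
        env:
        - name: FLUENT_ELASTICSEARCH_HOST   # assumed to be read by the image's ES output config
          value: elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                    # node log directory Fluent Bit tails
EOF
```

Because it is a DaemonSet, the scheduler places one Fluent Bit pod on every node, so no node's container logs are missed.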

This example uses the lighter variation of Fluentd called Fluent Bit. Perhaps EfK, with a lowercase 'f', is apropos. Alen Komljen covers the reasons why in his blog.

Another variation for logging is the ELK stack, which substitutes Logstash for the Fluent aggregation solution.

Aggregation: ElasticSearch

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
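To make the "HTTP interface and schema-free JSON documents" part concrete, here is a hedged sketch of indexing and searching a document with curl. The endpoint, port-forward command, and index name are assumptions for illustration, not this scenario's exact setup:

```shell
# Assumes Elasticsearch is reachable on localhost:9200, e.g. via:
#   kubectl port-forward svc/elasticsearch 9200:9200
# Index a schema-free JSON document (no mapping defined up front):
curl -s -X POST 'http://localhost:9200/app-logs/_doc' \
  -H 'Content-Type: application/json' \
  -d '{"level": "error", "message": "payment failed", "service": "checkout"}'

# Full-text search for it across the index:
curl -s 'http://localhost:9200/app-logs/_search?q=message:payment'
```

Kibana issues queries like this second request on your behalf and renders the hits.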

Viewing: Kibana

Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

For Kubernetes there is a wide variety of ways to assemble EFK, especially for production or business-critical clusters. Some solutions leverage an Elasticsearch service outside the cluster, perhaps offered by a cloud provider. For any solution deployed to Kubernetes it's recommended to use Helm charts. Even with Helm charts, a variety of solutions are evolving and competing with each other.
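As a sketch of the Helm route, the upstream Elastic and Fluent projects publish charts for each component. The repo URLs, chart names, and namespace below are assumptions based on those projects, not necessarily the charts this scenario installs:

```shell
# Illustrative Helm installs; repo URLs and chart names are assumptions
# based on the upstream Elastic and Fluent chart repositories.
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

helm install elasticsearch elastic/elasticsearch --namespace logging --create-namespace
helm install kibana elastic/kibana --namespace logging
helm install fluent-bit fluent/fluent-bit --namespace logging
```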

However, this scenario is aimed to show how you can get a working stack up with reasonable ease so you can see how the components are installed and work with each other.

For more information, see the EFK documentation.

Developer(s): William Hearn and Zachary Seguin


This stack is a good example of how Kubernetes can bring distinct tools together so they work in concert on a larger solution, in this case log aggregation. Because Fluent Bit is installed as a DaemonSet, it runs on every node, dutifully collecting the log streams and sending them to Elasticsearch, where in turn Kibana offers a viewport into specific data based on your queries.

It's important that your application also logs transaction correlation IDs as a way to gather log events from a known transaction. This is also true for transaction tracing (covered in a separate Katacoda scenario, Transaction Tracing).
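As a minimal sketch of what such a log event might look like, an application can emit one JSON object per line to stdout with the correlation ID attached. The field names here are hypothetical, not a required schema:

```shell
# Hypothetical structured log line; field names are illustrative, not a
# required schema. One JSON object per line on stdout is what the node's
# log collector picks up from the container runtime.
log_event() {
  # $1 = correlation ID, $2 = level, $3 = message
  printf '{"timestamp":"%s","correlation_id":"%s","level":"%s","message":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$3"
}

cid="3f2a9c1e-order-42"   # normally generated once per incoming request
log_event "$cid" "info" "order received"
log_event "$cid" "error" "payment declined"
```

In Kibana, filtering on `correlation_id` then pulls together every event from that one transaction, across all services that logged it.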

Each one of the three components is highly configurable and this scenario provides a starting point for getting this observability pattern ready for production.

Lessons Learned

With these steps you have learned:

  • How to configure and deploy Elasticsearch, Fluent Bit, and Kibana on Kubernetes
  • How to generate log events and query them with Kibana


Module 4 - Logging with EFK

Step 1 of 6

Your Kubernetes Cluster

As you can see, your Kubernetes cluster has started. Verify it's ready for use.

kubectl version && kubectl cluster-info && kubectl get nodes

Verify the Kubernetes cluster is empty.

kubectl get deployments,pods,services

The Helm package manager used for installing applications on Kubernetes is also available.

helm version

The Kubernetes dashboard is also available, but you will need the secret access token to log in. Reveal the token:

echo -e "\n--- Copy and paste this token for dashboard access ---" && kubectl describe secret $(kubectl get secret | awk '/^dashboard-token-/{print $1}') | awk '$1=="token:"{print $2}' && echo "---"

then paste the token into the login prompt.
