Difficulty: Intermediate
Estimated Time: 15-20 minutes

As your system grows, it becomes more important to have a centralised logging approach. Centralised logging enables you to see an overview of your entire system from a single place. A common approach is to use the ELK stack, a combination of the open source tools Elasticsearch, Logstash and Kibana.

Each project has a defined role: Elasticsearch stores the logs, Logstash transforms incoming logs into a consistent format, and Kibana provides a visualisation layer on top of the stored logs.
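To make Logstash's role concrete, a minimal pipeline configuration might look like the sketch below. This is illustrative rather than the configuration used in this scenario; the port and output address are assumptions, and exact option names (for example hosts versus host) vary between Logstash versions.

input {
  # Receive log lines forwarded over syslog, e.g. from Logspout
  syslog {
    port => 5000
  }
}

filter {
  # Parse and normalise fields here (grok, date handling, etc.)
}

output {
  # Index the structured events into Elasticsearch
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}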

The ELK stack can be combined with Logspout to aggregate container logs.
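As a rough illustration of how Logspout fits in, it is typically given access to the Docker socket so it can read every container's output, and then pointed at a log endpoint. The command below is a sketch using the gliderlabs/logspout image and its built-in syslog adapter; the hostname and port are placeholders for a Logstash input, not values defined by this scenario.

docker run -d \
  --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logstash-host:5000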

This scenario will explain how to deploy ELK and start aggregating logs from containers. In future scenarios, we'll go deeper into using Kibana to visualise your logs and identify application issues.

This scenario has explored how to deploy the ELK stack and aggregate container logs using Logspout.

The script to launch this on your own cluster can be found at https://github.com/BenHall/docker-elk

Don’t stop now! The next scenario will only take about 10 minutes to complete.

Deploy ELK stack and aggregate container logs

Step 1 of 7

Step 1 - Start Elasticsearch

The first stage is to launch an Elasticsearch instance for storing the collected log lines. Using the official image, we need to expose the two ports Elasticsearch requires: 9200 for the HTTP API and 9300 for node-to-node transport. It's important that these ports are only accessible to the appropriate machines; use firewall settings and IP restrictions to ensure they are not exposed to the public internet.
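For simplicity, the task below publishes the ports on all interfaces. In production you might instead bind them to a private address when publishing, as in this hypothetical variant where 10.0.0.5 stands in for your host's private IP; any firewall rules your environment uses should be applied on top of this.

docker run -d \
  -p 10.0.0.5:9200:9200 \
  -p 10.0.0.5:9300:9300 \
  --name elk_es \
  -e LOGSPOUT=ignore \
  elasticsearch:1.5.2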

It's important to set the environment variable LOGSPOUT=ignore when launching your ELK infrastructure. This tells Logspout not to aggregate the logs for this container; otherwise the stack would ingest its own log output in a feedback loop.

Task

Launch Elasticsearch with the following command:

docker run -d \
  -p 9200:9200 \
  -p 9300:9300 \
  --name elk_es \
  -e LOGSPOUT=ignore \
  elasticsearch:1.5.2
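
Once the container is up, you can check that Elasticsearch is responding. Requesting port 9200 should return a small JSON document that includes the version number (1.5.2 here); if it doesn't, docker logs shows the startup output.

curl http://localhost:9200
docker logs elk_es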