Difficulty: Beginner
Estimated Time: 10-15 Minutes

Welcome to the Thanos introduction!

Thanos is a set of components that can be composed into a highly available metric system with unlimited storage capacity. It can be added seamlessly on top of existing Prometheus deployments.

Thanos provides a global query view, data backup, and historical data access as its core features. All three features can be run independently of each other. This allows you to have a subset of Thanos features ready for immediate benefit or testing, while also making it flexible for gradual adoption in more complex environments.

Thanos will work in cloud native environments like Kubernetes as well as in more traditional ones. However, this course uses Docker containers, which allows us to use pre-built Docker images.

This tutorial will take us from a vanilla Prometheus setup to a basic Thanos deployment, enabling:

  • Reliable querying of multiple Prometheus instances from a single Prometheus API endpoint.
  • Seamless handling of highly available Prometheus setups (multiple replicas).

Let's jump in! 🤓

Summary

Congratulations! 🎉🎉🎉 You completed our very first Thanos tutorial. Let's summarize what we learned:

  • The most basic installation of Thanos, with Sidecars and a Querier, enables a global view for Prometheus queries.
  • Querier operates on the StoreAPI gRPC API. It does not know whether it is talking to Prometheus, OpenTSDB, another Querier, or any other storage, as long as that API is implemented.
  • With Thanos you can (and it's recommended to do so!) run multi-replica Prometheus servers. The Thanos Querier --query.replica-label flag controls this behaviour.
  • Sidecar allows dynamically reloading the Prometheus configuration, including recording and alerting rules.
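For reference, the replica deduplication mentioned above boils down to pointing the Querier at each Sidecar's StoreAPI and naming the replica label. A minimal sketch follows; the image tag, ports, and store addresses here are illustrative assumptions, not part of this tutorial's setup:

```shell
# Hypothetical Thanos Querier: fan out to two sidecar StoreAPIs and
# deduplicate series that differ only in the 'replica' external label.
docker run -d --net=host --rm \
    --name thanos-query \
    quay.io/thanos/thanos:v0.7.0 \
    query \
    --http-address 0.0.0.0:29090 \
    --query.replica-label replica \
    --store 127.0.0.1:19090 \
    --store 127.0.0.1:19091
```

With this flag set, the Querier merges the two us1 replicas into a single logical series stream instead of returning duplicate results.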

See next courses for other tutorials about different deployment models and more advanced features of Thanos!

Feedback

Do you see a bug or typo in the tutorial, or do you have some feedback for us?

Let us know on https://github.com/thanos-io/thanos or in the #thanos Slack channel linked on https://thanos.io

Intro: Global View and seamless HA for Prometheus

Step 1 of 3

Initial Prometheus Setup

Step 1 - Start initial Prometheus servers

Thanos is meant to scale and extend vanilla Prometheus. This means that you can gradually, without disruption, deploy Thanos on top of your existing Prometheus setup.

Let's start our tutorial by spinning up three Prometheus servers. Why three? The real advantage of Thanos shows when you need to scale Prometheus out beyond a single replica. Some reasons for scaling out might be:

  • Adding functional sharding because of high metric cardinality
  • Need for high availability of Prometheus, e.g. rolling upgrades
  • Aggregating queries from multiple clusters

For this course, let's imagine the following situation:

[Diagram: initial case. One Prometheus server in cluster eu1; two Prometheus replicas in cluster us1.]

  1. We have one Prometheus server in some eu1 cluster.
  2. We have 2 replica Prometheus servers in some us1 cluster that scrape the same targets.

Let's start this initial Prometheus setup for now.

Prometheus Configuration Files

Now, we will prepare configuration files for all Prometheus instances.

Click Copy To Editor for each config to propagate the configs to each file.

First, for the EU Prometheus server that scrapes itself:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: eu1
    replica: 0

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9090']
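Optionally, before starting the servers, you can validate a config file with promtool, which ships inside the same Prometheus image. A sketch, assuming the prometheus0_eu1.yml file name used later in this step:

```shell
# Validate the EU1 config with promtool from the Prometheus image;
# it reports parse errors and exits non-zero on an invalid file.
docker run --rm \
    -v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
    --entrypoint /bin/promtool \
    quay.io/prometheus/prometheus:v2.14.0 \
    check config /etc/prometheus/prometheus.yml
```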

For the second cluster we set two replicas. First, prometheus0_us1.yml:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: us1
    replica: 0

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9091','127.0.0.1:9092']

And the second replica, prometheus1_us1.yml, which differs only in the replica label:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: us1
    replica: 1

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9091','127.0.0.1:9092']

Starting Prometheus Instances

Let's now start three containers representing our three different Prometheus instances.

Please note the extra flags we're passing to Prometheus:

  • --web.enable-admin-api allows Thanos Sidecar to get metadata from Prometheus like external labels.
  • --web.enable-lifecycle allows Thanos Sidecar to reload Prometheus configuration and rule files if used.
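Because of --web.enable-lifecycle, a running instance can later be told to reload its configuration over HTTP. For example, for the EU1 server started below, assuming it listens on port 9090 as configured in this tutorial:

```shell
# Ask Prometheus to re-read its configuration and rule files in place;
# this endpoint only exists when --web.enable-lifecycle is set.
curl -X POST http://127.0.0.1:9090/-/reload
```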

Execute following commands:

Prepare "persistent volumes"

mkdir -p prometheus0_eu1_data prometheus0_us1_data prometheus1_us1_data

Deploying "EU1"

docker run -d --net=host --rm \
    -v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
    -v $(pwd)/prometheus0_eu1_data:/prometheus \
    -u root \
    --name prometheus-0-eu1 \
    quay.io/prometheus/prometheus:v2.14.0 \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/prometheus \
    --web.listen-address=:9090 \
    --web.external-url=https://[[HOST_SUBDOMAIN]]-9090-[[KATACODA_HOST]].environments.katacoda.com \
    --web.enable-lifecycle \
    --web.enable-admin-api && echo "Prometheus EU1 started!"
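To double-check that the external labels were picked up, you can ask the running instance for its loaded configuration. A sketch against the port used above, which requires the container to be up:

```shell
# The status/config endpoint returns the currently loaded configuration,
# including the external_labels block we set (cluster: eu1, replica: 0).
curl -s http://127.0.0.1:9090/api/v1/status/config
```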

NOTE: We are using a recent Prometheus version so that we can benefit from the improved remote read protocol.

Deploying "US1"

docker run -d --net=host --rm \
    -v $(pwd)/prometheus0_us1.yml:/etc/prometheus/prometheus.yml \
    -v $(pwd)/prometheus0_us1_data:/prometheus \
    -u root \
    --name prometheus-0-us1 \
    quay.io/prometheus/prometheus:v2.14.0 \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/prometheus \
    --web.listen-address=:9091 \
    --web.external-url=https://[[HOST_SUBDOMAIN]]-9091-[[KATACODA_HOST]].environments.katacoda.com \
    --web.enable-lifecycle \
    --web.enable-admin-api && echo "Prometheus 0 US1 started!"

and

docker run -d --net=host --rm \
    -v $(pwd)/prometheus1_us1.yml:/etc/prometheus/prometheus.yml \
    -v $(pwd)/prometheus1_us1_data:/prometheus \
    -u root \
    --name prometheus-1-us1 \
    quay.io/prometheus/prometheus:v2.14.0 \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/prometheus \
    --web.listen-address=:9092 \
    --web.external-url=https://[[HOST_SUBDOMAIN]]-9092-[[KATACODA_HOST]].environments.katacoda.com \
    --web.enable-lifecycle \
    --web.enable-admin-api && echo "Prometheus 1 US1 started!"

Setup Verification

Once started, you should be able to reach all of those Prometheus instances.
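One quick way to check from the terminal, assuming the ports used above and that all three containers are up:

```shell
# Each Prometheus exposes a /-/healthy endpoint; probe all three ports.
for port in 9090 9091 9092; do
  echo "checking 127.0.0.1:${port}..."
  curl -s "http://127.0.0.1:${port}/-/healthy"
done
```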

Additional info

Why would one need multiple Prometheus instances?

  • High Availability (multiple replicas)
  • Scaling ingestion: Functional Sharding
  • Multi cluster/environment architecture

Problem statement: Global view challenge

Let's try to play with this setup a bit. You are free to query any metrics; however, let's try to fetch one specific piece of information from our multi-cluster setup: how many series (metrics) do we collect overall, across all the Prometheus instances we have?

Tip: Look for prometheus_tsdb_head_series metric.

🕵️‍♂️

Try to get this information from the current setup!

To see the answer to this question click SHOW SOLUTION below.
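As a hint: without Thanos, each Prometheus instance only sees its own local data, so you have to query each of the three HTTP APIs separately and sum the results yourself. A sketch with hypothetical per-instance values, since the real numbers depend on your running setup:

```shell
# Without a global view you must query every instance separately, e.g.:
#   curl -s 'http://127.0.0.1:9090/api/v1/query?query=prometheus_tsdb_head_series'
# (and again for ports 9091 and 9092), then add up the results by hand.
# Hypothetical per-instance values for illustration:
eu1=600
us1_0=600
us1_1=600
total=$((eu1 + us1_0 + us1_1))
echo "total head series across clusters: ${total}"
```

This manual fan-out and summation is exactly the pain point the Thanos Querier removes.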

Next

Great! We now have 3 Prometheus instances running.

In the next steps, we will learn how we can install Thanos on top of our initial Prometheus setup to solve problems shown in the challenge.
