Difficulty: beginner
Estimated Time: 15 minutes

This exercise demonstrates how your Quarkus application can utilize the Micrometer Metrics extension to produce and observe metrics generated by the application.

Micrometer allows applications to gather various metrics and statistics that provide insight into what is happening inside the application. These metrics help pinpoint issues, provide long-term trend data for capacity planning, and enable proactive discovery of problems (e.g. disk usage growing without bounds). Metrics can also help scheduling systems decide when to scale the application to run on more or fewer machines.

Micrometer defines a core library and a set of additional libraries that support different monitoring systems. The Quarkus Micrometer extensions are structured similarly: quarkus-micrometer provides core Micrometer support and runtime integration, while other Quarkus and Quarkiverse extensions bring in the additional dependencies and requirements needed to support specific monitoring systems.
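For example, to expose metrics in the Prometheus format (which this exercise scrapes at /q/metrics), you would add the Prometheus registry extension on top of the core support. The commands below are a minimal sketch, assuming a Maven-based Quarkus project running on the default HTTP port:

# Add the Prometheus registry extension to a Maven-based Quarkus project
# (assumes the project already uses the Quarkus Maven plugin and wrapper)
./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-micrometer-registry-prometheus"

# With the application running locally, Micrometer exposes Prometheus-format metrics at /q/metrics
curl http://localhost:8080/q/metrics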

Learn more at quarkus.io, or just drive on and get hands-on!

Other possibilities

This exercise demonstrated how your Quarkus application can utilize the Micrometer Metrics extension to produce application metrics. You also consumed these metrics using a popular monitoring stack built on Prometheus and Grafana.

There are many more possibilities for application metrics, and they are useful not only for gathering data, but also for acting on it through alerting and other features of the monitoring stack you may be using.
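For instance, the metrics that Prometheus scrapes can drive alerting rules. The sketch below is illustrative only and is not used in this exercise: it writes a hypothetical rule file that fires when the hello-app scrape target defined earlier stops responding.

cat <<EOF > /tmp/alert-rules.yml
groups:
- name: hello-app-alerts
  rules:
  - alert: PrimesAppDown
    # 'up' is a built-in Prometheus metric that reports scrape health per target
    expr: up{job="hello-app"} == 0
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "The primes application has not been scraped successfully for 1 minute"
EOF

Such a file would only take effect if it were referenced under rule_files in prometheus.yml and an Alertmanager were configured, neither of which is covered here.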

Additional Resources

If you’re interested in helping continue to improve Quarkus, developing third-party extensions, using Quarkus to develop applications, or if you’re just curious about it, please check out these links:

Monitoring Quarkus with Micrometer

Step 1 of 7

Login to OpenShift

Red Hat OpenShift Container Platform is the preferred container orchestration platform for Quarkus. OpenShift is based on Kubernetes, the most widely used orchestration system for containers running in production.

In order to log in, we will use the oc command with our username and password:

oc login -u developer -p developer

Congratulations, you are now authenticated to the OpenShift server.
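If you'd like to double-check the login (optional, not required for this exercise), oc can report the current user and the API server you are authenticated against:

# Optional: confirm the current user and the API server in use
oc whoami
oc whoami --show-server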

Access OpenShift Project

Projects are a top-level concept to help you organize your deployments. An OpenShift project allows a community of users (or a single user) to organize and manage their content in isolation from other communities.

For this scenario, let's create a project that you will use to house your applications. Click the following command to create it:

oc new-project quarkus --display-name="Sample Monitored Quarkus App"
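Optionally, you can verify that the new project was created and is now your current project:

# Shows the project currently in use (should be 'quarkus')
oc project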

Create Prometheus Configuration

Next, let’s install Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit featuring:

  • a multi-dimensional data model with time series data identified by metric name and key/value pairs

  • PromQL, a flexible query language to leverage this dimensionality

  • time series collection via a pull model over HTTP

To install it, first create a Kubernetes ConfigMap that will hold the Prometheus configuration. Click on the following command to create this file:

cat <<EOF > /tmp/prometheus.yml
global:
  scrape_interval:     15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'hello-app'
    metrics_path: '/q/metrics'
    static_configs:
    - targets: ['primes:8080']
EOF

This file contains basic Prometheus configuration, plus a specific scrape_config which instructs Prometheus to collect metrics both from Prometheus itself and from a Quarkus app called primes (which we'll create later) at its /q/metrics endpoint.

Next, click this command to create a ConfigMap with the above file:

oc create configmap prom --from-file=prometheus.yml=/tmp/prometheus.yml
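Optionally, you can confirm that the configuration file made it into the ConfigMap:

# Shows the ConfigMap metadata and the embedded prometheus.yml data
oc describe configmap prom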

Deploy Prometheus

Next, deploy and expose Prometheus using its public Quay.io image:

oc new-app quay.io/prometheus/prometheus && oc expose svc/prometheus

And finally, mount the ConfigMap into the running container:

oc set volume deployment/prometheus --add -t configmap --configmap-name=prom -m /etc/prometheus/prometheus.yml --sub-path=prometheus.yml

This mounts the ConfigMap data at /etc/prometheus/prometheus.yml inside the container, where Prometheus expects to find its configuration.
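If you want to confirm the mount (optional), once the updated Prometheus pod is running (the rollout check below confirms this) you can read the file from inside the container:

# Optional: read the mounted configuration from inside the running Prometheus container
oc exec deployment/prometheus -- cat /etc/prometheus/prometheus.yml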

Verify Prometheus is up and running:

oc rollout status -w deployment/prometheus

You should see deployment "prometheus" successfully rolled out.

If this command appears to hang, just press CTRL-C and click it again.
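Because Prometheus collects metrics over a pull model and serves an HTTP API of its own, you can also (optionally) query it through the route created earlier to confirm it is reachable. The hostname is looked up from the route, so nothing here is hard-coded:

# Look up the route hostname, then query the Prometheus HTTP API
PROM_HOST=$(oc get route prometheus -o jsonpath='{.spec.host}')
# Lists the configured scrape targets and their health
curl -s http://$PROM_HOST/api/v1/targets
# Runs a simple PromQL query; 'up' reports whether each target is being scraped
curl -s -G http://$PROM_HOST/api/v1/query --data-urlencode 'query=up'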