Difficulty: beginner
Estimated Time: 20-30 minutes

Applications deployed on top of OpenShift benefit from having metrics deployed: you can watch pod resource consumption in the web console and use the Horizontal Pod Autoscaler to create autoscaling rules for pods, so that when a consumption threshold is reached the pods are automatically scaled up.

This course shows a metrics deployment based on Hawkular Metrics, how to gather information from the web console and with the oc tool, and how to create and test a sample HorizontalPodAutoscaler.

NOTE: The metrics deployment can take a few minutes to be ready, as there are some components that need to be deployed at run time. If you are curious, take a look at the process involved in deploying metrics while they are provisioned.

Congratulations! You just finished learning the basics of OpenShift Container Platform metrics and the HorizontalPodAutoscaler. Feels good, doesn't it?

If you are curious enough, see the official documentation about the HorizontalPodAutoscaler and how to configure the autoscaling procedure for memory limits.
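As a quick reference, a CPU-based autoscaler can be created directly with the oc autoscale command. In this sketch the deployment configuration name (frontend) and the thresholds are only illustrative placeholders:

# Keep between 1 and 10 replicas of the "frontend" deployment configuration,
# scaling up when average CPU usage exceeds 80% of the requested CPU
oc autoscale dc/frontend --min 1 --max 10 --cpu-percent=80

# Inspect the HorizontalPodAutoscaler that was created
oc get hpa frontend

Memory-based autoscaling is covered in the documentation mentioned above.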

What's next?

At this point you are probably itching to keep working with OpenShift, as you have had a glimpse of the power it can bring to your own applications. We are currently working on more advanced tutorials that will be hosted here, but in the meantime you can certainly run your own version of OpenShift or use a hosted model. You are welcome to use one of the following options:

Minishift

Minishift is a complete OpenShift environment that you can run on your local machine. The project supports Windows, OS X, and Linux. To find out more about Minishift, visit http://www.openshift.org/vm

oc cluster up

oc cluster up is a command provided by the oc client tool. It configures and runs an OpenShift environment inside the native Docker system for your operating system. It supports Windows, OS X, and Linux. For more information, visit https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md
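As a minimal sketch, assuming the oc client is installed and Docker is running, starting and stopping a local cluster looks like this:

# Start a local single-node OpenShift cluster inside Docker
oc cluster up

# Tear the local cluster down again
oc cluster down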

If you decide to try out oc cluster up, and you should, I would also suggest that you take a look at a wrapper script called oc cluster wrapper, created to make life a little bit easier for you. It provides functionality such as different profiles, persistent volume management, and other great features. You can find more information in the official git repository at https://github.com/openshift-evangelists/oc-cluster-wrapper

OpenShift Online

The OpenShift team provides a hosted environment that includes a free starter plan, which you can use to develop and test applications for OpenShift. You can find details for OpenShift Online and sign up at https://www.openshift.com/pricing/index.html

OpenShift Dedicated

You can also let Red Hat host an OpenShift instance for you on a public cloud. This is an ideal scenario for larger teams that don't want to deal with the operational aspects of running a full environment. To find out more, visit https://www.openshift.com/dedicated/

Don’t stop now! The next scenario will only take about 10 minutes to complete.

Using OpenShift metrics

Step 1 of 5

Step 1 - Check metrics

The OpenShift metrics stack is composed of a few pods running in the OpenShift environment:

  • Heapster: Heapster scrapes the metrics for CPU, memory and network usage on every pod, then exports them into Hawkular Metrics.
  • Hawkular Metrics: A metrics engine that stores the data persistently in a Cassandra database.
  • Cassandra: Database where the metrics data is stored.

Metrics components can be customized for longer data persistence, pod limits, the number of replicas of individual components, custom certificates, etc. The customization is done through Ansible variables as part of the deployment process.
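For illustration, a few of the openshift-ansible inventory variables commonly used to customize metrics are shown below; variable names and supported values can differ between OpenShift releases, so treat this as a sketch rather than a reference:

[OSEv3:vars]
# Deploy the metrics stack as part of the installation
openshift_metrics_install_metrics=true
# Number of days to keep metrics before they are purged (illustrative value)
openshift_metrics_duration=14
# Back Cassandra with a persistent volume claim instead of an emptyDir volume
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_cassandra_pvc_size=10Gi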

NOTE: For more information about the metrics components, see the official documentation.

Metrics components such as Heapster gather information from all hosts, so they should be protected from regular users. That is why the metrics objects (pods, secrets, etc.) are deployed in the openshift-infra namespace, which only users with the cluster-admin role can access.

In order to see the running pods for the metrics components, you need to be logged in as the system:admin user. The system:admin user is a special user created in OpenShift that authenticates with certificates rather than a password.

Type the following in the Terminal to log in as system:admin:

oc login -u system:admin

You should see an output message confirming you are logged in as the system:admin user:

Logged into "https://172.17.0.10:8443" as "system:admin" using existing credentials.
...

NOTE: If you are curious, explore the ~/.kube/ folder where the user configuration is stored.
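For example, you can print the client configuration that oc is currently using (sensitive data such as certificates is omitted from the output):

oc config view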

Once logged in as the cluster-admin user (system:admin), check the pods running the metrics components with:

oc get pods -n openshift-infra

You should see something similar to:

NAME                         READY     STATUS    RESTARTS   AGE
hawkular-cassandra-1-mf900   1/1       Running   0          1d
hawkular-metrics-c0x5w       1/1       Running   0          1d
heapster-xfs3m               1/1       Running   0          1d

There are quite a few more objects that are part of the metrics components, including secrets, service accounts, replication controllers, etc.

Hawkular Metrics is exposed through a route that the web console uses to display pod metrics visually. You can list many of these objects with:

oc get all -n openshift-infra
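If you want to look specifically at the route, and optionally check that the Hawkular Metrics API answers, something like the following should work. The route name hawkular-metrics and the /hawkular/metrics/status endpoint are the defaults used by the standard metrics deployment, but they may differ in other setups:

oc get route hawkular-metrics -n openshift-infra

# Query the unauthenticated status endpoint of the Hawkular Metrics API
curl -k https://$(oc get route hawkular-metrics -n openshift-infra -o jsonpath='{.spec.host}')/hawkular/metrics/status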

The OpenShift metrics components deployed in this Katacoda scenario are not suitable for production environments. Metrics should be stored in a persistent volume to avoid data loss, and the configuration should be sized to fit the environment. This deployment should be used for learning purposes only.

For more information about the OpenShift metrics components, the deployment process, etc., see the official documentation.