Difficulty: Intermediate
Estimated Time: 10 minutes

Portworx is the cloud native storage company that enterprises depend on to reduce the cost and complexity of rapidly deploying containerized applications across multiple clouds and on-prem environments. With Portworx, you can manage any database or stateful service on any infrastructure using any container scheduler. You get a single data management layer for all of your stateful services, no matter where they run.

A popular Kubernetes storage and Docker persistent storage solution, Portworx provides clustered block storage and a cloud-native layer from which containerized stateful applications programmatically consume block, file, and object storage services directly through the scheduler.

AutoPilot Tutorial

In this tutorial, you will learn how to let Portworx AutoPilot automatically resize PVCs based on Prometheus metrics:

  • Use the Portworx Storage Class to create a PVC with 3 replicas of the data
  • Use a simple YAML file to deploy PostgreSQL using this storage class
  • Create an AutoPilot Rule that grows the PVC by 200% once it is more than 20% full.
  • Use pgbench to fill up the volume and watch AutoPilot resize the volume automatically.
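The first two steps above can be sketched with a storage class and PVC like the following. The names (`px-postgres-sc`, `postgres-data`) are assumptions for illustration and may differ from the objects the environment pre-creates:

```yaml
# Sketch of a Portworx storage class and PVC (names are illustrative).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-postgres-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"          # keep 3 replicas of the data
  io_profile: "db"   # tune the volume for database workloads
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data
spec:
  storageClassName: px-postgres-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # AutoPilot will later grow this automatically
```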

High Level Overview

  • Note that this demo uses Prometheus; AutoPilot and Prometheus come pre-installed in the environment.

  • First, we will deploy PostgreSQL with a replication factor of 3 and with io_profile=db. To learn more about io_profile settings, please visit our docs page.

  • Next, we will create an AutoPilot Rule that grows the PVC by 200% once it is more than 20% full.

  • Then we will run a benchmark to fill the database to more than 20% of the volume.

  • Finally, we will verify that AutoPilot successfully triggered the automated resize of our PVC from 1GB to 3GB.
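The rule described above can be sketched as an AutoPilotRule object. The rule name and the `app: postgres` selector label are assumptions for illustration, and the exact metric expression may vary with your Portworx version:

```yaml
# Sketch of an AutoPilot rule (name and selector label are illustrative).
apiVersion: autopilot.libopenstorage.org/v1alpha1
kind: AutoPilotRule
metadata:
  name: volume-resize
spec:
  # Apply the rule to PVCs carrying this (assumed) label.
  selector:
    matchLabels:
      app: postgres
  conditions:
    expressions:
    # Trigger once the volume is more than 20% full.
    - key: "100 * (px_volume_usage_bytes / px_volume_capacity_bytes)"
      operator: Gt
      values:
      - "20"
  actions:
  # Grow the volume by 200% of its current size.
  - name: openstorage.io.action.volume/resize
    params:
      scalepercentage: "200"
```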

Other things you should know

To learn more about why running PostgreSQL on Portworx is a great idea, take a look at the following links:

This scenario assumes you have already covered the following scenarios:

Thank you for trying the playground. To view all our scenarios, go here

To learn more about Portworx, below are some useful references.

Automatically Resize Kubernetes volumes using AutoPilot

Step 1 of 6

Wait for Kubernetes & Portworx to be ready

First we need to wait for Kubernetes and Portworx to be ready. Be patient; this is not a very high-performance environment, just a place to learn something :-)

Step: Wait for Kubernetes to be ready

Click the command below, which watches the Kubernetes nodes until they are all ready.

watch kubectl get nodes

When all 4 nodes show STATUS Ready, hit ctrl-c and clear the screen.

Step: Wait for Portworx to be ready

Watch the Portworx pods and wait for them to be ready on all the nodes. This can take a few minutes since it involves pulling multiple Docker images. You will see 'No resources found' until the images are pulled.

watch kubectl get pods -n kube-system -l name=portworx -o wide

When all the pods show STATUS Running and READY 1/1, hit ctrl-c and clear the screen.

Let's also make sure Prometheus is up and running, as AutoPilot will use it to capture metrics for its rules.

watch kubectl get pods -n kube-system -l prometheus=prometheus

When all the pods show STATUS Running and READY 3/3, hit ctrl-c and clear the screen.

Now that we have the Portworx cluster up, let's proceed to the next step!
