Difficulty: intermediate
Estimated Time: 30 minutes

In this self-paced tutorial, you will learn the basics of how to use OpenShift Serverless, which provides a development model to remove the overhead of server provisioning and maintenance from the developer.

In this tutorial, you will:

  • Deploy an OpenShift Serverless service.
  • Deploy multiple revisions of a service.
  • Understand the underlying components of a serverless service.
  • Understand how Serverless is able to scale-to-zero.
  • Run different revisions of a service via canary and blue-green deployments.
  • Utilize the Knative client.

Why Serverless?

Deploying applications as serverless services has become a popular architectural style. Many organizations assume that Functions as a Service (FaaS) implies a serverless architecture. It is more accurate to say that FaaS is one way to utilize serverless, although it is not the only way. This raises a critical question for enterprises whose applications may be monoliths or microservices: what is the easiest path to serverless application deployment?

The answer is a platform that can run serverless workloads while also giving you complete control over configuration, building, and deployment. Ideally, the platform also supports deploying applications as Linux containers.

OpenShift Serverless

In this chapter we introduce you to one such platform: OpenShift Serverless. OpenShift Serverless helps developers deploy and run applications that scale up or scale to zero on demand. Applications are packaged as OCI-compliant Linux containers that can run anywhere. This is known as Serving.

OpenShift Serving

OpenShift Serverless also provides a robust way to trigger applications from a variety of event sources, such as events from your own applications, cloud services from multiple providers, Software as a Service (SaaS) systems, and Red Hat services such as AMQ Streams. This is known as Eventing.

OpenShift Eventing

OpenShift Serverless applications can be integrated with other OpenShift services, such as OpenShift Pipelines, Service Mesh, Monitoring and Metering, delivering a complete serverless application development and deployment experience.

This tutorial focuses on the Serving aspect of OpenShift Serverless, as the first diagram showcases. Be on the lookout for additional tutorials that dig further into Serverless, specifically Eventing.

The Environment

During this scenario, you will be using a hosted OpenShift environment created just for you; it is not shared with other users of the system. Because each user completing this scenario has their own environment, we had to make some concessions to ensure the overall platform is stable and used only for this training. For that reason, your environment will only be active for one hour. Keep this in mind before you get started on the content. Each time you start this training, a new environment is created on the fly.

The OpenShift environment created for you is running version 4.2 of the OpenShift Container Platform. This deployment is a self-contained environment that provides everything you need to be successful learning the platform. This includes a preconfigured command line environment, the OpenShift web console, public URLs, and sample applications.

Note: It is possible to skip around in this tutorial. The only prerequisite for each section is the initial Prepare for Exercises section.

For example, you could run the Prepare for Exercises section immediately followed by the Scaling section.

Now, let's get started!

Summary

In this workshop, you have worked with OpenShift Serverless and learned about the underlying Knative Serving concepts. OpenShift Serverless helps developers deploy and run applications that scale up or scale to zero on demand. Applications are packaged as OCI-compliant Linux containers that can run anywhere.

We hope you have found this workshop helpful in learning about OpenShift Serverless and would love any feedback you have on ways to make it better! Feel free to open issues in this workshop’s GitHub repository.

To learn more about OpenShift Serverless and Knative, the resources below can provide information on everything from getting started to more advanced concepts.

OpenShift Serverless Webpage: https://www.openshift.com/learn/topics/serverless

OpenShift Serverless Documentation: https://docs.openshift.com/container-platform/4.4/serverless/serverless-getting-started.html

Knative Serving GitHub: https://github.com/knative/serving

Knative CLI GitHub: https://github.com/knative/client

Knative Website: https://knative.dev/

Read more in the OpenShift blog announcement for OpenShift Serverless: https://www.openshift.com/blog/announcing-openshift-serverless-1-5-0-tech-preview-a-sneak-peek-of-our-ga

Getting Started with OpenShift Serverless

Step 1 of 5

Prepare for Exercises

OpenShift Serverless is an OpenShift add-on that can be installed via an operator that is available within the OpenShift OperatorHub.

Some operators can be installed into a single namespace within a cluster and can only monitor resources within that namespace. The OpenShift Serverless Operator installs globally on the cluster, so it can monitor and manage Serverless resources for every project and user within the cluster.

You can install the Serverless Operator using the Operators tab within the web console, or with the oc CLI tool. In this instance, we will use the latter.

Log in and install the operator

To install an operator, you need to log in as an admin. You can do so by running:

oc login -u admin -p admin

Now that you have logged in, you should be able to see the packages available to you to install from the OperatorHub. Let's take a look at the serverless-operator one.

oc describe packagemanifest serverless-operator -n openshift-marketplace

From that package manifest, we can see all of the information needed to create a Subscription to the Serverless Operator:

# ./assets/01-prepare/operator-subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Manual
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: servicemeshoperator.v1.1.0
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-operators
spec:
  channel: techpreview
  installPlanApproval: Manual
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: serverless-operator.v1.4.1

Note: TL;DR: Serverless requires some components of Service Mesh to be installed on the cluster. Normally we would let OLM handle this dependency, but for the purposes of this tutorial we need to specify specific versions of both operators, hence the startingCSV spec above.

Our particular environment only supports up to Serverless version 1.4.1 and Service Mesh version 1.1.0, hence the installPlanApproval and startingCSV above differ from the packagemanifest spec shown in the terminal. In newer OpenShift versions you can omit startingCSV from the YAML above to install the newer releases; the channel also needs to be adjusted. Refer to the Serverless and Service Mesh documentation for more info.
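On a newer cluster, the Serverless subscription could be simplified by dropping startingCSV and letting OLM resolve the latest release automatically. This is a sketch only; the channel name varies by OpenShift release, so check the packagemanifest before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-operators
spec:
  channel: stable            # channel names vary by release; check the packagemanifest
  installPlanApproval: Automatic
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```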

The channel, name, startingCSV, source, and sourceNamespace are all described in the packagemanifest you just inspected.

Tip: You can find more information on how to add operators on the OpenShift OLM Documentation Page.

For now, all you need to do is apply the associated YAML file to subscribe to the OpenShift Serverless and Service Mesh Operators.

oc apply -f 01-prepare/operator-subscription.yaml

Approve and Verify the Operator Installation

Normally, a subscription might be set to Automatic install plan approval, which handles the approval for you. In our case, installPlanApproval: Manual in our Subscription requires the admin to approve the install plan before installation can begin. It is often easiest to do this from the OpenShift web console and approve the changes as shown in the picture below. In this tutorial, however, we will find the install plan and approve it using the CLI.

installplan

To do so, click and run the script below, which automates approving the install plans.

# ./assets/01-prepare/approve-operators.bash

#!/usr/bin/env bash
OPERATORS_NAMESPACE='openshift-operators'
OPERATOR='redhat-operators'

function approve_csv {
  local csv_version install_plan
  csv_version=$1

  install_plan=$(find_install_plan $csv_version)
  oc get $install_plan -n ${OPERATORS_NAMESPACE} -o yaml | sed 's/\(.*approved:\) false/\1 true/' | oc replace -f -
}

function find_install_plan {
  local csv=$1
  for plan in `oc get installplan -n ${OPERATORS_NAMESPACE} --no-headers -o name`; do
    [[ $(oc get $plan -n ${OPERATORS_NAMESPACE} -o=jsonpath='{.spec.clusterServiceVersionNames}' | grep -c $csv) -eq 1 && \
       $(oc get $plan -n ${OPERATORS_NAMESPACE} -o=jsonpath="{.status.catalogSources}" | grep -c $OPERATOR) -eq 1 ]] && echo $plan && return 0
  done
  echo ""
}

function wait_for_operator_install {
  local A=1
  local sub=$1
  while : ;
  do
    echo "$A: Checking..."
    phase=`oc get csv -n openshift-operators $sub -o jsonpath='{.status.phase}'`
    if [ $phase == "Succeeded" ]; then echo "$sub Installed"; break; fi
    A=$((A+1))
    sleep 10
  done
}

while [ -z $(find_install_plan 1.1.0) ]; do sleep 10; echo "Checking for service mesh CSV..."; done
approve_csv 1.1.0
sleep 5
wait_for_operator_install servicemeshoperator.v1.1.0

while [ -z $(find_install_plan 1.4.1) ]; do sleep 10; echo "Checking for serverless CSV..."; done
approve_csv 1.4.1
sleep 5
wait_for_operator_install serverless-operator.v1.4.1

Note: The main commands in the automation above are: find the install plan with oc get installplan -n openshift-operators, then approve it with oc edit <install plan> -n openshift-operators, changing approved: false to approved: true.
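The sed expression in the script flips that approval flag in place before piping the result back through oc replace. Here is a minimal local demonstration of just the substitution, using a fabricated two-line YAML fragment rather than a real install plan:

```shell
yaml='spec:
  approved: false'
# Same substitution the script uses; the capture group preserves indentation:
echo "$yaml" | sed 's/\(.*approved:\) false/\1 true/'
# prints:
# spec:
#   approved: true
```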

You should expect this to loop for around 12 iterations.

When you see the message "Installed", the OpenShift Serverless and Service Mesh Operators are installed. We can see the new Serverless resources available to the cluster by clicking the script below to run:

oc api-resources | egrep 'Knative|KIND'

As you can see, the OpenShift Serverless Operator added new API groups, including operator.knative.dev and serving.knative.dev. Next, we need to use these resources to install KnativeServing.
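A small aside on the egrep pattern: the KIND alternative exists only to keep the header row of oc api-resources in the output. A toy illustration with fabricated table text (not real cluster output):

```shell
printf 'NAME              APIGROUP               KIND\nknativeservings   operator.knative.dev   KnativeServing\ndeployments       apps                   Deployment\n' \
  | egrep 'Knative|KIND'
# prints the header line and the KnativeServing line; Deployment is filtered out
```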

Install KnativeServing

As per the Knative Serving Operator documentation, you must create a KnativeServing object to install Knative Serving using the OpenShift Serverless Operator.

To do so, review the YAML that we are going to apply to the cluster:

# ./assets/01-prepare/serving.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    network:
      domainTemplate: '{{.Name}}-{{.Namespace}}-ks.{{.Domain}}'

Apply the YAML like so: oc apply -f 01-prepare/serving.yaml
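The domainTemplate in the KnativeServing spec controls the hostname generated for each Service. As a sketch of how it expands, assuming a hypothetical Service named greeter in the serverless-tutorial project on a cluster whose apps domain is apps.example.com:

```shell
# '{{.Name}}-{{.Namespace}}-ks.{{.Domain}}' rendered by hand
# (greeter and apps.example.com are illustrative values, not from this cluster):
NAME=greeter
NAMESPACE=serverless-tutorial
DOMAIN=apps.example.com
echo "${NAME}-${NAMESPACE}-ks.${DOMAIN}"
# greeter-serverless-tutorial-ks.apps.example.com
```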

The KnativeServing instance will take a minute to install. As you might have noticed, the resources for KnativeServing can be found in the knative-serving project. We can check for its installation by using the command:

# ./assets/01-prepare/watch-knative-serving.bash

#!/usr/bin/env bash

A=1
while : ;
do
  output=`oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'`
  echo "$A: $output"
  if [ -z "${output##*'Ready=True'*}" ] ; then echo "Installed"; break; fi;
  A=$((A+1))
  sleep 10
done

The output should be similar to:

DependenciesInstalled=True
DeploymentsAvailable=True
InstallSucceeded=True
Ready=True

Note: You should expect this to run for 22 or so iterations.
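The readiness check in the watch script relies on shell parameter expansion: ${output##*'Ready=True'*} strips the longest matching prefix, which is the whole string whenever Ready=True appears anywhere in it, so the -z test succeeds. A standalone demonstration with fabricated status text:

```shell
output='DeploymentsAvailable=True
Ready=True'
if [ -z "${output##*Ready=True*}" ]; then
  echo "Installed"       # the pattern matched the whole string, so the expansion is empty
else
  echo "still waiting"
fi
# Installed
```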

We can further validate that the install succeeded by checking for the following pods in the knative-serving project:

oc get pod -n knative-serving

When completed, you should see all pods with the status of Running.

NAME                                READY   STATUS    RESTARTS   AGE
activator-d6478496f-qp89p           1/1     Running   0          90s
autoscaler-6ff6d5659c-4djrt         1/1     Running   0          88s
autoscaler-hpa-868c8b56b4-296rc     1/1     Running   0          89s
controller-55b4748bc5-ndv4p         1/1     Running   0          84s
networking-istio-679dfcd5d7-2pbl4   1/1     Running   0          82s
webhook-55b96d44f6-sxj7p            1/1     Running   0          84s

Login as a Developer and Create a Project

Before beginning, we should change to the non-privileged user developer and create a new project for the tutorial.

To change to the non-privileged user in our environment we can execute: oc login -u developer -p developer

Next create a new project by executing: oc new-project serverless-tutorial

There we go! You are all set to kickstart your serverless journey with OpenShift Serverless. Click continue to go to the next module on how to deploy your first serverless service.