Difficulty: Beginner
Estimated Time: 30 minutes

In this self-paced tutorial, you will learn how to use OpenShift Pipelines to automate the deployment of your applications.

In this tutorial, you will:

  • Install the OpenShift Pipelines Operator
  • Create a Hello World Task
  • Install task resource definitions
  • Create a Tekton Pipeline
  • Trigger the created pipeline to finish your application deployment

Getting started

OpenShift Pipelines is a cloud-native continuous integration and delivery (CI/CD) solution for building pipelines using Tekton. Tekton is a flexible, Kubernetes-native, open-source CI/CD framework that enables automating deployments across multiple platforms (e.g. Kubernetes, serverless, and VMs) by abstracting away the underlying details.

OpenShift Pipelines features:

  • Standard CI/CD pipeline definition based on Tekton
  • Build container images with tools such as Source-to-Image (S2I) and Buildah
  • Deploy applications to multiple platforms such as Kubernetes, serverless, and VMs
  • Easy to extend and integrate with existing tools
  • Scale pipelines on-demand
  • Portable across any Kubernetes platform
  • Designed for microservices and decentralized teams
  • Integrated with the OpenShift Developer Console

Tekton CRDs

Tekton defines some Kubernetes custom resources as building blocks to standardize pipeline concepts and provide terminology that is consistent across CI/CD solutions. These custom resources are an extension of the Kubernetes API that lets users create and interact with these objects using the OpenShift CLI (oc), kubectl, and other Kubernetes tools.
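For example, once the Operator is installed (covered later in this tutorial), you can list Tekton resources with the same tooling you use for any other Kubernetes object:

oc get tasks,taskruns,pipelines,pipelineruns

This prints the Tekton objects in the current project, or an empty list if none have been created yet.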

The custom resources needed to define a pipeline are listed below:

  • Task: a reusable, loosely coupled sequence of steps that performs a specific job (e.g. building a container image)
  • Pipeline: the definition of the pipeline and the Tasks that it should perform
  • TaskRun: the execution and result of running an instance of a task
  • PipelineRun: the execution and result of running an instance of a pipeline, which includes a number of TaskRuns
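As a minimal sketch of how these resources fit together, the following hypothetical Task echoes a greeting, and the TaskRun below it executes that Task. The names echo-task and echo-task-run are illustrative placeholders, not part of this workshop:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-task
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["echo"]
      args: ["Hello World"]
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-task-run
spec:
  taskRef:
    name: echo-task

Applying both objects (for example, with oc apply -f) would cause Tekton to run the Task's single step in a pod and record the result in the TaskRun's status.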

For further details on pipeline concepts, refer to the Tekton documentation that provides an excellent guide for understanding various parameters and attributes available for defining pipelines.

In the following sections, you will go through each of the above steps to define and invoke a pipeline. Let's get started!


Summary

In this workshop, you have worked with OpenShift Pipelines and learned about the underlying Tekton concepts. OpenShift Pipelines not only addresses the fundamentals of CI/CD (automating the building, testing, and deployment of application components) but also offers modern solutions for scaling pipelines, eliminating server maintenance, and making every part of the CI/CD process reusable across any number of application development tasks.

We hope you have found this workshop helpful in learning about OpenShift Pipelines and would love any feedback you have on ways to make it better! Feel free to open issues in this workshop’s GitHub repository, but also reach out to your workshop leaders to share any thoughts on how we can make this a better experience.

Further resources

To learn more about OpenShift Pipelines and Tekton, the resources below can provide information on everything from getting started to more advanced concepts.

OpenShift Pipelines Webpage: https://www.openshift.com/learn/topics/pipelines

OpenShift Pipelines Documentation: https://openshift.github.io/pipelines-docs/docs/index.html

Tekton Official Webpage: https://tekton.dev

Tekton Pipelines GitHub: https://github.com/tektoncd/pipeline

Tekton CLI GitHub: https://github.com/tektoncd/cli

For examples of Tekton pipelines and tasks, visit the tektoncd/catalog GitHub repository: https://github.com/tektoncd/catalog

Getting Started with OpenShift Pipelines


Step 1 - Install the Pipelines Operator

OpenShift Pipelines is an OpenShift add-on that can be installed via an Operator available in the OpenShift OperatorHub.

You can install the Operator either through the web console or with the CLI tool oc. First, log in to the cluster so you can make changes and install the Operator:

oc login -u admin -p admin

This will log you in using the credentials:

  • Username: admin
  • Password: admin

Installing the OpenShift Pipelines Operator in Web Console

You can install OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the OpenShift Pipelines Operator, the Custom Resources (CRs) required for the Pipelines configuration are automatically installed along with the Operator.

First, switch to the Console and log in to the OpenShift web console using the same credentials you used above.

Web Console Login

In the Administrator perspective of the web console, navigate to Operators → OperatorHub. You can see the list of available operators for OpenShift provided by Red Hat as well as a community of partners and open-source projects.

Use the Filter by keyword box to search for OpenShift Pipelines Operator in the catalog. Click the OpenShift Pipelines Operator tile.

Web Console Hub

Read the brief description of the Operator on the OpenShift Pipelines Operator page. Click Install.

Select All namespaces on the cluster (default) for the installation mode and Automatic for the approval strategy, then click Subscribe.

Web Console Login

Verify that the OpenShift Pipelines Operator has installed by checking the Operators → Installed Operators page.

Installing the OpenShift Pipelines Operator using the CLI

You can install OpenShift Pipelines Operator from the OperatorHub using the CLI.

First, you'll want to create a Subscription object YAML file to subscribe a namespace to the OpenShift Pipelines Operator, for example, subscription.yaml as shown below:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace

This YAML file defines several fields: channel specifies the update channel to subscribe to, name (under spec) is the name of the Operator package, and source is the CatalogSource that provides the Operator. For your convenience, we've placed this exact file in your local operator/ folder.

You can now create the Subscription object similar to any OpenShift object.

oc apply -f operator/subscription.yaml
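To confirm that the Subscription was created and to see the resulting ClusterServiceVersion (the Operator installation it produces), you can run the following; the csv short name is provided by Operator Lifecycle Manager:

oc get subscriptions -n openshift-operators
oc get csv -n openshift-operators

The CSV's PHASE column reports Succeeded once the Operator has finished installing.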

Verify installation

The OpenShift Pipelines Operator provides all its resources under a single API group: tekton.dev. The installation can take a few seconds; you can run the following script to monitor its progress.

until oc api-resources --api-group=tekton.dev | grep tekton.dev &> /dev/null; do
  echo "Operator installation in progress..."
  sleep 5
done

echo "Operator ready"

Great! The OpenShift Pipelines Operator is now installed. Now, let's start the workshop.