In this self-paced tutorial, you will learn how to use OpenShift Pipelines to automate the deployment of your applications.
In this tutorial, you will:
- Install the OpenShift Pipelines operator
- Deploy a partial application
- Create reusable Tekton Tasks
- Create a Tekton Pipeline
- Trigger the created pipeline to finish your application deployment
Let's get started!
OpenShift Pipelines is a cloud-native, continuous integration and delivery (CI/CD) solution for building pipelines using Tekton. Tekton is a flexible, Kubernetes-native, open-source CI/CD framework that enables automating deployments across multiple platforms (e.g. Kubernetes, serverless, VMs, and so forth) by abstracting away the underlying details.
OpenShift Pipelines features:
- Standard CI/CD pipeline definition based on Tekton
- Build container images with tools such as Source-to-Image (S2I) and Buildah
- Deploy applications to multiple platforms such as Kubernetes, serverless, and VMs
- Easy to extend and integrate with existing tools
- Scale pipelines on-demand
- Portable across any Kubernetes platform
- Designed for microservices and decentralized teams
- Integrated with the OpenShift Developer Console
Tekton defines a number of Kubernetes custom resources as building blocks to standardize pipeline concepts and provide a terminology that is consistent across CI/CD solutions. These custom resources are an extension of the Kubernetes API that lets users create and interact with these objects using the OpenShift CLI (oc), kubectl, and other Kubernetes tools.
The custom resources needed to define a pipeline are listed below:
- Task: a reusable, loosely coupled number of steps that perform a specific task (e.g. building a container image)
- Pipeline: the definition of the pipeline and the Tasks that it should perform
- PipelineResource: inputs (e.g. a git repository) and outputs (e.g. an image registry) into and out of a Pipeline or Task
- TaskRun: the execution and result (i.e. success or failure) of running an instance of a Task
- PipelineRun: the execution and result (i.e. success or failure) of running a Pipeline
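To make these building blocks concrete, below is a minimal sketch of a Task with a single step. The resource name, step name, and image are hypothetical, and the exact schema may vary slightly between Tekton versions (this sketch uses the v1alpha1 API that matches the dev-preview operator used later in this workshop):

```yaml
# A minimal Tekton Task with one step (hypothetical example).
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    # Each step runs as a container inside the pod created for the Task.
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["echo"]
      args: ["Hello from a Tekton Task"]
```

A TaskRun referencing `echo-hello` would execute this step and record its success or failure.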
In short, to create a pipeline, one does the following:
- Create custom or install existing reusable Tasks
- Create a Pipeline and PipelineResources to define your application's delivery pipeline
- Create a PipelineRun to instantiate and invoke the pipeline
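The steps above can be sketched in YAML. All names here are hypothetical, and the Task referenced by `taskRef` is assumed to already exist in the same namespace:

```yaml
# A Pipeline that runs one existing Task (hypothetical names).
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: sample-pipeline
spec:
  tasks:
    - name: first-task
      taskRef:
        name: build-app   # name of an existing Task in this namespace
---
# A PipelineRun instantiates and invokes the Pipeline above.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: sample-pipeline-run-1
spec:
  pipelineRef:
    name: sample-pipeline
```

Applying the PipelineRun is what actually starts execution; the Pipeline itself is only a definition.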
For further details on pipeline concepts, refer to the Tekton documentation, which provides an excellent guide to the various parameters and attributes available for defining pipelines.
In the following sections, you will go through each of the above steps to define and execute a pipeline.
Let's get started!
In this workshop, you have worked with OpenShift Pipelines and learned about the underlying Tekton concepts. OpenShift Pipelines not only addresses the fundamentals of CI/CD (i.e., automating the building, testing, and deployment of application components) but also offers modern solutions for scaling pipelines, reducing server maintenance, and making every part of the CI/CD process highly reusable across any number of application development tasks.
We hope you have found this workshop helpful in learning about OpenShift Pipelines and would love any feedback you have on ways to make it better! Feel free to open issues in this workshop’s GitHub repository, but also reach out to your workshop leaders to share any thoughts on how we can make this a better experience.
To learn more about OpenShift Pipelines and Tekton, the resources below can provide information on everything from getting started to more advanced concepts.
OpenShift Pipelines Webpage: https://www.openshift.com/learn/topics/pipelines
OpenShift Pipelines Documentation: https://openshift.github.io/pipelines-docs/docs/index.html
Tekton Pipelines GitHub: https://github.com/tektoncd/pipeline
Tekton CLI GitHub: https://github.com/tektoncd/cli
While the Tekton website is under construction at this time, look for information about Tekton to become available at the following link in the future: https://tekton.dev/
Read more in the OpenShift blog announcement for OpenShift Pipelines: https://blog.openshift.com/cloud-native-ci-cd-with-openshift-pipelines/
For examples of OpenShift Pipelines tasks, visit the openshift/pipelines-catalog GitHub: https://github.com/openshift/pipelines-catalog
For examples of Tekton pipelines and tasks, visit the tektoncd/catalog GitHub repository: https://github.com/tektoncd/catalog
To rerun an OpenShift Pipelines tutorial on your own, check out the openshift/pipelines-tutorial GitHub repository: https://github.com/openshift/pipelines-tutorial
Getting Started with OpenShift Pipelines
Step 1 - Install the Pipelines Operator
OpenShift Pipelines is an OpenShift add-on that can be installed via an operator available in the OpenShift OperatorHub.
Operators may be installed into a single namespace and only monitor resources in that namespace. The OpenShift Pipelines Operator, however, installs globally on the cluster and monitors and manages pipelines for all users of the cluster.
You can install the operator using the "Operators" tab in the web console, or you can use the CLI tool "oc". In this exercise, we use the latter.
To install the operator, you need to log in as an admin. You can do so by running:
oc login -u admin -p admin
Now that you have logged in, you should be able to see the packages available to you to install from the OperatorHub. Let's take a look at the openshift-pipelines-operator one.
oc describe packagemanifest openshift-pipelines-operator -n openshift-marketplace
From that package manifest, you can find all the information that you need to create a Subscription to the Pipeline Operator.
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: dev-preview
  installPlanApproval: Automatic
  name: openshift-pipelines-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: openshift-pipelines-operator.v0.8.2
```
The channel, name, starting CSV, source, and source namespace are all described in the package manifest you inspected above.
You can find more information on how to add operators on the OpenShift documentation page.
For now, all you need to do is apply the associated YAML file.
oc apply -f ./operator/subscription.yaml
The OpenShift Pipelines Operator provides all its resources under a single API group: tekton.dev. This operation can take a few seconds; you can run the following script to monitor the progress of the installation.
```shell
until oc api-resources --api-group=tekton.dev | grep tekton.dev &> /dev/null
do
  echo "Operator installation in progress..."
  sleep 5
done
echo "Operator ready"
```
Once you see the message Operator ready, the operator is installed, and you can see the new resources by running:
oc api-resources --api-group=tekton.dev
Verify user roles
To validate that your user has the appropriate roles, you can use the oc auth can-i command to see whether you can create Kubernetes custom resources of the kind needed by the OpenShift Pipelines Operator.
The custom resource you need to create an OpenShift Pipelines pipeline is a resource of the kind pipeline.tekton.dev in the tekton.dev API group. To check that you can create this, run:
oc auth can-i create pipeline.tekton.dev
Or you can use the simplified version:
oc auth can-i create Pipeline
If the response is yes, you have the appropriate access.
Verify that you can create the rest of the Tekton custom resources needed for this workshop by running the commands below. All of the commands should respond with yes.
oc auth can-i create Task
oc auth can-i create PipelineResource
oc auth can-i create PipelineRun
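If any of these checks responds with no, a cluster administrator can grant the missing permissions through standard Kubernetes RBAC. The following is a sketch of such a Role; the role name, namespace, and exact verb list are illustrative assumptions, not something this workshop requires you to create:

```yaml
# Hypothetical Role granting access to Tekton resources in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-editor     # hypothetical name
  namespace: my-project   # hypothetical namespace
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["pipelines", "tasks", "pipelineresources", "pipelineruns", "taskruns"]
    verbs: ["create", "get", "list", "watch"]
```

A Role like this would then be attached to a user or group with a corresponding RoleBinding.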
Now that we have verified that you can create the required resources, let's start the workshop.