Difficulty: Beginner
Estimated Time: 10 minutes

In this scenario, you will learn how to deploy different Machine Learning workloads using Kubeflow and Kubernetes. The interactive environment is a two-node Kubernetes cluster, allowing you to experience Kubeflow and deploy real workloads in order to understand how it can solve your problems.

The Kubeflow project is dedicated to making Machine Learning on Kubernetes easy, portable, and scalable. The goal is not to recreate other services, but to provide a straightforward way to spin up best-of-breed OSS solutions.

Details of the project can be found at https://github.com/kubeflow/kubeflow


The aim of Kubeflow is to provide a set of simple manifests that give you an easy-to-use ML stack anywhere Kubernetes is already running, one that can self-configure based on the cluster it deploys into.
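To illustrate that idea, the sketch below shows the general pattern of applying a directory of Kubernetes manifests to whatever cluster `kubectl` is currently pointed at. The `kubeflow-manifests/` directory and the `kubeflow` namespace are hypothetical placeholders, not paths from this scenario, so treat this as a minimal sketch of the pattern rather than the exact commands used later.

```
# Hypothetical sketch: apply a local checkout of Kubeflow manifests to
# the cluster that kubectl is currently configured against.
kubectl create namespace kubeflow                  # assumed target namespace
kubectl apply -f kubeflow-manifests/ -n kubeflow --recursive
```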


Deploying Kubeflow

Step 1 of 7

What is Kubeflow?

The Kubeflow project is dedicated to making Machine Learning easy to set up with Kubernetes, portable, and scalable. The goal is not to recreate other services, but to provide a straightforward way to spin up best-of-breed OSS solutions. Kubernetes is an open-source platform for automating deployment, scaling, and management of containerised applications.

Because Kubeflow relies on Kubernetes, it runs wherever Kubernetes runs, whether on bare-metal servers or with cloud providers such as Google. Details of the project can be found at https://github.com/kubeflow/kubeflow

Kubeflow Components

Kubeflow has three core components.

TF Job Operator and Controller: An extension to Kubernetes that simplifies the deployment of distributed TensorFlow workloads. By using an Operator, Kubeflow can automatically configure the master, worker, and parameter server processes. Workloads are deployed as TFJob resources (a sketch follows after this list).

TF Hub: Runs instances of JupyterHub, enabling you to work with Jupyter Notebooks.

Model Server: Deploys a trained TensorFlow model for clients to access and use for future predictions (a serving sketch also follows below).

These three components will be used to deploy different workloads in the following steps.
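To make the TFJob resource concrete, here is a minimal sketch of a TFJob manifest. It assumes the `kubeflow.org/v1` API served by the TFJob operator (later releases of the operator; earlier versions used a different schema), and the metadata and image names are hypothetical placeholders rather than values from this scenario.

```
# A minimal TFJob sketch, assuming the kubeflow.org/v1 API.
# Names and images below are hypothetical placeholders.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: example-distributed-training
  namespace: kubeflow
spec:
  tfReplicaSpecs:
    PS:                                       # parameter servers
      replicas: 1
      restartPolicy: Never
      template:
        spec:
          containers:
            - name: tensorflow                # the operator expects this container name
              image: example/tf-training:latest
    Worker:
      replicas: 2
      restartPolicy: Never
      template:
        spec:
          containers:
            - name: tensorflow
              image: example/tf-training:latest
```

The operator reads the replica counts from `tfReplicaSpecs` and injects the cluster specification into each pod, so the training code discovers its role (worker or parameter server) without manual wiring.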
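Similarly, a trained model is typically served behind an ordinary Kubernetes Deployment. The sketch below uses the stock `tensorflow/serving` image and assumes the exported model is available inside the container at the given path; the resource names, model name, and path are hypothetical placeholders.

```
# A minimal model-serving sketch using the stock tensorflow/serving
# image; the model name and base path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-model-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-model-server
  template:
    metadata:
      labels:
        app: example-model-server
    spec:
      containers:
        - name: serving
          image: tensorflow/serving
          args:
            - --model_name=example_model
            - --model_base_path=/models/example_model
          ports:
            - containerPort: 8500   # gRPC
            - containerPort: 8501   # REST
```

In practice a Service (and possibly an Ingress) would sit in front of this Deployment so that clients can reach the gRPC (8500) and REST (8501) endpoints for predictions.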