Difficulty: Advanced
Estimated Time: 10-15 minutes

In this scenario you will learn how to deploy a Kubernetes cluster using Kubeadm and run TensorFlow workloads on the cluster. The example is based on TensorFlow Serving, a flexible, high-performance serving system for machine learning models. The TensorFlow server returns a classification of an image based on the http://www.image-net.org/ dataset.

The scenario explains how the TensorFlow workloads can be accessed by a client running in a Docker container, in an interactive Kubernetes pod, or via the Kubernetes Batch Job API.
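As a sketch of the Batch Job approach, the client could be wrapped in a Kubernetes Job manifest along the following lines. The image name, service address and arguments here are illustrative assumptions, not part of the scenario:

```shell
# Hypothetical sketch: run the classification client once as a Kubernetes Job.
# The client image, service name and flags are assumptions for illustration.
kubectl create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: tensorflow-client
spec:
  template:
    spec:
      containers:
      - name: client
        image: example/tensorflow-serving-client   # hypothetical client image
        args: ["--server=tensorflow-service:9000", "--image=/data/cat.jpg"]
      restartPolicy: Never
  backoffLimit: 1
EOF

# Inspect the classification result once the Job has completed
kubectl logs job/tensorflow-client
```

Unlike a long-running Deployment, a Job runs the client to completion and records the result in the pod logs.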

The client sends an image, specified as a command-line parameter, to the server over gRPC; the server classifies it into human-readable descriptions of the ImageNet categories.
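For example, assuming a client image that exposes `--server` and `--image` flags (the image tag and flag names are assumptions for illustration), the Docker-based client might be invoked like this:

```shell
# Hypothetical invocation of the classification client in a Docker container.
# The image tag and flag names are assumptions, not part of the scenario.
docker run --rm example/tensorflow-serving-client \
  --server=[[HOST_IP]]:9000 \
  --image=/data/cat.jpg
```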

Congratulations, you have successfully deployed a Kubernetes cluster running a TensorFlow server that classifies the images you send to it.

Don’t stop now! The next scenario will only take about 10 minutes to complete.

Deploying Tensorflow on Kubernetes

Step 1 of 6

Step 1 - Initialise Kubernetes Cluster

The first step is to initialise a Kubernetes Master for managing workloads and a Node for running containers. The Container Network Interface (CNI) enables containers running on different nodes to communicate.


To bootstrap the cluster, run Kubeadm. This downloads the required Docker images, configures TLS security and starts the necessary containers.

kubeadm init --token=102952.1a7dd4cc8d1f4cc5 --kubernetes-version v1.10.0

Once the process has initialised the master, set the KUBECONFIG environment variable. The admin.conf file defines the TLS certificates and the IP address of the master, allowing you to administer the cluster remotely.

export KUBECONFIG=/etc/kubernetes/admin.conf
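With KUBECONFIG set, you can check that the master is up. At this stage the master will report NotReady, since no CNI has been deployed yet:

```shell
# Confirm the API server is reachable and list cluster members
kubectl cluster-info
kubectl get nodes
```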


The master is responsible for managing workloads, while nodes run them. The command below joins the node to the Kubernetes cluster, providing the token and the IP address of the master.

kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 [[HOST_IP]]:6443
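Back on the master, the new node should now appear in the cluster. It will remain NotReady until the CNI is deployed in the next step:

```shell
# Run on the master: the joined node should now be listed
kubectl get nodes
```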

Container Network Interface (CNI)

To allow containers to communicate across hosts, a Container Network Interface plugin must be deployed. In this scenario we recommend Weave Net, but others are available.

kubectl apply -f /opt/weave-kube
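You can confirm the network is up by checking the pods in the kube-system namespace (the exact pod names vary per cluster):

```shell
# Weave Net runs as a DaemonSet in kube-system; wait for its pods to be Running
kubectl get pods -n kube-system
kubectl get nodes   # nodes should move to Ready once the CNI is active
```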
