In this scenario you will learn how to deploy a Kubernetes cluster using Kubeadm and run TensorFlow workloads on the cluster. The example is based on TensorFlow Serving, a flexible, high-performance serving system for machine learning models. The TensorFlow server returns a classification of an image based on the http://www.image-net.org/ dataset.
The scenario explains how the TensorFlow workloads can be accessed by a client running in a Docker container, in an interactive Kubernetes Pod, or via the Kubernetes Batch Job API.
The client sends an image, specified as a command-line parameter, to the server over gRPC; the server classifies it into human-readable descriptions of the ImageNet categories.
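As a rough sketch of the Batch Job approach mentioned above, a Job manifest for the client might look like the following. This is a hedged example: the image name, the service hostname inception-service, the client binary path, and the flags are hypothetical placeholders, not names defined in this scenario.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: inception-client
spec:
  backoffLimit: 2
  template:
    spec:
      # A Job Pod runs to completion rather than restarting forever.
      restartPolicy: Never
      containers:
      - name: client
        # Hypothetical client image; replace with the image used in this scenario.
        image: tensorflow-serving-client:latest
        # Illustrative flags: address of the serving endpoint and the image to classify.
        command: ["/client"]
        args: ["--server=inception-service:9000", "--image=/data/cat.jpg"]
```

The Job completes once the client has received a classification, and its output can then be read with kubectl logs.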
Based on the example from https://tensorflow.github.io/serving/serving_inception
Deploying TensorFlow on Kubernetes
Step 1 - Initialise Kubernetes Cluster
The first step is to initialise a Kubernetes Master for managing workloads and a Node for running containers. The Container Network Interface (CNI) enables containers running on different nodes to communicate.
To bootstrap the cluster, run kubeadm init. This downloads the required Docker images, configures TLS security and starts the necessary containers.
kubeadm init --token=102952.1a7dd4cc8d1f4cc5 --kubernetes-version $(kubeadm version -o short)
Once the process has initialised the master, set the KUBECONFIG environment variable. The admin.conf file defines the TLS certificates and the IP address of the master, allowing you to administer the cluster remotely.
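A minimal sketch, assuming the default kubeadm layout where the configuration is written to /etc/kubernetes/admin.conf:

```shell
# Point kubectl at the credentials generated by kubeadm.
# This path is kubeadm's default; adjust it if your setup differs.
export KUBECONFIG=/etc/kubernetes/admin.conf
```

With KUBECONFIG set, subsequent kubectl commands are directed at this cluster.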
The master is responsible for managing workloads, while nodes run them. The command below joins the node to the Kubernetes cluster, providing the token and the IP address of the master.
kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 [[HOST_IP]]:6443
Container Network Interface (CNI)
This scenario uses Weave Net as the CNI plugin. Deploy it by applying the manifest provided in the environment:
kubectl apply -f /opt/weave-kube.yaml