Welcome!
Get Started with Istio and Kubernetes
In this scenario, you will learn how to deploy the Istio Service Mesh to Kubernetes. Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code.
The scenario uses the sample BookInfo application. The application has no dependencies on Istio and demonstrates how any application can run on top of Istio without modification.

Steps
Launch Kubernetes Cluster
To start, launch the Kubernetes cluster. This will create a two-node cluster with one master and one worker node.
launch.sh
Health Check
Once started, you can get the status of the cluster with kubectl cluster-info
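If you want to confirm that both nodes have registered and are Ready before continuing, a quick check is:
# List the nodes and their readiness status
kubectl get nodes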
Deploy Istio
Istio is installed in two parts. The first part installs the CLI tooling that will be used to deploy and manage Istio-backed services. The second part configures the Kubernetes cluster to support Istio.
Install CLI tooling
The following command will download and extract the Istio 1.0.0 release.
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -
After it has successfully run, add the bin folder to your PATH and change into the Istio directory.
export PATH="$PATH:/root/istio-1.0.0/bin"
cd /root/istio-1.0.0
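With the bin folder on your PATH, you can sanity-check the CLI by printing its version, which should report the 1.0.0 client:
# Print the istioctl client version
istioctl version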
Configure Istio CRD
Istio extends Kubernetes via Custom Resource Definitions (CRDs). Deploy these extensions by applying crds.yaml.
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system
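To confirm the custom resources were registered, you can list the cluster's CRDs and filter for the istio.io API groups (the exact set of names varies between Istio releases):
# Istio 1.0 registers CRDs such as virtualservices.networking.istio.io
kubectl get customresourcedefinitions | grep istio.io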
Install Istio with default mutual TLS authentication
To install Istio and enforce mutual TLS authentication by default, use the istio-demo-auth.yaml manifest:
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
This will deploy Pilot, Mixer, the Ingress and Egress controllers, and the Istio CA (Certificate Authority). These components are explained in the Istio Architecture step.
Check Status
All the services are deployed as Pods.
kubectl get pods -n istio-system
Wait until they are all running or have completed. Once they are running, Istio has been deployed correctly.
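If you prefer to watch the rollout rather than re-running the command, the same check can be left streaming with the --watch flag (stop it with Ctrl+C):
# Stream pod status changes in the istio-system namespace
kubectl get pods -n istio-system --watch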
Deploy Katacoda Service
To make the sample BookInfo application and dashboards available to the outside world, in particular on Katacoda, deploy the following YAML:
kubectl apply -f /root/katacoda.yaml
Without this, the BookInfo example and the other dashboards will not be accessible.
Istio Architecture
The previous step deployed Pilot, Mixer, the Ingress and Egress controllers, and the Istio CA (Certificate Authority).
- Pilot - Responsible for configuring the Envoy proxies and Mixer at runtime.
- Proxy / Envoy - Sidecar proxies deployed per microservice that handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions such as discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting.
- Mixer - Creates a portability layer on top of infrastructure backends. Enforces policies such as ACLs, rate limits, quotas, authentication, request tracing and telemetry collection at an infrastructure level.
- Citadel / Istio CA - Secures service-to-service communication over TLS, providing a key management system to automate key and certificate generation, distribution, rotation, and revocation.
- Ingress/Egress - Configures path-based routing for inbound and outbound external traffic.
- Control Plane API - The underlying orchestrator, such as Kubernetes or Hashicorp Nomad.
The overall architecture is shown below.
Deploy Sample Application
To showcase Istio, a BookInfo web application has been created. This sample deploys a simple application composed of four separate microservices which will be used to demonstrate various features of the Istio service mesh.
When deploying an application that will be managed by Istio, the Kubernetes YAML definitions are extended via kube-inject. This configures the service's proxy sidecar (Envoy), Mixer filters, certificates and init containers.
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
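If you are curious what kube-inject adds, you can run it on its own and inspect the output before applying it; for example, listing the container images shows the Envoy sidecar image injected alongside each application container:
# Show the container images in the injected manifest
istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | grep "image:"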
Deploy Gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
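Because the CRDs applied earlier make Istio's resources queryable like any other Kubernetes object, you can confirm the Gateway was created:
# The bookinfo gateway should be listed
kubectl get gateway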
Check Status
kubectl get pods
When the Pods are starting, you may see initialisation steps happening as the containers are created. This configures the Envoy sidecar that handles traffic management and authentication for the application within the Istio service mesh.
Once running, the application can be accessed via the path /productpage.
https://[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/productpage
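To verify from the terminal that the page is being served, you can fetch it and print just the HTML title; for the stock BookInfo sample this should be something like "Simple Bookstore App":
# Fetch the product page and print the <title> element
curl -s https://[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/productpage | grep -o "<title>.*</title>"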
The architecture of the application is described in the next step.
Apply default destination rules
Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules.
kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
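To inspect the subsets this defines, you can read one of the rules back from the cluster; the reviews rule, for example, should list subsets v1, v2 and v3 keyed on the version label:
# Show the reviews DestinationRule, including its subsets
kubectl get destinationrule reviews -o yaml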
Bookinfo Architecture
The BookInfo sample application deployed is composed of four microservices:
- The productpage microservice is the homepage, populated using the details and reviews microservices.
- The details microservice contains the book information.
- The reviews microservice contains the book reviews. It uses the ratings microservice for the star rating.
- The ratings microservice contains the book rating for a book review.
The deployment included three versions of the reviews microservice to showcase different behaviour and routing:
- Version v1 doesn’t call the ratings service.
- Version v2 calls the ratings service and displays each rating as 1 to 5 black stars.
- Version v3 calls the ratings service and displays each rating as 1 to 5 red stars.
The services communicate over HTTP using DNS for service discovery. An overview of the architecture is shown below.
The source code for the application is available on GitHub.
Control Routing
One of the main features of Istio is its traffic management. As microservice architectures scale, there is a need for more advanced service-to-service communication control.
User Based Testing / Request Routing
One aspect of traffic management is controlling traffic routing based on attributes of the HTTP request, such as user agent strings, IP addresses or cookies.
The example below will send all traffic for the user "jason" to reviews:v2, meaning they will only see the black stars.
cat samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
As with other Kubernetes configuration, the routing rules are applied using kubectl.
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Visit the product page and sign in as the user jason (password: jason).
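You can also read the rule back from the cluster to confirm how it matches the request (in this sample, on the end-user header) and routes to the v2 subset:
# Inspect the stored reviews VirtualService
kubectl get virtualservice reviews -o yaml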
Traffic Shaping for Canary Releases
The ability to split traffic for testing and rolling out changes is important. This allows for A/B variation testing or deploying canary releases.
The rule below splits traffic so that 50% goes to reviews:v1 (no stars) and 50% to reviews:v3 (red stars).
cat samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
Log out of the user jason, otherwise the user-based routing rule applied above will take priority.
Note: The weighting is not round-robin; multiple requests may go to the same version.
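One way to observe the split from the terminal is to sample a handful of requests and check whether each rendered page contains star ratings. This assumes the BookInfo page marks stars with the glyphicon-star CSS class, so treat it as a rough check rather than an exact measurement:
# Sample 20 requests; pages with stars came from reviews:v3, pages without from reviews:v1
for i in $(seq 1 20); do
  curl -s https://[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/productpage \
    | grep -q "glyphicon-star" && echo "stars (v3)" || echo "no stars (v1)"
done | sort | uniq -c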
New Releases
Given the above approach, if the canary release were successful then we'd want to move 100% of the traffic to reviews:v3.
cat samples/bookinfo/networking/virtual-service-reviews-v3.yaml
This can be done by updating the route with new weighting and rules.
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml
List All Routes
It's possible to get a list of all the rules applied using istioctl get virtualservices
and istioctl get virtualservices -o yaml
Access Metrics
With Istio's insight into how applications communicate, it can generate detailed insights into how applications are working, along with performance metrics.
Generate Load
To view the graphs, there first needs to be some traffic. Execute the command below to send requests to the application.
while true; do
curl -s https://[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/productpage > /dev/null
echo -n .;
sleep 0.2
done
Access Dashboards
With the application responding to traffic, the dashboards will start to highlight what's happening under the covers.
Grafana
The first is the Istio Grafana Dashboard. The dashboard shows the total number of requests currently being processed, along with the number of errors and the response time of each call.
As Istio manages the entire service-to-service communication, the dashboard will highlight the aggregated totals as well as the breakdown at an individual service level.
Jaeger
Jaeger provides tracing information for each HTTP request. It shows which calls are made and where the time was spent within each request.
https://[[HOST_SUBDOMAIN]]-16686-[[KATACODA_HOST]].environments.katacoda.com/
Click on a span to view the details on an individual request and the HTTP calls made. This is an excellent way to identify issues and potential performance bottlenecks.
Service Graph
As a system grows, it can be hard to visualise the dependencies between services. The Service Graph will draw a dependency tree of how the system connects.
https://[[HOST_SUBDOMAIN]]-8088-[[KATACODA_HOST]].environments.katacoda.com/dotviz
Before continuing, stop the traffic process with Ctrl+C
Visualise Cluster using Weave Scope
While Service Graph displays a high-level overview of how systems are connected, a tool called Weave Scope provides a powerful visualisation and debugging tool for the entire cluster.
Using Scope, it's possible to see what processes are running within each pod and which pods are communicating with each other. This allows users to understand how Istio and their application are behaving.
Deploy Scope
Scope is deployed onto a Kubernetes cluster with the command kubectl create -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml'
Wait for it to be deployed by checking the status of the pods using kubectl get pods -n weave
Make Scope Accessible
Once deployed, expose the service so that it can be accessed from outside the cluster.
pod=$(kubectl get pod -n weave --selector=name=weave-scope-app -o jsonpath={.items..metadata.name})
kubectl expose pod $pod -n weave --external-ip="[[HOST_IP]]" --port=4040 --target-port=4040
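Before opening the dashboard, you can check that the Service was created and is bound to the expected port:
# The weave-scope-app pod should now be exposed on port 4040
kubectl get service -n weave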
Important: Scope is a powerful tool and should only be exposed to trusted individuals and not the outside public. Ensure correct firewalls and VPNs are configured.
View Scope on port 4040 at https://[[HOST_SUBDOMAIN]]-4040-[[KATACODA_HOST]].environments.katacoda.com/
Generate Load
Scope works by mapping active system calls to different parts of the application and the underlying infrastructure. Create load to see how various parts of the system now communicate.
while true; do
curl -s https://[[HOST_SUBDOMAIN]]-80-[[KATACODA_HOST]].environments.katacoda.com/productpage > /dev/null
echo -n .;
sleep 0.2
done