Before we work with Deployments, we'll go over the basics of ReplicaSets. A ReplicaSet ensures that the desired number of replicas of a Pod are up and running at all times. Any time a Pod goes down, the ReplicaSet creates a new one to maintain high availability.
Now inspect the file `./resources/tutum-rs.yaml`. It should look similar to the Pod resource, but with a few additions. These additions are what configure our ReplicaSet.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: tutum-rs
  labels:
    app: tutum
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tutum
  template:
    metadata:
      labels:
        app: tutum
    spec:
      containers:
      - name: tutum
        image: tutum/hello-world
        ports:
        - containerPort: 80
```
The biggest additions are `replicas: 3` and `selector`. The first specifies the total number of Pod replicas that should be active at all times. The `selector` matches a set of constraints to identify the Pods the ReplicaSet represents. In this case, the ReplicaSet will track Pods with the label `app: tutum`.
We can deploy this ReplicaSet the same way we did Pods:
```
kubectl create -f ./resources/tutum-rs.yaml
```
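If the ReplicaSet is accepted, kubectl prints a confirmation, which (depending on your kubectl version) looks something like:

```
replicaset.apps/tutum-rs created
```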
Now watch Kubernetes create 3 `tutum` Pods based on the specification in the Manifest:

```
kubectl get po --watch
```
Wait for the Pods to be created. You can press `CTRL-C` to stop watching.
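The watch output will look something like the following. The Pod name suffixes are randomly generated, so yours will differ:

```
NAME             READY   STATUS              RESTARTS   AGE
tutum-rs-4x9mb   0/1     ContainerCreating   0          2s
tutum-rs-k7vp2   0/1     ContainerCreating   0          2s
tutum-rs-q8zt4   0/1     ContainerCreating   0          2s
tutum-rs-4x9mb   1/1     Running             0          5s
```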
Inspect the ReplicaSet with the `describe` command (`rs` is shorthand for `replicaset`):

```
kubectl describe rs tutum-rs
```
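The output should look roughly like this (Pod names, ages, and event details will vary):

```
Name:         tutum-rs
Namespace:    default
Selector:     app=tutum
Labels:       app=tutum
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=tutum
  Containers:
   tutum:
    Image:  tutum/hello-world
    Port:   80/TCP
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1m    replicaset-controller  Created pod: tutum-rs-4x9mb
```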
Now modify the ReplicaSet to instantiate 5 Pods by changing the `replicas: 3` value:

```
kubectl edit rs tutum-rs
```
With `kubectl edit`, you can live-edit the configuration of the resource in Kubernetes. However, it will not edit the underlying Manifest file representing the object.
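In the editor that opens, change the `replicas` field from 3 to 5. After the edit, the relevant part of the spec should look like this:

```yaml
spec:
  replicas: 5
  selector:
    matchLabels:
      app: tutum
```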
In the last step you scaled the `tutum-rs` ReplicaSet up to 5 Pods by live-editing its spec. Those changes were applied automatically.
To manually scale a ReplicaSet up or down, use the `scale` command. Scale the `tutum` Pods down to 2 with:

```
kubectl scale rs tutum-rs --replicas=2
```
You can verify that 3 of the 5 `tutum` instances have been terminated:

```
kubectl get pods
```
or watch them until they finish:

```
kubectl get po --watch
```
Of course, the ideal way to do this is to update our Manifest to reflect these changes.
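For example, to make the scale-down permanent, you could set `replicas: 2` in `./resources/tutum-rs.yaml`:

```yaml
spec:
  replicas: 2
```

and then apply the updated Manifest:

```
kubectl apply -f ./resources/tutum-rs.yaml
```

Since the ReplicaSet was originally created with `kubectl create`, `kubectl apply` may print a warning about a missing last-applied-configuration annotation, but it will still apply the change.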
Kubernetes provides native autoscaling of your Pods. However, `kube-scheduler` might not be able to schedule additional Pods if your cluster is under high load. In addition, if you have a limited set of compute resources, autoscaling Pods can have severe consequences, unless your worker nodes can automatically scale as well (e.g. with AWS Auto Scaling groups).
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: tutum-rs
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```
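Note that CPU-based autoscaling only works if the target Pods declare CPU resource requests and the cluster has a metrics source (such as metrics-server); without them, the HPA cannot compute utilization. A minimal sketch of adding a request to the tutum container (the `100m` value here is an assumption for illustration, not part of the provided Manifest):

```yaml
    spec:
      containers:
      - name: tutum
        image: tutum/hello-world
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m   # hypothetical request; utilization is measured against this
```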
To see all the autoscaling options, run:

```
kubectl autoscale --help
```
It is also possible to automatically generate a config file, as we've seen before. The command to output a YAML config looks like this:

```
kubectl autoscale rs tutum-rs --max=10 --min=3 --cpu-percent=50 --dry-run=true -o=yaml
```
We specified `--dry-run=true`, which means that Kubernetes will not apply the desired state changes to our cluster. However, we also provided `-o=yaml`, which means output the configuration as YAML. This lets us easily generate a Manifest: combining `--dry-run=true` with `-o=yaml` is an excellent way to generate configurations!
We've provided this content in `./resources/hpa-tutum.yaml`.
Now actually apply the configuration:
```
kubectl create -f ./resources/hpa-tutum.yaml
```
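You can confirm the autoscaler was created with the following command (the TARGETS column may show `<unknown>` until metrics are available):

```
kubectl get hpa
```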
At this point, we have a ReplicaSet managing the Tutum Pods, with Horizontal Pod Autoscaling configured. Let's clean up our environment:
```
kubectl delete -f ./resources/hpa-tutum.yaml
kubectl delete -f ./resources/tutum-rs.yaml
```