Distilled JRE Apps in Containers
This scenario walks through a series of steps for distilling a container down to the minimum needed to support your Java application.
For years one of Java's strengths has been write once, run anywhere (WORA). With containers, it's now polyglot: package once, run anywhere (PORA). Both ideas are about writing our code to be agnostic to the target.
Java remains relevant for containers. However, between WORA and PORA there is some redundancy. If your application runs in a container and you know exactly what is in that container, why do you need a Java Runtime Environment that can run anywhere? The containers themselves can already run anywhere. What we put inside our containers should be statically defined and linked as natively as possible, at build time.
What if we could avoid putting a whole JRE in a container yet still deliver a working Java app? Blasphemy.
The distillation pattern is about applying best practices to make your containers small, simple, secure, and fast. These ideas all contribute to container distillation:
| Principle | Description |
| --- | --- |
| High cohesion | All things in the container are used and purposeful |
| Low coupling | All public access is used and purposeful |
| Idempotent | Well-tested dependencies with known versions |
| Immutable | Simplicity increases when things do not change at runtime |
| Small attack surface | Remove access points such as ports and file mounts |
| Small container images | Remove everything not used; reduce storage and transmission costs |
| Fast startup time | Expect ephemeral containers to fail, restart, and scale |
| Fast execution time | Performance pays; CPUs and memory are limited resources |
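As a sketch, several of these principles can show up directly in a Dockerfile. The base image, user names, paths, and port below are illustrative assumptions, not values from the scenario:

```dockerfile
# Minimal, immutable runtime stage: only the app and a small JRE base.
FROM eclipse-temurin:17-jre-alpine

# Run as a non-root user to shrink the attack surface.
RUN addgroup -S app && adduser -S app -G app
USER app

# Copy only the artifact we need -- high cohesion, nothing unused.
COPY --chown=app:app target/app.jar /opt/app/app.jar

# Expose exactly one port; every public access point is purposeful.
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

Nothing in this image changes at runtime, and there is nothing in it that the application does not use.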
You will learn how to:
- install a container registry onto Kubernetes
- build and run a simple Java application
- build and run the same application with a container
- use multi-stage technique for building containers
- leverage Java 9+ modularity with JLink
- compile Java to a native binary and run it from a container
- start using GraalVM
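The multi-stage and JLink steps can be combined in one Dockerfile along these lines. This is a hedged sketch: the base images, module list, and class names are assumptions for illustration (in practice, `jdeps` can report which modules your application actually needs):

```dockerfile
# Stage 1: compile the application and assemble a custom runtime with jlink.
FROM eclipse-temurin:17-jdk-alpine AS build
WORKDIR /src
COPY src/ src/
RUN javac -d classes $(find src -name '*.java')

# Build a stripped-down runtime containing only the modules the app needs;
# java.base alone is assumed here for a simple application.
RUN jlink --add-modules java.base \
          --strip-debug --no-header-files --no-man-pages \
          --compress=2 --output /opt/jre

# Stage 2: only the distilled runtime and compiled classes ship in the image.
FROM alpine:3
COPY --from=build /opt/jre /opt/jre
COPY --from=build /src/classes /opt/app
ENTRYPOINT ["/opt/jre/bin/java", "-cp", "/opt/app", "Main"]
```

The JDK, compiler, and unused modules stay behind in the build stage; only the distilled runtime reaches the final image.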
We went from a 184MB container to a 25MB container, an 87% reduction! Both containers ran the same code from the same source, yet we were able to distill so much unneeded software out of the container. We looked only at container size here, but other topics such as performance and security are also important to consider when delivering containers.
You now understand a few different techniques for efficiently getting your Java application into a container, ready to be run on Kubernetes. Some basic ways to containerize applications are not very efficient; with native binaries built using GraalVM, you now have more techniques for creating distilled containers. Java continues toward cloud native.
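The GraalVM native-image step mentioned above can be sketched as a multi-stage build. The base images and flags here are illustrative assumptions rather than the scenario's exact script:

```dockerfile
# Stage 1: compile Java bytecode ahead-of-time with GraalVM native-image.
FROM ghcr.io/graalvm/native-image-community:21 AS build
WORKDIR /src
COPY src/ src/
RUN javac -d classes $(find src -name '*.java')
# --static links the C library into the binary so it can run in an empty
# base image; this generally requires a static (e.g. musl) toolchain.
RUN native-image -cp classes --static Main -o app

# Stage 2: ship only the native binary -- no JRE in the container at all.
FROM scratch
COPY --from=build /src/app /app
ENTRYPOINT ["/app"]
```

Because the binary is statically linked at build time, the final image needs no runtime, which is exactly the WORA/PORA redundancy the distillation pattern removes.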
With these steps you have learned how to:
- ✔ install a container registry onto Kubernetes
- ✔ build and run a simple Java application
- ✔ build and run the same application with a container
- ✔ use multi-stage technique for building containers
- ✔ leverage Java 9+ modularity with JLink
- ✔ compile Java to a native binary and run it from a container
- ✔ start using GraalVM
Distilling Java Containers
Your Kubernetes Cluster
For this scenario, Katacoda has just started a fresh Kubernetes cluster for you. Verify it's ready for your use.
kubectl version --short && \
kubectl get componentstatus && \
kubectl get nodes
The Helm package manager used for installing applications on Kubernetes is also available.
helm version --short
You can administer your cluster with the kubectl CLI tool or use the visual Kubernetes Dashboard. Use this script to access the protected Dashboard.