By James Mitchell on December 01, 2023
Tags: cloud / kubernetes / microk8s / k3s / k0s / aws-eks / prometheus / grafana / karpenter
Kubernetes has taken control of my (working) life.
When working for Suitebox, we moved the backend Java services out of OracleCloud and into the AWS cloud. We also created some new services: PDF digital signing, a Keycloak IDP, and a Spring Configuration Server. All of these services were built as Docker images, and when deciding how to run them, I asked "Will we need to run these in multiple clouds?". The answer was "No", which strongly indicated we didn't need the complexity of Kubernetes. Instead, we ran the services on the AWS Elastic Container Service.
When it came time to ask the same question at Aportio, the answer was "Probably". So we committed to moving away from AWS Elastic Beanstalk and onto the AWS Elastic Kubernetes Service. The idea is that in the future we can target any cloud provider that offers a managed Kubernetes service - or even run the application on our own hardware in a data centre using a Kubernetes distribution such as MicroK8s, K3s, or k0s.
So why Kubernetes? For the ability to run our workloads on any cloud or platform. Using a managed version of Kubernetes, with the cloud provider looking after the control plane, also made it much faster to get the project up and running.
Once we were in the Kubernetes environment, I appreciated being able to run Prometheus and Grafana to see how the nodes and applications were performing, along with the Kubernetes superpowers: autoscaling the application based on load, and restarting failed pods.
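To give a feel for that autoscaling superpower, here is a minimal HorizontalPodAutoscaler sketch. The `backend-api` Deployment name and the 70% CPU target are placeholder values for illustration, not our actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-api            # hypothetical Deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-api
  minReplicas: 2               # keep at least two pods for availability
  maxReplicas: 10              # cap how far load can scale us out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU rises above 70%
```

With this in place, Kubernetes adds or removes pods as CPU load changes, and the ReplicaSet behind the Deployment restarts any pod that fails.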
I also want to show some love to Karpenter, which provisions new nodes when the workload expands. It only supports AWS (as of 2023) but aims to support other clouds as well. Its promise is to know how much resource your workload requires and what the different virtual machine types in the cloud cost, then provision the cheapest node that meets the demand. It can also re-assess the current resource demands and consolidate the running pods onto a cheaper set of nodes.
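As a rough sketch of what that looks like, here is a Karpenter NodePool using the v1beta1 API that shipped in late 2023. The capacity types, architecture, and CPU limit are illustrative values, not our production settings, and the `nodeClassRef` assumes an EC2NodeClass named `default` exists:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Let Karpenter choose spot or on-demand, whichever is cheapest
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default          # EC2NodeClass holding AMI, subnet, and security group details
  limits:
    cpu: "100"                 # never provision more than 100 vCPUs in total
  disruption:
    consolidationPolicy: WhenUnderutilized   # repack pods onto fewer, cheaper nodes
```

The `disruption` block is where the consolidation behaviour lives: when nodes are underutilised, Karpenter will replace them with a cheaper set that still fits the running pods.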