About Helm vs Kubernetes Operators
Helm is like npm for Kubernetes. There are repositories containing charts, and each chart allows you to install an application into your K8s cluster.
How it Works
Under the hood, Helm works by filling out templated Kubernetes YAML files with user-provided values. This drastically reduces boilerplate and lets you deploy a reasonably complex application without the chart's user having to understand much about how to manage such an application.
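To make that concrete, here is a hand-wavy sketch of what a chart template and its values file look like (the names, image, and values are made up for illustration, and the Deployment is trimmed down, not a complete manifest):

```yaml
# templates/deployment.yaml -- the chart author writes this once.
# (Trimmed: selector/labels and probes omitted for brevity.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

```yaml
# values.yaml -- the chart's user only fills in these knobs.
replicaCount: 3
image:
  repository: my-team/my-service
  tag: "1.4.2"
```

Helm renders the template with the values and applies the result to the cluster; the user never touches the full manifest.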
In addition to that template-engine functionality, Helm also manages versions of your application. You can use Helm to release a new version of your app (for example, updating the docker image tag) and then quickly roll back to a previous version if you discover a bug. This is great for teams deploying stateless applications such as microservices or LittleHorse Task Workers.
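That release lifecycle boils down to a couple of CLI calls (the release and chart names here are invented for illustration):

```
# Roll out a new image tag as a new revision of the release
helm upgrade my-service ./my-chart --set image.tag=1.4.3

# Found a bug? Roll straight back to revision 1
helm rollback my-service 1

# Inspect the revision history of the release
helm history my-service
```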
Helm has two big advantages. First, it is quite simple to write a Helm chart, which means most DevOps teams can quickly produce one that microservice teams across their organization can use.
Second, Helm is a client library (well, it has been since the removal of Tiller...but that's another blog post). Therefore, you don't need to run any privileged pods inside the K8s cluster; all you need is a CI server with permissions to create the necessary K8s resources.
Unfortunately, Helm doesn't do much beyond initial installation and upgrades. Monitoring, self-healing, autoscaling, certificate rotation, and management of non-Kubernetes resources (e.g. Kafka Topics, LittleHorse Task Definitions, AWS Load Balancers) are all left as exercises for the reader.
Kubernetes Operators are a pattern introduced by Red Hat that intends to capture the knowledge of an expert Site Reliability Engineer (or, more punnily, a software operator) into a program that manages (or operates) a complex application.
To accomplish this, a Kubernetes Operator extends the Kubernetes API to introduce a new resource type that is custom-made for your application. The Operator works in tandem with Kubernetes itself to manage applications of a specific type.
How they Work
A Kubernetes Operator has two components:
- A `CustomResourceDefinition`, which defines the extension to the Kubernetes API (including the relevant configuration for your application type).
- A Controller, which watches any resources created from your Custom Resource Definition and "reconciles" them.
A `CustomResourceDefinition` can be over-simplified as an OpenAPI (not OpenAI) specification for how your custom resource will look. For example, in LittleHorse Platform, the simplest version of an `LHCluster` resource (which creates a, you guessed it, LittleHorse Cluster) is:
```yaml
# Note: the apiVersion group shown here is illustrative.
apiVersion: littlehorse.io/v1
kind: LHCluster
metadata:
  name: internal-k8s
```
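The CRD that teaches Kubernetes about `LHCluster` is itself just a Kubernetes resource carrying that OpenAPI schema. A trimmed-down sketch of what such a definition looks like (the group name and schema fields here are illustrative, not LittleHorse's actual definition):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: lhclusters.example.io     # must be <plural>.<group>; group is made up here
spec:
  group: example.io
  scope: Namespaced
  names:
    kind: LHCluster
    plural: lhclusters
    singular: lhcluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:          # the OpenAPI spec for the resource's shape
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```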
The `CustomResourceDefinition` allows you to `kubectl apply -f <that file up there>`, and then you can `kubectl get lhclusters`.
Now how does the LittleHorse cluster get created, configured, managed, and monitored? That's where the Controller comes into play. In the Operator pattern, a Controller is a process (normally, it runs as a `Pod` in the cluster) that watches for all events related to a `CustomResourceDefinition` and manipulates the external world to match what the Custom Resources specify.
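The heart of every Controller is a reconcile loop. Here is a toy, in-memory sketch of the idea (this stands in for, and drastically simplifies, the real Kubernetes watch/reconcile machinery: `desired` plays the role of the Custom Resources, `actual` the role of the external world):

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of a level-based reconcile loop.

    `desired` maps resource names to their spec (what the Custom Resources say);
    `actual` maps names to what currently exists. The controller creates,
    updates, and deletes until `actual` converges on `desired`.
    """
    # Create missing resources and fix drifted ones.
    for name, spec in desired.items():
        if name not in actual:
            print(f"creating {name}")
            actual[name] = dict(spec)
        elif actual[name] != spec:
            print(f"updating {name}")
            actual[name] = dict(spec)
    # Garbage-collect resources that are no longer desired.
    for name in list(actual):
        if name not in desired:
            print(f"deleting {name}")
            del actual[name]
    return actual

# A real controller re-runs this on every watch event; here we run one pass.
desired = {"lh-cluster": {"replicas": 3}}
actual = {"lh-cluster": {"replicas": 2}, "stale-cluster": {"replicas": 1}}
reconcile(desired, actual)
assert actual == desired
```

The key property is that the loop is level-based, not edge-based: it compares full desired state against full actual state on every pass, so a missed event is corrected on the next reconciliation.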
Generally, that means creating a bunch of Kubernetes `Deployments`, etc. to spin up an instance of an application. For example, the Strimzi Kafka Operator watches `Kafka` resources and deploys an actual Kafka cluster.
However, a Controller can also manage non-Kubernetes resources. For example, many `Ingress` controllers provision or configure physical load balancers outside of the Kubernetes cluster. As another great example, the Strimzi Topic Operator watches for `KafkaTopic` resources and creates (you guessed it) Kafka Topics using the Kafka Admin API.
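A `KafkaTopic` resource for Strimzi looks roughly like this (topic and cluster names are made up; see the Strimzi docs for the authoritative schema):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-kafka-cluster   # which Kafka cluster owns this topic
spec:
  partitions: 12
  replicas: 3
```

You `kubectl apply` this, and the Topic Operator turns it into a real topic on the brokers, no Kafka CLI required.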
We at LittleHorse plan to add similar CRDs that are specific to LittleHorse...stay tuned to learn about the `LHPrincipal` CRDs 😉.
Kubernetes Operators are beautiful. The pattern (along with Strimzi) is the biggest reason why Red Hat, which developed it, sits in my top three favorite software companies of all time.
A well-written operator can make it a breeze to manage even the most daunting applications. Since the Controller is code written in a general-purpose language (normally Go or Java), an Operator can do just about anything that can be automated by an SRE. This includes:
- Autoscaling and alerting based on metrics
- Self-healing and mitigation in the face of hardware faults or degradations
- Certificate rotation
- Metadata management in your application (for example, creating Kafka Users)
- Intelligent rolling restarts that preserve high availability
- Provisioning infrastructure outside of Kubernetes, as Crossplane does.
The biggest downside to Operators is that writing a Controller is hard. Additionally, it requires running a `Pod` with special privileges that allow it to create other K8s resources. Because of this, writing an Operator for something like standardizing your team's blueprints for deploying a microservice just doesn't make sense.
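For a sense of what "special privileges" means, an operator's ServiceAccount typically needs RBAC rules along these lines (a trimmed, illustrative sketch; real operators usually need more, and well-behaved ones need less than cluster-admin):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-operator           # illustrative name
rules:
  # Core resources the controller creates and watches
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # Workload resources it spins up for each custom resource
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Granting a `Pod` this kind of write access cluster-wide is exactly the trust decision that Helm's client-side model lets you avoid.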
Future blogs will dive into some of the challenges that we had to overcome with LittleHorse Platform, and how we minimized the permissions that our Operator needs to provide a self-driving LittleHorse experience to our customers.
Helm or Operators?
Well, I'm a software engineer, so I'm going to say "it depends." However, Kafka legend Gwen Shapira said in a fantastic podcast that some "it depends" answers are more helpful than others. So, in an effort to fall on the "more helpful" side:
- If you want a framework for deploying simple stateless applications while minimizing boilerplate (i.e. allowing different teams to deploy microservices), then you probably want Helm.
- If your application doesn't require much hand-holding after initial configuration on Kubernetes, Helm might do.
- If you want to provide a Kubernetes-native way to manage non-Kubernetes infrastructure, you need an Operator.
- If you want to provide a self-driving experience for consumers of a highly complex application such as Kafka, ElasticSearch, or LittleHorse, you need an Operator.
LittleHorse Platform is an enterprise-ready distribution of LittleHorse that runs in your own Kubernetes environment. We believe that Helm is fantastic for deploying many stateless applications, and even some stateful applications. However, Helm wouldn't let us go far enough towards providing our customers with a fully self-driving LittleHorse experience. As such, we chose to put in the extra work and build a full Kubernetes Operator. Stay tuned for an extensive list of current and upcoming LH Platform features, all powered by the Java Operator SDK.
To inquire about LittleHorse Platform, contact
email@example.com. To get started with LittleHorse Community (free for production use under the SSPL), check out our Installation Docs.