Setting Sail with Kubernetes


Today I'm embarking on a journey to learn more about Kubernetes. In its own words, it's a system for "automating deployment, scaling, and management of containerized applications."

Kubernetes has recently become relevant in my professional life. Self-hosting applications (rather than paying for SaaS subscriptions) has also been interesting to me for a while. Why not combine these interests and set up a Kubernetes cluster to self-host some applications?

I've been hearing about Kubernetes (often shortened to K8s) for years but never had a reason to use it. Here are some things I've heard about it:

"It's complicated."

"We're using Kubernetes to avoid vendor lock-in."

"Throw it in a Kubernetes cluster and it will scale automatically."

"Here's our official Helm chart."

"No, you don't need Kubernetes to host your CRUD app with 100 users!"

Needless to say, I'm intrigued. Time to kick the tires and form some opinions of my own. But first, here's the pitch.

The Promise of Kubernetes

Let's talk about what K8s strives to be without getting into where it falls short. After all, I don't yet know the extent to which Kubernetes achieves these goals. We'll get into that in future posts.


You can deploy almost anything to K8s. Unlike other platforms, which offer to solve some of your problems, K8s claims to solve most of your problems. Here are some interesting things you can do with K8s:

Run stateless web services and scale them up or down

Run stateful workloads like databases, with stable storage and identity

Schedule recurring batch jobs

Run an agent on every node in the cluster

Is it really capable of all this? Where are the seams between K8s and the rest of the required bits? How does K8s know how to run my application? Is it practical to run a database on K8s?
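As a first glimpse at how K8s learns to run an application, here's a sketch of a minimal Deployment manifest. The names (my-app) and image are placeholders; the structure itself is standard Kubernetes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # K8s keeps three copies running
  selector:
    matchLabels:
      app: my-app
  template:                    # the pod template: what each copy runs
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25    # any container image works here
          ports:
            - containerPort: 80
```

You declare the desired state (three replicas of this container), and K8s works to make reality match it.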


Perhaps the most impressive thing about Kubernetes is that it is portable. By that I mean it can run in your data center, in your favorite public cloud provider, or on a few Raspberry Pis.

How is that possible? How easy would it be to actually switch from one place to another? Do the capabilities of K8s differ based on where it runs?


K8s is an open source platform that knows how to deploy and scale containerized workloads out of the box. But it also allows for extension beyond these core capabilities. As a result, there's a thriving ecosystem around K8s. Even if you don't plan on extending K8s yourself, you can take advantage of projects that do.

Take Helm, for example, which calls itself "the package manager for Kubernetes." A quick glance reveals Helm charts for Elasticsearch, Jenkins, and Vault. Can I just change a manifest and get a working Vault instance?
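If the chart repositories are what they appear to be, getting Vault running might be as simple as a couple of commands. This is a sketch based on HashiCorp's published chart; the repository URL and the server.dev.enabled value are assumptions worth verifying against their docs.

```shell
# Add HashiCorp's chart repository and refresh the local index
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install the Vault chart in dev mode (a single unsealed instance; not for production)
helm install vault hashicorp/vault --set server.dev.enabled=true
```

Whether this yields a genuinely usable Vault instance, or just the beginning of more configuration, is one of the things I want to find out.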

These extension points also enable the interesting concept of operators in K8s. Here are some tasks which operators can allegedly perform:

Provision a database and keep it running

Take scheduled backups

Handle failover and version upgrades

These are the things we pay our cloud providers to handle. Can K8s really automate these chores?
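The promise, as I understand it, is that an operator lets you declare a high-level resource and have the automation handled for you. Here's a hypothetical example — the API group (example.com) and every field name below are invented for illustration, though real database operators expose similar shapes.

```yaml
# Hypothetical custom resource; a real operator defines its own schema
apiVersion: example.com/v1
kind: PostgresCluster
metadata:
  name: my-database
spec:
  instances: 3            # the operator would provision and supervise replicas
  storage:
    size: 10Gi
  backups:
    schedule: "0 3 * * *" # and allegedly take nightly backups for you
```

If that works as advertised, it's a compelling alternative to a managed database service. I'm skeptical until I've tried it.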


I listed declarative configuration last on purpose: not because it isn't important, but because, with declarative infrastructure-as-code tools like Terraform and Pulumi, it isn't unique to K8s.

Still, it's an important benefit, and one that the K8s community takes seriously. With declarative infrastructure, you don't have to wonder how your environment is configured. Everything is under version control and changes are just a pull request away. You'll often hear it discussed along with GitOps. Since the community is fully bought in, there are also tools to help you implement these practices. ArgoCD, for example, is a "declarative, GitOps continuous delivery tool for Kubernetes."
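To make the GitOps idea concrete, here's roughly what an Argo CD Application looks like. The repository URL and paths are placeholders; the apiVersion, kind, and field structure follow Argo CD's documented schema.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config  # placeholder repo
    targetRevision: main
    path: k8s/                 # directory of manifests to apply
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}              # keep the cluster in sync with the repo
```

With something like this in place, merging a pull request to the repo is what deploys the change — the cluster converges on whatever the repo declares.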

A Word of Caution

As with any tool, K8s has its drawbacks. I hope to uncover more of these in my travels, but some are plain to see even at the outset.


K8s' steep learning curve is such a common criticism that it almost goes without saying. It's understandable given the ambitious scope of the project.

It's often said that software should match the complexity of the domain. I'll keep this in mind and judge accordingly.


As I write this, large companies run production workloads on K8s. And Google (you may have heard of them) famously open-sourced K8s after 15 years of experience running containerized workloads in production.

But K8s is still finding its way. It is not a boring technology. Standards are still evolving. Best practices are in flux. And K8s is still missing many features one might reach for. A standard interface for managing object storage, for example, was just released as an alpha feature.

We Proceed

Given the good and the bad, I'm ready to learn more about K8s. In the next post, I'll set up a small cluster so we can get our hands dirty.