Kubernetes is a container orchestrator, inspired by Google’s Borg, built by a large community of technology vendors and users.
Kubernetes is operated by everyone from major Cloud providers to you. From the What is Kubernetes page:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

If you check the Kubernetes homepage or a conference, you’ll also see promises like “Planet Scale,” “Never Outgrow,” and “Run Anywhere.”
It’s safe to say that Kubernetes strives to be a highly portable orchestration system that lets you operate at small to very large scale. It’s also safe to say that “easy to deploy, operate, and use” are not in the core set of promises, though the broader ecosystem certainly offers that.
The Kubernetes API is its biggest strength
To me, the most interesting and valuable aspect of Kubernetes’ ‘portability’ is the Kubernetes API. Kubernetes provides APIs for more than 50 resources that model Workloads, Services, Config and Secrets, Storage, Metadata, and now even Clusters. Kubernetes can incorporate and expose raw compute, storage, and networking resources from a wide variety of sources, whether in a self-managed datacenter or the Cloud, using plugins that implement its networking and storage interfaces. The API provides a uniform interface to the cluster’s underlying resources that applications, deployment tools, and the cluster itself can agree upon no matter what the cluster is built from.
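As a minimal sketch of that uniformity (the `web` name and `nginx` image are placeholders), the manifest below describes a Deployment and a Service that can be applied unchanged to any conformant cluster, whether it runs on bare metal, vSphere, or a Cloud provider:

```yaml
# A minimal Deployment plus Service sketch; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# The Service gives the Pods a stable virtual IP and an in-cluster DNS name
# (web.default.svc.cluster.local), regardless of the network plugin beneath it.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```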
This adaptability helps you leverage existing networking, storage, and compute assets and tailor Kubernetes to the environment it needs to run in.
Kubernetes can help you adapt both applications and infrastructure to serve a ‘single’ deployment and operations API. Of course, not all Kubernetes networking and storage plugins implement those interfaces and APIs evenly or equally well. Each component choice can produce significant differences in how the cluster operates, differences that matter for application performance and reliability.
For example, a good October 2019 comparison of Ingress controllers evaluated eleven controllers against thirteen criteria important to that organization. Each of those Ingress controller implementations performs well in certain contexts, but not in others.
You will need to evaluate these options, select one that is right for you, integrate the components, and verify that the resulting application platform works as advertised.
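Part of what makes that choice swappable is that the Ingress resource itself is a portable API; the controller behind it is an implementation detail. A minimal sketch (the hostname and backend Service are placeholders; the networking.k8s.io/v1 API shown here became stable in Kubernetes 1.19):

```yaml
# A minimal Ingress sketch. Any installed Ingress controller (NGINX, HAProxy,
# Traefik, a Cloud load balancer, ...) can satisfy it; the host and backend
# Service are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```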
Kubernetes may be best thought of as a distributed system toolkit rather than a platform that works “out of the box.” If this looks like a lot of specialized systems engineering and integration work and responsibility, it is. Most teams are better off using a managed Kubernetes offering from a Cloud vendor and/or Kubernetes management tools like Rancher to deploy and operate clusters, possibly with a PaaS-for-Kubernetes on top of the cluster. This is especially true because…
Continuous Change
Kubernetes changes very quickly. Even ‘long term support’ release versions (e.g. 1.11, 1.14) live for only 9 months (see the Kubernetes versioning policy). You should expect some changes to Kubernetes resource APIs and implementations even when transitioning from one “long term” release to another. (Update: see Why people replace Kubernetes clusters instead of upgrading them.)
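That churn shows up directly in manifests. The Deployment resource, for example, graduated through several beta API groups before reaching GA as apps/v1, and the beta versions stopped being served in Kubernetes 1.16, so manifests like the sketch below had to be migrated:

```yaml
# A Deployment under the pre-GA API group it commonly used before
# Kubernetes 1.16. Clusters at 1.16+ no longer serve this version, so
# applying this manifest fails until apiVersion is changed to apps/v1.
apiVersion: extensions/v1beta1   # removed in Kubernetes 1.16
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```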
This is one of the reasons it is imperative to create functional and load tests for the core platform responsibilities such as log shipping, metric collection, the application deployment process, load balancing, scaling, etc. You’ll need to verify frequently that your application platform still works in order to keep up with a supported release.
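One lightweight way to run such checks on every upgrade is as in-cluster Jobs. A hedged sketch, assuming the `web` Service from earlier and a public curl image:

```yaml
# A sketch of a platform smoke test: a Job that verifies cluster DNS and
# Service routing by fetching a known in-cluster endpoint. The curl image
# and the `web` Service are assumptions for illustration.
apiVersion: batch/v1
kind: Job
metadata:
  name: smoke-test-service-routing
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: curlimages/curl:8.5.0
          args: ["--fail", "--max-time", "5", "http://web.default.svc.cluster.local/"]
```

Real platform test suites go much further (load, failover, log and metric delivery), but even a handful of Jobs like this catches regressions early.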
Then the question becomes: should our platform or delivery teams be doing this system integration and operations work or should a vendor be doing it? Funny how that worked out.
This is especially relevant given that, contrary to the “Planet Scale” hype, there’s a “more, smaller clusters” movement in the Kubernetes community. I think this reflects the pace of change as well as the evolution of cluster architecture toward more robust fault, management, and security partitions.
That ‘take’ was long, so I’ll keep the differences short.
Differences
Here are three key differences between Kubernetes and most other orchestrators.
First, Kubernetes has a huge ecosystem of vendors, developers, and users trying to make Kubernetes run usefully everywhere, using almost anything.
Second, Kubernetes is extensible so you can customize the behavior and enhance the features of your cluster through custom resource definitions, admission controllers, and more.
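As a sketch of that extension surface, a CustomResourceDefinition teaches the API server an entirely new resource type; the `example.com` group, `Backup` kind, and schema below are invented for illustration:

```yaml
# A minimal CustomResourceDefinition sketch. The group, kind, and schema are
# invented; once applied, `kubectl get backups` works like a built-in resource,
# and a controller you write can act on Backup objects.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
```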
Third, Kubernetes will likely require that you rearchitect significant portions of your application architecture. The networking model is a big change for most organizations. Kubernetes assigns every Pod a routable IP from a contiguous IP block and provides some DNS-based service discovery between services within the cluster. However, this can result in a few problems:
- a single container cluster may need to reserve thousands of IPs to allocate to Pods
- the Kubernetes cluster(s) must be able to allocate and deallocate IPs, which may not be possible on some networks
- many people need clusters to span multiple datacenters, but many networks are bounded by a physical location; for example, AWS subnets are bounded by an Availability Zone (datacenter)
- Kubernetes Services frequently need to be advertised outside of the cluster
- most people would like some sort of network access control for traffic attempting to reach application instances running in a Pod, but Pod networking defaults to open
To be clear, there are solutions to all of these problems; the default-deny NetworkPolicy sketched below addresses the last one. But these problems are some of the reasons there is so much interest in various overlay/underlay networks, Ingress controllers, service meshes, and network security policy tools. There’s a lot to be done.
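A minimal sketch of closing the open-by-default posture, assuming the cluster’s network plugin actually enforces NetworkPolicy (not all do):

```yaml
# A default-deny NetworkPolicy sketch: once applied, Pods in the namespace
# accept only the ingress traffic that other, more specific policies
# explicitly allow.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```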
Best For
I think some of the best ways to use Kubernetes are:
- Consuming it as a managed service deployed and operated by a team of experts that continuously integrates change from upstream Kubernetes
- Extending the life and leverage of existing datacenter compute assets that have APIs (e.g. vSphere) when you don’t already have well-functioning automated processes to provision resources for applications and deploy them
- Creating a compute platform that is tailored to your exact business, environmental, and organizational constraints and strengths
- Adopting a relatively consistent orchestration platform and application deployment process for Cloud and on-premises deployments; there will be differences between the on-premises and Cloud clusters, but they’ll be smaller than, say, the differences between Kubernetes and ECS
Closing Words
Kubernetes is the most widely talked-about container orchestrator for good reasons. It is very flexible, can provide a relatively consistent deployment target for applications, and (especially) has a large and competitive support ecosystem. Kubernetes also represents a large and complex set of architectural changes and requires top-notch automation and skills to operate clusters. Adopters can improve their chances of success by planning their deployments carefully and buying as much of the necessary expertise as they can in the form of managed services, pre-integrated and tested Kubernetes distributions, and specialized external support staff.
The next post in this series covers container orchestrator selection strategy.
Stephen
#NoDrama