
How can developers get going quickly with managed Kubernetes?


Two trends are increasingly obvious in modern software development. Firstly, continuous deployment and rapid delivery of new application versions are now the norm, rather than slow, monolithic version changes. Secondly, applications are deployed using containers, often on virtual hosts in data centers, rather than on-premises.

To manage fleets of these containers, to make sure they respond to changes in demand, to upgrade versions without downtime, and to recover after outages, we need a tool. Enter Kubernetes, the container orchestrator. However, Kubernetes can be complicated to set up and run, with a steep learning curve to get started. That’s why many developers are turning to managed Kubernetes to run and manage their container suites.

The different options for developers wanting to run and manage containers

In recent years, containers have only become more popular in software. In March 2016, the Cloud Native Computing Foundation (CNCF) completed its first survey on container adoption and found only 23 percent of respondents making use of containers in a production environment. Now, in new research from Civo, which surveyed over 1,000 cloud developers, 50 percent of respondents reported that their organization utilizes containers, with 73 percent of those using containers in a production environment – a big difference in only 5 years.

This increased popularity, however, does not make containers any easier to use or understand. For developers getting started with containers and managing containerized software, getting up and running continues to be a daunting task.

To step back, we should look at containers themselves before talking about managing them. A good place to start is getting to grips with Docker. Docker is an open-source project which allows people to develop, share, and run containerized applications – separated from machine infrastructure. It was originally built for Linux, but has since enjoyed meteoric growth and today is widely regarded as the most popular containerization engine, and is used by millions of developers all over the world.
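To make this concrete, here is a hedged sketch of what containerizing a small application with Docker can look like; the application and file names are hypothetical, not taken from any particular project.

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
FROM python:3.11-slim           # start from a minimal base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bake dependencies into the image
COPY . .                        # add the application code
CMD ["python", "app.py"]        # command the container runs on start
```

Building this with `docker build -t my-app .` produces an image that can be pushed to a registry and run anywhere Docker is available with `docker run my-app` – the same artifact on a laptop, a virtual host, or a data center machine.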

It’s worth noting here that Docker is not the only option for running containers: alternatives include Podman, rkt, and containerd, each with its own benefits for container creation. For this article, though, we’ll treat Docker as the standard, as the core concepts apply across the board.

So, Docker looks after how containers are built, distributed, and run, but how can developers manage them effectively? This is where Kubernetes, the most popular choice for container orchestration, comes in. Kubernetes (also known as K8s) grew out of Google’s internal container orchestration work and is now an open-source platform maintained by the CNCF.

Kubernetes helps with the deployment, scaling, scheduling, and overall management of containerized applications. It does this by grouping containers into pods, which run on machines called nodes; a set of these nodes, whether virtual or physical, makes up a cluster. At a minimum, a cluster contains a control plane, which keeps track of which applications are running, and one or more worker nodes. Unfortunately, even this brief sketch of the architecture may not be the easiest to understand, which brings us back to the issue of complexity and difficulty of adoption.
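As a hedged illustration of those terms (the names and image below are hypothetical, chosen for the example), a minimal manifest asking Kubernetes to run a single pod might look like this:

```yaml
# Hypothetical minimal Pod manifest: one pod wrapping one container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
    - name: web
      image: nginx:1.25        # container image to run
      ports:
        - containerPort: 80    # port the container listens on
```

Submitted with `kubectl apply -f pod.yaml`, the control plane schedules the pod onto one of the cluster’s nodes. In practice, most teams wrap pods in a higher-level Deployment so Kubernetes can also handle replicas and rolling upgrades.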

Again, as with Docker, there are alternative container orchestration options to Kubernetes, such as Apache Mesos and Docker Swarm. Within the Kubernetes space, there are also various distributions, such as K3s, which we’ll explore later. For now, we’ll focus on Kubernetes more generally, as the most popular choice for application orchestration.

So, Docker allows us to build, run and ship containers, and Kubernetes helps manage said containers throughout their lifecycle. They are complementary and, with the latest versions, they integrate well together.

However, it’s not entirely straightforward to get a fully containerized suite running in your environment. Kubernetes is known for its steep learning curve, its specialized terminology, tooling that requires command-line knowledge, and, to automate deployments, the management of YAML configuration files. Given the time it takes to learn these systems, many developers are choosing managed Kubernetes to get a head start.
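To give a flavor of that command-line tooling, the day-to-day workflow tends to look something like the sketch below; it assumes an existing cluster, a configured kubectl, and a hypothetical deployment named web.

```shell
# Assumes a running cluster and a configured kubectl; names are hypothetical.
kubectl apply -f deployment.yaml              # declare desired state from a YAML file
kubectl get pods                              # list the pods the cluster is running
kubectl scale deployment web --replicas=5     # scale a workload up to meet demand
kubectl rollout status deployment web         # watch an upgrade roll out without downtime
```

Each command is simple on its own; the learning curve comes from understanding the YAML specifications behind them and how the cluster reconciles declared state with reality.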

Why managed Kubernetes is faster and easier 

The number of developers utilizing managed Kubernetes has been increasing over the years. In the latest CNCF survey, 26 percent of respondents use a managed Kubernetes service, up from 23 percent the year before and catching up quickly to on-premises installations at 31 percent. Flexera’s 2021 State of the Cloud report, meanwhile, showed that only 48 percent of respondents used self-managed Kubernetes.

This increased number of respondents utilizing managed services could also be attributed to the increase in developers using Kubernetes in a production environment. The latest CNCF survey found that 78 percent of respondents are now using Kubernetes in production, compared to 58 percent from the year before.

This brings us to the central question: what makes managed Kubernetes an attractive choice for developers and organizations?

The main reason is the most obvious one: it takes less time and effort to get started. Running your own production environment with the help of Kubernetes demands a high level of specialist knowledge and a considerable investment of time – time which many organizations don’t have. This was reflected in recent research from Civo, in which 57 percent of respondents named Kubernetes’ steep learning curve as the top challenge holding them back from using it. With managed Kubernetes, you sidestep this challenge, as there’s less need for a dedicated team managing the Kubernetes release cycle and the reliability of your clusters.

The second reason follows from the first: it can be a huge task to hire all the people needed to run a Kubernetes environment. Not only is it challenging in the current climate to recruit developer/operations people with the skills to configure and run Kubernetes, it’s also expensive. If a developer knows how to fine-tune a Kubernetes cluster, their time is probably better spent on business-critical or problematic workloads, with the less critical ones handed off to a managed Kubernetes provider. This allows an organization’s internal developers to focus on the most important aspects of the system, while the vendor handles the rest.

Along with less pressure to maintain a team internally, managed Kubernetes can also offer better reliability and security. Managed Kubernetes providers can have teams of engineers whose job is ensuring the stability of customer environments and deployments, allowing these customer organizations to concentrate on shipping code. Providers can also be a repository of best-practice knowledge, having seen what works for a wide range of customers. This means that each organization looking to make the shift to Kubernetes does not have to reinvent the wheel.

The stability and support offered by managed Kubernetes providers, and the resulting lessened pressure on internal teams, can give businesses the luxury of time for forward planning. This forward thinking helps developers stay at the cutting edge of what is possible with modern software delivery technologies such as Kubernetes, taking advantage of vendor services that adopt the latest Kubernetes updates. It keeps teams moving and iterating quickly, rather than getting stuck trying to stay on top of everything.

Why K3s is the best distribution choice to speed things up 

So far, we’ve mostly talked about Kubernetes generally, or in terms of the “traditional” upstream Kubernetes – K8s. However, there can be an issue with traditional Kubernetes in production environments, especially for smaller organizations: Kubernetes is resource-heavy.

Engineers looking to run this traditional upstream Kubernetes need a lot of computing resources to ensure that everything works correctly. For example, it’s general practice to separate control plane nodes (which manage the Kubernetes system itself) from worker nodes (which run the workload). To ensure resiliency, it’s customary to run etcd (the Kubernetes state database) on its own set of nodes, and to add separate ingress nodes for handling incoming traffic. This quickly adds up: 3 x control plane nodes, 3 x etcd nodes, and 2 x ingress nodes means a minimum of 8 instances before you’ve provisioned anything to run the workloads themselves.

It is these challenges that are encouraging many in the industry to push for wider adoption of K3s, a lightweight Kubernetes distribution with all the frills of K8s at a fraction of the size. It was created by Rancher Labs in February 2019, then donated to the CNCF in August 2020. Crucially, this ensured that the technology could continue to develop under open-source, vendor-neutral governance.

One of the main differences between K3s and K8s is that K3s is packaged as a single binary of less than 40MB that implements the Kubernetes API and the other components required to run a cluster. To achieve this size reduction, Rancher Labs removed the cloud provider-specific implementations and storage classes bundled in the main K8s source tree, all of which can be replaced with add-ons if need be. It’s also a fully CNCF-certified Kubernetes offering, meaning that configuration and workload specifications will work on both K8s and K3s.

The much-reduced footprint of K3s means that it’s possible to run a cluster on nodes that have anything from 512MB of RAM upwards. This means that it is possible to run workloads on the control plane node as well as dedicated worker nodes if required, and the single, small binary means that it can install in a fraction of the time it takes to launch regular Kubernetes clusters. 
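For illustration, installing K3s on a single Linux host follows the pattern of the project’s quick-start script; this sketch assumes root access and an internet connection.

```shell
# Install K3s as a single binary plus a service, via the official script.
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl; confirm the node has come up and is ready.
sudo k3s kubectl get nodes
```

On modest hardware this typically yields a working single-node cluster in well under a minute, which is the install-time difference the article is pointing at.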

Over the last two years, K3s has enjoyed huge success. According to Rancher Labs, K3s was downloaded more than a million times in its first year – an extraordinary 20,000 times every week. We are already seeing growing utilization of K3s in production environments across the enterprise, from edge settings with limited hardware to supporting data center workloads. A lightweight distribution opens the doors to what is possible with orchestration software. With adoption only set to continue, K3s is clearly on course for prominence in the Kubernetes space.

And, with more managed platforms based on K3s emerging on the scene, it won’t be long until it’s on the big cloud providers’ radar.

The future of container orchestration 

More people are using containers than ever before and, as more developers shift focus to cloud-native app development, this number will only continue to grow. The CNCF’s Cloud Native Landscape details a dazzling array of rapidly-developing technology and tools that can be difficult for even the dedicated observers in the space to stay up to date with.

The speed at which the technology changes will continue to be one of the main drivers for people opting for managed Kubernetes. This is combined with the fact that managed Kubernetes can increase reliability and allows internal developers to focus their time on what matters for their business rather than worrying about keeping track of the minutiae of clusters and their configuration.

K3s is changing the way we use Kubernetes. Since its creation in 2019, K3s has revolutionized the industry, offering an attractive, lightweight alternative Kubernetes distribution to K8s, well-suited for developers to deploy in a range of different situations.

The really exciting shift we are seeing is more firms utilizing K3s in production environments, supporting high-compute, business-critical workloads across the enterprise. This trend is driven by the growth of managed K3s, offering businesses a credible way to spin up clusters at pace and run them at a fraction of the price – a significant sea change from K8s.

Overall, it is clear we are entering into an exciting new era, when developers can utilize managed K3s to really push the boundaries of innovation in Kubernetes.

Kai Hoffman, developer advocate, Civo

Kai is a developer advocate at Civo, the first pure-play cloud-native service provider. He is passionate about making cloud-native best practices available for and used by everyone.