How to know you’re getting best value out of your Kubernetes and containerized workload investments

Kubernetes has been developed and applied as a fully open-source technology, enabling an interoperable ecosystem with wide-ranging vendor support, and the industry has embraced its adoption accordingly.

If you want to play in the cloud today, you have to use Kubernetes. Adoption is nearly universal, whether you’re a ‘born in the cloud’ or a ‘born before the cloud’ company.

Kubernetes and OpenStack

The reason why the orchestration system has been so widely adopted is that it has taken the strengths of OpenStack, built on them, and then made them more widely available.

OpenStack is a good platform and continues to have key uses; in particular, it can handle core cloud-computing services for managing compute, networking, storage, identity, and images. For many organizations, the OpenStack platform continues to perform these tasks and will do so for some time.

But Kubernetes has a key advantage: more vendors work on it and offer a wider variety of services, which makes it more interoperable with other systems. For cloud engineers, the breadth of services and vendors supporting Kubernetes has made it easier to find the offering you are looking for when deploying it.

In reality, its widespread adoption is something of a virtuous circle - early adoption by a number of users and vendors, and the accessibility this created, led to further adoption by still more vendors.

Consistent deployment / roll-out on Kubernetes

The next key reason why Kubernetes is here to stay is deployment: it has become the best way to deploy consistently in the cloud. As the pool of Kubernetes expertise has grown, and as the number of deployment automation tools that understand Kubernetes has grown, it has become easier and easier to get up and running. Further, Kubernetes is now supported by all the major cloud platforms.
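To make that consistency concrete, the same declarative manifest deploys an application identically on any conformant cluster, managed or self-hosted. The sketch below is illustrative only - the service name and image are placeholders, not a real workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # placeholder name
spec:
  replicas: 3                     # desired instances; the control plane reconciles to this
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example.com/web-frontend:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Because the manifest describes the desired state rather than a sequence of steps, the same file works unchanged whether it is applied to EKS, GKE, AKS, or a self-managed cluster.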

The modern Kubernetes orchestration platform benefits both from cloud platforms offering mature Kubernetes services and from a robust ecosystem that gives users a wide array of vendors to work with and interoperable technologies to make use of - which hasn’t always been the case in the past.

Containers and Kubernetes

In cloud computing, one of the key technological advancements of the last 20 years is containers. A container does what it says on the tin: it is a standard unit of software that packages up code and its dependencies so that an application can run quickly, reliably, and independently from one computing environment to another.

One of the reasons for Kubernetes’ widespread adoption is that it was built specifically for the era of containers and, unlike OpenStack, is designed from the ground up to take full advantage of the deployment flexibility that containers offer.

In fact, Kubernetes addresses some of the challenges of working with containers. Containers made it easy to launch applications, but not necessarily easy to manage them - and this is where Kubernetes comes in, making disparate applications easier to deploy and manage.

For companies running monolithic applications, containers provide a way of bringing legacy applications into newer architectures - again, aided by Kubernetes and its ecosystem. For organizations that run encrypted services, containers also provide a great way of segregating services and workloads in private clouds.

Containers have made an impact both for companies born in the cloud as well as those born before it. For older organizations that rely on legacy technologies, containerizing a product can squeeze another five years out of an application.

The role of Kubernetes in the future

While it is certainly true that Kubernetes is rapidly coming to dominate the cloud-infrastructure discussion, applying it is more complex than that. It’s important to realize that using Kubernetes doesn’t automatically make you efficient.

When deploying via Kubernetes, you are still on the hook for the cost of the cluster, for the management of the applications, and for the associated scaling costs. To unlock the potential of Kubernetes and similar software, businesses typically have to invest heavily in sizing containers, managing workloads, and maintaining infrastructure. What's more, these costs only multiply as you scale.

Often, businesses adopt Kubernetes as part of a broader cloud initiative aimed at saving money, but many find their bills spiraling out of control if they don’t put the right processes in place from the get-go. Covid-19, and the recessionary economic forces it has brought with it, is making this an immediate issue rather than one that can be kicked into the long grass.

Cost savings automation for Kubernetes in the era of Covid-19

As in so many industries, automation holds the key to getting the best value out of your Kubernetes investment.

Not the basic automation of simple tasks, but continuous, end-to-end optimization of every stage of the chain - automation that can be ramped up or down depending on need, scale, and expertise.

From helping to monitor and right-size resource configurations for containerized workloads, whether orchestrated by Kubernetes or another service, to automatically optimizing the allocation and purchase of cloud resources, automation removes the barriers and pain points seen in modern cloud environments.

A breakdown of cost-saving opportunities for cloud professionals

There are a number of technologies that cloud infrastructure professionals need to put to use to keep the costs of their container and Kubernetes deployments as low as possible.

One key area where cost can be minimized is managing and optimizing the scaling and sizing of the infrastructure used to support containerized workloads. This is especially true on stacks such as Amazon ECS, and on Kubernetes distributions and managed services such as kops, EKS, GKE and AKS.
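As a hedged illustration of what right-sizing means in practice - this is a minimal sketch, not any vendor's actual algorithm - a resource request can be suggested from observed usage by taking a high percentile of the samples and adding headroom:

```python
# Minimal right-sizing sketch: suggest a container resource request
# from observed usage samples (e.g. CPU millicores). Illustrative only.
import math

def right_size(usage_samples, percentile=0.95, headroom=1.2):
    """Suggest a resource request covering `percentile` of observed
    usage, with a `headroom` safety multiplier on top."""
    if not usage_samples:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples)
    # Nearest-rank index of the chosen percentile.
    idx = max(0, math.ceil(percentile * len(ordered)) - 1)
    return ordered[idx] * headroom

# Hypothetical millicore CPU samples from a workload's recent history.
samples = [120, 150, 180, 200, 210, 250, 300, 320, 400, 900]
suggested = right_size(samples)  # 95th-percentile sample (900) plus 20% headroom
```

A real autoscaler or right-sizing product would weigh far more signals (seasonality, burst patterns, limits vs requests), but the percentile-plus-headroom idea is the core of keeping requests close to actual usage instead of over-provisioning.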

Another key technology solution is software that helps optimize long-term commitment purchases of cloud resources including reserved instances. These products provide time and cost benefits when it comes to using reserved instances, providing optimization in a way that does not require constant human monitoring and management. Most importantly, they can forecast and automate the buying and selling of reserved instances, both directly from cloud providers as well as via marketplaces.
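The arithmetic behind a reserved-instance decision reduces to a break-even calculation. The rates below are hypothetical, not any provider's actual pricing:

```python
# Break-even sketch for a reserved-instance purchase. Prices are
# illustrative assumptions, not real cloud rates.
def breakeven_hours(on_demand_hourly, reserved_hourly, upfront, term_hours):
    """Hours of use per term at which the reservation starts saving money."""
    # Reserved cost over the term is fixed; on-demand cost scales with hours used.
    total_reserved = upfront + reserved_hourly * term_hours
    return total_reserved / on_demand_hourly

# Hypothetical 1-year term: $0.10/h on demand vs $0.06/h reserved, no upfront.
term = 365 * 24  # 8760 hours
hours = breakeven_hours(0.10, 0.06, 0.0, term)
# The reservation pays off only if the instance runs more than ~60% of the term.
```

This is exactly the forecast such tools automate at scale: estimating future utilization per workload and buying (or reselling) reservations only where usage clears the break-even line.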

Further, there is technology out there that allows cloud professionals to predict spot instance behavior, capacity trends, pricing, and interruption rates. Being able to predict sudden spot-instance interruptions in advance, and so avoid downtime, is particularly useful for companies running mission-critical workloads.
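To see why interruption rates matter to the economics, here is a minimal sketch of the expected-cost trade-off between spot and on-demand capacity. The rates, interruption frequency, and rework penalty are illustrative assumptions, not measured values:

```python
# Expected-cost sketch for spot capacity with an interruption penalty.
# All numbers are hypothetical, for illustration only.
def expected_spot_cost(spot_hourly, hours, interruption_rate, rework_hours):
    """Expected cost of `hours` of work on spot capacity.

    interruption_rate: expected interruptions per hour of runtime
    rework_hours: hours of lost work redone per interruption
    """
    expected_interruptions = interruption_rate * hours
    # Each interruption adds rework time, billed at the spot rate.
    return spot_hourly * (hours + expected_interruptions * rework_hours)

on_demand_cost = 0.10 * 100                      # 100 hours on demand: roughly $10
spot_cost = expected_spot_cost(0.03, 100, 0.05, 2)  # spot stays far cheaper here
```

The point of prediction tools is to keep the interruption term small: if interruptions can be anticipated and workloads drained gracefully, the rework penalty shrinks and the spot discount is captured almost in full.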

These offerings allow companies to get the best price for cloud resources across multiple workloads and services, using analytics and machine-learning technologies that automatically and continuously adapt to changing resource demands in the cloud.

There are also AI-analysis services that allow cloud professionals to continuously monitor and optimize the use of these services.

Kubernetes - the investment that needs optimizing

Whilst there is plenty of future opportunity for innovation with Kubernetes, another wave of technological innovation in its ecosystem is coming from solutions that rationalize the costs of applications and infrastructure that are part of its deployment.

To ensure that your Kubernetes spend is as efficient as possible, you need to leverage automation that can help you optimize cost, availability, and capacity. Whether you are running web services, container workloads, big data, or stateful applications, the first step is taking control of, and optimizing, your cloud infrastructure and usage. This will help you understand what you’re doing in the cloud and make the best use of your investment.

Kevin McGrath, CTO at Spot, a NetApp company

As the CTO of Spot, Kevin is responsible for researching and evaluating innovative technologies and processes to guide the company’s technology roadmap, leveraging his extensive background in DevOps and in delivering Software as a Service within the communications and IT infrastructure industry. Kevin started his career at USi, the first Application Service Provider (ASP), over 20 years ago. It was here that he began delivering enterprise applications as a service in one of the first multi-tenant shared datacenter environments. After USi was acquired by AT&T, Kevin served in the office of the CTO at Sungard Availability Services, where he specialized in migrating legacy workloads to cloud-native and serverless architectures. Kevin holds a B.A. in Economics from the University of Maryland and a Masters in Technology Management from University of Maryland University College.