18 features to consider when evaluating an enterprise Kubernetes solution

(Image credit: Shutterstock/violetkaipa)

Digital transformation across industries is driving the need for IT to enable cloud-native applications. This has led enterprises to adopt Kubernetes as the most effective way to support cloud-native, container-based architectures, and to modernise their applications and IT infrastructure.

Organisations of all sizes are looking to take advantage of Kubernetes – both for greenfield applications and for re-architecting and modernising legacy applications. However, a scarcity of Kubernetes expertise, coupled with the complexities of running Kubernetes at scale, hinders successful adoption.

Kubernetes is notoriously difficult to deploy and operate at scale — particularly for enterprises managing both on-premises and public cloud infrastructure. Numerous Kubernetes solutions and products have emerged in the industry (from both startups and established vendors) aimed at solving some of the challenges around Kubernetes. The space has become crowded, making it difficult for organisations to navigate and compare the various offerings.

Below, we identify 18 technical and operational capabilities to consider when evaluating various solutions for enabling Kubernetes at scale in the enterprise. In the next posts in this series, we’ll compare some of the leading commercial solutions and how they stack up across these key features.

1. High availability of Kubernetes clusters

Kubernetes does not offer highly available clusters out of the box; HA must be configured by the Kubernetes administrator. It is recommended that at least three master nodes be configured behind a load balancing solution, with either an integrated (stacked) or independent clustered deployment of etcd, which stores all the cluster state information. Any high availability solution must also account for failure scenarios and provide auto-repair and recovery.
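As a rough sketch of what this looks like with kubeadm (the load balancer address, version, and etcd endpoints below are hypothetical), every node is pointed at the load balancer fronting the masters rather than at any single master:

```yaml
# kubeadm ClusterConfiguration sketch for an HA control plane.
# Assumes a load balancer (lb.example.com) fronting three master nodes.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "lb.example.com:6443"  # LB/VIP in front of all masters
etcd:
  local: {}          # stacked etcd: one member co-located on each master
  # external:        # alternatively, an independently clustered etcd
  #   endpoints:
  #     - https://etcd-1.example.com:2379
```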

2. Supported deployment model(s)

The deployment model of a Kubernetes solution defines how it will integrate within your enterprise environment and what support service level agreement (SLA) it can provide for day-2 operations.

The top three deployment models for Kubernetes solutions are:

Traditional on-premises deployment: users download and deploy Kubernetes on their own infrastructure, either by themselves or using professional services and support from a vendor.

Hosted Kubernetes as a Service (KaaS): a vendor offers Kubernetes as a service on top of infrastructure hosted by a cloud or hosting provider.

Hybrid cloud Kubernetes as a Service: Kubernetes is offered as a service on the infrastructure of your choice – either your own on-premises data centres, public cloud infrastructure, or both.

3. Prerequisites and operating system requirements

The prerequisites of an enterprise Kubernetes solution define what infrastructure requirements you need to satisfy before you can get up and running with Kubernetes. Some solutions require an expensive licensing purchase of underlying infrastructure, such as a hypervisor, or an investment in a hosted Kubernetes solution.

4. Monitoring and operations management

A production Kubernetes cluster must be monitored at all times so that issues and outages can be handled without severely affecting cluster and application availability for users. An enterprise Kubernetes solution must provide this capability out of the box.

5. Cluster upgrades

Kubernetes has a large community of contributors and a new version is released every three months. An enterprise-class solution will support rolling upgrades of clusters, such that the cluster and the cluster API are always available, even while the cluster is being upgraded. Additionally, it will provide the ability to roll back to the previous stable version upon failure.

6. Multi-cluster management

A single Kubernetes cluster can scale horizontally to support large sets of workloads. However, running Kubernetes in production requires being able to run multiple Kubernetes clusters, as you will want to fully isolate your dev/test/staging applications from production applications by deploying them on a separate cluster.
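In practice, multiple clusters surface to operators as multiple contexts in a kubeconfig file. A minimal sketch (cluster names, users and server addresses are hypothetical) separating staging from production might look like:

```yaml
# Sketch of a kubeconfig holding separate dev/staging and production clusters.
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster
  cluster: {server: "https://dev.example.com:6443"}
- name: prod-cluster
  cluster: {server: "https://prod.example.com:6443"}
contexts:
- name: staging
  context: {cluster: dev-cluster, user: developer, namespace: staging}
- name: production
  context: {cluster: prod-cluster, user: operator}
current-context: staging   # switch with: kubectl config use-context production
```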

7. Multi-tenancy, role-based access control and single sign-on support

Kubernetes supports multi-tenancy at the cluster level using the namespace abstraction. However, in a multi-cluster environment, you need a higher-level multi-tenancy abstraction to supplement Kubernetes multi-tenancy and provide the right level of isolation across different teams of users. It should integrate with the Single Sign-On (SSO) solutions most commonly used by enterprises, such as Active Directory/ADFS, Okta, and other popular SAML providers.
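Within a single cluster, namespace-level tenancy is typically enforced with role-based access control. A sketch (the `team-a` namespace and group name are hypothetical, with the group assumed to be mapped in from the enterprise SSO/directory) might grant a team write access only inside its own namespace:

```yaml
# Sketch: grant one team read/write access only within its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: team-a-editor
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-binding
subjects:
- kind: Group
  name: "team-a"        # e.g. a group mapped from AD/SAML via SSO
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-editor
  apiGroup: rbac.authorization.k8s.io
```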

8. Load balancing

Kubernetes automatically load balances requests to application services inside a Kubernetes cluster. However, some services need to be exposed externally for consumption by outside clients, and Kubernetes does not provide an out-of-the-box load balancing solution for such services. An enterprise Kubernetes solution should include robust external load balancing capabilities, or integrate seamlessly with existing commercial load balancers.
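The gap shows up in the `type: LoadBalancer` service type: declaring it is easy, but something outside the cluster must actually implement it (a cloud provider, or an enterprise load balancer integration on-premises). A minimal sketch with hypothetical names:

```yaml
# Sketch: exposing a service externally. "type: LoadBalancer" requires an
# external implementation -- a cloud provider or an enterprise LB integration.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer   # internal ClusterIP traffic is balanced automatically
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```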

9. Private registry support and image management

Running containerised applications on Kubernetes clusters requires access to a container registry where your application images are stored. A large enterprise organisation will typically want a secure private container registry to store its proprietary application images. An enterprise Kubernetes solution should provide image management capability out of the box.
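On the cluster side, pulling from a private registry comes down to attaching registry credentials to workloads. A sketch (the registry hostname, image, and secret name are hypothetical; the `docker-registry` secret is assumed to have been created beforehand):

```yaml
# Sketch: pulling an application image from a private enterprise registry.
apiVersion: v1
kind: Pod
metadata:
  name: billing-api
spec:
  containers:
  - name: billing-api
    image: registry.corp.example.com/billing/api:1.4.2
  imagePullSecrets:
  - name: corp-registry-creds   # docker-registry secret created beforehand
```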

10. Hybrid cloud integrations and APIs

Every enterprise today wants to build a cloud-neutral strategy by investing in multiple cloud solutions. Having multiple private and/or public clouds as part of your cloud strategy ensures that you aren't locked into a single provider with no leverage on pricing, provides high availability across your infrastructure overall, and helps satisfy your unique business policies.

11. Enterprise-grade user experience

Enterprise-grade user experience is all about having a polished user interface that lets enterprises manage their hybrid environments through a single UI, delivering complete visibility and simplifying operations across the environment. This UI should allow operations that span multiple clusters: for example, globally searching for workloads of a specific type, or tagged with a specific label, across all clusters running in different regions, data centres and cloud providers.

12. Application lifecycle management – application catalogue

An application catalogue provides easy one-click deployment of a set of pre-packaged applications on top of Kubernetes. It also gives end users a vehicle to build and publish their own applications via the catalogue, for others in their team or organisation to deploy in the same one-click manner. The application catalogue enables organisations to standardise on a set of application deployment recipes or blueprints, avoiding a sprawl of configurations.
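Such catalogues are commonly backed by Helm charts as the packaging format for a deployment recipe. As a sketch, the chart metadata (`Chart.yaml`) for a hypothetical internally published application might look like:

```yaml
# Sketch: a Helm chart's metadata (Chart.yaml) -- a typical packaging
# format behind one-click application catalogues. Names are hypothetical.
apiVersion: v2
name: internal-wiki
description: Pre-packaged deployment recipe published to the team catalogue
version: 0.1.0        # version of the chart (the recipe itself)
appVersion: "2.7"     # version of the application being deployed
```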

13. Production grade service level agreements (SLA)

As more and more organisations are running their business on Kubernetes, IT must ensure that it can support the SLAs that the business requires. IT must ensure that Kubernetes is available to developers and the business to support key initiatives. Most organisations require 99.9 per cent uptime.

14. Ease of setup, installation, continuous use, management, and maintenance

A successful Kubernetes platform must be easy to implement and maintain so organisations can leverage containers continuously. This alone is a major barrier that many organisations do not overcome.

15. Networking support and integrations

Networking integration is a critical component of running Kubernetes clusters in production and at scale. An enterprise will typically want Kubernetes to integrate with the Software-Defined Networking (SDN) solution it has already standardised on, or with a container-native solution such as Calico or Weave that gives it more options around isolation.
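The isolation options mentioned above are typically expressed as Kubernetes NetworkPolicy objects, which CNI plugins such as Calico or Weave Net enforce. A sketch (the `team-a` namespace is hypothetical) locking a namespace down to internal traffic only:

```yaml
# Sketch: a NetworkPolicy (enforced by the CNI plugin, e.g. Calico) that
# restricts pods in a namespace to receiving traffic from that namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-external
  namespace: team-a
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}      # allow traffic only from pods within team-a
```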

16. Storage support and integrations

Similar to networking, integration with enterprise-grade storage is an essential component of running Kubernetes clusters in production. Kubernetes provides an abstraction called Persistent Volumes (PVs) to hold data persisted by stateful applications. It is important for an enterprise Kubernetes product to map PVs to an actual highly available storage technology. Enterprises will typically want their Kubernetes deployment to integrate with storage solutions they have already deployed, such as NetApp, Pure Storage or SolidFire, or they may want to integrate with a container-native storage technology such as Portworx.
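From the application's side, that mapping is consumed through a PersistentVolumeClaim referencing a storage class that the platform binds to the backing technology. A sketch (the claim and the `netapp-ha` storage class name are hypothetical):

```yaml
# Sketch: a stateful application claiming persistent storage; the storage
# class ("netapp-ha" here) maps the claim to the backing storage technology.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: netapp-ha
  resources:
    requests:
      storage: 50Gi
```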

17. Self service provisioning

Developers must have self-service access to one or more Kubernetes clusters with the right levels of isolation in place so only members with the appropriate privileges can access production workloads.
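One common guard rail for self-service access is to give each team its own namespace with a resource quota, so developers can provision freely within agreed limits. A sketch with a hypothetical `team-a` namespace and limits:

```yaml
# Sketch: a per-team quota so self-service provisioning stays within limits.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"      # total CPU requested across the namespace
    requests.memory: 64Gi   # total memory requested across the namespace
    pods: "100"             # maximum number of pods
```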

18. Built-in CI/CD support

One of the most critical workloads run by developers is Continuous Integration/Continuous Delivery (CI/CD). A robust CI/CD pipeline is critical to ensure agile development and rapid delivery of new software releases to customers.
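On the Kubernetes side, the usual target of such a pipeline is a Deployment configured for zero-downtime rolling updates, with the pipeline pushing new image tags. A sketch (image name and registry are hypothetical):

```yaml
# Sketch: a Deployment set up for rolling updates -- the typical target of a
# CI/CD pipeline that updates the image tag on each release.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # keep at least two replicas serving during a release
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: registry.corp.example.com/web:1.0.0  # tag bumped by the pipeline
```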

Vamsi Chemitiganti, Chief Strategist, Platform9 Systems