Organisations all over the world are quickly adopting Kubernetes. At last count, it had gained 15.7 per cent global market share - unprecedented growth for an infrastructure tool - and more than 10,000 companies now use it. However, as many of them will have realised by now, Kubernetes is a difficult beast to tame. Yes, it is undeniably powerful - hence the mass, rapid adoption - but it is by no means a perfect system. Kubernetes is inherently complex and requires considerable skill and knowledge to use. Some setup processes are manual and therefore open to human error, which can lead to security and stability problems that cost a lot of time and money to rectify. Thankfully, these challenges are in no way insurmountable. By being aware of the pitfalls, putting the right procedures and oversight in place, and being willing to do the hard yards, they can all be avoided. Given the colossal benefits of Kubernetes over everything else on the market, this extra work is well worth doing.
First, we need to recognise that the supporting documentation for Kubernetes is vast and usually spans multiple roles within an organisation, from operators and security engineers to developers. Ascertaining how much of that documentation is relevant to your job is, on the face of it, not straightforward. As a result, there is no substitute for understanding the framework of Kubernetes itself. If you know this, it becomes much easier to navigate the documentation and focus on your area of relevance. This may sound like fairly basic stuff, but you would be surprised how often this critical first step is skipped.
Second, it’s important to understand the configuration of Kubernetes - specifically, what is or isn’t covered as part of the orchestration setup. There are many ways to build Kubernetes, from cloud Kubernetes-as-a-Service offerings and off-the-shelf products through to do-it-yourself setups, so the configuration of Kubernetes and the underlying infrastructure can differ drastically. There are, however, some main principles: make sure your Kubernetes API endpoint isn’t public, that basic authentication and client-based certificates are not enabled, that Role-Based Access Controls are in place and constrained appropriately, that Network Policies are implemented, and that Pod Security Policies are on with a set of sensible defaults that restrict how containers can run, in line with security best practices. There are plenty of guides online that you can consult to ascertain exactly which settings will be most appropriate for your business.
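To make the Role-Based Access Control principle concrete, here is a minimal sketch of a namespace-scoped, read-only role bound to a team group. The namespace, role and group names (`dev`, `app-team`) are placeholders, not anything prescribed above - the point is that access is constrained to specific resources, verbs and a single namespace.

```yaml
# Least-privilege RBAC sketch: read-only access to common workload
# resources in one namespace, bound to a single team group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-read
  namespace: dev
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-read-binding
  namespace: dev
subjects:
  - kind: Group
    name: app-team                  # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-team-read
  apiGroup: rbac.authorization.k8s.io
```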
Going back to basics
Next, you need to recognise and understand how the clusters are intended to be used. If a cluster is shared between different teams, the security approach may not be consistent, so adding extra precautions - such as using taints and tolerations for node groups - can be a good way to reduce risk. Alternatively, giving clusters to specific teams to isolate applications and reduce the blast radius is a far preferable solution to the problem of a large multi-tenant cluster.
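As a sketch of the taints-and-tolerations approach, an administrator might taint a team's node group so that only workloads carrying a matching toleration are scheduled there. The taint key, value and pod names below are illustrative placeholders.

```yaml
# A node in the team's group would first be tainted, e.g.:
#   kubectl taint nodes <node-name> team=payments:NoSchedule
# Only pods with a matching toleration (plus, ideally, a
# nodeSelector to pin them there) can then land on those nodes.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  tolerations:
    - key: "team"
      operator: "Equal"
      value: "payments"
      effect: "NoSchedule"
  nodeSelector:
    team: payments                 # assumes nodes carry this label
  containers:
    - name: api
      image: payments/api:1.0.0    # illustrative image
```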
For those who are more focused on their application, having a basic understanding of the main components of Kubernetes - network policies, ingress, certificate management, deployments, configmaps, secrets and service resources - is going to be key. If the administrator of Kubernetes has put pod security policies in place around containers, then some of the security constraints will be top down and will require modifications to your deployments to bring them in line with those policies. If that isn’t the case, then familiarising yourself with what a good pod security policy is, and why, will help you approach your application security in a more considered way.
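The kind of constraints a restrictive pod security policy imposes typically surface in your deployment's `securityContext`. The sketch below shows a deployment hardened along those lines - non-root, no privilege escalation, read-only root filesystem, all capabilities dropped; the names and image are placeholders.

```yaml
# Deployment pod template hardened in line with a typical
# restrictive pod security policy; names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0.0
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
```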
Beyond the running container itself, there is the job of protecting traffic between services and the sensitive data your application uses, such as database passwords. Network policies are good at restricting traffic flow to applications, allowing you to control the traffic to and from Kubernetes-based containers. Secrets natively offer no encryption - they are merely base64 encoded - so making sure the cluster administrator has enabled secret encryption will add an additional security layer when data is stored in the etcd backend datastore.
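A typical use of a network policy is to allow only one application to talk to another on a specific port, denying all other ingress to the target pods. The labels and port below are placeholder assumptions.

```yaml
# Only pods labelled app=frontend may reach the backend pods,
# and only on TCP 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```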
Having the cluster administrator install something like cert-manager will give your applications an automated way to use TLS certificates and hence encrypt data between users and your application, and from application to application. If you are sharing the Kubernetes infrastructure with other teams or services, then encrypting this traffic will help protect data in transit. There are also services like Istio, but as that product does a lot more than just certificates, it can add more complexity than necessary.
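With cert-manager installed, requesting a certificate is a matter of declaring a `Certificate` resource; cert-manager issues it and stores the TLS key pair in a Secret your workload or ingress can mount. The hostname and issuer name below are assumptions standing in for whatever your administrator has configured.

```yaml
# Sketch of a cert-manager Certificate request; the issuer and
# hostname are placeholders for your cluster's actual setup.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls
spec:
  secretName: app-tls        # resulting TLS key pair lands here
  dnsNames:
    - app.example.com
  issuerRef:
    name: letsencrypt-prod   # a ClusterIssuer set up by your admin
    kind: ClusterIssuer
```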
Cluster administrators will also need to pay attention to implementation detail. Making sure that applications are suitably segregated, not just for security but operationally, is a major decision when architecting cluster topologies. Preferably, having an automated, repeatable and consistent mechanism for provisioning clusters will allow them to exist at a team or project level as opposed to a multi-tenant level. This reduces risk through repeatability while shrinking the blast radius of a potential compromise, or of an accidental operational mistake that could, say, bring ingress down for every service on a shared cluster.
The final thing to note is that a security system is only as good as how it is monitored and maintained. You need to be vigilant. The ability to search your audit logs to ascertain what is happening inside Kubernetes is critical, and there needs to be an alert system in place that will notify you or your CSOC team about specific events. This will give you the time you need to investigate issues if and when they arise, allowing them to be rectified before they grow into a bigger problem.
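What gets into those audit logs is governed by the API server's audit policy. A minimal sketch along these lines records full request and response bodies for changes to secrets, and request metadata for everything else - the exact levels and resources you choose are a judgment call for your own environment.

```yaml
# Minimal audit policy sketch: full detail on secret changes,
# lightweight metadata for all other requests.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
    verbs: ["create", "update", "patch", "delete"]
  - level: Metadata
```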
Everything I’ve listed here is straightforward to implement. Yes, it may take a little more time, and at times it is going to be frustrating. However, the alternative, as many companies have already learnt to their cost, is an imperfect system that can cause a host of security issues. Thankfully, I firmly believe that in the medium term solutions will come to market that rectify these issues and generally make Kubernetes simpler to use. But until that happens, it is up to every organisation to do the necessary hard work to get the most out of Kubernetes.
- Avoid multi-million dollar mistakes in Kubernetes refactoring: Four best practices to follow and two pitfalls to avoid
Jon Shanks, CEO and co-Founder, Appvia