Moving to containers – what you need to know

(Image credit: Rawpixel / Shutterstock)

Containers are increasingly popular for deploying applications. Whether you are looking at Kubernetes or Docker, containers are useful for shipping application components and running them in public or hybrid cloud environments. They provide a way to abstract microservices components from the underlying hardware or cloud service, helping you avoid lock-in. However, moving to containers will mean changes for your IT security team.

Containers have to be understood and managed just as much as traditional IT infrastructure. Without this insight, it’s difficult to keep these images up to date and secure. So how can you take a pragmatic approach to containers, security and tracking activity?

Looking out for containers

First, it’s worth validating whether containerisation is in use within your organisation and where. This might seem like a simple point, but it is easy for developer teams to build and run their own applications in the cloud without involving other IT teams from the start. It’s also worth finding out how many deployments exist across the organisation and for which purposes.

If you do find containers in place, it’s then important to work out how much of the software development process is based on this approach. Normally, work carried out in development will tend to carry on throughout the business, so if initial projects are developed in containers then testing, quality assurance and production instances will move over to containers as well over time. This can lead to multiple platforms and instances running in parallel.

To manage this, define your current level of visibility across your IT assets. This will help you benchmark where you currently have good insight and where additional data is needed. It will also help you demonstrate how you have improved visibility across IT over time.

Once you have found any gaps, you can then understand what kind of information you are missing. For containers, getting data on images that are deployed can be difficult if you have not thought ahead. For example, if you find containers being deployed on a cloud service, how can you know how many discrete images are running at any one time?

Getting this data requires embedding agents into any standard container build. By including these sensors in every container image, each container can automatically report status data back. This can help you know how many images are deployed, which software packages are included in the runtime environment, and whether those packages are up to date.
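As a rough illustration of what such an embedded agent might report back, the sketch below assembles a status payload covering the points above: the running instance, its image identifier, and its installed packages. The function name, payload schema, and the idea of supplying the image ID at build time are all assumptions for illustration, not a real product API.

```python
# Minimal sketch of an in-container status agent.
# The payload schema and image_id convention are hypothetical.
import json
import socket
from datetime import datetime, timezone

def build_status_report(image_id, packages):
    """Assemble a status payload describing this container instance.

    image_id: identifier of the container image (assumed to be baked in at build time).
    packages: mapping of package name -> installed version.
    """
    return {
        "host": socket.gethostname(),          # which instance is reporting
        "image_id": image_id,                  # which image it was started from
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "packages": [
            {"name": name, "version": version}
            for name, version in sorted(packages.items())
        ],
    }

if __name__ == "__main__":
    report = build_status_report(
        "web-frontend:1.4.2",
        {"openssl": "3.0.13", "libcurl": "8.5.0"},
    )
    # In a real agent this JSON would be sent to a central inventory endpoint.
    print(json.dumps(report, indent=2))
```

Counting the distinct `image_id` values received centrally then answers the earlier question of how many discrete images are running at any one time.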

However, each one of these stages has to be carried out continuously. With containers able to be created and removed automatically in response to demand, continuous assessment and protection of those containerised applications has to be implemented throughout the entire lifecycle of those applications, from the time the applications are built, to when they are bundled and shipped into container registries as images, through to the images getting deployed as running application container instances.

This approach has to cover the assessment and enforcement for all the moving parts involved in running containers – this therefore includes the container infrastructure stack as well as the containerised application’s lifecycle over time. These two areas – stack and lifecycle – have to be thoroughly aligned. More importantly, this information should be consolidated alongside your traditional IT asset data to help you prioritise any issues that need fixing, regardless of where they exist.

Securing software development processes, not just your technology

Containers are commonly adopted as part of agile development and DevOps processes. If you find that your teams have containers in place, it is also worth looking at the wider Continuous Integration / Continuous Deployment (CI/CD) pipeline where containers are being used. CI involves integrating and testing code changes frequently in small increments, while CD covers moving those changes automatically through testing and into production.

For CI/CD implementations to work effectively, these services have to be automated and integrated together. This helps developers get their new software projects tested and into production quickly, while further integration with cloud services can help the developers and operations teams collaborate on scaling up deployments automatically to meet demand. Using tools like Jenkins, CircleCI or Travis CI can help speed up these processes, but they only automate the steps that you include.
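One step worth including is a security gate that can fail the build on serious findings. The sketch below shows the logic of such a gate in plain Python; the finding structure and severity names are assumptions, and in practice this would run as a pipeline stage in whichever CI tool you use.

```python
# Hedged sketch of a CI pipeline gate over scan findings.
# The findings format ({"cve", "package", "severity"}) is hypothetical.
def gate_on_findings(findings, blocking_severities=("critical", "high")):
    """Return (passed, blockers): passed is False if any finding has a
    severity the pipeline should treat as build-blocking."""
    blockers = [f for f in findings if f["severity"] in blocking_severities]
    return (len(blockers) == 0, blockers)

if __name__ == "__main__":
    findings = [
        {"cve": "CVE-2024-0001", "package": "openssl", "severity": "high"},
        {"cve": "CVE-2024-0002", "package": "zlib", "severity": "low"},
    ]
    passed, blockers = gate_on_findings(findings)
    # A real pipeline stage would exit non-zero here to stop the deploy.
    print("gate passed" if passed else f"gate failed: {len(blockers)} blocking finding(s)")
```

Because the gate only enforces what you configure, deciding which severities block a build is a policy choice to agree with the development teams, not something the tooling decides for you.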

For security professionals looking at how to get involved in securing containers, it’s not enough to simply say that security should be considered from the start. Instead, it’s important to look at how security can add value throughout the DevOps lifecycle for developers, as well as providing all the necessary data to the security team around the status of IT assets and infrastructure.

For example, you can help developers by providing the ability to carry out their own tests for potential security vulnerabilities within software components or libraries. This task can be automated as part of their workflows and any issues discovered can be automatically put into the software development issue tracking software for fixing. By taking out the “office politics” of reporting security issues discovered during application scanning or container image checks, this can help everyone see that the security team aims to make the software development process easier.
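The hand-off into issue tracking described above can be sketched as a small transformation from scan findings to ticket payloads. The field names, severity ordering, and label convention here are illustrative assumptions; a real integration would post these payloads to whichever tracker the developers already use.

```python
# Illustrative sketch: turn vulnerability findings into issue-tracker tickets.
# Severity ranking and ticket fields are assumptions, not a specific tracker's API.
def findings_to_issues(findings, min_severity="high"):
    """Convert findings at or above min_severity into issue payloads.

    findings: list of dicts with 'package', 'cve', and 'severity' keys.
    """
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    threshold = order[min_severity]
    issues = []
    for f in findings:
        if order[f["severity"]] >= threshold:
            issues.append({
                "title": f"{f['cve']} in {f['package']}",
                "labels": ["security", f["severity"]],
            })
    return issues

if __name__ == "__main__":
    issues = findings_to_issues([
        {"package": "openssl", "cve": "CVE-2024-1111", "severity": "critical"},
        {"package": "zlib", "cve": "CVE-2024-2222", "severity": "low"},
    ])
    for issue in issues:
        print(issue["title"])
```

Filing tickets automatically in this way is what removes the "office politics": findings arrive in the developers' own queue as ordinary work items rather than as escalations from the security team.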

For teams using containers, this scanning process has to cover all the different locations where images can exist. This includes any software assets that are used within the containers, any container images that are stored in the company’s own library, any container images that are pulled from public libraries, and the containers themselves when they are running.
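Covering all those locations means building one deduplicated list of scan targets that remembers where each image was seen. A minimal sketch, assuming image references are plain strings and the three location categories from the paragraph above:

```python
# Sketch: merge image references from every location into one scan list,
# deduplicated, while recording where each image was found.
def collect_scan_targets(private_registry, public_pulls, running):
    """Return {image_ref: sorted list of sources} across all locations."""
    targets = {}
    for source, images in (
        ("private", private_registry),   # company's own image library
        ("public", public_pulls),        # images pulled from public registries
        ("running", running),            # live container instances
    ):
        for ref in images:
            targets.setdefault(ref, set()).add(source)
    return {ref: sorted(sources) for ref, sources in targets.items()}

if __name__ == "__main__":
    scan_list = collect_scan_targets(
        private_registry=["web-frontend:1.4.2"],
        public_pulls=["web-frontend:1.4.2", "nginx:1.25"],
        running=["nginx:1.25"],
    )
    for ref, sources in scan_list.items():
        print(ref, sources)
```

Keeping the source alongside each image also helps prioritisation later: a vulnerable image that is both in the registry and running in production matters more than one sitting unused in a library.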

Getting this information back can help developers understand where they have fixes to make within their containers, but equally it can help you see where your security vulnerabilities exist. By collaborating on the discovery process and helping developers own this for themselves, you can make it easier to keep applications secure. More importantly, you can get this information in context alongside your other platforms and infrastructure.

Running an asset management service here involves getting container-native visibility and protection that is built in from the start. This process also has to seamlessly integrate with the existing enterprise CI/CD pipelines so that any container gets managed and logged properly. Lastly, layering in this asset management approach should include monitoring each container’s runtime, so that any change in the image itself can be flagged.
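One common way to flag a change in a running image is to compare a content digest observed at runtime against the digest recorded at build time. The sketch below shows that comparison in isolation; how the digests are actually collected from live containers is left out, and the `sha256:` prefix simply mirrors the convention container registries use.

```python
# Sketch of runtime drift detection by digest comparison.
# How digests are gathered from live containers is out of scope here.
import hashlib

def content_digest(content: bytes) -> str:
    """Compute a sha256 digest for a blob of container content (illustrative)."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

def detect_drift(baseline, observed):
    """Return container IDs whose observed digest differs from the
    build-time baseline (or which have no baseline at all)."""
    return sorted(
        cid for cid, digest in observed.items()
        if baseline.get(cid) != digest
    )

if __name__ == "__main__":
    baseline = {"c1": content_digest(b"image-v1"), "c2": content_digest(b"image-v2")}
    observed = {"c1": content_digest(b"image-v1"), "c2": content_digest(b"tampered")}
    # c2's runtime content no longer matches its image, so it gets flagged.
    print(detect_drift(baseline, observed))
```

A flagged container does not automatically mean compromise, but because container images are meant to be immutable, any runtime divergence is worth investigating.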

For IT security teams, getting insight into the status of all company IT assets – whether these are traditional physical servers or endpoints on a company network, or new applications deployed in containers on public cloud – is essential to keeping data secure. Without this information on what is taking place, it’s easy to miss potential risks. However, the changes taking place around containers in terms of how applications are developed and used mean that it’s important to take a more pragmatic approach to getting this data in the first place. By understanding the changes taking place in software development, you can embed security in the process early. Similarly, by centralising asset data, you can make better decisions on where to prioritise your security resources.

Marco Rottigni, Chief Technical Security Officer EMEA South, Qualys

Marco Rottigni is Chief Technical Security Officer EMEA at Qualys. He has more than 20 years’ experience in security, working with a variety of companies on their cloud security requirements.