The future for IT is fluid. More companies than ever before are choosing public or hybrid cloud for their IT, while new software deployment strategies are also becoming increasingly popular. Approaches such as these help IT to implement services for the business faster, reduce costs and scale up their applications based on demand.
However, all these elements – instant scale, increased flexibility, and automated deployment into production – carry their own risks to the security of data and applications. With this new, elastic IT in place, security can find itself stretched to breaking point.
Without the old rules around security in place, application deployment teams risk flaws or bugs going into production. Compliance standards and controls on where data is stored can be flouted. Vulnerabilities in IT infrastructure or software can lie unpatched if IT Security and Operations teams aren’t aware of the assets being created and torn down over time.
For IT Security teams, this can look like IT is becoming a free-for-all, with no opportunity to stop, think and maintain standards around secure deployment. Rather than relying on existing best practices that don’t take these new models into consideration, your IT security processes and planning will have to become just as elastic. Put simply, as IT infrastructure moves into the cloud, so must your security.
So how can your organisation get ahead of these risks, maintain security and still take advantage of these new options?
Cloud and security – keeping up with Just In Time IT
For IT infrastructure teams, the advantage of using new cloud services and techniques like DevOps is that new applications can be implemented faster than with traditional procurement and deployment models. By supporting faster changes and more agile development, IT teams should be more productive.
For companies deploying to public cloud or in hybrid cloud environments, automation around IT management processes can help. With so much change taking place across enterprise IT, scanning and vulnerability management processes will have to move from the traditional monthly or weekly scans to continuous scanning.
DevOps teams use continuous integration (CI) and continuous deployment (CD) to roll out updates from development, through automated testing and into production. Production updates can take place weekly, daily, or even multiple times a day. To keep these new processes secure, you will have to work at the same pace. This begins with tracking what new IT assets are created over time, from initial deployment through to when updates are required. As the pace of change increases, your IT asset management process should evolve too.
This change in approach is a necessary one as the infrastructure supporting applications can be built, tested, and even removed from the cloud between scans. Without this level of oversight, IT teams risk missing changes taking place in the installed asset base over time. Even worse, vulnerabilities in virtual machines or cloud images can be missed if they are not running live at the time that a scan takes place. By automating the IT asset management process, you can keep accurate records of cloud usage and also ensure that any patch needed is put in place.
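As a rough illustration of the continuous inventory tracking described above, the sketch below diffs two asset snapshots to flag instances created or torn down between scans. The asset names and the idea of snapshotting are illustrative assumptions, not any particular product's approach:

```python
# Sketch: compare two inventory snapshots so short-lived cloud assets
# are not missed between scans. Asset IDs here are hypothetical.

def diff_inventory(previous, current):
    """Compare two snapshots of asset IDs and report the churn."""
    created = current - previous
    removed = previous - current
    return created, removed

# Example: between two scans, one instance appeared, one was torn down.
scan_1 = {"vm-web-01", "vm-db-01"}
scan_2 = {"vm-web-01", "vm-worker-07"}

created, removed = diff_inventory(scan_1, scan_2)
print(sorted(created))  # assets to scan before they disappear again
print(sorted(removed))  # assets whose records should be closed out
```

Run continuously, a diff like this gives you an audit trail of churn rather than a single point-in-time picture.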
Containers and cloud
A further change in approach is required if your developers are deploying applications based on software containers rather than on cloud virtual machine images that contain traditional operating systems. Containers are small, lightweight instances that include only the elements required to run a particular service. They are designed to run as part of overall services that can scale up as needed, then be turned off when not required.
Containers are different to virtual machines or cloud machine images in that they can be provisioned to meet demand automatically and be removed just as quickly. However, these deployments can’t be managed with traditional security tools or passive scanning services. Instead, any vulnerability management agent has to be part of the base container so that it can automatically flag to IT security that the new asset has been created.
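One way to picture the agent-in-the-base-container approach is a small start-up hook that reports the new asset the moment it launches. This is only a sketch: the event schema and the notion of an inventory endpoint are assumptions, not any specific product's API.

```python
import json
import socket

def registration_event(hostname):
    """Build the event a base-image agent would send at start-up.
    The field names are hypothetical, for illustration only."""
    return {"hostname": hostname, "event": "container-created"}

# At container start, the agent baked into the base image would send
# this to the security team's inventory service, so the new asset is
# flagged the moment it exists rather than at the next scheduled scan.
event = registration_event(socket.gethostname())
print(json.dumps(event))
```

Because the hook lives in the base image, every application container built on top of it inherits the behaviour with no extra work from developers.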
The drawback here for IT security teams is that there is a loss of oversight without good preparation. For developers, IT security may be last on their list of requirements when it comes to running their applications. However, the flipside here is that getting good security practices in at the start can help automate vulnerability management at scale. By building security management into the base container, you can help automate and improve security.
Embedding security into the fabric of software container deployments helps you maintain control over the whole IT infrastructure, regardless of how many new containers are needed or turned off at any one time. By building agents into the base build, you can spot problems within the overall software stack more easily. This can also work well alongside CI and CD projects involving deployments to cloud, because those elements are automatically flagged if any security issue arises with the software components being used over time.
Improving cloud security
If your team is just starting out with cloud, there are several best practices that can help improve your overall security posture. For example, most enterprises will look at running multiple cloud instances. These might range from duplicated environments running in different AWS Availability Zones for redundancy, through to full multi-cloud deployments spanning internal private clouds and different public cloud providers.
No matter how complex your use of cloud becomes, you should look at how to track the moving parts that are involved. This includes automating the process for IT asset management across your cloud environments, so you have an audit trail on who requested what new service and how much resource they used.
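A minimal sketch of such an audit trail, with hypothetical requester and service names, might look like this:

```python
from datetime import datetime, timezone

def audit_record(requester, service, resource_units):
    """One audit-trail entry: who requested what, and how much.
    Field names are illustrative, not a real product's schema."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requester": requester,
        "service": service,
        "resource_units": resource_units,
    }

# Each automated provisioning request appends an entry to the trail.
trail = []
trail.append(audit_record("dev-alice", "vm-medium", 4))
trail.append(audit_record("qa-bob", "storage-bucket", 1))
print(len(trail))  # number of recorded requests
```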
Similarly, you can track cloud accounts and users to check on behaviour over time. Role-based access control can help ensure that developers, testers and IT operations staff can support their cloud instances, but can’t access machines or data that they are not authorised to use. Administrator accounts should have special oversight too; recent stories around cloud services being misused to mine cryptocurrency due to poor account security should give you an idea of what can happen when this is not done well.
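Role-based access control of this kind can be sketched as a simple permission lookup. The roles and actions below are illustrative only, not a mapping from any real cloud provider:

```python
# Hypothetical role-to-permission mapping: each role grants only the
# actions it explicitly needs (principle of least privilege).
ROLE_PERMISSIONS = {
    "developer": {"deploy-app", "read-logs"},
    "tester": {"read-logs"},
    "operations": {"deploy-app", "read-logs", "restart-instance"},
    "administrator": {"deploy-app", "read-logs", "restart-instance",
                      "manage-accounts"},
}

def is_allowed(role, action):
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("tester", "deploy-app"))      # False: not authorised
print(is_allowed("operations", "deploy-app"))  # True
```

The default-deny behaviour (an unknown role gets an empty permission set) is the property that keeps misconfigured or stale accounts from gaining access by accident.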
Understanding your responsibilities around your deployment of cloud – and what the cloud providers are responsible for as well – can help you determine where to invest your time and how to improve security planning. For example, working with DevOps teams on integrating security tools into developer workflows can help both teams to improve their results.
Rather than IT security teams being involved only at the end, when applications are moved into production – and when fixing issues is most expensive – this approach lets you support your security and development teams at the same time. After all, what really differentiates a security issue within an application from a user experience problem? A SQL injection flaw that could break an application is just another form of poor input handling. If you can help developers find and fix security problems during their code testing and QA phases, security can help all the teams involved be more productive.
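The SQL injection point can be made concrete with Python's built-in sqlite3 module: the same crafted input that subverts a string-built query is handled safely as plain data by a parameterised query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Unsafe: building the query by string concatenation means crafted
# input changes the query itself -- classic poor input handling.
user_input = "alice' OR '1'='1"
unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # returns rows it should not

# Safe: a parameterised query treats the input purely as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "alice' OR '1'='1"
```

Catching exactly this difference during testing and QA, rather than in production, is where security tooling in developer workflows pays off.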
These insights can be used to find and fix misconfigurations across any cloud instance, from individual implementations through to more complex hybrid cloud deployments. Just as developers and operations teams want to make the most of these new approaches to deploying IT, your security strategy will have to become just as flexible and elastic. This means understanding these new requirements and what is needed to remain secure, stretching security into new models. However, by making it easier to adopt and use rules and policies around secure development and deployment, IT security teams can improve performance over time.
Darron Gibbard, Managing Director of Qualys, EMEA North
Image Credit: Melpomene / Shutterstock