Powering up your DevOps to deliver real agility, not just speed

The goal of DevOps is to increase agility by enabling incremental code releases in a near-continuous delivery cycle, breaking away from the ‘old way’ of shipping large code updates once or twice a year. But in doing so, it is crucial that these incremental releases do not negatively impact the operational (production) environment, and hence the user experience. 

This holds true whether the production environment is legacy physical servers, a virtualised environment or a public cloud. Continuous deployment to all of these environments is possible, although more attention has certainly been given to virtual and cloud platforms, and automating deployment to physical environments can be more challenging. 

A common misconception is that it is okay for a release to fail – the argument being that in a world of continuous delivery, one can either back out a change or quickly deploy another release. If the end-to-end infrastructure is entirely under your own control then this is perhaps feasible; however, as soon as a mobile element is introduced, app store deployment needs to be factored in, along with the approval process timeframe. 

Not so continuous now and not without risk. A survey from Dynatrace late last year found that 35 per cent of European and 46 per cent of US shoppers would abandon a slow mobile service and shop elsewhere. And with many all too ready to vent their opinion via social media, it is reputation as well as revenue that suffers. Successfully implementing a DevOps model requires giving thought to the three standard components of all businesses – people, process and technology.


People

In a traditional operating model, developers work in a development environment and release code to QA teams, who test it in one or more test environments; operations teams eventually deploy it into one or more production environments. By the time code hits production, the development team is typically busy working one or two iterations further ahead. 

In a DevOps model you need to bring these teams much closer together, and ideally share at least some team members across the areas. This leads to much closer collaboration and tighter planning, which is a necessity given the shorter release cycles. And it’s also crucial that each team understands the others’ operating environments – for example, you really want to be using the same monitoring tools across development, test and production, so that each team can understand what’s happening (and crucially, what the differences are) in each environment. 

This provides the starting point for understanding the future impact of development changes on the production environment, helping to meet our goal of releases with no adverse impact.


Process

Processes are all about structuring workflows so that they are efficient and consistently deliver the right outcomes. The adoption of best practice guidance and frameworks like ITIL has transformed IT operations and the ITSM landscape from the (not so) organised chaos portrayed in “The IT Crowd” into lean, user-focused, self-service delivery organisations. 

DevOps is very reliant on process automation to deliver on that goal of agility. Automated build processes, automated test processes, automated release processes. Automation certainly can make processes more efficient and faster to execute, but automating a process where you have little visibility of the potential outcome can lead to disaster. 

As a simple example, you could automate the entire build/test/deploy cycle for your latest multi-tier eCommerce application. But what happens if the latest code change has increased the resource requirements for each instance of the application servers? Did your automation validate in advance that the production environment has enough headroom to absorb that additional demand for resources? If not, you could be heading for a big problem. 
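A sketch of that kind of validation gate is below: it compares the new release’s projected footprint against free production capacity before allowing the deploy to proceed. The function name, the safety margin and all of the figures are illustrative assumptions, not taken from any specific tool.

```python
# Hypothetical pre-deployment headroom check. The function name, the
# 20 per cent safety margin and the capacity figures are illustrative
# assumptions, not taken from any specific tool.

def has_headroom(per_instance_cpu, per_instance_mem_gb, instances,
                 cluster_cpu_free, cluster_mem_free_gb, margin=0.2):
    """True if production can absorb the release with a safety margin."""
    cpu_needed = per_instance_cpu * instances * (1 + margin)
    mem_needed = per_instance_mem_gb * instances * (1 + margin)
    return cpu_needed <= cluster_cpu_free and mem_needed <= cluster_mem_free_gb

# Gate the automated release pipeline on the check.
if has_headroom(per_instance_cpu=2.0, per_instance_mem_gb=4.0, instances=6,
                cluster_cpu_free=20.0, cluster_mem_free_gb=40.0):
    print("Headroom OK - release can proceed")
else:
    raise SystemExit("Insufficient production headroom - aborting release")
```

The point is not the arithmetic but where it sits: the check runs inside the automated pipeline, so a release that would exhaust production resources is stopped before it ships rather than discovered afterwards.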

So when automating process workflows, it’s important to integrate a good information flow – ensuring that your processes are measuring and surfacing the right information at each workflow stage.


Technology

DevOps is a movement that uses technology to deliver technological solutions more efficiently into production environments. Much of the focus to date has been on the power of technology to automate core processes. 

But as discussed in the previous sections, it is just as important to use technology to understand how your solution behaves as it moves between environments; that way you can predict at an early stage (in development) how it will ultimately behave in production.

DevOps and capacity planning

If the ultimate goal of DevOps is to deliver software in a more agile manner while avoiding problems in the operational environment, then we need to have a DevOps focus on the operational requirements (ORs, often also referred to as non-functional requirements or NFRs). 

These include things like security, availability and capacity – at Sumerian we focus on the last of these. The closer collaboration between development, QA and operations promoted by DevOps offers an opportunity for organisations to significantly improve their capacity planning – and, indeed, will impose significant penalties on those that fail to do so. An understanding of the footprint of existing deployed application components on the production infrastructure is a necessity for operations teams to manage that infrastructure. 

But how do you model the impact of deploying a changed component into that environment? If the same instrumentation is used in the development and test environments, the data collected in test can be analysed with predictive analytics to create component models that compare the relative resource requirements of the old and new instances. These component models can then be used to construct a “what-if” scenario model, predicting in advance the expected resource requirements when the new component is deployed. 
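A minimal sketch of such a component model, assuming per-request CPU and memory telemetry captured under identical load tests of the old and new builds – all names and figures here are illustrative, not Sumerian’s actual method:

```python
# Illustrative component model: average per-request resource cost from
# test-environment telemetry, then a "what-if" scaled to production load.
from statistics import mean

def component_model(samples):
    """Each sample is (cpu_seconds, mem_mb) measured for one request."""
    return {"cpu_s": mean(s[0] for s in samples),
            "mem_mb": mean(s[1] for s in samples)}

# Telemetry from identical load tests of the old and new builds.
old = component_model([(0.020, 110), (0.022, 112), (0.021, 108)])
new = component_model([(0.028, 150), (0.030, 155), (0.029, 148)])

# "What-if" scenario: scale the per-request delta to production traffic.
peak_rps = 400  # assumed peak requests/second observed in production
extra_cpu_cores = (new["cpu_s"] - old["cpu_s"]) * peak_rps
print(f"Extra CPU needed at peak: {extra_cpu_cores:.1f} cores")
```

Because both environments share the same instrumentation, the same model can be re-fitted against production data after release to confirm (or correct) the prediction.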

This can be further overlaid with growth projections to ensure that enough resource exists to support business plans (or to determine what additional resource will be required). By continuing to apply predictive analytics to resource consumption in the production environment, operations teams will be able to identify potential threats to service and take action before they become service-affecting. 
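One simple way to overlay a growth projection is a least-squares trend on daily peak utilisation, projected forward to a capacity threshold. The sketch below is an illustrative, minimal version of that idea with made-up data, not a description of any particular product:

```python
# Illustrative growth projection: fit y = a + b*x by least squares to
# daily peak CPU readings and estimate days until a threshold is hit.

def days_until_breach(daily_peaks, threshold):
    n = len(daily_peaks)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_peaks) / n
    b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_peaks))
         / sum((x - x_mean) ** 2 for x in xs))
    a = y_mean - b * x_mean
    if b <= 0:
        return None  # usage flat or falling: no projected breach
    # Days beyond the most recent sample until the trend crosses threshold.
    return (threshold - a) / b - (n - 1)

peaks = [52, 54, 53, 56, 57, 59, 60]  # % CPU, one peak reading per day
print(days_until_breach(peaks, threshold=80))
```

A real implementation would use far more history and a more robust model, but even this shape of analysis turns raw consumption data into an early-warning signal operations teams can act on.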

This information can then be used to close the loop by helping development teams identify long-term code problems such as memory leaks or database growth issues. By powering up your DevOps processes with insight from this continuous analysis of capacity data across the triumvirate of development, QA and operations, you can ensure true agility rather than just speed, multiplying the benefits and outcomes you deliver to the business.

Peter Duffy, CTO, Sumerian