DevOps and legacy systems: the benefits, challenges and answers


Legacy applications remain at the core of many businesses’ operations, and some are still running on old operating systems that are hard to manage. The original vendor may no longer support the technology. The people who set up the systems have moved on or even retired. Maintenance involves expensive, manual fixes and workarounds.

These systems grow ever more fragile and easier to break, creating potential security risks and inefficiencies. The situation will only get worse if these ‘brownfield’ legacy apps are left running at the heart of an organisation. Moreover, the new generation of software developers and IT professionals don’t want to work on what they view as dinosaur applications: they want to be involved in more modern approaches, not fixing and patching old stuff.

DevOps – the increasingly popular software development methodology that bridges the gap between development and operations teams to deliver software faster and more effectively – can help bring legacy systems into the ‘now’. While much of the buzz has centred on ‘born on the web’ projects, DevOps can also breathe new life into old applications and bring them in line with the rest of the software environment.

However, legacy apps are typically harder to migrate due to a blend of technology, process and cultural issues. While these are not insurmountable, they cannot be underestimated, and they require thorough planning and commitment.

Drop the old-school ‘firefighter’ mentality

Let’s start with the people, especially those daily ‘firefighters’ who have become used to responding to requests for help and solving problems. Sure, they are heroes, but in a DevOps-based transformation project, is firefighting the best use of their time? Are they really adding value to long-term goals? Probably not, in which case they need support in changing the way they approach their work, and the backlog of tasks needs to be prioritised. Of course, it can be hard to resist when the CEO phones up with a demand that is not on the backlog. This is where a scaled approach to Agile can help, by keeping everyone accountable and on track with the ‘bigger picture’. While it can be tempting to apply Agile to just one project or department, doing so negates its ultimate value, which comes when it is applied enterprise-wide.

Backlog management at the enterprise level also fosters better decision making, driven by broadly accepted definitions of ‘done’ and more effective acceptance criteria. This brings clarity and team alignment: work is described at the appropriate level of detail, so it can be estimated and prioritised more easily, which in turn reduces development time.

Embrace failure and change

Another challenge with introducing changes to your development processes is how to handle the inevitable occasions when things go wrong. Often someone in management says, ‘That (Agile/DevOps/etc…) didn’t work, so let’s revert to the old way of doing things.’ Failure is an inherent part of adopting any new methodology, and experts advise us not to be derailed: remain committed and keep pushing forward. One mistake – or even a dozen – does not mean that Agile or DevOps does not work.

There is another aspect of mindset change here too: Agile and DevOps are ongoing and always evolving; there is not necessarily an ‘end point’. It’s like creating a garden: be prepared for weeds to spring up alongside the plants you originally planned for, and accept that the garden will change over time.

This lack of predictability can be hard for many types of businesses to handle, especially in compliance-driven markets. This is where enterprise Agile planning and portfolio management can help, because they provide the visibility and predictability that is required, even extending as far as the budget. In other words, it is possible to combine the flexibility and dynamism of Agile with the control of Waterfall, the more traditional and still widely used methodology.

Have a data access strategy

Legacy environments are notorious for locked-in, siloed data, created and managed by different teams in different formats and locations. This lack of access to potentially very useful data can get in the way of maximising the potential of a new technology project. However, by putting mobile or web interfaces in front of those legacy systems, or applying REST APIs, that legacy content can be made visible and even extracted. Again, there may be a cultural attitude to address here, with people reluctant to share ‘their’ data, and this is where creating what is fast becoming known as a ‘data access strategy’ can add value. This discourages data ‘ownership’ and instead sets out a collaborative approach that defines where data should live, how people get access to it, and how to prevent unnecessary duplication. More organisations are embracing this kind of data transparency, creating a ‘single source of truth’ to achieve it.
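To make the REST API approach concrete, here is a minimal read-only sketch, assuming the legacy application’s data can be reached over an ODBC connection; the DSN name, table and column names below are hypothetical placeholders rather than a prescription.

```python
# A minimal read-only REST facade over a legacy data store.
# Assumes the legacy system is reachable via ODBC; the DSN name,
# table and columns are hypothetical placeholders.
import pyodbc                      # pip install pyodbc
from flask import Flask, jsonify   # pip install flask

app = Flask(__name__)

@app.route("/api/customers/<customer_id>")
def get_customer(customer_id):
    # The legacy system remains the system of record; the API simply
    # makes its data visible to modern web and mobile clients.
    conn = pyodbc.connect("DSN=LEGACY_ERP")  # hypothetical DSN
    row = conn.cursor().execute(
        "SELECT id, name, region FROM customers WHERE id = ?",
        customer_id,
    ).fetchone()
    conn.close()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row.id, name=row.name, region=row.region)

if __name__ == "__main__":
    app.run(port=8080)
```

A thin facade like this keeps the legacy application as the single source of truth while opening its data to the rest of the organisation, without anyone having to modify the old system itself.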

‘Shift left’ and get away from lengthy QA

In legacy applications, QA is a time and resource drain, because it is typically a manual effort. In one organisation I worked in years ago, QA took up 25 per cent of the year when all the hours were added together. ‘Brute force’ testing is also very typical in risk-averse markets, many of which still run brownfield applications. The answer is to automate testing as much as possible, including test processes that happen at the development stage, not when developers think their work is ready for the QA team. This trend has been dubbed ‘shift left’, and it is a good fit with the continuous delivery mindset that is inherently part of both Agile and DevOps.

The first step is to identify and prioritise key testing scenarios, then create automated tests for them, with the continuous integration system configured to run those tests on each build triggered by changes in your version control repository.
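As an illustration, one prioritised scenario could be pinned down with an automated test like the sketch below, written with pytest and assuming the legacy logic has been wrapped in a callable module; the billing module and calculate_invoice function are hypothetical stand-ins.

```python
# A sketch of automated regression tests for prioritised scenarios,
# written with pytest. The billing module and calculate_invoice
# function are hypothetical wrappers around legacy code.
import pytest
from billing import calculate_invoice  # hypothetical legacy wrapper

def test_standard_invoice_total():
    # Pin down today's observed behaviour so later changes cannot
    # silently alter it: two widgets at 9.99 each.
    assert calculate_invoice(items=[("widget", 2, 9.99)]) == pytest.approx(19.98)

def test_empty_invoice_is_zero():
    assert calculate_invoice(items=[]) == 0.0
```

The continuous integration server can then run pytest on every commit, so regressions surface minutes after a change rather than months later in a manual QA cycle.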

Create self-service infrastructure

Old applications tend to go hand in hand with old infrastructure, so upgrading the different components of a legacy application becomes cumbersome, expensive and error-prone. One way to address this is to create a virtualised, self-service infrastructure. While this approach is not suitable for all legacy applications, it is becoming increasingly popular, particularly through the adoption of ‘Infrastructure as Code’. However, it requires careful planning, including finding the right strategy and then convincing management of the need to invest in new tools where needed.
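As a sketch of what ‘Infrastructure as Code’ can look like in practice, the snippet below uses Pulumi’s Python SDK to declare a test server for a legacy application; the AWS target, AMI ID and instance size are illustrative assumptions, not a recommendation of any particular tool or cloud.

```python
# Infrastructure as Code sketch using Pulumi's Python SDK.
# The AWS provider, AMI ID and sizing are placeholder assumptions.
import pulumi
import pulumi_aws as aws

# Because the server is declared in version-controlled code, any team
# member can stand up (or tear down) an identical environment on
# demand, instead of filing a ticket for a hand-built machine.
server = aws.ec2.Instance(
    "legacy-app-test",
    instance_type="t3.medium",      # placeholder sizing
    ami="ami-0123456789abcdef0",    # placeholder AMI ID
    tags={"purpose": "legacy-app-testing"},
)

pulumi.export("public_ip", server.public_ip)
```

Running pulumi up then creates or updates the environment to match the code, which is what makes the infrastructure repeatable and genuinely self-service.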

While much of the buzz around DevOps has focused on new projects, it has much to offer when extended to legacy applications. It is not necessary to leave behind, or live with, outdated brownfield applications and team cultures. Instead, bring the best parts of legacy assets into the DevOps world and make them a valuable part of a modern IT environment.

Chuck Gehman, Technical Marketing Engineer for Perforce Software
