Organisations run a mix of legacy and advanced systems. Despite the race towards digitalisation, it is impossible to rip and replace legacy systems entirely. The primary blocker is the dependency on the status quo to generate revenue, along with the difficulty of making changes to a running system. The next hurdle is the lost knowledge of systems and applications, because the people who built them have moved on. But legacy systems should not prevent enterprises from competing in the digital age.
Legacy systems still hold relevance for most enterprises because of their use in day-to-day critical functions or the unavailability of an equally capable alternative. The use of mainframes in the financial and insurance industries is a classic example. According to a OneSpan report, The Future of Adaptive Authentication in the Financial Industry, 96 percent of organisations still rely on legacy processes tied to usernames and passwords for authentication. This is cited as one of the top challenges stopping financial institutions from setting up modern processes for customer authentication and security.
Several legacy technologies, such as COBOL, mainframes, DEC minicomputers, Windows XP and OS/2, are still widely in use. For instance, as of 2014, 96 of the world’s 100 largest banks, 9 of the 10 largest insurance companies, 23 of the 25 largest retailers in the US and 71 percent of the Fortune 500 used IBM System z mainframes. As of 2014, around 10,000 mainframes were actively in use around the world. The numbers are large and show how integral legacy systems are to businesses. Enterprises need to automate legacy applications the DevOps way to cope with advances in the market and meet customer expectations.
DevOps and legacy systems – not a misfit
Organisations often claim that DevOps and CI/CD approaches work for greenfield projects, not for legacy applications and systems. Amid challenges like heavy technical debt, tightly integrated hardware components and a fragile codebase, it is tough to adopt specialised approaches like Agile practices, CI/CD and Test-Driven Development.
Organisations are sceptical about whether to take an all-at-once or a partial approach to automating a legacy system the DevOps way. The right approach embeds five principles in development: automation, standardisation, shift-left, communication and feedback.
Automated build and deployment
One of the major challenges with legacy systems is the absence of CI processes, which makes build and deployment slow and delays releases to production. Legacy systems often do not use version control or a branching strategy that lets engineers develop code with a set PR-approval process for merging to master. Three branching strategies work well for legacy systems–
- Development → Release → Master – used for active development
- Maintenance/Hotfix → Master – used for bug fixes and emergency defects
- Feature branch → Develop → Release → Master – used for adding release features to the main branch
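As a hypothetical illustration, the promotion paths above can be encoded as data and checked before a PR is approved. The branch names and the `can_merge` helper are assumptions for the sketch, not part of any particular tool:

```python
# Sketch: encode the allowed branch promotion paths as data and check a
# proposed merge against them as part of the PR-approval process.

# Allowed (source, destination) merges, mirroring the three strategies above.
ALLOWED_MERGES = {
    ("development", "release"),   # active development
    ("release", "master"),
    ("hotfix", "master"),         # bug fixes and emergency defects
    ("feature", "develop"),       # feature work heading towards a release
    ("develop", "release"),
}

def can_merge(source: str, destination: str) -> bool:
    """Return True if merging `source` into `destination` follows policy."""
    return (source, destination) in ALLOWED_MERGES

print(can_merge("release", "master"))   # a sanctioned promotion
print(can_merge("feature", "master"))   # a direct merge the policy forbids
```

A check like this would typically run in the CI system as a required status on every pull request, so merges to master can only arrive along the sanctioned paths.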
Automate unit test coverage
With legacy systems and the waterfall development model, technical debt keeps rising. It is not possible to get rid of it all in one go. Incremental improvements in unit test coverage help developers manage code quality and improve the code systematically over time.
SonarQube is a popular tool for reviewing code with static analysis to detect bugs, code smells and potential security vulnerabilities. It works for over 25 languages, including legacy languages like COBOL and C. As organisations mature in their DevOps practices, testing should shift left to achieve the quality and agility expected of the new development methodologies.
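One lightweight way to make coverage improvements incremental is a coverage "ratchet": the build fails if coverage drops below the last recorded baseline, and the baseline moves up whenever coverage improves. A minimal sketch, with illustrative numbers and a hypothetical helper name:

```python
def check_coverage(current: float, baseline: float):
    """Fail the build if coverage regresses; raise the baseline if it improves.

    Returns (build_passed, new_baseline)."""
    if current < baseline:
        return False, baseline            # regression: fail, keep old baseline
    return True, max(current, baseline)   # ratchet the baseline upward

# A legacy codebase starting at 32% coverage sees a small improvement:
passed, baseline = check_coverage(current=35.0, baseline=32.0)
print(passed, baseline)  # build passes and the baseline moves up to 35.0
```

The point of the ratchet is that nobody has to fix all the debt at once: the team only has to avoid going backwards, and the baseline quietly climbs release after release.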
Automate infrastructure provisioning
Companies in the initial phase of DevOps adoption often consider this step optional. In legacy systems, hardware components are tightly coupled, making maintenance and upgrades very difficult. Manual build processes are time-consuming and require IT admins to follow a set of procedures. Self-service infrastructure helps developers roll out their environments and destroy them with minimal human intervention, reducing the time it takes to test or deploy code. The earlier organisations start the process, the sooner they will develop their own playbooks (essentially documentation of the steps for any manual task) with an expected security baseline and other practices for self-service infrastructure. As a first step, teams have to set up processes manually, but the playbooks can later be updated to benefit everyone in the organisation.
Tools such as Ansible and Chef let teams manage code and component configurations for legacy systems effectively. Their ease of use allows developers to roll out environments with a single command, and they work seamlessly as organisations move from manual IT to automation.
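The core idea behind these tools is idempotence: every step checks the current state before changing it, so the same playbook can be run repeatedly without human intervention. A toy sketch of that idea, using an in-memory dictionary as a stand-in for a real machine (the package and service names are illustrative):

```python
# Toy model of an idempotent playbook: each step inspects state first, so
# re-running the whole playbook is always safe.

state = {"packages": set(), "services": set()}  # stand-in for a real host

def ensure_package(name: str) -> str:
    if name in state["packages"]:
        return f"{name}: already installed"     # idempotent: no change made
    state["packages"].add(name)
    return f"{name}: installed"

def ensure_service(name: str) -> str:
    if name in state["services"]:
        return f"{name}: already running"
    state["services"].add(name)
    return f"{name}: started"

playbook = [lambda: ensure_package("nginx"), lambda: ensure_service("nginx")]

for step in playbook:
    print(step())
# Running the playbook a second time reports "already ..." for every step.
```

Real configuration-management tools add inventories, secrets and rollback on top, but this check-then-act loop is what turns a manual runbook into self-service infrastructure.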
Shift-left and automate test suites
Shifting test activities earlier in legacy systems helps reduce technical debt and improve the quality of new development. In a legacy set-up there is little or no testing in non-production environments: developers write code, hand it off to testing teams, and wait for the test results before fixing bugs and issues. Teams have to move away from this approach to practise DevOps on legacy systems.
DevOps promotes identifying errors early in the development cycle and fixing them as soon as possible. To achieve this, execute automated suites of integration, functional and non-functional tests in lower environments, with a pre-set acceptance criterion or pass percentage gating promotion to higher environments. This standardises acceptance tests and automates them at an early stage. Creating bare-minimum acceptance criteria for important functions is a good start; however, there should be a timeline for integrating complete testing into development.
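The promotion gate described above reduces to a simple comparison of the pass percentage against a pre-set threshold. A minimal sketch, where the 90 percent threshold and the function name are assumptions for illustration:

```python
def promote_build(results, pass_threshold: float = 90.0) -> bool:
    """Promote to the next environment only if enough tests pass.

    `results` holds one True/False outcome per automated test."""
    if not results:
        return False  # no tests ran: never promote blindly
    pass_pct = 100.0 * sum(results) / len(results)
    return pass_pct >= pass_threshold

# 9 of 10 acceptance tests pass, which meets the 90% gate exactly.
print(promote_build([True] * 9 + [False]))
```

In practice the same gate runs at each environment boundary, often with a stricter threshold the closer the build gets to production.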
Automated deployment of code
Automation is agnostic to language, framework and technology, and can be infused into legacy systems. Deploying an application goes through pre-defined steps to ship code from the development environment to production. Tools such as AWS CodeDeploy, Octopus Deploy and Azure Pipelines can be used to deploy legacy code, and containerisation technologies allow legacy systems to be packaged and run on multiple instances very easily.
Achieving complete end-to-end automation is a work in progress, but a few things to consider immediately for legacy applications are–
- Have a few automated tests in place before kicking off a deployment
- Automate infrastructure provisioning
- Notify developers, testers and QA teams whenever a deployment completes
- Give the whole team visibility into deployments
- Put an easy and fast rollback mechanism in place
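The last item, fast rollback, can be sketched as a deployment routine that keeps the previous release around and reverts automatically when the post-deploy health check fails. The `health_check` callable here is a stand-in for a real probe (an HTTP check, smoke test suite, etc.), not a specific tool's API:

```python
def deploy(new_version: str, current: str, health_check) -> str:
    """Switch to `new_version`, but roll back to `current` if it is unhealthy.

    `health_check(version)` is a stand-in for a real post-deploy probe."""
    active = new_version
    if not health_check(active):
        active = current  # fast rollback to the last known-good release
    return active

# Simulated probe: pretend v2.0 is broken, so it rolls back to v1.9.
healthy = lambda version: version != "v2.0"
print(deploy("v2.0", "v1.9", healthy))
print(deploy("v2.1", "v1.9", healthy))
```

The same shape underlies blue-green and symlink-flip deployments: because the old release is never destroyed during the deploy, rollback is a pointer switch rather than a rebuild.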
Monitoring applications in real time
Automation is the key ingredient in adopting DevOps for legacy systems, but legacy systems also typically lack feedback and monitoring practices. This wastes time on root-cause analysis of downtime, failures and other performance issues.
With real-time monitoring of applications in production, it is easy to get the full context needed to accelerate deployment cycles. New Relic is a popular tool for measuring the impact of every code change and monitoring environments to ensure stability while continuing to innovate.
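As a minimal illustration of that feedback loop, a monitor can flag a release whose response time drifts past a threshold, so the team learns about a regression minutes after deploying rather than days later. The 500 ms threshold and sample values are assumptions for the sketch:

```python
def check_health(samples_ms, threshold_ms: float = 500.0) -> bool:
    """Return True if the average response time over recent samples
    stays at or below the threshold; False means raise an alert."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg <= threshold_ms

# Response times collected shortly after a deployment show a slowdown:
recent = [120.0, 180.0, 950.0, 1100.0]
if not check_health(recent):
    print("ALERT: average response time above 500 ms, investigate the release")
```

Production tools layer percentiles, error rates and anomaly detection on top, but the principle is the same: a machine watches every release so that humans only look when something moves.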
Monitoring the entire development lifecycle
The state of legacy systems clearly shows how continuous and early feedback can improve quality over time. DevOps in legacy systems will not happen overnight, and organisations should take a phased approach. This requires continuously tracking all activities, from code check-in through testing in lower environments to deployment and beyond. Continuous monitoring, gathering feedback and improving the weak areas can help teams deploy fast and often on legacy applications.
Addressing the mindset challenges
While there are myriad tools, technologies and frameworks available to address technical challenges, addressing mindset and cultural challenges requires management buy-in. Management should buy into the idea of bringing DevOps to legacy systems and familiarise teams with it. Explaining the benefits of the DevOps approach and answering all the whys will help teams understand how DevOps will make their lives easier.
This can be the most important success factor in achieving DevOps for legacy systems. Teams are comfortable working in the old style without realising the time and effort they spend on manual tasks. Organisations need to find opportune moments to make teams realise how moving to DevOps practices will improve the employee experience through planned work and timely feedback.
Is this the right time to get started?
The way systems are designed and built has changed drastically. We have come a long way in the last decade: releases used to take enormous time and effort in a world of waterfall development, monolithic architectures, physical servers and data centres. Times changed and practices evolved. Multiple releases now take place in weeks and even days with DevOps practices, microservices architecture, containers and cloud technologies.
To compete with born-digital companies whose digital nature is a competitive advantage, legacy companies can couple DevOps with their own advantages of data, experience and a large customer base to lead in the digital era. It is the right time to get started!
Vishnu Nallani, VP & Head of Innovation, Qentelli
Image Credit: Profit_Image / Shutterstock