As development evolves, so must operations

(Image credit: Shutterstock/Omelchenko)

Coding is changing and platforms are becoming more distributed, so governance must now centralise. Today’s cloud realities demand that development and operations run at different paces while legacy code is prepared for a new hybrid normal. Can it be done? Yes, and here’s how.

Back in my salad days as a full-time developer, our process for updating and deploying code was messy, inefficient, and prone to errors. The entire code base was checked into a single code repository. We’d check out the most recent version, tag it with our project information, make our changes, merge with whatever changes happened in the interim, then check it back in. This actually ran fairly smoothly when you were certain you were the only one making changes to that specific piece of code.

But there was never a way to be certain.

Merge errors and misaligned communications often caused new and… interesting bugs to creep into the code base. Hopefully, these would be caught during a regression test, but those tests often happened shortly before a major release, when the pressure to fix them was highest and the operations team stared at you impatiently because you were the one holding up the deployment.

Thank goodness those days are gone, right? Riiight.

For many large organisations saddled with heritage code bases, this scenario is not the past - it remains a painful present. And as newer development follows agile best practices - developing microservices whose smaller code bases are tracked individually and deployed in multi-cloud and multi-datacentre environments - new and even more interesting issues have begun to rear their heads.

This brave new world of massively distributed application development has huge implications for software engineering and operations teams. On the dev side, the goal is to realise the benefits of agile methods as code is developed, managed and kept secure using microservices and containers. For the Ops team, the cost of running and managing distributed infrastructures can easily spiral out of control, alongside the costs of ensuring the services remain accessible.

In a world where DevOps - the combination of the two teams into a single, full-stack-aware beast - is the new buzzword, the strain of keeping it all running smoothly can be massive, and it will only get worse as applications are further decomposed into Functions as a Service running on conceivably any device anywhere in the world. Organisations must ensure they are able to scale their systems across the different infrastructures - both in private datacentres and public cloud infrastructures - without losing control, producing suboptimal code or racking up astronomical costs.

Centralised governance, distributed execution

In a distributed environment, coders have no idea where their code will ultimately live - and they shouldn’t really care. We’re already seeing the dawn of algorithmically optimised systems, where the rules of which server to spin up, and where, are determined in real time based on optimising the flow of data. In this environment, managing access and monitoring performance becomes a balancing act.
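Before getting to that balancing act, here is a toy illustration of what such a placement rule could look like. The metrics and weighting below are invented for the example - they are not any real scheduler’s logic - but they show the shape of the decision being made on your behalf.

```python
# A toy illustration of algorithmic placement: choose where to run a workload
# based on live metrics. The metric names and weighting are invented for this
# example, not taken from any real scheduler.
def choose_region(metrics: dict) -> str:
    """Return the candidate location with the best blend of latency and cost."""
    def score(m: dict) -> float:
        # Weight latency more heavily than hourly cost; purely illustrative.
        return 0.7 * m["latency_ms"] + 0.3 * m["cost_per_hour"] * 100
    return min(metrics, key=lambda region: score(metrics[region]))

if __name__ == "__main__":
    live_metrics = {
        "eu-west": {"latency_ms": 42.0, "cost_per_hour": 0.12},
        "us-east": {"latency_ms": 95.0, "cost_per_hour": 0.09},
        "on-prem": {"latency_ms": 18.0, "cost_per_hour": 0.20},
    }
    print(choose_region(live_metrics))  # the workload spins up wherever scores best
```

Rules like this decide placement automatically, which is exactly why access management and performance monitoring become a balancing act for the humans watching over them.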

On the one hand, you want to make sure you’re getting the information you need about your systems, including where and how different services are distributed. These functions are traditionally handled by API gateways and management tools, through which all data must pass. This can lead to performance bottlenecks as these centralised systems sit in the middle of all service communication.

As services become more widely distributed, these bottlenecks will become more apparent. A number of tools have emerged to address this, but many of them are built into the microservice management systems themselves. In a truly distributed architecture, these APIs are served by a number of systems - microservices, devices, Functions-as-a-Service providers, and more. The management landscape is only growing in complexity.

The ideal solution is one that centralises configuration, authentication and reporting for easy access and management, but allows the individual systems to execute on those rules, leveraging their positions in the network to make their data flows more efficient. The current trend toward microgateways is heading in this direction.

Microgateways are small executables - just a few megabytes, in most cases - that can live in the same execution space as the code they support, whether that’s a microservice container, embedded device, or a small transactional service. With the right configuration, microgateways can handle most of the functionality traditionally handled by larger, centralised gateways, including data transformation, authentication and authorisation, and logging.
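To make that concrete, here is a rough sketch of what a sidecar-style microgateway might look like, using nothing but the Python standard library. The upstream address, header name and key list are illustrative assumptions rather than any vendor’s actual API.

```python
# A minimal sketch of a sidecar-style microgateway: authenticate, log, and
# forward to a service running alongside it. All names and values here are
# assumptions for illustration only.
import json
import logging
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:9000"   # the local service this gateway fronts (assumed)
VALID_KEYS = {"demo-key-123"}        # stand-in for real credential validation

logging.basicConfig(level=logging.INFO)

class MicroGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Authentication/authorisation: reject requests without a recognised key.
        if self.headers.get("X-API-Key") not in VALID_KEYS:
            self.send_response(401)
            self.end_headers()
            return
        # Logging happens locally; no central hop is required.
        logging.info("forwarding %s", self.path)
        # Forwarding, plus a trivial "transformation" (wrapping the payload).
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = json.dumps({"data": resp.read().decode()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MicroGateway).serve_forever()
```

Because it runs next to each service instance, there is no central choke point to saturate: policy is enforced where the traffic already is.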

Managing those configurations, however, can be a challenge. As microgateways gain further adoption, traditional API management will have to shift toward centralising the management of distributed gateways rather than acting as a single point of entry. This model of centralised governance with distributed execution will allow for greater flexibility in the deployment of services, letting systems be more opportunistic in how they route data without being forced to adhere to rigid infrastructural limitations.
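In practice, that centralised governance might look something like the following sketch: each gateway periodically pulls its policy from a central control plane and enforces it locally. The control-plane URL and configuration fields are hypothetical; a push model would do the same job.

```python
# A sketch of "centralised governance, distributed execution": each gateway
# pulls its policy from a central management endpoint and applies it locally.
# The URL and config fields are hypothetical, for illustration only.
import json
import time
import urllib.request

CONTROL_PLANE = "https://control.example.com/gateways/edge-eu-1/config"  # hypothetical

def fetch_config() -> dict:
    """Pull the latest policy document from the central control plane."""
    with urllib.request.urlopen(CONTROL_PLANE, timeout=5) as resp:
        return json.load(resp)

def apply_config(config: dict) -> None:
    """Apply routing and auth rules locally; just a placeholder here."""
    print("routes:", config.get("routes", []))
    print("auth mode:", config.get("auth", "none"))

if __name__ == "__main__":
    while True:
        try:
            apply_config(fetch_config())  # execution stays local to the gateway
        except OSError as exc:
            print("control plane unreachable, keeping last known config:", exc)
        time.sleep(30)  # poll interval; a push model works equally well
```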

As development evolves, so must operations

This idea of massively distributed services optimised using automated algorithms seems almost idyllic. If the machines are managing the machines based on a handful of predetermined rules, what happens to the human operations team?

The shift to DevOps, especially as many systems move to a managed cloud environment, has already shaken up our traditional view of the operations team as the gatekeepers to technology. Many young organisations that adopted the cloud from the start have replaced their traditional operations teams with “DevOps Engineers” whose main focus is to build and maintain automated test and deployment systems.

This arrangement works fine when a majority of the infrastructure is managed by a third party, but the operations team still reigns when the infrastructure is a hybrid of several systems. Even as the management of these systems becomes more automated and infrastructure is moved offsite, having a staff of operations engineers who understand the intricacies of modern networking can help organisations stay out of trouble as the complexity of these systems increases.

The best operations engineers can see right through the marketing of most cloud infrastructure providers and help set up systems that keep costs in check while improving performance. Where developers write application code and DevOps engineers ensure that code is well-tested and deployed appropriately, the operations team can continue to ensure all of the systems under their watch are performing adequately and are appropriate for the tasks required of them.

As development and service hosting practices evolve, so must the role of operations - from technological gatekeeper to internal consultant. Where the focus used to be on Service Oriented Architectures, the new shift must be toward Service Oriented Operations: providing expert advice and guidance to the development and DevOps teams while ensuring the infrastructures adopted by the organisation best fit their business needs.

Make your heritage infrastructure look young again

Perhaps the most critical immediate role for your operations team - and where they can provide continuous value - is to figure out how best to fit older systems to the new development models. The pressure is on to modernise heritage systems, make data more accessible while keeping it secure and performant, and leverage your existing systems to innovate in an increasingly fast-paced competitive market. A well-seasoned operations team is your secret to making this work.

Within older, larger corporations the challenge, as ever, is heritage code, most of it running on outdated infrastructure that may be near or past its planned end of life. We typically refer to these as “legacy” systems, but I prefer the term “heritage” here. This is the code your business was built upon, and it continues to drive value for you today. Tearing down these systems to start over from scratch is likely not worth it, especially while the value they deliver outweighs the cost of ripping and replacing them.

The fact is, a great deal of that heritage code runs just fine, but it can hold your organisation back from experimenting with new, innovative ways of doing business that require faster, less proprietary access to your data. How can an organisation saddled with such systems ever hope to compete against newer, more disruptive upstarts?

The answer lies in putting an abstraction layer in front of your older systems to expose them as APIs following modern design best practices. This API-led approach can make even your stodgiest older systems accessible to new applications with far less coding and rearchitecting effort than starting from scratch. It’s easier said than done, for certain, but still easier than starting over. It may seem expensive at first but, when you factor in the uncertainty and developer time involved in writing fresh code, the total cost can be substantially lower.

This abstraction layer not only makes it easier to consume your data on the front end, it makes it easier to update your backend systems when it makes sense for the business. So long as the APIs themselves don’t change, you can swap out the code and systems that drive them with reasonable freedom. This can expand your transition window significantly, allowing you to replace legacy systems as they near their planned end of life and the cost to maintain them climbs past the value they provide.
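As a rough sketch of the idea - with an invented customer API and two interchangeable backends standing in for the real thing - the abstraction layer might look like this: the contract stays fixed while the system behind it gets swapped out.

```python
# A sketch of the abstraction-layer idea: a modern JSON API whose backend can
# be swapped without changing the contract. The backends and record shapes
# here are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LegacyMainframeBackend:
    """Stand-in for the heritage system, e.g. a batch or screen-scraped interface."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "A. Example", "source": "mainframe"}

class ModernServiceBackend:
    """Drop-in replacement once the heritage system reaches end of life."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "A. Example", "source": "microservice"}

backend = LegacyMainframeBackend()  # swap for ModernServiceBackend() later;
                                    # the API below does not change.

class CustomerAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Contract: GET /customers/<id> returns a JSON customer record.
        if not self.path.startswith("/customers/"):
            self.send_response(404)
            self.end_headers()
            return
        record = backend.get_customer(self.path.rsplit("/", 1)[-1])
        body = json.dumps(record).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CustomerAPI).serve_forever()
```

Swapping LegacyMainframeBackend for ModernServiceBackend changes nothing for the applications consuming the API - which is the whole point of the exercise.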

By opening access to these older systems, your developer teams can leverage them in their application development with relative ease.

With challenges come opportunities

Change is hard, but true evolution can be arduous. The seemingly constant cycle of upgrading technology to follow new best practices and face new market realities can hamstring companies that have not adopted an agile mindset. But the challenges facing organisations that seek to adopt modern, distributed development practices are also sparking exciting changes to the traditional team structures behind development, testing, deployment and performance management.

As with all evolution, those organisations that can’t adapt will disappear in time – either acquired for their customer list and quietly retired, or taken down by more nimble competitors. Those that take the time to strategise and plan for these changes will be rewarded with better opportunities, driven by an agile culture championed by a new breed of faster, more effective DevOps engineers.

Rob Zazueta, Director of Digital Platform Strategy, TIBCO Software