Scott Jeschonek, Director of Cloud Solutions at Avere Systems, thinks that while oil and water don’t mix, legacy and cloud can. In spite of the hype about moving applications to the cloud and turning legacy applications into cloud-native ones, he finds that legacy systems are alive and well, and he believes they aren’t going anywhere anytime soon:
“Though the cloud promises the cost savings and scalability that businesses are eager to adopt, many organisations are not yet ready to let go of existing applications that required massive investments and have become essential to their workflows.”
Complexity and challenges
He adds that re-writing mission-critical applications for the cloud is often inefficient: the process is lengthy and financially costly. Unexpected issues can also arise from moving applications to the cloud, and these will vary from one firm to the next. Top of the list is the challenge of latency. “Existing applications need fast data access, and with storage infrastructure growing in size and complexity, latency increases as the apps get farther away from the data. If there isn’t a total commitment to moving all data to the cloud, then latency is a guarantee”, he writes.
He mentions other challenges in his article for Data Center Dynamics, ‘Unlike oil and water, legacy and cloud can mix well’. These include mismatched protocols and the amount of time required to re-write software applications to conform to cloud standards. With regard to mismatched protocols, he says legacy applications typically use standard protocols such as NFS and SMB for network-attached storage (NAS). These are “incompatible with object storage, the architecture most commonly used in the cloud.” To many, this makes migrating to the cloud a daunting prospect, but it needn’t be.
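The mismatch he describes is that NFS and SMB expose a hierarchical file system, while object storage is a flat key-value namespace. Hybrid gateways bridge the two by mapping file paths onto object keys. The following toy sketch (all class and bucket names are invented for illustration, not any vendor’s API) shows the idea:

```python
# Illustrative only: a toy "gateway" that maps POSIX-style file paths
# (as an NFS/SMB client would use) onto flat object-store keys, the way
# hybrid cloud gateway products bridge the two storage models.

class ToyObjectStore:
    """Stands in for an object store: a flat key -> bytes namespace."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class FileGateway:
    """Translates hierarchical file paths into flat object keys."""
    def __init__(self, store: ToyObjectStore, bucket: str):
        self.store = store
        self.bucket = bucket

    def _key(self, path: str) -> str:
        # Object stores have no real directories: "/reports/q1.csv"
        # becomes the single flat key "legacy-data/reports/q1.csv".
        return f"{self.bucket}{path}"

    def write_file(self, path: str, data: bytes) -> None:
        self.store.put(self._key(path), data)

    def read_file(self, path: str) -> bytes:
        return self.store.get(self._key(path))


gw = FileGateway(ToyObjectStore(), bucket="legacy-data")
gw.write_file("/reports/q1.csv", b"revenue,100")
print(gw.read_file("/reports/q1.csv"))  # b'revenue,100'
```

The legacy application keeps issuing ordinary file reads and writes; only the gateway needs to understand the object store, which is why the application itself does not have to be re-written.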
Benefits of familiarity
The ideal situation would be to keep the familiarity of legacy applications and their associated time-efficiencies. Few professionals are keen to embrace new technologies at the expense of these benefits, not least because they often have limited experience of the new tools. So in many respects it makes sense to keep using the existing applications. If they aren’t broken, then why fix or replace them? Unless there is a dire need to replace existing infrastructure and applications, there is no reason to buy the latest and greatest technology simply because it’s the newest must-have on the block.
“That said, nothing is stopping you from moving applications to the cloud as-is”, he says before adding: “While enterprises may still choose to develop a plan that includes modernisation, you can gradually and non-disruptively move key application stacks while preserving your existing workflow.”
To avoid spending time and money re-writing applications, he recommends cloud-bursting. This can be achieved with hybrid cloud technologies, which permit legacy applications to run on servers “with their original protocols while communicating with the cloud”. Often an application programming interface (API) is used to connect the two for this purpose.
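At its simplest, cloud-bursting is a placement decision: run work on-premises while local capacity lasts, and overflow the remainder to public cloud compute. A minimal sketch of that logic (the job names and slot count are made-up illustrations, not any product’s interface):

```python
# Hypothetical cloud-bursting placement: fill on-premises capacity first,
# then "burst" the overflow jobs to public cloud compute.

def place_jobs(jobs: list, local_slots: int) -> dict:
    """Map each job to 'on-prem' until local slots run out, then 'cloud'."""
    placements = {}
    for i, job in enumerate(jobs):
        placements[job] = "on-prem" if i < local_slots else "cloud"
    return placements


# Three render jobs, but only two on-premises slots free:
print(place_jobs(["render-1", "render-2", "render-3"], local_slots=2))
# {'render-1': 'on-prem', 'render-2': 'on-prem', 'render-3': 'cloud'}
```

Because only the scheduler needs to know about the cloud, the applications themselves keep their existing protocols and workflow, which is the point Jeschonek makes about non-disruptive migration.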
Cloud-bursting solutions can let legacy applications “run in the datacentre or in a remote location while letting them use the public cloud compute resources as needed”, he says. The majority of the data can remain on-premise too, which reduces risk and minimises the need to move large files. This approach makes life easier for IT, and from an organisational perspective faster times to market can be achieved. Because the cloud follows a utility model, organisations only pay for what they use – which allows them to focus on their core business while retaining financial agility.
Cloud storage can also be used to back up data. Backing up is like an insurance policy: it might seem an unnecessary expense, but the cost of downtime, as experienced recently by British Airways, can be far more prohibitive. The Telegraph reported on 29th May 2017 that there is ‘Devotion to cost-cutting 'in the DNA' at British Airways’. The journalist behind the article, Bradley Gerrard, also wrote: “The financial cost of the power outage is set to cost the airline more than £100m, according to some estimates.” Mr Wheeldon expected it to hit £120m, and he suggested there could also be a “reputational cost”. Some experts claimed that human error was behind the downtime.
In any case, cloud back-up is a necessity. “By using cloud snapshots or more comprehensive recovery from a mirrored copy stored in a remote cloud or private object location, the needed data is accessible and recoverable while using less expensive object storage options”, claims Jeschonek. He therefore thinks that legacy systems and cloud solutions can be mixed to save time and money, and that the cloud offers additional back-up benefits too.
An article on HP Enterprise’s website, ‘Cloud Control: 4 steps to find your company’s IT balance’, discusses the findings of a 451 Research report, ‘Best Practices for Workload Placement in a Hybrid IT Environment’. It finds that companies must account for cost, business conditions, security, regulation and compliance. The report also notes that 61% of respondents anticipated “spending less on hardware because they are shifting from traditional to on-premise clouds”, and it adds that organisations are subsequently cutting their spending on servers, storage and networking.
Curt Hopkins, a staff writer for HPE magazine and the author of the article, agrees that huge costs can be incurred when moving non-cloud infrastructure to the cloud. “If you go with the public cloud, you will need to find a provider whose costs are affordable.” He adds that if you wish to “create your own private cloud, the cost of the servers on which to run it is not inconsequential.”
With old workloads you may also have to plough through years of old documentation. So before you move anything to the cloud, he advises you to undertake a complete total cost of ownership (TCO) assessment. This will require you to factor in capital and operational costs, as well as training and personnel considerations.
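The arithmetic behind such an assessment can be sketched very simply: capital outlay plus the yearly running, training and personnel costs over the planning horizon. All the figures below are invented purely to show the comparison, not benchmarks:

```python
# Rough total-cost-of-ownership comparison over a planning horizon.
# Every number here is an illustrative assumption, not real pricing.

def tco(capex: int, annual_opex: int, annual_training: int, years: int) -> int:
    """Capital cost up front, plus recurring costs for each year."""
    return capex + years * (annual_opex + annual_training)


# On-premises: big capital outlay, lower recurring spend.
on_prem = tco(capex=500_000, annual_opex=80_000, annual_training=10_000, years=5)

# Public cloud: no capital outlay, higher recurring spend and retraining.
cloud = tco(capex=0, annual_opex=150_000, annual_training=25_000, years=5)

print(on_prem, cloud)  # 950000 875000
```

Even a toy model like this shows why the answer depends on the horizon: shorten `years` and the cloud option pulls further ahead; lengthen it and the capital outlay amortises in on-premises’ favour.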
At the end of the day, he’s right to suggest that it’s all about finding the right IT balance, and hybrid IT, in the form of hybrid cloud, is the most appropriate way to achieve it. “Finding your IT balance is not a zero-sum game; you don’t have to choose legacy IT, public cloud or private cloud…you can mix those options based on your workloads”, he says. To find the right balance he stresses the need to undertake a cost-benefit analysis, and he thinks balance can be found “in the interplay between your primary business considerations.” This requires you to evaluate your costs, security, agility and compliance as a whole to gain a complete picture of the costs and benefits.
A report by Deloitte, ‘Cloud and infrastructure – How much PaaS can you really use?’, argues that the past was about technology stacks. It says the present situation favours infrastructure-as-a-service (IaaS), but the future is about platform-as-a-service (PaaS). It argues that PaaS is the future because many organisations are creating a new generation of custom applications. The key focus therefore seems to be on software developers, and on the ability of organisations to manage risk in new development projects.
Significantly it says: “The widening gap between end user devices, data mobility, cloud services, and back office legacy systems can challenge the IT executive to manage and maintain technology in a complex array of delivery capabilities. From mobile apps to mainframe MIPs, and from in-house servers to sourced vendor services, managing this broad range requires a view on how much can change by when, an appropriate operating model, and a balanced perspective on what should be developed and controlled, and what needs to be monitored and governed.” Unfortunately, the report makes no mention of whether any cloud model is right or wrong for managing legacy applications.
A good mix
Cloud can nevertheless mix well with legacy applications, but there should also be some consideration of what your organisation can do with its existing infrastructure. Cloud back-up is advisable, but increasing your network bandwidth won’t necessarily mitigate the effects of latency. Nor, for that matter, will rationalising your networking costs by reducing network performance.
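One way to see why extra bandwidth alone doesn’t help is the well-known TCP throughput ceiling: a single stream cannot exceed its window size divided by the round-trip time, however fat the pipe. The window and RTT figures below are assumptions chosen for illustration:

```python
# Why buying bandwidth doesn't fix latency: a single TCP stream is capped
# at window_size / round_trip_time, regardless of the link's capacity.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream's throughput, in megabits per second."""
    bits_per_window = window_bytes * 8
    rtt_seconds = rtt_ms / 1000
    return bits_per_window / rtt_seconds / 1_000_000


# A classic 64 KiB window over an assumed 80 ms transatlantic round trip:
print(max_tcp_throughput_mbps(65536, 80))  # ~6.55 Mbps, even on a 10 Gbps link
```

Halving the RTT doubles the ceiling, while a tenfold bandwidth upgrade changes nothing once the stream is RTT-bound: that is why latency mitigation, rather than raw bandwidth, is the lever that matters here.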
Machine learning, for example, makes it possible to accelerate data with a product such as PORTrockIT. By using machine intelligence, you can mitigate the effects of data and network latency in a way that can’t be achieved with WAN optimisation. Cloud back-ups, as well as legacy and cloud applications that interconnect with each other, can work more efficiently with reduced latency.
More to the point, while this is innovative technology, it enables you to maintain your existing infrastructure, and so it can reduce your costs. With respect to disaster recovery, a data acceleration tool can improve your recovery time objectives, enabling you to keep operating whenever disaster strikes. While traditionally data was placed close together to minimise the impact of latency, with a machine learning data acceleration solution your cloud-based disaster recovery sites can be placed far apart from each other to ensure business and service continuity without falling prey to human error. So it’s worth investing in your legacy applications, hybrid cloud and a data acceleration solution. Unlike oil and water, they mix into a set of solutions that will save you time and money.
David Trossell, CEO and CTO, Bridgeworks
Image Credit: Everything Possible / Shutterstock