
Strategic workload placement: how to avoid the public cloud repatriation trap

(Image credit: Everything Possible / Shutterstock)

The enterprise ‘gold rush’ to public cloud has revealed a trend of cloud repatriation in its wake, proving that organisations need to take the time to consider fully which workloads are suitable for the cloud before migrating, and that one size most certainly does not fit all. This article looks at customer insights on the trend, its causes, and what can be done to achieve intelligent cloud migration and so avoid the cloud repatriation trap.

With the aim of staying agile amid changing times, compliance requirements such as GDPR, and the continuous need for infrastructure upgrades, enterprise organisations are quite rightly seeking to optimise their data centre environments, all the while balancing performance, availability and cost. There is no doubt that the perceived low cost, flexibility and scalability of public cloud is alluring for IT teams grappling with the complexity of infrastructure management. Many have enthusiastically embraced it, shipping their mission-critical applications, workloads and processes to the cloud wholesale, only to find later that this comes at a costly price, as it dawns on them that ‘one size fits all’, applied to the public cloud, really means all who are not enterprises, where managing complexity, cost and risk is a day-to-day, sub-millisecond operational requirement.

Consequently, many organisations are choosing to repatriate applications from the public cloud back to an on-prem location or private cloud. ESG research has revealed that 57 per cent of enterprises have moved at least one workload from a cloud or SaaS (software as a service) provider back to on-prem infrastructure. The trend is driven by a right-sizing: a reconciliation of bold claims with the reality behind them. When that reconciliation happens it is ugly and costly, as the not-so-sudden realisation becomes painfully clear that cloud first is not, and never was, meant for everyone.

An alarming yet poignant example of the real cost organisations can suffer is the TSB IT crisis of 22nd April 2018, when serious problems arose after the bank transferred customer data from an old IT system to a new platform that proved unable to cope with the volume of customers once it was live. Customers reported being unable to access their online accounts, alongside reports on social media of a series of glitches and data leaks. In this instance, the cost cannot be measured just in downtime for customer banking transactions, but also in the disastrous impact the incident had on customer loyalty.

Following the cloud

The concept of ‘one size fits all’ has always been an attractive value proposition in any industry, but however attractive, the totality of the claim has never held true for any solution, in quality or in reliability. The public cloud can certainly be a smart and economical choice for businesses whose profits, revenues, agility and innovation are not heavily reliant on managing perpetual change and complexity at scale, but it was never a one-size-fits-all solution.

Part of the reason for this rush by organisations to put everything in the cloud is very likely the message delivered by cloud providers that the ‘cloud is for everybody’. Properly interpreted, that message, and the value proposition behind it, applies to the public cloud for everyone below the enterprise tier. The very second an enterprise applies this frame of thinking to its digital business operations, it unfortunately starts down a path laden with landmines.

Then the unfortunate realisation becomes painfully and catastrophically clear: the cloud promises of economies of scale, infinitely scalable ‘Infrastructure as a Service’ (IaaS) resources and future-proof solutions are a mirage. When that realisation arrives it is shocking, sad and expensive, as orders and mandates are reversed and the exodus back to terra firma begins. As soon as service drops below the agreed SLA, this constitutes grounds for breach of contract, and the end user is well within his or her rights to sue the cloud provider for compensation and loss of business. These setbacks and sad experiences point to an important lesson on risk: leaders should never blindly follow the crowd (or, more poignantly, the cloud).

What’s coming back on-prem

Further exploration of the trend, and conversations with enterprise customers, reveal that it is the revenue- and operationally critical applications and processes of the IT estate that are specifically being repatriated: the mission-critical applications and data where managing perpetual shifts, complexity and constant change, and minimising risk, is the requirement. The takeaway is that while it makes good sense to push some applications to the public cloud, it should rarely, if ever, be ‘the family jewels’.

Cloud repatriation drivers

From an integrity perspective, it makes good sense not to place sensitive data into the public cloud. But aside from this, performance is also a key driver of repatriation. Without a performance-based SLA there is no accurate way of monitoring, or being forewarned of, a pending slowdown or, worse, an outage of a poorly performing cloud-based application. This can have catastrophic implications for the smooth running of the infrastructure and in turn the business as a whole: customer transactions and the ability of staff to carry out their daily tasks properly are compromised, not to mention the detrimental impact on brand reputation.

It’s important to remember that public cloud is just another data centre, vulnerable to the same issues as on-prem or private cloud infrastructure. The difference is that the latter can be properly monitored in real time, with proactive alerts, so that issues can be headed off before disaster strikes.

Although there is some degree of infrastructure monitoring in the public cloud, most of the tools used to monitor application performance report application speeds in isolation, without looking at the impact on the wider infrastructure, and SLAs are typically focused on availability, not performance response times. In today’s on-prem data centre, domain-specific tools concentrate on the performance of an individual component with no insight into the root cause of an issue, which is simply not up to scratch for today’s enterprise requirements.

Avoiding the repatriation trap: intelligent migration without the risk

Before attempting any data centre migration or consolidation, it is vital to ask questions such as: ‘Should workloads stay on-prem? Should we move to a cloud-hosting provider? Or deploy to a public cloud?’ Answering those deployment questions should always start with a solid understanding of workload requirements, to ascertain whether business-critical applications will perform as expected once in the cloud. There are three fundamental criteria for deciding which workloads to deploy, where and when: security, performance requirements and cost.

The key to avoiding the repatriation scenario is for IT teams to understand workload performance requirements pre-migration. To make accurate, confident decisions, an application-centric infrastructure performance management (IPM) approach that includes a cloud migration assessment capability is essential. This approach provides key intelligence on workload profiling, application dependencies and performance analysis, simplifying the decision-making process and reducing the time needed to migrate an enterprise’s many diverse workloads.
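To illustrate the kind of workload profiling such an assessment relies on, the sketch below condenses raw performance samples (IOPS, latency, throughput) into a per-workload profile. It is a minimal, hypothetical example: the metric names, sample values and summary statistics are assumptions for illustration, not output from any particular IPM product.

```python
from dataclasses import dataclass
from statistics import mean, quantiles

@dataclass
class WorkloadSample:
    """One monitoring sample for a workload (assumed metrics)."""
    iops: float
    latency_ms: float
    throughput_mbps: float

def profile_workload(samples):
    """Summarise observed samples into a workload performance profile."""
    latencies = sorted(s.latency_ms for s in samples)
    return {
        "avg_iops": mean(s.iops for s in samples),
        # last of 19 cut points with n=20 is the 95th percentile
        "p95_latency_ms": quantiles(latencies, n=20)[-1],
        "peak_throughput_mbps": max(s.throughput_mbps for s in samples),
    }

# Illustrative samples for a single workload
samples = [
    WorkloadSample(iops=12000, latency_ms=1.2, throughput_mbps=450),
    WorkloadSample(iops=15000, latency_ms=2.8, throughput_mbps=610),
    WorkloadSample(iops=9000, latency_ms=0.9, throughput_mbps=380),
]
print(profile_workload(samples))
```

A profile like this, gathered per workload over a representative period, is what makes the placement decision comparable across candidate environments.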

By leveraging simulated applications based on real workload profiles, cloud workload performance can be validated before the workloads are migrated. IT organisations can then determine whether migrated workloads will perform adequately, and take the necessary steps if they won’t. This also provides key insight into how each component maps against others across the infrastructure and application layers, recognising which applications are in high demand at any given time so that knock-on effects or slowdowns can be spotted and managed before problems arise.
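To make the validation step concrete, here is a minimal, hypothetical sketch of the pass/fail check: each workload’s latency as measured during a synthetic replay in the cloud is compared against its on-prem baseline, with a tolerance. The workload names, latency figures and 20 per cent tolerance are illustrative assumptions only.

```python
def validate_migration(baseline_ms: float, cloud_ms: float,
                       tolerance: float = 0.20) -> bool:
    """Pass if replayed cloud latency is within `tolerance` of the baseline."""
    return cloud_ms <= baseline_ms * (1 + tolerance)

# Illustrative 95th-percentile latencies (ms) from a synthetic replay
results = {
    "order-processing": validate_migration(baseline_ms=4.0, cloud_ms=4.6),
    "reporting-batch": validate_migration(baseline_ms=120.0, cloud_ms=135.0),
    "trade-matching": validate_migration(baseline_ms=0.8, cloud_ms=2.1),
}
for workload, ok in results.items():
    print(f"{workload}: {'migrate' if ok else 'keep on-prem'}")
```

Run against every workload profile, a check like this separates candidates that will perform adequately in the cloud from those that belong on-prem, before any data moves.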

By adding this vital step to their cloud migration strategy, enterprises can determine whether cloud migration makes sense for each of their installed applications and which cloud offering is the most cost-effective. It also enables intelligent, insightful decisions about what to put where, be it public cloud, private cloud or on-prem. With this approach, enterprise organisations can be certain of the performance impact of deploying to the cloud before final decisions are made, eliminating surprises and enabling a full understanding of the cost/performance trade-offs.

4 top tips for potential public cloud enterprise customers

1. Question and test - enterprise customers must not be fooled by shiny new objects and bold claims without proof. Find out how much it will cost to run your applications in the cloud, and check which cloud provider makes the most sense with that in mind. Always test any cloud vendor’s claims against your own workloads - before migration.

2. Profile workload characteristics - take the time to discover and identify dependencies between compute, networking and storage. Understand application, workload and process profiles, behaviours and requirements to accurately characterise workload performance - before arbitrarily launching workloads into the cloud and keeping your fingers crossed.

3. Playback - creating synthetic workloads to play back in the cloud greatly helps in selecting accurate, cost-optimal configurations and placements, and avoids unnecessary overspending with cloud service providers.

4. Monitor - keep on top of any unforeseen performance or capacity issues post-migration by monitoring actual workloads in the cloud. Invest in purpose-built fly-by-wire systems that keep the biggest enterprises flying smoothly, no matter what cloud they are flying through.
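The monitoring in tip 4 can be sketched as a simple threshold check run each monitoring interval against the performance envelope captured pre-migration. The metric names and SLO limits below are illustrative assumptions, not output from any particular monitoring tool.

```python
# Assumed SLO limits derived from the workload's pre-migration profile
THRESHOLDS = {"latency_ms": 5.0, "error_rate": 0.01}

def check_metrics(metrics: dict) -> list:
    """Return the names of any thresholds breached in one interval."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

# One illustrative monitoring interval for a migrated workload
interval = {"latency_ms": 7.3, "error_rate": 0.002}
breaches = check_metrics(interval)
if breaches:
    print("ALERT:", ", ".join(breaches))
```

The point is less the mechanism than the baseline: post-migration alerting is only meaningful if the thresholds come from the workload’s measured pre-migration behaviour.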

Repatriation reversal: the importance of understanding workload behaviour 

Cloud vendors need to adjust and adapt their claims, while partnering with leading solutions that can help validate those claims and calibrate their products and services so the offering is truly match-fit for the enterprise. As the saying goes, ‘Fool me once, shame on you; fool me twice, shame on me.’ Once the cloud perception bubble bursts, customers will think twice about migrating to the public cloud.

If public cloud vendors address the reality at hand and provide proven, accurate and insightful decision-making support about what their customers should take to the cloud, the number of enterprise customers experiencing the unfortunate and costly backlash of the ‘cloud first’ initiative could finally start to fall.

Sean O’Donnell, Managing Director EMEA at Virtual Instruments 

Sean O’Donnell
Sean O’Donnell is Virtual Instruments’ EMEA managing director. Virtual Instruments is the industry’s first application-centric infrastructure performance management provider for the hybrid data centre. Its vendor-independent solutions deliver a unified real-time view of infrastructure performance in service of enterprise applications, whether they are deployed on-premises or in the cloud.