
Look before you leap: Mitigating risk in cloud data deployments


Cloud environments have become the de facto choice for data-driven applications. The maturity of cloud platforms and services, combined with the increasing automation and resilience of cloud infrastructure, has led to organisations turning their cloud-first strategies into cloud-only ones.

These advantages are already being recognised by mainstream, multi-billion dollar enterprises such as Capital One and Netflix, both of which have moved almost entirely away from their own physical data centres. The approach is gradually being adopted on a wider scale as more and more enterprises transition their data workloads to the cloud. This shift in attitude has resulted in enterprises treating cloud migration as a long-term investment rather than a routine IT update.

According to global market intelligence provider IDC, total spending on cloud IT infrastructure in 2018 amounted to $65 billion, with year-over-year growth of 37 per cent. IDC also reported that quarterly spending on public cloud IT infrastructure had more than doubled in the past two years, reaching $12 billion in the third quarter of 2018 and growing 56 per cent year-over-year.

The trend towards moving data deployments to the cloud is undeniable. However, with the number of variables surrounding cloud migration, such as costs, visibility and dependencies, streamlining the process can present a challenge to enterprises.

As such, IT teams are looking to minimise the risks associated with migration: disruption to availability, lost data, and reduced visibility and control. By investigating these areas before migrating, IT teams clear their path to the cloud, because they can evaluate whether running data services in the cloud makes financial and operational sense for their business use case. What some companies are discovering is that, when planning a migration, intelligence and visibility are integral to maximising the benefits of the enterprise's cloud investment: they go a long way towards reducing the friction of migration and minimising resource usage costs.

Bridging the gap with predictive analytics

It’s important to ‘look before you leap’ into the cloud, and AI-driven insights help teams make the right choices for a migration. There are technologies, born with the cloud era, designed to provide the data-driven intelligence and recommendations necessary for optimising compute, memory, and storage resources. These are the tools DevOps and DataOps teams should be selling into the wider business in order to make the transition a smooth and cost-effective one.

Such tools aid the IT team in identifying which applications are the best candidates for migration and can provide detailed dependency maps to help all stakeholders understand the resource requirements before the migration kicks off.
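
To make the idea of a dependency map concrete, here is a minimal sketch, using entirely hypothetical service names and observed calls, of how an application's direct and indirect dependencies might be traced before deciding whether it can move on its own:

```python
# Minimal sketch of a dependency map, built from hypothetical observed
# service-to-service calls, to show which applications can move on their own.
from collections import deque

# Hypothetical observed dependencies: application -> services it calls
observed_calls = {
    "reporting": ["warehouse", "auth"],
    "warehouse": ["object-store"],
    "auth": [],
    "object-store": [],
}

def transitive_dependencies(app, calls):
    """Return everything `app` depends on, directly or indirectly."""
    seen, queue = set(), deque(calls.get(app, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(calls.get(dep, []))
    return seen

for app in observed_calls:
    deps = transitive_dependencies(app, observed_calls)
    status = "can migrate independently" if not deps else "plan alongside its dependencies"
    print(f"{app}: depends on {sorted(deps)} -> {status}")
```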

It’s a real force multiplier when the IT team can, for example, see the seasonality of demand and the ideal time of day to take advantage of the best prices for cloud services, then combine that insight with spot instances, autoscaling, and a number of other tactics that enable them to make the most of their resources, be it time, money, or skills. On the cost side, the team should be looking to enable automatic application speedup, optimised resource usage, and intelligent data tiering as part of the migration tool chest at hand.
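
As a rough illustration of that seasonality analysis, the sketch below, which uses made-up hourly utilisation samples rather than real telemetry, picks out the quietest hours as candidate windows for batch workloads, spot-instance bids, or scheduled scale-downs:

```python
# Sketch: find the quietest hours from hypothetical hourly CPU-usage samples,
# as a stand-in for the seasonality analysis a migration-planning tool provides.
import statistics
from collections import defaultdict

# (hour_of_day, cpu_utilisation_percent) samples collected over several days
samples = [(0, 22), (0, 25), (3, 12), (3, 15), (9, 78), (9, 81),
           (14, 92), (14, 88), (20, 45), (20, 51)]

usage_by_hour = defaultdict(list)
for hour, cpu in samples:
    usage_by_hour[hour].append(cpu)

# Average utilisation per hour; the lowest hours are candidate off-peak windows.
averages = {hour: statistics.mean(values) for hour, values in usage_by_hour.items()}
for hour, avg in sorted(averages.items(), key=lambda kv: kv[1])[:3]:
    print(f"{hour:02d}:00 averages {avg:.0f}% CPU - candidate off-peak window")
```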

Effective migration relies on validating the decision to move: baselining performance before the transition, comparing how applications perform afterwards, and optimising them for the new cloud runtime environment. Teams under pressure, particularly those maintaining essential services for enterprises with a large consumer customer base, will be looking for AI to offer guidance to improve the performance, scalability, and reliability of their applications once they’re in the cloud.
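
A minimal sketch of that before-and-after comparison might look like the following, with purely illustrative job names and runtimes standing in for real baselines:

```python
# Sketch: compare hypothetical before/after baselines for a handful of jobs
# to validate the migration and flag regressions worth optimising.
baseline_on_prem = {"nightly_etl": 54.0, "report_build": 12.5, "ml_training": 180.0}  # minutes
baseline_cloud   = {"nightly_etl": 41.0, "report_build": 16.0, "ml_training": 150.0}  # minutes

for job, before in baseline_on_prem.items():
    after = baseline_cloud[job]
    change = (after - before) / before * 100
    verdict = "improved" if change < 0 else "regressed - tune for the cloud runtime"
    print(f"{job}: {before:.0f} min -> {after:.0f} min ({change:+.0f}%), {verdict}")
```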

Maintaining continuity post-migration

The hard work is not over once the migration is complete, however. With the initial stages behind them, DevOps and DataOps teams will refocus on leveraging these insights to demonstrate, and increase, the return on investment in better-running processes.

However big and complicated the infrastructure is, these teams will want to extract clear metrics and hard facts from it. This allows them to meet enterprise needs and, more importantly, generate informed forecasts. This is where AI maintains its relevance, delivering fresh recommendations and intelligence that sustain consistent, cost-effective performance for the long haul and the ongoing health of enterprise service delivery.

One of the most common applications of AI is analysing which users, applications and projects are demanding the most resources, which in turn enables chargeback and showback capabilities. This allows IT teams to digest the information with greater ease and advise on where potential savings can be made across CPU (Central Processing Unit), memory, I/O (Input/Output), and storage individually. It saves them sifting through the entire cloud capability range to identify key metrics amongst the wider service context. Essentially, AI removes the boring yet necessary tasks involved in mitigating inefficient resource use by applications.
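
A showback report along those lines can be as simple as the sketch below, which rolls up hypothetical per-job usage by team using assumed unit prices rather than any provider's actual rates:

```python
# Sketch: roll up hypothetical per-job resource usage into a showback report
# by team, using illustrative unit prices rather than any provider's real rates.
usage = [  # (team, cpu_hours, memory_gb_hours, storage_gb)
    ("marketing-analytics", 120, 480, 200),
    ("fraud-detection",     900, 3600, 1500),
    ("marketing-analytics",  60, 240, 100),
]
prices = {"cpu_hour": 0.04, "gb_hour": 0.005, "storage_gb": 0.02}  # assumed rates

costs = {}
for team, cpu, mem, storage in usage:
    cost = cpu * prices["cpu_hour"] + mem * prices["gb_hour"] + storage * prices["storage_gb"]
    costs[team] = costs.get(team, 0.0) + cost

# Highest spenders first: the natural starting point for a savings conversation.
for team, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{team}: ${cost:,.2f} this period")
```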

Kunal Agarwal, CEO, Unravel Data