
What multi-cloud means in 2021 and how to manage it

(Image credit: Shutterstock / Blackboard)

When multi-cloud first became an IT industry talking point around 2014, most technologists imagined a setup in which workloads were highly portable and could be moved from one cloud to another seamlessly based on cost, availability or other factors. An application might run in Azure one week and AWS the next as organizations arbitraged among infrastructure providers. Another common idea was that businesses would consume multiple clouds through cloud brokers that would provide a single interface and API atop the underlying clouds. Interest in this concept was high: Gartner found that 49 percent of customers using cloud infrastructure as a service in 2017 had adopted a deliberate multi-cloud strategy and predicted that number would rise to 75 percent by 2022. And they were correct; the use of multiple clouds is by far the most common pattern among enterprises, with 93 percent adopting this strategy in 2020 (according to Flexera's 2020 State of the Cloud).

But the realization of multi-cloud has shifted over time. The imagined future of six years ago – with portable workloads that run unmodified on multiple clouds, or a single abstraction above multiple clouds – hasn’t come to pass. As it turns out, clouds are not interchangeable: there are technical and business differences between the major cloud providers that make it difficult to manage and scale workloads across multiple clouds. While it’s not impossible to get to this point in the future (three-quarters of respondents to the 2020 State of Multicloud report from Turbonomic believed that applications would freely move across clouds in the future), ideas in the vein of “write once, run anywhere” have been around for a long time and have so far failed to materialize. True portability requires more than technical runtime interchangeability: business, operational and support considerations also loom large.

In reality, the benefit of multi-cloud at the organizational level is less about “write once, run anywhere” and more about different groups using different clouds for different applications based on their needs and the capabilities of each cloud. Using different clouds this way is inevitable given the nature of enterprise companies and has both business and technology benefits. In a modern software organization, teams should be free to choose the approach and infrastructure that enables them to build optimized solutions for their specific application or service. A multi-cloud approach at the organization level gives each team the freedom to select the best tools for their own needs.

Not all clouds are created equal

Be honest – how many large enterprises have just one of anything? Given the complexity of modern enterprise IT needs, it’s unrealistic to be a single-cloud shop. On the technology level, different teams have different needs, and not all clouds are created equal. Locality of data centers, supported workloads, platform capabilities and supporting services vary between clouds, and teams have different levels of expertise with each one. At the business level, clouds vary in terms of cost, support packages and discounts. All these factors mean that the cloud that’s a good fit for the Business Intelligence team may not be a good match for the Web Frontend team. Other factors, like acquisitions and mergers, also bring in new cloud vendors. Although applications may not be easily portable, companies and governments often want to spread different workloads across cloud vendors to avoid lock-in at the business level.

Organizations that have, or are considering, a mandate to run every workload on a single cloud might want to reconsider. Sacrificing optionality for consistency has some potentially serious downsides. Companies move to the cloud for faster deployments, increased innovation, lower costs and improved productivity. Forcing teams to use just one cloud caps those benefits at the limits of that one cloud for each workload. If Azure has the best Windows support and GCP has the best machine learning service, and you need both, choosing one cloud or the other will limit your benefit. In contrast, while a decentralized, multi-cloud approach can introduce new control requirements, it lowers the barriers to innovation and lets teams make locally optimal technology choices.

Deployment and control

So, if enterprises commit to a multi-cloud approach, how should they deploy and control those multiple clouds so their teams can continue to optimize? The best way to do this, in my experience, is to default to a decentralized approach and centralize only what they must. Many businesses will try to centralize too much and either fail outright or constrain their teams to the point that multi-cloud loses its benefit.

To determine what should be centralized, I propose a simple decision tree. Only centralize aspects of your cloud deployment that would cause business risk if decentralized, would require high levels of coordination if decentralized, or are expensive to duplicate.
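As a concrete illustration, this decision tree can be written as a small predicate: centralize only when at least one of the three questions is answered "yes". Everything below – the type, field names and example elements – is invented for illustration, not taken from any real tool:

```python
from dataclasses import dataclass

@dataclass
class CloudElement:
    """An aspect of a cloud deployment being evaluated for centralization."""
    name: str
    business_risk_if_decentralized: bool   # would decentralizing create business risk?
    needs_cross_team_coordination: bool    # would decentralizing require heavy coordination?
    expensive_to_duplicate: bool           # is it costly to duplicate per team?

def should_centralize(element: CloudElement) -> bool:
    """Apply the decision tree: a single 'yes' is enough to centralize."""
    return (element.business_risk_if_decentralized
            or element.needs_cross_team_coordination
            or element.expensive_to_duplicate)

# Illustrative examples:
security_policy = CloudElement("security policy", True, True, False)
deploy_tooling = CloudElement("deployment tooling", False, False, False)

print(should_centralize(security_policy))  # True  -> centralize
print(should_centralize(deploy_tooling))   # False -> leave with each team
```

The default lands on "decentralize": an element must positively justify central control, which matches the principle of centralizing only what you must.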

The goal is to favor local team decision-making, ownership, and optimization; only removing elements from local control when necessary. This aligns well with the DevOps and Agile software practices and microservice architectures. Let’s walk through how that works in practice and what should and should not be centralized.

  1. Policies, SLAs and validation - Policies that address business goals and risk – such as high-level roadmaps, budgets, capacity planning, security policies and breach mitigation plans – should almost always be centralized. I suggest either establishing SLAs and SLOs centrally, or requiring teams to define them and reviewing them centrally against defined requirements. How teams implement or meet these SLOs and policies should be left up to them.
  2. Deployment and operations - Deciding whether or not to centralize deployment and operations elements is difficult and varies based on the situation. With each decision, go back to those three questions: Does the element in question present business risk? Does it require coordination across clouds or teams? Is it expensive to duplicate? If the answer to any of these questions is “yes”, the element is a candidate for centralization. For example, deployment tools, timing and execution should be decentralized, since the answer to all three questions is likely “no”. But deployment requirements (such as no-downtime deployments) are likely to be centralized. Ownership of problems and outages should probably be decentralized, but this can vary. Metrics collection, alerting services and ticket tracking tend to benefit from centralization.
  3. Architecture and technology – You likely want to centralize technology guidelines and best practices, then get out of the way and let your teams get to work. Service owners should be allowed to make optimal technical decisions about architecture and technology as long as they follow the overall business goals and high-level roadmaps. There are some architectural decisions that would answer “yes” to the last question in the decision tree. For example, you likely want only a few standardized inter-process communication approaches, you might want to define API versioning and compatibility rules, and it wouldn’t make much sense to run a separate distributed-tracing service per team. These are exceptions, however. Although it might be counterintuitive, decentralized architecture and technology ownership tends to reduce complexity!
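To make item 1 concrete, the "define locally, review centrally" pattern for SLOs can be sketched as a tiny review function. The requirement names and thresholds below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical central requirements that every team-defined SLO must meet.
CENTRAL_REQUIREMENTS = {
    "availability_pct": 99.9,   # minimum availability the business mandates
    "max_p99_latency_ms": 500,  # slowest acceptable p99 latency budget
}

def review_slo(team_slo: dict) -> list[str]:
    """Return a list of violations; an empty list means the SLO passes review."""
    problems = []
    if team_slo.get("availability_pct", 0) < CENTRAL_REQUIREMENTS["availability_pct"]:
        problems.append("availability below central minimum")
    if team_slo.get("max_p99_latency_ms", float("inf")) > CENTRAL_REQUIREMENTS["max_p99_latency_ms"]:
        problems.append("p99 latency budget above central maximum")
    return problems

# A team on any cloud can set a stricter SLO than the central floor:
print(review_slo({"availability_pct": 99.95, "max_p99_latency_ms": 300}))  # []
```

The central function checks only the business-level floor; how a team meets it – which cloud, which architecture, which tooling – stays a local decision, per the decision tree.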

While our understanding of the benefits and requirements of multi-cloud has evolved, the overall trend will only accelerate. In 2020, organizations already used more than two public clouds on average and were experimenting with expanding further. Multi-cloud is all about allowing teams to optimize. Decisions should be made in the smallest, most local context possible. Since different teams have different requirements, the best choice is a multi-cloud approach that provides a safe framework for each team to use the best tool for each job.

Brad Schick, Chief Executive Officer, Skytap