
Q&A with Leo Leung, Senior Director, Products & Strategy, Oracle Cloud Infrastructure on IaaS Trends


What is the hypervisor tax and what do companies need to know?

With pretty much any major cloud provider, be it AWS, Microsoft Azure, or Google Cloud, customer offerings are orchestrated through hypervisors on the servers. This means the computing, storage, or network resources an organization receives are virtualized resources shared with other customers or workloads. Because those resources are shared, there is always some overhead, whether it’s 5, 10, or 20 percent, on every data transaction made. And when transactions are made frequently, as in a high-performance computing (HPC) or transaction processing environment, that overhead adds up. That’s the hypervisor tax.
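A rough sketch of how that overhead compounds, using the hypothetical 5 to 20 percent figures above; the raw IOPS number is an assumption for illustration, not a measurement:

```python
# Illustrative only: effective throughput left after a fixed
# virtualization overhead. The overhead percentages are the
# hypothetical 5-20% figures discussed above.

def effective_ops(raw_ops_per_sec: float, overhead_pct: float) -> float:
    """Operations per second actually delivered after hypervisor overhead."""
    return raw_ops_per_sec * (1 - overhead_pct / 100)

raw = 100_000  # raw IOPS the hardware could deliver (assumed)
for overhead in (5, 10, 20):
    print(f"{overhead}% overhead -> {effective_ops(raw, overhead):,.0f} IOPS")
```

At 20 percent overhead, a fifth of the capacity a customer pays for goes to virtualization rather than to the workload itself.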

It’s important for companies to know that in most public clouds, there is really no way to isolate your workload from others. So, if another workload experiences a spike and you happen to be on the same hardware or network, your workload will feel the effect. And even when vendors offer dedicated storage at higher cost, it still goes through a server hypervisor at some level, so they can’t definitively guarantee that other workloads won’t affect your data. Businesses should understand that this overhead is something they will have to bear, and that it will sometimes cause unexpected application slowdowns at the worst times. For mission-critical workloads, customers should ask for network design and network performance SLAs, as well as the ability to acquire resources, like bare metal servers, that don’t require hypervisors.

How should companies evaluate and balance cost vs performance to meet their IaaS needs?

About three to four years ago, there were some aggressive pricing moves among Infrastructure as a Service (IaaS) providers, but not much has changed since then. Because pricing hasn’t moved notably in the past few years, people think they’re getting great value for what they’re paying, when the reality is quite the opposite. Organizations need to think about pricing in three areas: storage, network download, and dedicated (private) network connection.


Storage

Storage pricing has settled down a bit. For instance, some people purchase block storage for 5 cents per gigabyte per month – but if they want performance, they have to pay extra for it. For most high-performance workloads like databases, organizations may actually be paying several times more than they need to, most likely because they’re paying for each individual input/output operation.
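To make that concrete, here is a back-of-the-envelope sketch; the 5-cents-per-gigabyte figure comes from the text, while the per-IOPS rate and the workload size are hypothetical assumptions:

```python
# Back-of-the-envelope monthly block-storage bill: a capacity charge plus
# a per-provisioned-IOPS charge. The $0.05/GB rate is from the text; the
# IOPS rate and the example workload are hypothetical assumptions.

GB_MONTH_PRICE = 0.05     # $/GB/month (from the text)
IOPS_MONTH_PRICE = 0.065  # $/provisioned IOPS/month (assumed)

def monthly_cost(capacity_gb: float, provisioned_iops: float) -> float:
    return capacity_gb * GB_MONTH_PRICE + provisioned_iops * IOPS_MONTH_PRICE

# A 1 TB database volume that needs 10,000 IOPS:
base = monthly_cost(1000, 0)        # capacity only
perf = monthly_cost(1000, 10_000)   # capacity plus performance
print(f"capacity only: ${base:.2f}, with performance: ${perf:.2f}")
```

Under these assumed rates, the performance charge dwarfs the capacity charge, which is the dynamic described above for high-performance database workloads.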

Network download 

To serve many customers, organizations need a certain level of outbound network bandwidth, which is where most vendors charge a lot. Even when it’s tiered, this can cost tens of thousands of dollars above the base cost, even though there’s no technical reason for providers to charge more for outbound bandwidth than for inbound. That pricing model is simply how the market started, and it has become accepted as the norm.
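The asymmetry can be sketched with a toy tiered egress calculator; the tier boundaries and per-gigabyte rates below are hypothetical, loosely modeled on typical public-cloud price sheets, while inbound traffic is free:

```python
# Sketch of tiered egress (outbound) pricing. The tiers and rates are
# hypothetical assumptions; ingress costing $0 illustrates the
# inbound/outbound asymmetry described above.

# (tier ceiling in GB, $/GB), hypothetical tiers
EGRESS_TIERS = [(10_000, 0.09), (50_000, 0.085), (float("inf"), 0.07)]

def egress_cost(gb: float) -> float:
    cost, prev_ceiling = 0.0, 0.0
    for ceiling, rate in EGRESS_TIERS:
        billable = min(gb, ceiling) - prev_ceiling
        if billable <= 0:
            break
        cost += billable * rate
        prev_ceiling = ceiling
    return cost

print(f"100 TB outbound: ${egress_cost(100_000):,.2f}; inbound: $0.00")
```

Even with tier discounts, a content-heavy application pushing 100 TB a month would owe thousands of dollars for traffic that costs nothing in the other direction.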

Dedicated network / private network connection 

Most vendors charge not only for the type of connection you have, but also for additional bandwidth on a monthly basis. This essentially penalizes production application use of the cloud. Like outbound network bandwidth, this is another area where newer vendors are changing the model.

In general, organizations should be looking at what they’re really trying to do with their applications and what resources they need to make that happen. Once they do that, they’ll realize that the cost and pricing categories actually haven’t changed as much as they think they have in the past few years. Rather, they’re simply paying much more than they should for their production workloads.

It seems most companies have accepted that we live in a multi-cloud world. What workloads work best where? Beyond regulatory reasons, what is the main argument for hybrid cloud?

The majority of server computing still occurs in private data centers. In fact, if you look at the number of servers shipped every year, 80% are still being shipped not to cloud providers but to customers’ data centers. That means the benefits of the cloud are not yet available for many workloads. And many people want to move to the cloud but face too many barriers.

Our big initiative was to build a cloud that could enable organizations to move their existing workloads to the cloud. That shaped a lot of our decisions, both in the product definition itself and in the underlying architecture and its implementation. People want to be able to bring their past but also build their future. To do so, you have to offer all of the benefits people expect from the cloud: you need to be able to spin up a VM and tear it down; you need to pay for just what you use; you need elastic resources; you need value-added services on top; you need configuration orchestration tools; you need Chef support; you need all of the new things as well as the old things.

Within this multi-cloud world is where we can help customers make their lives just a little bit easier. 

How is serverless impacting the IaaS market, the enterprise, and operations?

The idea around serverless functions is to take a specific activity and package it so that it can run and scale completely on its own. These functions don’t require an entire stack, like an OS and middleware, and they only need to run when called upon, rather than all day as an entire application would.

Building this way has huge potential to improve efficiency in the infrastructure and enterprise markets. In theory, you use only the resources needed, and the function is self-contained, meaning it requires less maintenance. This practice also follows the development paradigm used by most of today’s programmers: building highly specific components rather than integrating 100 things into a single application.
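A minimal sketch of what such a self-contained function looks like; the handler signature mimics common function-as-a-service platforms but is a hypothetical illustration, not any particular vendor’s API:

```python
import json

# Hypothetical FaaS-style handler: one narrow, self-contained task that
# runs only when invoked, with no OS, middleware, or long-lived
# application around it.
def handler(event: dict, context: object = None) -> dict:
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"})}

# The platform would invoke this once per event; locally we can just call it:
print(handler({"name": "serverless"}))
```

Because the unit of deployment is a single function, the platform can bill per invocation and scale copies up and down without any server management by the developer.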

We’re still very early in this trend, but serverless has a lot of potential for infrastructure and enterprise as long as it can scale. And operations will become simpler because much of today’s work won’t require as much maintenance.

Automation and artificial intelligence have been discussed at length in the past few years, but how will they impact the cloud market? Will they change the way companies manage their cloud deployments? How?

Automation and artificial intelligence (AI) will be the catalyst of efficiency improvements for organizations and the cloud market. Not only will automating processes allow organizations to deploy something in a fraction of the time, but it will also reduce the overhead required to keep infrastructure resilient and available. These saved resources will free up employees’ time and allow them to focus on more strategic roles within the company.

A good example of modern automation is Kubernetes. Let’s say a developer uses Kubernetes to define the infrastructure state to be a certain size and capacity. Kubernetes will automate the deployment. But if a set of containers fails, Kubernetes will also determine the resources needed to get back to the steady state without any human intervention. These technologies will allow IT to provision and maintain an environment where the service does it all for them. All they’ll have to do is define the state, and the service will take it from there.
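The desired-state idea can be illustrated with a toy reconciliation loop; this is purely illustrative Python, not the real Kubernetes API:

```python
# Toy reconciliation loop: the user declares only the desired state, and
# the controller converges the actual state toward it with no human
# intervention. Illustrative only; not the real Kubernetes API.
def reconcile(desired_replicas: int, running: list) -> list:
    running = [c for c in running if c["healthy"]]   # drop failed containers
    while len(running) < desired_replicas:           # replace what failed
        running.append({"healthy": True})
    return running[:desired_replicas]                # scale down any excess

# One of three containers has failed; reconciling restores the steady state.
state = [{"healthy": True}, {"healthy": False}, {"healthy": True}]
state = reconcile(3, state)
print(len(state))
```

Real controllers run this loop continuously, which is why the operator only has to state the target size and capacity rather than script each recovery step.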

Edge computing is another trend that’s being increasingly discussed – where does it fit in today’s market? How are cloud providers adapting to this and how will it change the way enterprises manage their infrastructure?

Right now, a lot of integration is happening to allow data to pass from the edge to the core and vice versa. The next step is that many more services will emerge midway between the core cloud infrastructure and the far edge of devices.

As time progresses and more edge points become available, the work is going to happen much closer to the edge rather than all the way back at the core infrastructure layer. Some cloud providers are in the early stages of offering services that run very close to the edge. And moving forward, I think there will be more actual application processing and storage done at the edge. You can see some signs of that advancement in certain capabilities and services in the market, but it’s still early.

Leo Leung, Senior Director, Products & Strategy at Oracle Cloud Infrastructure

Image Credit: Everything Possible / Shutterstock

Leo Leung
Leo Leung is Senior Director of Products & Strategy at Oracle. He is a product manager and strategist who is passionate about building products that improve the way people work.