Cutting through the AWS and Azure cloud jargon confusion

Before I try to break down the AWS and Azure cloud pricing jargon, let me give you some context. I am a crusty old CTO who has been working in advanced technology since the 1980s. I have grown accustomed to the “deal of a lifetime” on the “technology of the decade” coming around about once a week. So you can believe me when I tell you I have a very low BS threshold for dishonest sales folks and bogus technology claims. Yes, I am jaded.

My latest venture is a platform that brings together multiple public cloud providers for cost control. And I can tell you firsthand that it is not for the faint of heart. It’s like being dropped off in the middle of the jungle in Papua New Guinea. Each cloud provider has its own culture, its own philosophy, its own language and customers, its own maturity level and, worst of all, its own pricing strategy, which makes it tough for buyers to manage costs.

AWS and Azure Terminology Differences 

You have probably read the comparisons of various services across the top cloud providers, as people try to wrap their minds around all the varying jargon used to describe pretty much the same thing. For example, let’s just look at one service: Cloud Computing.   

In AWS, servers are called Elastic Compute Cloud (EC2) “Instances”. In Azure they are called “Virtual Machines” or “VMs”. Flocks of these spun up from a snapshot according to scaling rules are called “Auto Scaling groups” in AWS. The same things are called “scale sets” in Azure.

Of course, cloud providers had to start somewhere, then they learned from their mistakes and improved. When AWS started with EC2, they had not yet released virtual private clouds (VPCs), so their instances ran outside of VPCs. Now all the latest stuff runs inside of VPCs. The older ones are called “classic” and have a number of limitations.

The same thing is true of Azure. When they first released, their VMs were not set up to use what is now their Resource Manager or be managed in Resource Groups (the moral equivalent of CloudFormation stacks in AWS). Now, all of their latest VMs are compatible with Resource Manager. The older ones are called, you guessed it … “classic”.

Both AWS and Azure have a dizzying array of instances/VMs to choose from, and doing an apples-to-apples comparison between them can be quite daunting. They have different categories: general purpose, compute optimized, memory optimized, storage optimized, etc.

Within each one of those, there are types or sizes. For example, in AWS the tiny, cheap ones are currently the “t2” family. In Azure, they are the “A” series. On top of that there are different generations of processors. In AWS, an integer after the family letter indicates the generation, like t2, m3, m4, and then there are sizes: t2.small, m3.medium, m4.large, r16.ginormous (OK, I made that one up).

In Azure, they use a number after the family letter to denote size, like A0, A1, A2, D1, etc., and a version suffix after that to tell what generation it is, like D1 versus D1 v2.
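For those who prefer to see the jargon side by side, here is a rough cheat sheet in Python that pairs the terms discussed above. The pairings are my own illustrative shorthand, not an official equivalence table from either vendor:

```python
# Rough cheat sheet of the AWS vs. Azure jargon discussed above.
# Pairings are illustrative shorthand, not official vendor equivalences.
CLOUD_JARGON = {
    "virtual server":      {"AWS": "EC2 instance",         "Azure": "Virtual Machine (VM)"},
    "auto-scaled fleet":   {"AWS": "Auto Scaling group",   "Azure": "scale set"},
    "legacy networking":   {"AWS": "classic (pre-VPC)",    "Azure": "classic (pre-Resource Manager)"},
    "resource grouping":   {"AWS": "CloudFormation stack", "Azure": "Resource Group"},
    "small, cheap family": {"AWS": "t2 (e.g. t2.small)",   "Azure": "A series (e.g. A1)"},
    "newer generation":    {"AWS": "m3 -> m4",             "Azure": "D1 -> D1 v2"},
}

for concept, names in CLOUD_JARGON.items():
    print(f"{concept:20}  AWS: {names['AWS']:22}  Azure: {names['Azure']}")
```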

The bottom line: this is very confusing for folks moving their workloads to the public cloud from on-premises data centers (yet another wonderland of jargon and confusion in its own right). How does one decide which cloud provider to use? How does one even begin to compare prices with all of this mess?

AWS and Azure Charging Differences 

To add to that confusion, they charge you differently for the compute time you use. What do I mean? AWS prices their compute time by the hour, and by hour they mean any fraction of an hour gets rounded up to a full hour: if you start an instance, run it for 61 minutes, then shut it down, you get charged for 2 hours of compute time.

Microsoft Azure pricing is listed by the hour for each VM, but they charge you by the minute. So, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).   
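To make the two billing models concrete, here is a minimal sketch in Python. The $0.10/hour rate is made up purely for illustration; the rounding behavior is the one described above:

```python
import math

def aws_hourly_cost(minutes_run: float, hourly_rate: float) -> float:
    """AWS billing as described above: any partial hour is rounded up to a full hour."""
    return math.ceil(minutes_run / 60) * hourly_rate

def azure_per_minute_cost(minutes_run: float, hourly_rate: float) -> float:
    """Azure billing: the hourly list price is prorated to the minute."""
    return (minutes_run / 60) * hourly_rate

# Run a box for 61 minutes at a hypothetical $0.10/hour list price
print(aws_hourly_cost(61, 0.10))        # 0.2    -- billed as 2 full hours
print(azure_per_minute_cost(61, 0.10))  # ~0.102 -- billed for exactly 61 minutes
```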

However, you really have to pay attention to the use case and the comparable instance prices. Let me give you a concrete example. I mentioned my latest venture, ParkMyCloud, earlier. We park (schedule on/off times for) cloud computing resources in non-production environments (without scripting, by the way). So, here is a graph of 6 months’ worth of data from an m4.large instance somewhere in Asia Pac. The m4 family is based on Intel Xeon Broadwell or Haswell processors, and it is one of the most commonly used instance types.

This instance is on a ParkMyCloud parking schedule, where it is RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. This instance, assuming Linux pricing, costs $0.125 per hour in AWS. From November 6, 2016 until May 9, 2017, this instance ran for 111,690 minutes. This is about 1,862 hours, but AWS charged for 1,922 hours and it cost $240.25 in compute time.   

Why the difference? When you start and stop instances, the cloud provider and network response can vary from hour to hour and day to day, depending on their load, so occasionally things will run that extra minute, and with hourly billing that extra minute means another full hour charged. And, even though this instance is on a parking schedule, when you look at the graph, you can see that the user took manual control a few times.

What would the cost have been if AWS charged the same way as Azure? It would have cost only $232.69. Well, a difference of about $7.56 is not too bad over the course of six months, unless you have 1,000 of these. Then it becomes material.
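If you want to check that math yourself, here is the same comparison with the numbers quoted above plugged in (the minute count, billed hours, and rate are the ones from this example):

```python
minutes_run  = 111_690   # total runtime, Nov 6, 2016 through May 9, 2017
billed_hours = 1_922     # hours AWS actually billed (partial hours rounded up)
aws_rate     = 0.125     # m4.large, Linux, $/hour

cost_as_billed     = billed_hours * aws_rate        # $240.25
cost_if_per_minute = minutes_run / 60 * aws_rate    # ~$232.69

print(f"Billed by the hour:   ${cost_as_billed:.2f}")
print(f"Billed by the minute: ${cost_if_per_minute:.2f}")
print(f"Difference:           ${cost_as_billed - cost_if_per_minute:.2f}")  # ~$7.56
```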

However, I wouldn’t rush to judgment on AWS. If you look at the comparable Azure VM, the standard-pricing DS2 v2, also running Linux, it costs $0.152/hour. So, this same instance running in Azure would have cost $290.39. Yikes!

Therefore, in my particular use case, unless Azure drops their prices to make their compute pricing more competitive, their per-minute billing really doesn’t save money.

Conclusion 

The ironic thing about all of this is that once you get past all the confusing jargon and the ridiculous approaches to pricing and charging for usage, the actual cloud services themselves are much easier to use than legacy on-premises services. The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and Azure need to make things a lot simpler, so that newcomers can make informed decisions.

From a pricing standpoint, AWS on-demand pricing is still more competitive than Azure for comparable compute engines, despite Azure’s more enlightened per-minute approach to charging for compute time. That said, AWS really needs to get in line with both Azure and Google, who charge by the minute. Nobody likes being charged extra for something they don’t use.

Dale Wickizer, Co-Founder/CTO, ParkMyCloud
