
Reservations about reserved instances

(Image credit: Skyhigh)

When it comes to Reserved Instance (RI) purchases as a mechanism to control cloud costs, it pays to stay reserved. Although the basic proposition sounds attractive, the need to reserve a little extra judgment and curb your enthusiasm about this pricing structure stems from a set of operational realities that warrant some examination and clarification.

The term Reserved Instance has been popularised by well-known cloud platform providers. The ‘instance’ part is, of course, the unit of cloud capacity purchased, and the ‘reserved’ part of the equation denotes a defined consumption of that cloud power, pre-purchased to be used over a specified period of time.

Although the core promise of cloud hinges around flexibility and the ability to turn on and turn off consumption in an on-demand delivery model, Reserved Instance customers agree to pay for the full scope of a cloud contract for what is usually one or three years. The ‘benefit’ is that the base rate charged by the cloud services provider is lower. Much lower.

But benefits can of course be both real and perceived.

The inconvenient truth is that it’s tough to purchase cloud computing instances that accurately deliver on real-world use cases for any extended period of time. Why? There are many reasons, but a key factor is that the type of cloud purchased may be right today, but not configured according to application and data requirements further down the line.

Cloud optimisation conundrums

The purchase of Reserved Instances has both financial and technical aspects. The financial part is a calculation that assumes the capacity will be needed for a period of one or three years and works out that it would be cheaper to make that commitment upfront and benefit from the discount cloud providers offer. This is a relatively straightforward decision, but it assumes a number of more technical things: for example, that the instance being used is the correct one in the first place.
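To make the financial side concrete, the arithmetic boils down to a break-even check: the reservation only pays off if the instance actually runs for enough hours of the term. The short Python sketch below illustrates the idea with hypothetical hourly rates; the figures are illustrative assumptions, not any provider's actual pricing.

# Break-even arithmetic for a Reserved Instance decision (hypothetical rates).
on_demand_rate = 0.192      # USD per hour, on-demand (illustrative figure)
ri_effective_rate = 0.120   # USD per hour, effective rate of a 1-year reservation (illustrative)
hours_per_year = 24 * 365

ri_annual_cost = ri_effective_rate * hours_per_year   # paid regardless of actual usage
break_even_hours = ri_annual_cost / on_demand_rate    # hours of real use needed to beat on-demand

print(f"Reservation costs ${ri_annual_cost:,.0f} per year")
print(f"It only beats on-demand above {break_even_hours / hours_per_year:.0%} utilisation")

Anything that pushes real utilisation below that line, whether a rightsizing exercise, a decommissioned workload or a shift to a different instance family, quietly turns the discount into a loss.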

Some clouds are optimised for high transactional throughput, some are tuned for a specific type of memory performance, some for processing, some for storage and so on, and many companies are unaware that they are using the wrong instance family type or size. The implications of locking in the wrong instance type or size are obvious. Additionally, if the instance being used is right today, will that still be true for the term of the commitment?

If a customer looks at their requirements on January 1st with the help of analytics, they should be able to specify their cloud optimisation demands accurately enough, at least for the month ahead. But just because your cloud workload fits today, it doesn’t automatically follow that it will fit tomorrow and throughout the rest of the year.

Business markets change, lines of business close, governance and compliance rulings change, new ventures develop and unplanned mergers and acquisitions happen, all of which change the way enterprise organisations operate. Looking wider, enterprise software vendors themselves have specific upgrade paths and windows that a customer will have to architect into their use of any cloud backbone.

Furthermore, it is possible to sell unwanted Reserved Instances, purchase convertible ones (at a premium) and opt for zonal rather than regional Reserved Instances, which can act as a mechanism to guarantee capacity availability but reduce flexibility.

None of this is simple.

This complexity creates a massive headache for the CIO and their IT team who are looking to bring cloud-native advantages to the business. Perhaps less obviously, what actually happens is that after planning and estimation procedures have been exhausted, IT departments fall back on a system of ‘best guess’ and ‘tribal knowledge’ as their cloud procurement strategy.

Risky business

The upshot of ‘poorly purchased’ Reserved Instances is wasted cost, obviously, but there are wider implications. Running the ‘wrong shape’ of cloud creates risk because the system is not built to meet the application requirements that will be placed on it. Running an incongruous Reserved Instance also opens up operational inefficiencies, which can quickly lead to poor customer service and, ultimately, loss of revenue.

This problem is so acute that even Facebook CEO Mark Zuckerberg has called out the cost of cloud computing and suggested that it has become a bottleneck for progress in medical and scientific research. Zuckerberg is quoted explaining that his wife’s Chan Zuckerberg Initiative (CZI) social research foundation has found that progress is being impeded by the cost of compute and data, not by how long it takes to turn around experiments.

While some convertible Reserved Instance parameters can be changed, many standard ones can’t, so this is the reality we are dealing with. Even those that can be changed have strict rules that make it hard to convert them to meaningful uses with ease. And cloud services providers aren’t stupid: they offer the largest Reserved Instance discounts when customers agree to pay for the whole load up-front, as opposed to partial or no up-front payment.

The core rationale behind Reserved Instances is that an organisation should be able to look introspectively at its IT stack and its appetite for compute and data analytics/storage. The organisation should, in theory, be able to spot instance classes whose usage is predictable and constant enough to rank as possible contenders for reservation.

No free cloud lunch

But none of that insight comes free. Being able to self-analyse takes time and resources, and costs money. Efficient tools are needed if a customer is going to be able to use these cloud adoption tactics to their advantage.

In real terms, it is very complicated to know what type of Reserved Instance to purchase, when, and for what purpose. Companies can lose millions in this process, all while still believing they are saving money compared with buying on-demand.

Unused and underutilised Reserved Instances are as certain as death and taxes. Knowing which instances are active, which instances are going to waste, which instances are convertible and which aren’t, which instances are about to expire and which workload is in the wrong location in the first place, can put an unwieldy burden on even the most exacting and efficient IT department.
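As a rough illustration of what that tracking involves, the Python sketch below walks a list of reservation records and flags the common problem cases described above. The record fields, figures and thresholds are assumptions made for the example, not the output of any particular provider's API; in practice this data would come from billing and utilisation reports.

from datetime import date

# Hypothetical reservation records; real data would come from billing/usage exports.
reservations = [
    {"id": "ri-001", "family": "m5.xlarge", "utilisation": 0.38, "convertible": False,
     "expires": date(2020, 3, 1), "region": "eu-west-1"},
    {"id": "ri-002", "family": "r5.2xlarge", "utilisation": 0.92, "convertible": True,
     "expires": date(2021, 6, 30), "region": "us-east-1"},
]

UNDERUSED_THRESHOLD = 0.75           # assumed cut-off for "going to waste"
EXPIRY_WARNING = date(2020, 6, 1)    # assumed review horizon

for ri in reservations:
    flags = []
    if ri["utilisation"] < UNDERUSED_THRESHOLD:
        flags.append("underutilised")
    if ri["expires"] < EXPIRY_WARNING:
        flags.append("expiring soon")
    if not ri["convertible"]:
        flags.append("locked to instance family " + ri["family"])
    if flags:
        print(ri["id"], ri["region"], "->", ", ".join(flags))

Even a toy audit like this has to be kept current and re-run constantly, which is exactly the burden described above.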

As an important clarification in this story, we should note that AWS in particular has announced its new ‘Savings Plans’ offering in an attempt to quell some of the industry disquiet surfacing over inefficient Reserved Instance purchasing. Savings Plans are a flexible AWS pricing model that offers lower prices on EC2 and Fargate container services in exchange for a commitment to a consistent amount of usage (measured in dollars per hour) over a 1- or 3-year term. Customers who sign up are charged the discounted plan price up to their commitment, and the option offers savings of up to 72 per cent.
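To show the shape of that commitment, here is a simplified Python sketch of how an hourly spend commitment might be applied. The discount percentage and dollar figures are assumptions for illustration, and real AWS billing applies discounts per rate and per service rather than as one flat percentage.

# Simplified model of an hourly spend commitment (all figures hypothetical).
COMMITMENT_PER_HOUR = 10.00   # USD/hour committed for the 1- or 3-year term
DISCOUNT = 0.60               # assume covered usage is billed at 40% of on-demand

def hourly_bill(on_demand_usage):
    """Return the charge for one hour, given usage expressed in on-demand dollars."""
    discounted = on_demand_usage * (1 - DISCOUNT)       # what the usage costs at plan rates
    covered = min(discounted, COMMITMENT_PER_HOUR)      # the commitment absorbs this much
    overflow = (discounted - covered) / (1 - DISCOUNT)  # the remainder is billed on-demand
    # The full commitment is charged even if usage falls short of it.
    return COMMITMENT_PER_HOUR + overflow

print(hourly_bill(30.0))  # busy hour: commitment plus on-demand overflow -> 15.0
print(hourly_bill(5.0))   # quiet hour: still pay the full 10.0 commitment

The same caveat applies as with Reserved Instances: unused commitment is still paid for, so the headline discount is only realised if real usage tracks the commitment closely.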

There’s no such thing as a free lunch, and there’s no such thing as a self-monitoring Reserved Instance deployment. When you see any price promotion that claims to offer ‘up to 75 per cent’ off, it’s usually too good to be true. In this case it is theoretically possible, but practically, pragmatically and procedurally almost impossible to realise those savings in full. Only with automation software that can analyse, predict and recommend how to manage Reserved Instances 24/7 will a business have a shot at achieving such lofty savings.

Ayman Gabarin is SVP, EMEA, Densify