The headline costs offered by public cloud providers are increasingly attractive, with each server or instance costing only a few pounds per month. However, there is much more to running a service than server capacity, and if organisations are not careful their cloud costs can quickly spiral out of control.
This doesn’t mean that public cloud is necessarily more expensive – simply that it is vital to understand how the services you are proposing to migrate to are charged for, and prepare a comprehensive business case. Once you have moved services to the cloud and retired your in-house infrastructure, you have to accept what your chosen cloud provider gives you unless you migrate services again.
The much-vaunted ‘portability’ of cloud services between providers is not yet proven, particularly for complex applications, and it is not in providers’ interests to make it easy. Their focus is on simplifying the migration of your IT into their cloud environments, and the environment design will be specific to that provider. Migrating your service off it, or better still dynamically moving it as pricing changes, is a more complex proposition, and will almost certainly require third-party software and independent expertise, with additional cost and complexity.
Before moving any services to the cloud, in-house teams need a detailed understanding of three things: how their application works, covering both its characteristics (such as how much data flows between servers, and into and out of the application) and its dependencies (such as security, access and authentication requirements); exactly what is and is not included in the public cloud service; and how the provider’s charging model works.
Public cloud is like an empty house
Buying public cloud is like buying the shell of a house – you can live in it, but you need utilities, flooring and furniture to make it into a home. In addition, these are not individual houses but massive virtual complexes – so you are in effect sharing the bathroom with other residents. The security provided covers just the data centre itself or, with SaaS, access to the core application, so each resident has to fit their own front door locks to prevent undesirables wandering in.
A helpful comparison is the move from physical to virtual servers. Physical servers had to be justified, bought, installed and configured, but although they were wasteful you knew immediately how many you had. Virtual servers were invisible and easy to stand up, and so human nature meant that they were never turned off. Cloud providers base their costs on a similar mindset; users tend to keep servers, data and all network traffic running, so end up paying more than they anticipated.
Take a routine application such as CRM. Eight hours of cloud a day, plus the ability to turn it off at weekends, can look significantly cheaper than fully loaded internal costs. However, running that application requires additional systems, such as login/authentication and firewall/network services, which need to be powered up beforehand. Shutdown and restart have to be sequenced, and you need back-up, so 9-5 quickly becomes 7-9 or longer. Then you have remote workers who want to log in at any time of the day or night, so your eight hours a day soon become 16, at which point you start to question whether it is worth shutting down at all.
Now you have 24x7 running, and your costs are three times the headline price. Then check what the basic service charge includes. If other elements are required to run the application safely and securely and are not included, such as security, resilience, management, patching and back-up, these also need to be factored in.
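The schedule creep described above can be sanity-checked with a few lines of arithmetic. This is a rough sketch with an assumed, illustrative hourly rate, not any provider’s actual price; the exact multiple depends on which baseline schedule you compare against.

```python
# Back-of-envelope comparison of running schedules for a metered instance.
# HOURLY_RATE is an assumed, illustrative figure, not a provider's price.

HOURLY_RATE = 0.10  # assumed cost per instance-hour, in pounds

def weekly_cost(hours_per_day, days_per_week):
    """Metered compute cost for one week on a given running schedule."""
    return hours_per_day * days_per_week * HOURLY_RATE

headline = weekly_cost(8, 5)    # the "9-5, weekdays only" plan
extended = weekly_cost(16, 5)   # after remote workers stretch the day
always_on = weekly_cost(24, 7)  # 24x7 running

print(f"8h x 5d:  £{headline:.2f}/week")
print(f"16h x 5d: £{extended:.2f}/week")
print(f"24x7:     £{always_on:.2f}/week ({always_on / headline:.1f}x headline)")
```

Even before the add-on services are counted, the always-on schedule is several times the headline figure.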
Now allow for the cost of migration, the sunk costs of a computer room (unless all your equipment is coming to end of life), perhaps a disaster recovery solution and staff who know the systems. What was initially an easy cost justification has just become much more expensive.
Understand your service characteristics
The first prerequisite before moving services to cloud is to ensure that you understand cloud design principles and know the characteristics and requirements of the applications you plan to move. SaaS is relatively straightforward; PaaS and particularly IaaS are where more expertise is needed to ensure you design and optimise for your target cloud provider. Each provider does it a little differently and charges in different ways. If you have factored in the way your applications work when designing services prior to moving them to cloud, you are more likely to avoid unpleasant surprises.
This also means understanding the application’s likely usage patterns and how quickly its use is expected to grow, in terms of both user numbers and data volumes. All public cloud services are metered, which can be good or bad, depending on the application and its use. You pay per GB of data stored, and almost every organisation is seeing data volumes increase exponentially. The best way to keep this under control is for your IT team to implement data classification and then ask each department: “we have this volume of your data; how important is it to the business and can we delete it?”
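Because storage is billed per GB per month, steady data growth compounds directly into the bill. The sketch below illustrates the effect; the price and growth rate are invented assumptions for illustration, not provider figures.

```python
# How a metered storage charge compounds as data volumes grow.
# Both constants below are illustrative assumptions, not provider prices.

PRICE_PER_GB_MONTH = 0.02  # assumed storage price, pounds per GB per month
MONTHLY_GROWTH = 0.05      # assumed 5% data growth per month

def storage_bill(initial_gb, month):
    """Storage charge in a given month, with volume compounding monthly."""
    volume = initial_gb * (1 + MONTHLY_GROWTH) ** month
    return volume * PRICE_PER_GB_MONTH

month_0 = storage_bill(1000, 0)    # starting bill for a 1 TB estate
month_36 = storage_bill(1000, 36)  # the same estate three years on

print(f"month 0:  £{month_0:.2f}")
print(f"month 36: £{month_36:.2f}")
```

At 5 per cent monthly growth the same estate costs over five times as much three years later, which is why the data classification and deletion exercise above pays off.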
Learn how charges are calculated
As already mentioned, buying cloud is not just about servers and storage. There will be additional costs for ancillary requirements such as IP addresses, domain resilience and data transfers into, out of and between servers which need to be considered when preparing budgets.
As an example, an IaaS instance in AWS incurs a minimum of five, and potentially eight, metered costs for a single Internet-facing server; Azure and other public cloud providers are comparable. The complexity increases if your organisation is hosting multi-server environments. If other elements are required to run the application, such as security, resilience, management, patching and back-up, these will appear as additional charges. This is less of an issue with SaaS, which usually has a standard per-user, per-month charge, but with IaaS, and to some extent PaaS, other elements are added on top.
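To see how a single server’s bill decomposes, it helps to lay the metered components out side by side. The line items and amounts below are invented for illustration only; they are not AWS or Azure prices, merely a sketch of the shape of such a bill.

```python
# Illustration of how one Internet-facing IaaS server can carry several
# metered line items. All names and amounts are invented for illustration.

line_items = {
    "instance hours": 35.00,
    "block storage": 8.00,
    "snapshot storage": 3.50,
    "public IP address": 2.50,
    "data transfer out": 6.00,
    "load balancer": 12.00,
    "monitoring": 1.50,
    "support uplift": 4.00,
}

total = sum(line_items.values())
print(f"{len(line_items)} metered components, total £{total:.2f}/month")
```

The point is not the numbers but the count: budgeting from the instance price alone misses most of the rows.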
In many services there is a cost per GB each time servers in different domains talk to each other, and a second cost per GB to send data over the Internet. For example, in AWS you are charged if you use a public IP address, and because you do not buy dedicated bandwidth there is an additional data transfer charge against each IP address – which can be a factor if you create public-facing websites and encourage people to download videos. Every time a video is served, you incur a charge, which may seem insignificant on its own but will soon add up if, say, 50,000 people download your 100MB video. In some applications servers have a constant two-way dialogue, so costs that initially seem small can quickly escalate.
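The video example above is easy to put numbers to. The per-GB egress rate here is an assumed figure for illustration, not a quoted provider price:

```python
# Rough data-transfer (egress) cost for 50,000 downloads of a 100 MB video.
# EGRESS_PRICE_PER_GB is an assumed figure, not a provider's actual rate.

EGRESS_PRICE_PER_GB = 0.07  # assumed outbound transfer price, pounds per GB

def egress_cost(downloads, file_mb):
    """Total egress charge, using decimal units (1 GB = 1000 MB)."""
    gb_out = downloads * file_mb / 1000
    return gb_out * EGRESS_PRICE_PER_GB

cost = egress_cost(50_000, 100)
print(f"{50_000 * 100 / 1000:,.0f} GB transferred -> £{cost:.2f}")
```

Five terabytes of outbound traffic from one popular file is a real line on the bill, and it recurs every month the video stays popular.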
The same issue applies with resilience and service recovery, where you will be charged for the data traffic between domains to keep a second DR or failover environment in a different region or availability zone. To understand costs accurately you need to know the frequency of snapshots or replication traffic, how big those snapshots are and the rate of change of data. AWS and Azure charge resilience in different ways; both will keep a copy and bring it up if a host fails, but with AWS you need a different type of service and pay extra, whereas for Azure it is included as standard.
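Those three variables – snapshot frequency, snapshot size and rate of change – are enough for a first-pass replication cost estimate. The rate and the example figures below are illustrative assumptions, not provider prices:

```python
# Estimate of inter-region replication traffic cost for a DR copy.
# The per-GB rate and the example inputs are illustrative assumptions.

INTER_REGION_PRICE_PER_GB = 0.02  # assumed cross-region transfer price, £/GB

def monthly_replication_cost(snapshots_per_day, delta_gb_per_snapshot, days=30):
    """Monthly charge for shipping snapshot deltas to a second region."""
    gb_moved = snapshots_per_day * delta_gb_per_snapshot * days
    return gb_moved * INTER_REGION_PRICE_PER_GB

# e.g. hourly snapshots, each shipping a 2 GB changed-data delta
print(f"£{monthly_replication_cost(24, 2):.2f}/month in replication traffic")
```

Halving the snapshot frequency or shrinking the delta (by replicating only changed data) feeds straight through to the bill, which is why the rate of change of data matters as much as the total volume.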
There is also an array of options available for storage. MS Azure has five storage options to choose from, plus variables in each option, and each has different dependencies, as well as differing terminology. All these need to be understood, compared and evaluated as part of choosing a service. If you find storage and back-up costs escalating, IT staff need to act to prevent the situation getting worse.
The best way to avoid unexpected costs is to look closely at the different types of service available, such as on-demand, reserved or spot instances, the relevant storage, networking and security required, and match your workload and requirement to the instance type. Reserved instances are much cheaper per hour than on-demand, but you are tied in for a specified period, which means you are unable to move quickly should your situation change or a better commercial option be introduced. If an application is not optimised for public cloud, consider retaining it in-house or use a managed cloud service with defined, predictable costs.
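The on-demand versus reserved trade-off comes down to a break-even utilisation. The rates below are assumed, illustrative figures, not any provider’s pricing; the structure of the calculation is what matters.

```python
# Break-even sketch: reserved vs on-demand instance pricing.
# Both rates are assumed, illustrative figures, not provider prices.

ON_DEMAND_RATE = 0.10  # assumed pounds per hour, pay only for hours run
RESERVED_RATE = 0.06   # assumed pounds per hour, billed for every hour

HOURS_PER_YEAR = 8760

# Reserved capacity is paid for whether used or not; on-demand only for
# hours actually run. Find the utilisation where the two costs cross.
reserved_yearly = RESERVED_RATE * HOURS_PER_YEAR
break_even_hours = reserved_yearly / ON_DEMAND_RATE

print(f"reserved: £{reserved_yearly:.2f}/year")
print(f"on-demand is cheaper below "
      f"{break_even_hours / HOURS_PER_YEAR:.0%} utilisation")
```

At these assumed rates, a workload running less than roughly 60 per cent of the time is cheaper on-demand – which is exactly why matching workload patterns to instance types matters before committing to a reservation.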
Look for a similar risk and value framework
This does not mean that public cloud is necessarily more expensive or a bad choice, but what may have seemed like an easy cost justification becomes a much more nuanced decision when all factors are included.
It is also important not to discount the soft elements of service delivery. You could accidentally increase costs if you select a cloud platform or supplier that does not have the same risk and value framework in their processes and operations.
Some services can and should run in public cloud, some in private cloud and some should remain on-premise, creating a hybrid infrastructure that needs managing and monitoring. Organisations should therefore retain key skills in-house to control both costs and the security of their new hybrid cloud environment. You also need to measure and audit your chosen provider to ensure relevant security is applied.
Richard Blanford, managing director, Fordway