The hidden costs of virtualisation and how to overcome them

Virtualisation has widely been seen as one of the most cost-saving server technologies to emerge in the last decade. The flexibility of virtual machines, which can bring up whole servers as and when they are needed and shut them down when they are not, has in theory meant that general-purpose server hardware can readily be re-allocated from one task to another as necessary.

In theory, then, money is no longer wasted on idle resources that were over-specified for their original role. But practice doesn't always follow theory, as the concept can obscure hidden costs. In this feature, we uncover some of these hidden costs and discuss the steps a network administrator can take to limit their impact.

One of the key areas where virtualisation has hidden costs comes from the very same feature that makes it so useful. To achieve its much-feted flexibility, virtualisation has to rely on being "jack of all trades but master of none," so that optimisations for one task don't mean a significant slow-down in another.

As a result, hardware will be designed to give the best possible performance across a broad range of the most frequently used applications. But this will mean that it won't be able to deliver best-in-class performance for any given application compared to hardware that was designed specifically with that task in mind. So if you do know which applications you will be running, and how much performance will be required, there is a good chance that dedicated hardware will be more cost-effective than full virtualisation.

Linked to this, multi-core scaling is another area where virtualisation can create serious performance bottlenecks. Even when a virtual machine has dedicated use of the underlying hardware and isn't using emulation, it can still deliver performance several times slower than the non-virtualised equivalent.

The main reason for this is the intervention of the virtual machine manager during synchronisation-induced idling in the application itself, the host operating system or supporting libraries. Software can reduce the detrimental effect of these idle periods through a process of idleness consolidation.

This interleaves workloads so that each active virtual core is fully utilised, allowing unused cores to power down and saving energy costs. However, many virtualisation systems do not work this way, so their virtual cores spend a fair amount of time sitting idle whilst they wait for the next block of work.
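The idea behind consolidation can be sketched as a packing problem. The following is a toy illustration, not any particular hypervisor's scheduler: the workload figures and the first-fit packing strategy are assumptions for demonstration only.

```python
# Toy sketch of idleness consolidation: pack intermittent workloads onto as
# few virtual cores as possible, so the remaining cores can power down.

def consolidate(busy_fractions, capacity=1.0):
    """First-fit-decreasing: assign each workload (its average busy fraction
    of a core) to the first core with spare capacity; return per-core loads."""
    cores = []
    for load in sorted(busy_fractions, reverse=True):
        for i, used in enumerate(cores):
            if used + load <= capacity:
                cores[i] = used + load
                break
        else:
            cores.append(load)  # no core had room: bring another core online
    return cores

# Eight workloads that each keep a dedicated core only 20-40% busy.
loads = [0.4, 0.3, 0.3, 0.2, 0.2, 0.2, 0.4, 0.3]
packed = consolidate(loads)
print(len(loads), "cores naively ->", len(packed), "cores after consolidation")
```

Here eight workloads that would each occupy a mostly-idle core are interleaved onto three fully-utilised ones, leaving the other five free to power down.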

Another hidden cost is more directly financial. Software licences may be priced on a per-core basis for the host server, even if only some of those cores will actually run virtual environments using that particular piece of software. There may be no contractual right to buy fewer licences than the server's full capacity, although some vendors will allow a sub-capacity licence agreement.

Even then, it will probably be necessary to implement a dynamic licence-auditing system that tracks how many licences are in use and ensures the number purchased is never exceeded. Either way, an extra cost is involved.

A company will either have to buy more licences than it really needs, or purchase and implement an additional auditing system to prove it is complying with its licensing terms. In the worst case, where a virtualisation server is considerably over-specified for a particular application, whether for failover reasons or because it runs a number of other virtual tasks, a company may end up paying for many more licences than it ever actually uses.
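The scale of the problem is easy to see with some simple arithmetic. The core counts and per-core price below are hypothetical figures chosen purely for illustration:

```python
# Illustrative comparison of full-capacity vs sub-capacity licensing.
# All figures are hypothetical examples, not real vendor pricing.

cores_in_host = 32        # total physical cores in the virtualisation host
cores_used_by_app = 8     # cores actually assigned to VMs running the software
price_per_core = 1500     # assumed per-core licence price

full_capacity = cores_in_host * price_per_core
sub_capacity = cores_used_by_app * price_per_core

print("Full-capacity licences:", full_capacity)
print("Sub-capacity licences: ", sub_capacity)
print("Cost of over-licensing:", full_capacity - sub_capacity)
```

Under these assumed numbers, licensing the whole host costs four times as much as licensing only the cores the application actually uses.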

On a related note, tracking costs in general when IT services are virtualised is an order of magnitude more complicated as well. In a traditional IT environment, a department or application will have specific hardware, software, and infrastructure allocated to it, and the costs for this will be clearly ascribed.

But in a virtualised environment, the hardware and infrastructure are shared across all the departments and applications that use them, and allocated dynamically as required. Usage levels are constantly in flux, so keeping track of how much each department or application is consuming is far from straightforward. That makes it hard to build the solid, data-driven business case needed to justify new capacity for a particular scenario.

Some of the most sophisticated integrated cloud-based virtual server systems make it possible to allocate costs to various different types of deployment and their utilisation levels, or to the underlying server resources used.

This makes it possible to track how much different usage scenarios are costing relative to one another, and to set that against the revenue these activities generate, so development budget can be allocated accordingly. But it requires extensive work modelling the infrastructure, hardware and software licensing costs of different types of virtual machine, which is itself a cost.
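At its simplest, this kind of chargeback boils down to splitting a shared host's cost in proportion to measured usage. The department names, metered core-hours and monthly cost below are assumed figures for illustration only:

```python
# Minimal chargeback sketch: split a shared host's monthly cost across
# departments in proportion to their metered usage. All figures hypothetical.

monthly_host_cost = 9000.0  # hardware, power and licensing for the shared host

# Metered core-hours per department over the month (assumed measurements).
usage = {"web": 4000, "reporting": 1000, "batch": 3000}

total_hours = sum(usage.values())
chargeback = {dept: monthly_host_cost * hours / total_hours
              for dept, hours in usage.items()}
print(chargeback)
```

Real chargeback systems must also weight different resource types (memory, storage, I/O) and cope with usage that changes minute by minute, which is where the extensive modelling work comes in.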


Where all these hidden costs are becoming a problem, a solution like HP's Moonshot could be the answer. The cartridge-based approach it takes to provisioning, and the relatively low cost of these cartridges, means that a single system can serve multiple different applications, but allow each one to scale as required.

So some of the advantages of virtualisation are preserved, although it's not possible to dynamically re-allocate resources from one task to a radically different one. This is because the Moonshot cartridges are tailored to a selection of frequently required server types at the hardware level, and a company will purchase the type of cartridge it needs for a particular task.

But the advantages of this can be great, as HP's Moonshot avoids a significant hidden cost of virtualisation by supplying servers that really are optimised for specific tasks. Only the truly shared resources, such as power and networking infrastructure, are kept common. The quantity and type of cores, memory, and storage in each cartridge are all balanced for best provisioning of service types such as Web servers, Web caching, DSP-based calculations or remote virtual desktops.

The cost of each service is kept transparent, because it relates directly to the cost of the cartridges used for that service, and licensing can similarly be limited to only the servers that actually run that particular application.

Although virtualisation will continue to have a huge amount to offer the future of computing, in circumstances where its hidden costs could outweigh the benefits, HP's Moonshot can supply an alternative where the costs, and therefore the gains, are very clear.