At the other end of the performance spectrum sit the genuinely heavyweight applications, and judging what server resources are needed to meet end-user demand is a daunting task.
Often the designer will take whatever benchmarks are available and make the 'informed' decision to double everything - CPUs, memory, network cards and so on - to be on the safe side.
The result is that for many 'run of the mill' applications, and even some of the heavier ones, the power of the server deployed far exceeds what is needed to deliver the performance the customer requires.
Servers are cheap enough, but consider the waste. A recent study at one NHS customer revealed that across their whole server estate, measured over a 31-day period, CPU utilisation peaked at only 17% of the total processing power available, with 60% of the servers peaking at just 7% utilisation.
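As a rough illustration of where such figures come from, estate-wide peaks like these can be derived by sampling each server's CPU utilisation periodically and summarising the per-server peaks. The sketch below is hypothetical: the server names, sample values and the 7% threshold are invented for illustration, not taken from the NHS study.

```python
def peak_utilisation(samples):
    """Peak CPU utilisation (%) observed in a list of periodic samples."""
    return max(samples)

def estate_summary(estate, low_peak_threshold=7):
    """Summarise an estate given a {server: [utilisation samples]} mapping.

    Returns the highest peak seen anywhere in the estate, and the
    percentage of servers whose peak never exceeded the threshold.
    """
    peaks = {name: peak_utilisation(s) for name, s in estate.items()}
    estate_peak = max(peaks.values())
    low = sum(1 for p in peaks.values() if p <= low_peak_threshold)
    return estate_peak, low / len(peaks) * 100

# Invented sample data: per-server CPU utilisation percentages.
estate = {
    "web-01":  [3, 5, 7, 4],
    "web-02":  [2, 6, 5, 3],
    "db-01":   [9, 17, 12, 8],
    "file-01": [1, 2, 4, 3],
}
estate_peak, pct_low = estate_summary(estate)
print(estate_peak, pct_low)  # 17 75.0
```

In practice the samples would come from a monitoring agent (sar, perfmon or similar) rather than a hard-coded dictionary, but the arithmetic is the same.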
The result is obvious - wasted energy to run them and wasted energy to cool them. Then consider the hidden costs of this 'one application, one server' world.
The time and cost of specifying a standalone server, then ordering, delivering, installing and maintaining it, add up considerably.
Server consolidation through virtualisation addresses all the issues raised. The technology delivers cost benefits through affordable, and more flexible, disaster recovery.
It also drastically reduces recovery point and recovery time, enough to meet the majority of organisations' business continuity objectives.
Consolidation achieves much higher rates of server resource utilisation, delivering measurable energy savings as a result.
Being able to commission a virtual server within an hour, and to easily generate and manipulate test environments to size resource needs accurately, improves system administration efficiency and gives the organisation the agility to react to changing business needs.