
Best Practices For Virtualisation Podcast

Although introducing virtualisation to the IT mix is a sure-fire way of saving money, in both the short and the longer term, rolling out the technology takes careful planning.

History has shown that deploying a successful virtualised operation requires a high degree of careful planning prior to deployment, as well as regular reviews to ensure that the IT resource is running efficiently.

In this context, it is important to understand that, whilst there is a lot of open source software in the virtualisation space, the costs of installing and maintaining that software should not be ignored.

This is a podcast interview with Professor John Walker, Visiting Professor, Dept of Computing, Nottingham-Trent University.

The on-costs associated with open source software can actually be greater than with commercial software, mainly because of the hidden costs associated with the learning curve of what are, to all intents and purposes, unsupported applications.

For all but the most experienced IT professionals, there may be an argument for going down the commercial licence route, on the basis that it is down to the vendor and/or systems integrator to provide solutions to problems that you, the client, may encounter.

This is particularly important in the early days of a virtualised system rollout, where inexperience and a lack of understanding of some of the basic principles of virtualisation are the order of the day.

There is also a need to resist the understandable urge to place all high-CPU-demand virtual servers on the same physical server; resources should instead be load-balanced across the available hosts.

VMware DRS, for example, dynamically allocates computing capacity across multiple physical servers which are aggregated into logical resource pools.
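The behaviour that tools like DRS automate can be illustrated with a simple greedy heuristic: each workload is placed on the pooled host with the most spare capacity. This is only a sketch of the idea, not VMware's algorithm; the host names, capacities and demands below are invented.

```python
# Greedy placement sketch: assign each VM (largest demand first) to the
# host in the pool with the most remaining CPU headroom.
# All capacity/demand figures are invented illustration values.

def place_vms(hosts, vms):
    """hosts: {name: capacity_mhz}, vms: {name: demand_mhz}.
    Returns {vm_name: host_name}."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)      # host with most headroom
        if free[host] < demand:
            raise RuntimeError(f"pool cannot satisfy {vm}")
        free[host] -= demand
        placement[vm] = host
    return placement

pool = {"esx01": 10000, "esx02": 10000}     # MHz of CPU per host
workloads = {"db": 6000, "web": 3000, "mail": 3000}
print(place_vms(pool, workloads))
```

Note how the two 3000 MHz workloads end up together on the second host rather than alongside the heavy database VM, which is exactly the "don't stack all the hot VMs on one box" advice above.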

Against this backdrop, it's also important to realise that, if you are planning a virtualisation implementation with existing servers and are starting to run out of power, connectivity or cooling, it may be worthwhile considering a hardware refresh.

One pitfall that many newcomers fall into is failing to consider industry- and application-specific issues. For instance, for applications that do not adapt well to multiple threads, some organisations partition their servers into a number of VMs, each with a CPU count matched to the number of threads the application supports.
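That sizing rule can be sketched as a small allocation function. The application names and core counts are invented examples, assuming a host whose physical cores are divided among the VMs.

```python
# Sketch: size each VM's vCPU count to the thread count its
# application can actually exploit (all figures are invented).

def partition(host_cores, app_threads):
    """app_threads: {app: max threads it scales to}.
    Returns {app: vcpus}, never exceeding the host's core budget."""
    sizes = {}
    remaining = host_cores
    for app, threads in app_threads.items():
        vcpus = min(threads, remaining)
        if vcpus == 0:
            raise RuntimeError(f"no cores left for {app}")
        sizes[app] = vcpus
        remaining -= vcpus
    return sizes

# A 16-core host running a single-threaded legacy app alongside
# better-threaded services: giving the legacy app more vCPUs would
# only waste cores it cannot use.
print(partition(16, {"legacy-batch": 1, "app-server": 8, "reporting": 4}))
```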

Another best practice that many firms overlook at the planning stages is the need to train staff to work with the new virtual environment.

Whilst virtualisation itself may be conceptually simple, it is important to remember that not all implementations are easy. As a result, members of IT staff working with the virtual platform need to understand the issues associated with virtualisation and to allow sufficient time to bring up new VMs.

One area where almost all firms using virtualised IT resources hit problems is becoming network-bound, which occurs when activities such as VM backup are constrained by limited network capacity.
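The constraint is easy to estimate with back-of-the-envelope arithmetic. The figures below are invented, and the 70% usable-bandwidth factor is an assumption, but the shape of the calculation shows why backing up several terabytes of VM images over a shared 1 Gb link rarely fits an overnight window.

```python
# Sketch: will nightly VM backups fit the backup window over a
# shared network link? (All figures below are invented examples.)

def backup_hours(total_gb, link_gbps, efficiency=0.7):
    """Estimated hours to move total_gb over a link_gbps link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    gb_per_hour = link_gbps / 8 * 3600 * efficiency  # Gb/s -> GB/h
    return total_gb / gb_per_hour

hours = backup_hours(total_gb=4000, link_gbps=1)  # 4 TB over 1 GbE
print(round(hours, 1))  # well beyond a typical overnight window
```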

HP advises that those organisations going down the virtualisation path should break down the planning and deployment processes into a series of stages.

The first stage involves setting expectations for the system.

In a traditional server environment, business units can come to expect full use of an entire server box, which usually provides more than enough capacity, making it seem like a bottomless resource.

But the dynamic nature of virtualisation means that resources are shared, and so are the costs of acquiring and managing them. Without clear guidelines and support from the rest of the organisation, it is all too easy to get conflicting requests, or even refusals to virtualise certain assets.

The next stage is to ensure that physical servers are not overloaded and that, if one server is taken offline for maintenance, the load on the remaining elements of the server farm does not become overstretched.
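This is the classic N+1 check: the farm's total load must fit on one host fewer than it actually has, so that any single host can be evacuated for maintenance. A minimal sketch, assuming identical hosts with invented load figures:

```python
# N+1 headroom sketch: can the farm absorb the evacuation of any
# single host for maintenance? (Capacity/load figures are invented.)

def survives_one_host_down(host_loads, host_capacity):
    """True if the farm's total load fits on N-1 hosts of
    host_capacity each, i.e. any one host can be taken offline."""
    return sum(host_loads) <= host_capacity * (len(host_loads) - 1)

# Three hosts of capacity 100 each:
print(survives_one_host_down([60, 55, 50], 100))  # 165 <= 200
print(survives_one_host_down([90, 85, 80], 100))  # 255 > 200
```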

HP advises IT managers to cluster physical servers and virtual servers together. By freely mixing virtual servers with physical servers, clustering can help you address resource spikes and make sure that mission-critical applications are appropriately balanced across multiple servers, regardless of whether they are physical or virtual.

Finally, IT managers should be aware that getting started with virtualisation is not always the simple and predictable experience that physical servers provide.

Not only do members of staff need new skills and tools, they also need a whole new way of viewing their IT infrastructure. This is where the support of an experienced systems integrator and/or vendor can prove invaluable.