Server, or application, virtualisation isn’t new. Remember the old mainframes? They used multi-tasking to share the resources of a single computer, and its single operating system, amongst many applications.
It was only with the advent of the PC, and of the server to “serve” multiple PCs, that we discovered the benefits of the client-server topology. Unfortunately, the operating systems of these servers were not robust enough to handle more than one application at a time.
Even today, the majority of application providers insist on their application having its own dedicated server – otherwise, “we can’t support it”. This has led to the server sprawl we see around us.
Technology has moved on, and we now have quad-core processors and servers that can house 32 (or more) of them. Tomorrow they’ll be faster still.
Ever more powerful processors are built on the premise that applications are becoming more sophisticated and more demanding of clock speed.
But last year’s processors are no longer available, so even the lowest-powered server on the market today is more than capable of supporting numerous applications.
An application supplier will publish a server specification to support the workload the customer requires – and these specifications will sometimes still mention a Pentium III.