Deciphering the data centre challenge

As businesses become increasingly reliant on digital infrastructure and the world moves online, companies must adapt accordingly in order to grow and develop. Digital disruption can often change the very nature of how an organisation does business, so IT departments and the data centre facilities team must also be prepared to evolve and support this.

Digital advancement is happening now, and it is taking place at a rate of knots. Concepts depicting the data centres of the future forecast highly dynamic models equipped with sophisticated software that will allow effortless transfer and management of workloads. However, in the meantime, operators of legacy data centres are left in a position of stasis. And these static data centres won’t keep up with the demands of tomorrow, unless we can improve flexibility.

What problems does an existing data centre face?

For operators and IT departments, the problem with what we refer to as a static data centre is striking a balance between minimising risk and optimising operational efficiency. As the potential consequences of overloading facilities and causing a breakdown are so severe, a climate of fear currently exists.

The desire to avoid risk has led to over-provisioning and under-utilisation on the IT side, in order to guarantee delivery of the requisite compute power. On the facilities side of things, operational controls have been put in place to maximise the resilience of the data centre.

As a result, a ‘decision-making gap’, or a breakdown in communication, currently exists between the departments. Both IT and Facilities are trying to solve this problem individually, and in accordance with their own parameters. However, neither strategy is viable in isolation, and therefore the only approach that makes sense from a business perspective is striking a balance between the two.

Overall, a high cost of compute ($/W) translates to an equally high cost of delivering business outcomes, through loss of hardware availability, wasted capital expenditure, and increased operational expenditure. This is rapidly becoming a C-suite issue, and facilities managers are now faced with pressure from above to work within even stricter parameters. A change needs to take place, and data centre/IT managers must consider alternative ways of optimising efficiency without introducing additional risk.
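To see why under-utilisation drives up the cost of compute, consider a simple total-cost-per-watt calculation. All figures below are illustrative assumptions, not data from any real facility:

```python
# Illustrative only: every figure here is assumed for the sake of example.
def cost_per_watt(capex_usd, annual_opex_usd, years, delivered_it_load_w):
    """Total cost of ownership per watt of IT load actually delivered."""
    total_cost = capex_usd + annual_opex_usd * years
    return total_cost / delivered_it_load_w

capex = 10_000_000            # build-out cost (assumed)
opex = 1_500_000              # yearly power, cooling, staff (assumed)
years = 10                    # amortisation period (assumed)
design_capacity_w = 1_000_000 # a 1 MW facility (assumed)

# Compare a well-utilised facility with an over-provisioned one.
for utilisation in (0.8, 0.4):
    usd_per_w = cost_per_watt(capex, opex, years,
                              design_capacity_w * utilisation)
    print(f"{utilisation:.0%} utilised: ${usd_per_w:.2f}/W")
# → 80% utilised: $31.25/W
# → 40% utilised: $62.50/W
```

The point is not the specific numbers but the relationship: the facility's fixed costs are spread over the load it actually delivers, so halving utilisation doubles the cost of every watt of compute.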

What needs to be changed?

The decision-making gap between IT and Facilities can be remedied through the introduction of the ‘fluid data centre’ concept. An efficient data centre is one where the power and cooling supplied by the facility balances the IT demand. Therefore, the fluid data centre needs to bridge the decision-making gap between IT, Facilities, and the rest of the business. The best way to achieve this is by providing an alternative means of addressing the day-to-day challenges each department faces, without increasing the risk factor.

In order to implement such a strategy in an existing data centre or consolidation project, facilities managers must be able to accurately predict the impact of any potential change. Key to this is the use of engineering simulation - specifically, 3D modelling to represent the data centre, power system simulation (PSS), and computational fluid dynamics (CFD) to predict cooling. Using software in this way allows engineers to test any potential change safely, and without fear of the potential consequences.
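The shape of such a "test before you touch" workflow can be sketched as a toy what-if check: does a proposed hardware change fit within a rack's power and cooling budgets, with a safety margin left over? Real tools rely on full CFD and power-system simulation; this is a deliberate simplification, and every name and figure in it is assumed:

```python
# Toy "what-if" check, standing in for real PSS/CFD prediction.
def assess_change(rack_power_budget_w, rack_cooling_budget_w,
                  current_load_w, added_load_w, safety_margin=0.9):
    """Return True if the proposed load fits within both the power and
    cooling budgets while preserving a configurable safety margin."""
    proposed = current_load_w + added_load_w
    fits_power = proposed <= rack_power_budget_w * safety_margin
    # Simplification: assume 1 W of IT load needs 1 W of cooling removed.
    fits_cooling = proposed <= rack_cooling_budget_w * safety_margin
    return fits_power and fits_cooling

# Proposed change: add two 400 W servers to a partially loaded rack.
print(assess_change(6000, 6000, 4200, 800))  # → True  (fits with margin)
print(assess_change(6000, 6000, 4800, 800))  # → False (breaches the margin)
```

A simulation-driven approach replaces the crude budgets above with predicted airflow, temperature, and power distribution, but the decision logic is the same: model the change offline, and only implement it once the model says it is safe.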

Establishing an overview of the balance between risk and wasted expenditure within specified parameters allows engineers to examine cooling and power issues, whilst making informed decisions about physical space capacity planning. As a result, utilisation can be increased whilst energy expenditure and costs are simultaneously driven down.

Bringing fluidity to Facilities

So what practical advantages does the fluid data centre concept provide? The example below illustrates how Facilities can use it to adapt to modern business demands:

Let’s say an acquisition takes place between two investment banks, resulting in the need to consolidate data centres. Each facility has different hardware strategies in place, but DCIM shows they have the space, power and cooling to go ahead with the amalgamation. However, there is still an underlying fear that problems will be encountered.

In this situation, a truly fluid data centre should be able to cope, implementing change effectively, efficiently, and confidently - by accurately predicting the outcomes of any operational changes made and identifying any potential problems that could be caused by installing additional servers or cooling units, or upgrading existing ones.

As evidenced by the above, engineering simulation is a truly essential part of the optimisation process for data centre operation and management. It allows facilities to cater for the ever-changing demands of the business they serve, in a risk-free, offline environment. More than just a concept, the fluid data centre is the future for all data centres. It helps bring the latest technology to even the most traditional legacy facilities, while uniting the traditionally siloed IT and engineering departments to forge a clear and successful path into the future.

Matt Warner, Software Development Manager, Future Facilities