
Why data center efficiency and performance rely on a holistic approach


It’s no secret that data center operators are striving to become more effective, efficient and environmentally responsible as demand for space and power continues to grow. Many are harnessing innovative cooling techniques or committing to carbon-zero renewable energy. At the same time, many more are replacing outdated and inefficient equipment in an effort to ensure their data centers are as efficient as possible and achieve the right performance levels.

While this signals progress for the industry, not all providers are looking holistically at optimizing their data center footprint. Instead, some are focusing on discrete initiatives rather than managing the data center lifecycle end-to-end.

To achieve optimum performance, providers must embark on a journey from design and construction, through deployment, to operation and optimization, leveraging emulation, automation and analytics to ensure their customers’ needs are met every step of the way.

1. The first step: design, construction and build

The first consideration for data center operators is likely to be the location of their premises. A data center can be built almost anywhere with power and connectivity, but the location has an impact on the quality of service it can provide to its customers. VIRTUS’ data centers are all located within London’s metro - close enough to both London and other data center ecosystems to allow for mission-critical data replication services, but far enough from both to satisfy physical disaster recovery requirements.

When it comes to design and construction, there are plenty of things to think about such as materials, time to market, and cost. But it’s not just about quick and efficient builds. Innovative data center designs are a way to stay ahead of the market, pushing standards forward. 

VIRTUS has been particularly innovative when it comes to cooling. When LONDON2 was designed and built back in 2014, the company incorporated water sourced from a natural underground aquifer to minimize the use of mains water. It also built air-flooded data halls that use hot aisle containment and are cooled using indirect evaporative air technology, which provides the required cooling with very low energy use. At other VIRTUS sites, rainwater harvesting and the reuse of waste heat are common features.

Innovation must be aligned with ongoing sustainability, and it’s here that the BREEAM (Building Research Establishment Environmental Assessment Method) standards are important. The standards assess the green credentials of commercial buildings, verifying their performance and comparing them against sustainability benchmarks. As well as committing to meet BREEAM specifications, many providers also employ a modular build methodology to deploy capacity as and when required. This drives up utilization and maximizes efficiency, both from an operational and a cost perspective.

2. Prioritizing efficiency through deployment, operation and optimization

Power and cooling account for much of the operating costs of a data center, so they are a crucial consideration. Trends such as immersion cooling and alternative backup power and generation solutions are all promising areas for future innovation.

Liquid cooling has made a fast comeback as a way of maintaining optimal operating temperatures, notably in the High Performance Computing (HPC) arena, alongside innovative techniques such as indirect evaporative air cooling. VIRTUS data centers use a variety of these techniques within their facilities, alongside innovations in liquid cooling, striving for a PUE approaching 1.0 - well below the 2020 average of 1.58 reported in the Uptime Institute’s annual survey. All operators attempt to drive the PUE ratio as close to 1.0 as possible, with most new builds falling between 1.2 and 1.4.
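
To put those figures in context, PUE (Power Usage Effectiveness) is simply the total energy a facility draws divided by the energy delivered to the IT equipment itself, so a value of 1.0 would mean zero overhead for cooling, power conversion and everything else. The short Python sketch below illustrates the arithmetic using hypothetical energy figures, not VIRTUS data:

```python
# Illustrative only: PUE = total facility energy / IT equipment energy.
# The kWh figures below are hypothetical, chosen to mirror the ratios in the text.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio; 1.0 means every kWh drawn reaches the IT load."""
    return total_facility_kwh / it_equipment_kwh

print(pue(1580, 1000))  # 1.58 - the 2020 industry average cited above
print(pue(1200, 1000))  # 1.2  - typical of an efficient new build
```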

In terms of power requirements, the uninterruptible power supply (UPS) will be determined by several factors, including the criticality of the systems under load, the quality of the existing power supply and, of course, the cost. When it comes to energy use, many providers are committed to using 100 percent renewable, carbon-zero energy sources, helping them to meet environmental goals while also providing cost savings and increased reliability. Supplies from renewable power, including wind, solar and hydro, are now likely to surpass supplies from the gas, oil and coal-fired stations used by UK data center providers. Late 2019 saw renewables overtake fossil fuels as the largest source of UK electricity generation for the first time, and falling prices, driven by improving technology and scale, mean it is now more affordable than ever to harness these renewable energy sources.
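
As a rough illustration of how load criticality feeds into UPS selection, the sketch below estimates battery autonomy from capacity, load and inverter efficiency. It is a simplified, hypothetical calculation rather than a vendor sizing method; real designs also account for battery ageing, depth of discharge, power factor and redundancy:

```python
# Simplified, hypothetical UPS autonomy estimate (not a vendor sizing formula).

def ups_runtime_minutes(battery_kwh: float, load_kw: float,
                        inverter_efficiency: float = 0.95) -> float:
    """Approximate minutes of ride-through for a given critical load."""
    return (battery_kwh * inverter_efficiency) / load_kw * 60

# Example: a 50 kWh battery string behind a 200 kW critical load gives
# roughly 14 minutes - typically enough to bridge to generator start.
print(round(ups_runtime_minutes(50, 200)))  # 14
```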

For backup power, the industry continues to investigate alternative, sustainable sources - fuel cells, for example, are being evaluated as a standby energy source. At present, this technology is not available at the scale required for large data centers, and unfortunately nothing currently workable meets the scale some customers need. In the UK, however, we are very fortunate to have an extremely stable national grid, and research into new sustainability innovations is ongoing.

3. Invest in your people – skills for today and tomorrow

For many, underpinning all of the efforts to optimize the efficiency of a data center is the Data Center Infrastructure Management (DCIM) system or, more recently, next-generation DCIM systems that offer increased visibility, with remote monitoring and management capabilities driven by Artificial Intelligence (AI).
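
At its simplest, the monitoring layer of such a system continuously compares sensor readings against operating limits and flags anything that drifts out of range. The toy Python sketch below illustrates the idea; the sensor names and thresholds are hypothetical, and a real DCIM platform would also trend, correlate and alert across thousands of points:

```python
# Toy illustration of a DCIM-style threshold check; names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    value: float
    low: float
    high: float

def out_of_range(readings):
    """Return the names of sensors whose latest reading breaches its limits."""
    return [r.name for r in readings if not (r.low <= r.value <= r.high)]

readings = [
    SensorReading("hall-1 supply air (C)", 24.5, 18.0, 27.0),
    SensorReading("hall-1 relative humidity (%)", 62.0, 20.0, 60.0),
    SensorReading("ups-a load (%)", 71.0, 0.0, 80.0),
]
print(out_of_range(readings))  # ['hall-1 relative humidity (%)']
```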

However, even with these developments, staffing is still a crucial part of the smooth running of any facility. Independent research commissioned by Future Facilities found that 40 percent of organizations that suffered outages in their data center did so because of human error. Indeed, although there’s plenty of talk about unstaffed data centers that rely entirely on automation and robotics, we are a long way from removing the need for human intervention. And this means that skills are still an important consideration for providers.

It goes without saying that technical skills are crucial - and requirements are evolving. In the past, a solid background in networking or hardware was sufficient to be a successful candidate in the data center operations world, but the shift to cloud computing means that a new set of skills is required or desired - particularly around Artificial Intelligence (AI) and Big Data.

The Uptime Institute’s Global Data Center Staffing Forecast 2021-2025 estimates that data centers will need to find 300,000 more staff by 2025. There are two main issues that need to be addressed: some employers are making the skills crisis worse by demanding over-ambitious qualifications, and many people don’t know the sector exists, so they can’t even consider it as a career.

Many of the skills required to operate data centers are widely available in other industries, so raising the profile of the sector is key. There is an increased focus on, and opportunity for, “reskilling” individuals from other sectors hit hard by the pandemic, such as those with experience in the aviation industry. Finding and attracting people with the right skills, existing or transferable, as well as providing ongoing training, is key to keeping organizations operating in this digital age while improving efficiency and performance.

Closing comments

It is clear that success comes as a result of looking holistically at the data center. Providers must devote a great deal of time and investment to research and development across every aspect of their solutions - from cooling systems to power distribution, security and monitoring. Data centers are the sum of many parts, and it’s only by putting these parts together that robust and secure solutions can be developed to support customers now and in the future.

Darren Watkins, managing director, VIRTUS Data Centres