How to reduce data centre energy waste without sinking it into the sea


We’re more conscious than ever of the effects that businesses have on the environment, and many start-ups are beginning to style themselves as ethical by taking steps towards reducing their carbon footprints. But when we look at the biggest energy drain in any organisation, the data centre, are businesses taking the time to consider the measures they can take to reduce wasted IT energy?

This issue was recently thrown back into the public spotlight when Microsoft announced it is trialling a shipping container-sized data centre that it has sunk onto the sea bed just off the coast of Scotland’s Orkney Islands.

According to Microsoft, the main justification for this was actually improving internet connectivity for coastal communities: data has a shorter distance to travel, leading to faster and smoother web surfing for users.

However, as part of ‘Project Natick’, as it has been dubbed, Microsoft has also acknowledged that data centres typically generate a lot of heat, and that placing them in the sea allows them to cool far more quickly. This would not only reduce energy usage and cut costs, it would also improve the longevity of the unit.

According to research from the Global e-Sustainability Initiative (GeSI), data centres already consume over 3% of the world’s total electricity and generate 2% of our planet’s CO2 emissions. For context, that’s the equivalent of the entire global aviation industry or a small city.  

Many businesses, including the likes of Apple, are starting to adopt the concept of a ‘green data centre’. But there’s still plenty of work that could – and more importantly should – take place.

For most businesses, taking our IT systems for a dip in the deep blue is not really a viable option. There are, however, a few easy and relatively cheap steps you can take today to reduce energy waste within your data centre.

Use a containment system

The overheating of equipment is often the main culprit when it comes to power waste within data centres. It takes a lot of power and energy to keep systems cool, and larger server rooms and data centres often end up mixing hot and cold air while trying to hold everything at the ideal temperature.

This mixing, however, can limit the capacity of the cooling system, draining power and causing it to run less efficiently.

A simple resolution is to fit air tiles into the cold aisle of the system. Not only does this make the cooling more effective, it also raises return temperatures, allowing your computer room air conditioning (CRAC) units to operate more efficiently.

It’s worth considering space when applying this method: hot and cold aisle containment probably isn’t practical for smaller server rooms and data centres, where space restrictions and increased costs make the option prohibitive.
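To give a feel for the scale of the potential saving, here’s a rough back-of-envelope sketch in Python. Every figure in it – the IT load, the cooling overhead before and after containment, and the electricity price – is an illustrative assumption rather than measured data, so treat it as a way of framing the question, not a prediction.

```python
# Back-of-envelope estimate of the cooling energy saved by aisle containment.
# All of the figures below are illustrative assumptions, not measured data.

IT_LOAD_KW = 100.0               # assumed average IT load of the room
COOLING_OVERHEAD_BEFORE = 0.60   # assumed: cooling draws 60% of the IT load today
COOLING_OVERHEAD_AFTER = 0.45    # assumed: containment and warmer CRAC return air cut that to 45%
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15             # assumed electricity price in GBP per kWh

cooling_kw_before = IT_LOAD_KW * COOLING_OVERHEAD_BEFORE
cooling_kw_after = IT_LOAD_KW * COOLING_OVERHEAD_AFTER

saved_kwh = (cooling_kw_before - cooling_kw_after) * HOURS_PER_YEAR
print(f"Cooling energy saved: {saved_kwh:,.0f} kWh per year")
print(f"Approximate cost saving: £{saved_kwh * PRICE_PER_KWH:,.0f} per year")
```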

Virtualise servers and storage

Within data centres you will often find a dedicated server for each application, which can be incredibly inefficient in terms of energy use and budget, and not very economical on space either.

The move to cloud is still fraught with scepticism about whether our data is really safe in the hands of another. But by moving at least some of your data centre infrastructure to the cloud and virtualising, you can consolidate servers and storage onto one shared platform while still maintaining a level of segregation between data, operating systems and applications. It’s also considered a good starting point, or stepping stone, for a business considering moving all of its infrastructure to the cloud.

This allows your IT systems to run more efficiently, saves space and reduces the number of power-consuming servers, which is great both for costs and for cutting energy waste. As an extra benefit, the improved speeds can mean more flexibility for users and improved IT workflows.
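As a rough illustration of why consolidation pays off, the short Python sketch below compares the annual energy draw of a one-server-per-application estate with a smaller set of virtualisation hosts. The server counts and power figures are invented assumptions purely for illustration, not benchmarks for any particular platform.

```python
# Rough estimate of the energy saved by consolidating one-app-per-server
# workloads onto a smaller number of virtualisation hosts.
# Every figure here is an illustrative assumption.

PHYSICAL_SERVERS = 40        # assumed: one lightly used server per application
AVG_SERVER_DRAW_KW = 0.35    # assumed average draw per dedicated server
VIRTUAL_HOSTS = 6            # assumed number of consolidated hosts
AVG_HOST_DRAW_KW = 0.60      # assumed draw per (busier) virtualisation host
HOURS_PER_YEAR = 24 * 365

before_kwh = PHYSICAL_SERVERS * AVG_SERVER_DRAW_KW * HOURS_PER_YEAR
after_kwh = VIRTUAL_HOSTS * AVG_HOST_DRAW_KW * HOURS_PER_YEAR

print(f"Before virtualisation: {before_kwh:,.0f} kWh per year")
print(f"After virtualisation:  {after_kwh:,.0f} kWh per year")
print(f"Estimated saving:      {before_kwh - after_kwh:,.0f} kWh per year "
      f"({(1 - after_kwh / before_kwh):.0%})")
```

Note that this only counts the servers themselves; fewer machines also means less heat to remove, so the real saving would be somewhat larger.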

Turn off idle IT equipment

Perhaps the simplest and most obvious example, but one that is incredibly effective, is remembering to switch off where possible. Equipment left in idle mode often uses more energy than you would think, and IT systems are often used at far below their capacity. Servers, for instance, tend to be only around 5-15% utilised, and PCs 10-20%.

When these systems are left on but unused, they still consume a large amount of the power needed to keep them running at full capacity. 

Before you start switching off equipment left, right and centre, it’s vital to make a full assessment of all the equipment in use, how frequently it is used, and whether it could benefit from being powered down during quieter periods. It may appear on the face of it to be a relatively minor action, but it’s a cheap and easy way to cut energy usage and it can be actioned immediately.
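To see how quickly the numbers add up, here’s a minimal sketch estimating the energy saved by powering a handful of idle servers down overnight and at weekends. The server count, idle power draw and electricity price are all assumptions chosen for illustration, in the spirit of the low utilisation figures mentioned above.

```python
# Rough estimate of energy saved by powering down idle servers outside
# business hours. All figures are illustrative assumptions.

IDLE_SERVERS = 10                      # assumed: servers identified as safe to power down
IDLE_DRAW_KW = 0.20                    # assumed draw of a server sitting idle
OFF_HOURS_PER_WEEK = 12 * 5 + 24 * 2   # assumed: 12h each weeknight plus full weekends
WEEKS_PER_YEAR = 52
PRICE_PER_KWH = 0.15                   # assumed electricity price in GBP per kWh

saved_kwh = IDLE_SERVERS * IDLE_DRAW_KW * OFF_HOURS_PER_WEEK * WEEKS_PER_YEAR
print(f"Energy saved by scheduled power-down: {saved_kwh:,.0f} kWh per year")
print(f"Approximate cost saving: £{saved_kwh * PRICE_PER_KWH:,.0f} per year")
```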

Move to a more energy-efficient UPS system

An uninterruptible power supply (UPS) system sits at the heart of any data centre: it is the electrical unit that supports critical mainstream IT and communications infrastructure when mains power fails or the supply is inconsistent. UPS systems often prevent disaster, especially within organisations where critical operations take place.

Previously, UPS units were seen as part of the energy consumption problem: incredibly robust but often large standalone towers, using older technology that could only achieve the optimised efficiency needed to prevent power failure when carrying heavy loads of 80-90% of their capacity.

Such fixed-capacity units often tended to be oversized at initial installation to provide the necessary redundancy, meaning they regularly ran inefficiently at lower loads, wasting huge amounts of energy. Much like the rest of the equipment you would find within a data centre, these sizeable towers also pumped out plenty of heat, so needed lots of energy-intensive cooling.

Fortunately, in recent years the technology has vastly improved, and your UPS system could now be key to the solution. Just as cooling equipment has improved, so too has UPS technology.

Modular systems – which replace sizable standalone units with compact individual rack-mount style power modules paralleled together to provide capacity and redundancy – deliver performance efficiency, scalability, and ‘smart’ interconnectivity far beyond the capabilities of their predecessors.

The modular approach ensures capacity corresponds closely to the data centre’s load requirements, removing the risk of oversizing and reducing day-to-day power consumption, cutting both energy bills and the site’s carbon footprint. It also gives facilities managers the flexibility to add extra power modules whenever the need arises, minimising the initial investment while offering the in-built scalability to “pay as you grow”.
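The sketch below illustrates the oversizing point with some invented numbers: a simplified efficiency curve (not the characteristics of any real UPS, Riello’s or otherwise), an oversized fixed-capacity unit and a modular system sized close to the actual load. Treat it as a rough illustration of the principle rather than a product comparison.

```python
# Illustration of why an oversized fixed-capacity UPS wastes energy compared
# with a modular system sized close to the actual load. The efficiency curve
# and all capacities below are simplified, invented assumptions.

def ups_efficiency(load_fraction: float) -> float:
    """Assumed efficiency curve: poor at light load, best near full load."""
    if load_fraction < 0.25:
        return 0.85
    if load_fraction < 0.50:
        return 0.91
    if load_fraction < 0.80:
        return 0.94
    return 0.96

IT_LOAD_KW = 120.0                # assumed actual load to be protected
FIXED_UPS_CAPACITY_KW = 400.0     # assumed oversized standalone unit
MODULAR_CAPACITY_KW = 150.0       # assumed: five 30 kW modules, giving N+1 at this load
HOURS_PER_YEAR = 24 * 365

def annual_losses_kwh(capacity_kw: float) -> float:
    """Energy lost in the UPS itself over a year at the assumed load."""
    eff = ups_efficiency(IT_LOAD_KW / capacity_kw)
    input_kw = IT_LOAD_KW / eff   # power drawn from the mains
    return (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR

fixed_losses = annual_losses_kwh(FIXED_UPS_CAPACITY_KW)
modular_losses = annual_losses_kwh(MODULAR_CAPACITY_KW)
print(f"Oversized fixed UPS losses: {fixed_losses:,.0f} kWh per year")
print(f"Right-sized modular losses: {modular_losses:,.0f} kWh per year")
print(f"Difference: {fixed_losses - modular_losses:,.0f} kWh per year")
```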

Act now, cut carbon costs

So there you have it: four simple ways to reduce your data centre’s impact on the environment while cutting carbon costs, none of which are quite as drastic as Microsoft’s latest efforts. Improving the efficiency of our data centres should be at the top of every business agenda. We don’t all have to style ourselves as ethical or green organisations in order to want to avoid impacting the environment, and these small adjustments can be made quickly and at relatively minimal cost. You could see your IT energy wastage drastically fall, so there is no need to sink your data centre just yet.

Leo Craig, General Manager at Riello UPS 

Image Credit: Bsdrouin / Pixabay