Computing’s next energy challenge

Computing has always faced an energy crisis. In the 1940s, mainframes ran on power-hungry (and fragile) vacuum tubes. A Google data center built from early computers like the ENIAC would consume as much energy as all of Manhattan. Then transistors arrived, and computing spread across the world.

In the 1990s and early 2000s, chip designers warned that chips could begin to emit as much heat, for their size, as rocket nozzles or nuclear reactors. That trend was stemmed by a change in design: the advent of multithreaded and multicore devices.

Virtualization, new data management strategies and innovative cooling technologies implemented over the past decade, meanwhile, have helped pave the way for hyperscale data centers. eBay, for instance, saved $2 million in data center energy costs by making small changes to the software code of some of its applications.

Rapid access to data is the lifeblood of the global economy. Businesses and organisations will soar or sink on their ability to leverage data to achieve new scientific breakthroughs, improve customer service or gain market share. With data center construction growing at 21 per cent a year and more countries implementing carbon policies, taking a business-as-usual approach to energy will only create headaches down the road.

In the next wave of efficiency, expect to see a tremendous amount of focus on software-defined storage (SDS) and flash memory. Why storage? For one thing, cloud computing and virtualization at the server level are already underway, raising utilization above the anemic 6 to 12 per cent levels of the recent past. Similarly, many data centers have already fine-tuned their air handling technologies to better match cooling to existing loads. Storage is the last low-hanging fruit.

Second, storage is in the midst of a once-in-a-generation transformation. Flash memory, the primary storage technology for digital cameras and cellular phones, has been moving into data centers over the past few years.

A hard drive-based storage system for a 50TB database, for example, might require a power budget of 8,800 watts (4,000 watts to run the storage system and 4,800 watts for cooling). A similar system could be built with SSDs on a power budget of 1,250 watts, an 85 per cent saving. The savings come from needing fewer, but faster, storage systems (568 watts), which in turn lowers the demand for cooling (682 watts).
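The arithmetic behind these figures can be sketched as follows; the wattage values are the illustrative ones from the example above, not measurements of any particular product.

```python
# Worked power-budget comparison for the 50TB database example.
# All figures are the illustrative ones quoted in the text.

hdd_storage_w = 4000   # running the hard drive-based storage system
hdd_cooling_w = 4800   # cooling overhead for the HDD system
hdd_total_w = hdd_storage_w + hdd_cooling_w     # 8,800 W budget

ssd_storage_w = 568    # fewer, but faster, SSD-based storage systems
ssd_cooling_w = 682    # correspondingly lower cooling demand
ssd_total_w = ssd_storage_w + ssd_cooling_w     # 1,250 W budget

saving = 1 - ssd_total_w / hdd_total_w
print(f"HDD budget: {hdd_total_w} W, SSD budget: {ssd_total_w} W")
print(f"Energy saving: {saving:.0%}")  # prints "Energy saving: 86%"
```

Strictly, 1,250 W against 8,800 W is a saving of just under 86 per cent, which the article rounds to 85 per cent.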

Energy savings can be increased further by leveraging the additional IOPS to reduce the number of servers in a data center. Fewer servers mean less electricity, and less demand for cooling. Companies such as Pandora and AT Internet have, in fact, managed to reduce server counts by 40 to 75 per cent while increasing performance by moving to flash. It’s a technology change that causes major ripple effects.

Flash will also open new markets like the Internet of Things. McKinsey & Co. estimates that $5.5 trillion worth of economic value could be generated by integrating IoT technologies into heavy industry, with a substantial portion of the savings coming from efficiency. By our own calculations, industrial customers worldwide lose six times more electricity every year than the EU generates. Even if we could harvest only a fraction of that through intelligent systems, the impact would be significant.

You will also see flash and SDS expanding the reach of mobile technology and computer networks to emerging nations like Nigeria, India and China where the spread of technology can be hampered by blackouts, power theft and weak grid infrastructure. By using energy more efficiently, digital networks can be expanded. It’s that simple.

Energy concerns won’t stop the digital revolution. But we are going to need to take action so energy won’t slow it down.

Steve Wharton, Office of the CTO, Enterprise Solutions EMEA, SanDisk