Researchers fire up over 1 million computer cores for record calculation

US researchers this week completed what's being billed as the most computationally intensive calculation ever, using more than one million computing cores in the year-old Sequoia supercomputer to solve a complex fluid dynamics problem.

The Sequoia IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory (LLNL) is no longer the world's most powerful - that title now belongs to Oak Ridge National Laboratory's Titan Cray XK7 - but it does have "1,572,864 compute cores and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect," according to the Stanford Engineering Department website.

Researchers from the department's Center for Turbulence Research (CTR) used about two-thirds of those cores to run a calculation related to the noise generated by a supersonic jet engine, the site reported.

The team, led by research associate Joseph Nichols, sought to model jet noise to help jet engine designers build quieter engines. The predictive simulations Nichols ran on Sequoia examined engine variables like nozzle shape, which can make an engine run louder or quieter, according to the team.

But replicating such complexity takes a whole lot of computing horsepower, said Stanford engineering professor and CTR director Parviz Moin.

"Computational fluid dynamics simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed," Moin said.

It also turns out that harnessing so many cores in a supercomputer at once poses some new challenges. While using more cores allows for faster and more complex calculations, it also means more potential bottlenecks in the interconnects between processor cores that must be managed.

It took the Stanford team and LLNL computing staff "several weeks" to iron out all the issues related to using so much of Sequoia's processing power, but in the end it was all worth it, Nichols said.

"These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Center for Turbulence Research previously. The implications for predictive science are mind-boggling," the researcher said.

IBM delivered Sequoia to LLNL in 2011 and deployed it fully in June 2012, when it surpassed the K Computer as the world's most powerful supercomputer on the Top500 list, though Titan has since taken over the top spot. Running IBM's CNK operating system for computations and Red Hat Enterprise Linux for I/O functions, Sequoia uses a custom 45-nanometer, 1.6GHz chip packing 18 processor cores. Sixteen of those 64-bit PowerPC A2 cores handle the computing, the 17th handles operating system assist functions, and the 18th is a spare kept for redundancy.

Those chips are mounted individually on water-cooled, 3D-networked compute cards, with 32 cards to a drawer and 32 drawers to a rack in IBM Blue Gene/Q systems like Sequoia.
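As a quick arithmetic check, that per-rack layout multiplies out to the 1,572,864-core figure quoted earlier - assuming Sequoia's 96-rack configuration, a rack count that is not stated above:

```python
# Sanity-check Sequoia's published compute-core count from its layout.
# Per-chip and per-rack numbers come from the article; the rack count
# (96) is an assumption about Sequoia's full configuration.

compute_cores_per_chip = 16   # 2 of the 18 cores are reserved (OS assist, spare)
cards_per_drawer = 32         # one chip per compute card
drawers_per_rack = 32
racks = 96                    # assumed rack count, not stated in the article

chips = cards_per_drawer * drawers_per_rack * racks
total_cores = chips * compute_cores_per_chip
print(total_cores)  # 1572864, matching the figure quoted earlier
```

Two-thirds of that total, the share the Stanford team reportedly used, comes to just over a million cores, consistent with the headline figure.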

For more, check out the image from Stanford's jet noise simulation below, courtesy of the university's Center for Turbulence Research. The grey object at left is an experimental jet nozzle design, while the red and orange represent exhaust temperatures, and the blue/cyan is the sound field.