Lawrence Livermore National Laboratory Expands Supercomputing Projects

Appro, a provider of high-performance enterprise computing servers, storage and high-end workstations, announced the award of an additional supercomputing cluster from Lawrence Livermore National Laboratory.

This additional cluster deployment, named "Minos," will consist of a Linux supercomputing cluster built from Appro 1U Quad XtremeServers, comprising 6 scalable units (SUs) of 144 nodes each for a total of 864 nodes and 6,912 cores based on Second-Generation AMD Opteron™ processors. Its peak performance is 33.18 TFLOPS, with a total of 13,864 GB of memory available.
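The unit arithmetic above can be sanity-checked with a few lines of Python. The cores-per-node figure is an assumption (four dual-core Opteron sockets per 1U Quad server, i.e. 8 cores per node), as is the 2.4 GHz clock used to interpret the peak rating:

```python
# Sanity check of the Minos cluster figures quoted above.
scalable_units = 6
nodes_per_su = 144
cores_per_node = 8  # assumed: quad-socket, dual-core Opteron per 1U server

nodes = scalable_units * nodes_per_su
cores = nodes * cores_per_node
print(nodes)   # 864
print(cores)   # 6912

# Peak of 33.18 TFLOPS spread over 6,912 cores is about 4.8 GFLOPS per core,
# consistent with an assumed 2.4 GHz Opteron retiring 2 floating-point ops/cycle.
gflops_per_core = 33.18e12 / cores / 1e9
print(round(gflops_per_core, 1))  # 4.8
```

The node and core counts reproduce the figures quoted in the announcement exactly; the per-core rate is only an interpretation under the stated clock-speed assumption.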

This High-Performance Computing (HPC) solution includes a two-stage 20 Gb/s 4x Double Data Rate (DDR) InfiniBand fabric featuring Voltaire edge and spine switches and Mellanox dual-port DDR InfiniBand HCAs. The cluster is expected to be deployed by the end of July 2007.

The Minos cluster will join the already installed 4 SU, 576-node "Rhea" cluster and will be used for the Stockpile Stewardship capacity computing workload: jobs with 256- to 4,192-way MPI parallelism. Stockpile stewardship is the U.S. Department of Energy/National Nuclear Security Administration (NNSA) program to ensure the safety, security and reliability of the nation's nuclear deterrent without underground testing.

Minos will be one of the machines used by NNSA's Advanced Simulation and Computing program for stockpile stewardship calculations. It will run the integrated weapons design codes in support of the Reliable Replacement Warhead (RRW) design effort, as well as laser-plasma interaction studies for the National Ignition Facility (NIF), a stadium-sized 192-beam facility containing the world's largest laser.

"The Minos Linux cluster will provide needed capacity computing cycles for Livermore's Stockpile Stewardship scientific simulation," said Mark Seager ICCD ADH for Advanced Technology. "The Minos cluster will also contribute to multiple efforts within the Laboratory."

Beyond the Rhea and Minos clusters mentioned above, Lawrence Livermore last year also acquired two other supercomputing clusters from Appro: the 44-TFLOPS "Atlas" cluster and the 11-TFLOPS "Zeus" cluster. These clusters are used for unclassified research under the Laboratory's Multi-programmatic and Institutional Computing program. For more information about the Atlas cluster, go to http://www.llnl.gov/pao/news/news_releases/2007/NR-07-04-05.html. With the addition of the Minos cluster, Lawrence Livermore National Laboratory will have a total of 20 SU, 2,880 nodes or 23,040 cores of supercomputing power available for programmatic computing projects. For more information about all of these clusters go to http://www.llnl.gov/computing/tutorials/linux_clusters/#Systems
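The lab-wide totals quoted above can be checked the same way, again under the assumption of 144 nodes per scalable unit and 8 cores per node:

```python
# Lab-wide Appro cluster totals across all four systems (Rhea, Minos, Atlas, Zeus).
# Assumed: 144 nodes per scalable unit, 8 cores per node.
total_su = 20
nodes = total_su * 144
cores = nodes * 8
print(nodes, cores)  # 2880 23040
```

Both figures match the 2,880 nodes and 23,040 cores stated in the announcement.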

"This is the fourth HPC cluster LLNL awarded to Appro, affirming our strength in the HPC market and confirming our ability to provide the supercomputing power needed for their vital research," said Daniel Kim, CEO of Appro. "We are satisfied that Lawrence Livermore Laboratory acknowledges the performance improvements from Appro HPC solutions and benefit from the commodity, cost-effective, high-bandwidth supercomputing clusters."

"This latest collaboration between AMD and Appro will once again deliver outstanding computing performance to Lawrence Livermore National Laboratory," said Kevin Knox, vice president, worldwide commercial business, AMD (NYSE: AMD). "The performance benefits of AMD's revolutionary and unrivaled Direct Connect Architecture platforms based on the AMD Opteron processor address the high-performance computing needs of the research community while offering a low total cost of ownership."

"HPC is Appro's primary focus," said Earl Joseph, IDC program vice president, technical computing. "The HPC cluster market is growing faster than almost all other IT sectors and grew by over 37% in 2006. Users are now purchasing more HPC clusters each year than all other types of HPC servers. Appro's HPC clusters are designed to address the demanding requirements from HPC users like LLNL that require HPC cluster solutions that are flexible, powerful and scalable built with industry-standard technology and open source software, resulting in scalable HPC solutions with a strong price/performance."