NVIDIA and Facebook team up to speed up deep learning

NVIDIA today announced that Facebook will power its next-generation computing system with the NVIDIA Tesla Accelerated Computing Platform, enabling it to drive a broad range of machine learning applications.

Training complex deep neural networks for machine learning can take days or weeks even on the fastest computers, the company said in an official press release, but the Tesla platform can cut that time by 10-20x.

This will accelerate innovation by shortening training cycles, so that improved capabilities can be delivered to consumers sooner.

Facebook is the first company to adopt NVIDIA Tesla M40 GPU accelerators, introduced last month, to train deep neural networks. They will play a key role in the new “Big Sur” computing platform, Facebook AI Research’s (FAIR) purpose-built system designed specifically for neural network training.

“Deep learning has started a new era in computing,” said Ian Buck, vice president of accelerated computing at NVIDIA. “Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from web services and retail to healthcare and cars will be revolutionized. We are thrilled that NVIDIA GPUs have been adopted as the engine of deep learning. Our goal is to provide researchers and companies with the most productive platform to advance this exciting work.”

In addition to reducing neural network training time, GPUs offer a number of other advantages. Their architectural compatibility from generation to generation provides seamless speed-ups for future GPU upgrades. And the Tesla platform’s growing global adoption facilitates open collaboration with researchers around the world, fueling new waves of discovery and innovation in the machine learning field.