

Nvidia: Moore’s Law will die without GPUs

Hardware News, 04 May 2010

Nvidia’s chief scientist, Bill Dally, has warned that the long-established Moore’s Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale.

In a column for Forbes, Dally claimed that the CPU business "now faces a serious choice between innovation and stagnation." According to Dally, it’s now time for the likes of Intel and AMD to start the push towards parallel processing, rather than clinging to the legacy of single-threaded processing.

Of course, Intel could counter this by pointing to its big push towards multi-core desktop CPUs with two, four and now six cores, but Dally says that this is nowhere near enough to keep Moore’s Law alive. In fact, he says this approach is "analogous to trying to build an airplane by putting wings on a train."

Back in 1965, Intel co-founder Gordon Moore predicted that the number of transistors on a processor would double every year, a figure he later revised to every two years. It’s a rule that’s held true for several decades, with the growing transistor count also delivering faster performance back when single-core CPUs were judged primarily on their clock speed.

However, with transistor sizes gradually approaching the atomic scale, it’s clear that something has to change if the computing industry is to keep advancing. Dally argues that instead of inefficiently adding more transistors to an outdated processing model, Moore’s Law could be saved by spending those transistors on a highly parallel system such as Nvidia’s CUDA architecture.
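To make the contrast concrete, here’s a minimal sketch of that data-parallel style in CUDA C (our illustration, not code from Dally’s column): the kernel runs the same scalar operation across a million array elements, one lightweight thread per element, so a chip with more cores simply schedules more of those threads at once.

// saxpy.cu - an illustrative sketch of CUDA's data-parallel model
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per element: throughput scales with the number of
// cores available to run these threads, not with clock speed.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Fill host arrays.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data to the GPU.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Fetch the result: y = 2*x + y, so every element should be 4.0.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

None of those million threads depends on another, which is exactly the class of workload Dally’s scaling argument applies to.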

Dally points out that computers built from many smaller cores working in parallel respond far more efficiently to the addition of more transistors. In these cases, says Dally, "doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance, at a tremendous expense in energy."
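One caveat worth adding (ours, not Dally’s): "twice as fast" holds only for programs that are almost entirely parallel. Amdahl’s law gives the general speedup on n processors for a program whose parallelisable fraction is p:

S(n) = 1 / ((1 − p) + p/n)

Only as p approaches 1 does doubling n approach doubling the speedup; any serial remainder caps the gain no matter how many cores you add.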

Power consumption is another case in point. Dally notes that the power scaling that accompanied Moore’s prediction, under which power per transistor fell as transistors shrank (a relationship later formalised as Dennard scaling), has now stopped. He has a point: we might have six cores rather than four at the top end, but Intel’s flagship desktop chip has stuck at a maximum TDP of 130W generation after generation.

"Every three years we can increase the number of transistors (and cores) by a factor of four," says Dally. "By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power."

Of course, Bill Dally would say that. He is, after all, the chief scientist of Nvidia, a company with no x86 licence whose primary business is built on GPUs with hundreds of stream processors. That said, he’s right that parallelism is undoubtedly the way of the future.

Interestingly, though, only one company has the technology and IP needed to integrate a highly parallel GPU into a CPU... and that’s AMD.
