AMD aims for HPC future with Fusion evolution

AMD's Phil Rogers has unveiled his company's plans for the Fusion accelerated processing unit architecture, and if you thought it only went as far as laptops that can play games you might want to take a second look.

Taking to the stage to provide the keynote at the company's Fusion Developer Summit, corporate fellow Phil Rogers explained that Fusion is going to evolve over the next few years until it becomes an integral part of AMD's entire product offering.

"The first APUs from AMD dramatically increase processing performance while consuming less power," Rogers explained, "and now we are building upon that achievement with our next generation of products. Future innovations are intended to make the different processor cores more transparent to programmers. They can then seamlessly tap into the gigaflops of power-efficient performance available on the APU and design even faster, more visually stunning applications on a wide range of form factors."

Rogers explained that by 2014, AMD hopes to have completed full system integration for the Fusion architecture. While current APUs feature physical integration, with the CPU and GPU sharing the same silicon and a unified memory controller, he outlined an evolution strategy that will see the APU concept grow to take over the system.

The first step is what Rogers referred to as 'optimised platforms': the addition of C++ support to the general-purpose GPU compute offering, making it easier for programmers to leverage the power of massively parallel processing; user-mode scheduling; and bi-directional power management between the CPU and GPU, allowing the system to scale down one component when the other is heavily loaded, to improve battery life or boost performance.

This stage will be followed by 'architectural integration': a unified address space for CPU and GPU, including the ability for the GPU to harness pageable system memory via CPU pointers, and fully coherent memory between CPU and GPU.

The latter feature promises a major step forward for GPGPU computing: applications written for the upcoming Fusion APU architecture would be able to execute code on the GPU that works on data held in system memory, and code on the CPU that works on data held in graphics memory, without wasting precious cycles copying between the two.

Finally, Rogers promised something he described as 'system integration', featuring GPU compute context switching, GPU graphics pre-emption, quality-of-service controls, and the ability to fuse the APU and additional discrete GPUs into a single cohesive whole. It's a big step forward for computing, and something AMD hopes to deliver by 2014.

It's hard to see the promises made in the keynote as anything other than a commitment from AMD to retake the server market from Intel: with direct C++ execution support on the GPU and a fully coherent memory addressing structure, AMD's APUs could offer a massive performance boost for highly parallel tasks that rival Intel will be ill-equipped to match.

The move to C++ programmability on the GPU is a particularly important one: parallel-processing specialists such as Adapteva make much of their ability to run unmodified code, while today's GPGPU computing requires applications to be rewritten for CUDA, OpenCL, or DirectCompute. By allowing largely unmodified C++ code to run directly on the GPU, AMD scores a major win over its rivals and all but guarantees itself a place in future Top500 supercomputer rankings.

With the company looking to transition to the new architecture by 2014, the future looks bright for the underdog. If Fusion delivers on its promises, Intel would do well to watch its back in the high-performance computing and server markets.