CUDA technology promised a move from the CPU to the GPU for high-speed number crunching, but in a bizarre twist the technology looks to be going back to its roots with the announcement of CUDA-x86.
As part of the keynote speech at Nvidia's GTC 2010 conference, the company announced a partnership with the Portland Group to produce a new compiler, dubbed CUDA-x86.
The idea behind CUDA-x86 is to take the company's Compute Unified Device Architecture GPU-offload technology and run it directly on a standard x86 processor. CUDA allows code to be written for the company's range of graphics processing units, gaining better performance on certain massively parallel operations than is possible with a traditional general-purpose CPU.
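To illustrate the kind of code in question, here is a minimal CUDA vector-addition sketch. The kernel and launch syntax are standard CUDA C (not taken from the announcement); the pitch of CUDA-x86 is that this same source could be compiled to run on an x86 CPU instead of a GPU.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// A trivially parallel kernel: each thread adds one pair of elements.
// On a GPU these threads run across many cores at once; a CUDA-x86
// compiler would map the same code onto x86 cores and SIMD units.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Device-side buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[1] = %f\n", hc[1]);  // 1.0 + 2.0 = 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The explicit host/device memory copies are part of what makes an x86 back-end plausible: the kernel itself is just a data-parallel loop body, which a compiler can retarget at ordinary CPU threads.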
While the project might seem counter-intuitive - and in many ways it is - the pair believe it will help developers produce and test code for CUDA-based applications without needing an Nvidia chip inside their development box.
Further, the technology allows companies to add general-purpose computers to a computing cluster and have them run CUDA code, albeit more slowly than dedicated GPU-based CUDA-enabled hardware.
If you're hoping that the launch of CUDA-x86 will give you the opportunity to play with the technology yourself, be prepared to dig deep - the new compiler won't be free, with the Portland Group planning to market the technology as a commercial offering for an as-yet unknown price.