Professional chip rebrander Nvidia has already snapped back at AMD, after an AMD spokesperson told us that most game developers only use GPU-accelerated PhysX because they're paid to.
Speaking to Xbit Labs, Nvidia’s worldwide director of developer technology, Ashutosh Rege, said: “There could be no deal under which we would cash somebody in for using PhysX.” Rege explained that the physics engine is a crucial foundation of a game, and that “game developers are not going to choose a physics engine based on any kinds of incentives if that is going to jeopardise the game itself.”
Rege was responding to AMD’s claims, made here, that developers only bother with Nvidia’s PhysX when they’re offered incentives.
Rege also reckons that platform diversity is the primary consideration for developers when choosing a physics engine. “The most important factor for game developer today is market platform,” says Rege. “In other words is whether that supports X360, PS3, PC and some developers are targeting iPhone and Wii with our PhysX engine. We support all of those, which is the reason why PhysX has become so popular.”
However, while these comments are true, they only really apply to the common software implementation of PhysX. After all, you can’t run GPU-accelerated PhysX on a Wii, iPhone, or indeed anything other than a PC with a CUDA-supporting graphics card. While a software physics engine, whether it’s Havok or PhysX, is indeed a fundamental element of a game, you can’t then add GPU-accelerated PhysX effects that alter the fundamentals of the game on only certain hardware. It’s for this reason that GPU-accelerated physics effects are usually just eye candy.
It was specifically GPU-accelerated PhysX that AMD’s Richard Huddy was discussing when he told us that “I’m not aware of any GPU-accelerated PhysX code which is there because the games developer wanted it, with the exception of the Unreal stuff. I don’t know of any games company that’s actually said ‘you know what, I really want GPU-accelerated PhysX, I’d like to tie myself to Nvidia and that sounds like a great plan.’”
Rege also touched on Nvidia’s work with game developers on GPU-accelerated PhysX, saying that “we will, of course, help them to do that; we will help them with engineering and we will help them even with artists, who also go on-site and spend a lot of time with their artists to [help creating content]. Adding GPU PhysX to a game is a lot more different than adding just general physics effects. There is more work than adding post-processing effects, so we help them with that. We also help them with marketing with any kind of bundle deals with add-in-card makers if the latter are interested in bundling those games.”
It’s these sorts of deals that more than likely prompted Huddy to say “Nvidia create a marketing deal with a title, and then as part of that marketing deal, they have the right to go in and implement PhysX in the game.” Nvidia might not directly offer money in exchange for GPU-accelerated PhysX support, but it certainly offers incentives in the form of marketing deals.
Either way, it’s still a moot point until we start seeing games with GPU-accelerated physics that works on ATI GPUs. AMD’s Open Physics Initiative might be a good idea in theory, but it’s irrelevant if no one uses it. Currently, the only games with GPU-accelerated physics, such as Mirror’s Edge and Batman: Arkham Asylum, use Nvidia’s PhysX technology.
It’s also worth noting that, according to Rege, Nvidia is more than happy to support GPU-accelerated physics through open standards such as OpenCL and Microsoft’s DirectCompute. “If a developer asks us to help implement certain feature, we will add it. If he asks to port something to DirectCompute, we will certainly do our best to get that to him,” he says.
Rege also points out that, like AMD, Nvidia is working with the Bullet physics engine team, adding: “At the end, we are selling GPUs, not PhysX.”