Nvidia's GTX Titan: How it will affect the future of PC gaming, and AMD

Nvidia’s GTX Titan officially launches today. We covered the chip’s announcement a couple of days ago and we’re still working on an in-depth project that explores Titan’s capabilities more fully than standard FPS numbers would indicate. For now, though, we’ll explore Titan’s impact on the graphics market and PC gaming itself. This is a graphics card that’s going to get a lot of people excited. It’s impressive, even if its £830 price point is far out of reach for most consumers.

The rise of multi-monitor gaming, the increasing popularity of so-called “second screens,” and the advent of cards like Titan – which can push multiple displays without breaking a sweat – are all part of the same trend. At a time when game development costs are skyrocketing, developers are looking to create a more immersive experience. The idea of 3D gaming is all but dead – but multi-screen gaming, in some form, has more momentum behind it.

With the PS4 now officially announced and set to make a debut at the end of the year, Nvidia will undoubtedly position the Titan as the high-end enthusiast’s card of choice, even if AMD manufactures the GPU inside all three of the new consoles.

Did Nvidia win the GPU design war?

For Nvidia, today’s Titan launch will be viewed as vindicating the company’s long-term vision for GPU design. For the past six years, Nvidia and AMD have pursued different strategies for their respective graphics chips – and for most of that time AMD was judged to have the upper hand.

In mid-2008, AMD announced a new strategy for itself. Rather than building monolithic GPUs with ever-increasing core counts and a focus on top-end performance, AMD declared that it would target the midrange of the market with its single-GPU products. The company’s top-end cards would consist of two GPUs on a single PCB. The image below shows the die sizes of AMD vs. Nvidia through 2009.

AMD’s HD 4000 series walloped Nvidia’s GT200 family on price/performance. Nvidia didn’t complete its 65nm-to-55nm transition until 2009, while AMD’s HD 4870 and HD 4850 debuted on 55nm in the summer of 2008. Opting for a smaller, less complicated die had paid off.

It paid off again in 2009 when the HD 5000 family launched. Again, Nvidia was left gasping; its answer, the GTX 480, was delayed for months. The GF110 (GTX 580) narrowed the gap between Team Red and Green, the HD 7000 family briefly grabbed the performance crown for AMD once more… and then Kepler happened.

If GK104 (GTX 680) was an excellent example of what Kepler could do when Nvidia emphasised game performance over scientific workloads, GK110 (Titan) is proof that the company can build supercomputing products and then integrate those chips at the top of the consumer space.

The problem here isn’t that Nvidia has somehow “won” the GPU market – it’s that Nvidia, not AMD, is now firmly controlling the conversation, and thus the various price points.

AMD’s options

There are two ways to think about GTX Titan and its impact on AMD’s competitive positioning. From a purely logical perspective, AMD doesn’t need to do anything. Radeon 7970s are available for as little as £300, and based strictly on price, Titan doesn’t threaten AMD’s market because the price gap is simply too wide.

Unfortunately, human beings aren’t purely rational creatures. The GTX 690 already commanded the top end of the graphics market – a grip Titan cements further. The halo effect is very real; a potential customer who sees GeForce cards dominating the high end is more likely to pick a midrange card based on the same technology.

Make no mistake – AMD could do something. The company’s S10000 GPU supposedly launched last November, though it’s not clear whether it’s actually commercially available. We checked AMD’s own product pages and searched the system configurators of several server partners, but found no mention of the chip.

Let’s assume, for argument’s sake, that the silicon exists and could be launched in a consumer variant. That doesn’t mean it makes sense for AMD to do so. The S10000 is based on Tahiti Pro/Tahiti LE, not the full 7970 chip. Power and heat are another factor: AMD’s listed power consumption for the S10000 is 375W at a clock speed of just 825MHz. Bringing clock speeds up to the 7950 Boost Edition’s 925MHz would drive power consumption even higher. The final card would contain nearly nine billion transistors (two dies at roughly 4.3B apiece). It’s an expensive, power-sucking configuration that won’t match the performance of two 7970 GHz Editions.

Personally, I think Rory Read and his executives chose this path because it offered the chance to secure long-term console revenue. The warning signs have been in the air for over a year – the departure of Carrell Killebrew, the original architect of AMD’s middle-of-the-road strategy, was a clear sign of impending change.

There’s a road forward for AMD out of this. The problem is perception and possibly channel support. Rumours that AMD attempted to reduce its investment in OEM design wins have been substantiated by data from Mercury Research. The image below shows AMD’s notebook share versus Nvidia’s:

If AMD spends the next six to nine months refining its next-generation architecture, it should be prepared to tango with Nvidia’s Maxwell GPU by the time that chip is ready for market. If GCN 2.0 won’t drop before the end of the year, that’s just the way it is – but when the new core arrives, it’ll need to be a hands-down winner if the company wants to retake the lead.
