
Nvidia’s GTX Titan: Supercomputing power for consumers

In the past six years, AMD and Nvidia have traded graphics leadership multiple times. From 2006 through to 2008, Nvidia held the pole position. AMD’s Radeon HD 4000 series chipped away at that standing, while the HD 5000 took it outright. Team Green reclaimed the lead at the high end of the market with the GTX 580 in late 2010, and AMD snatched it back in January 2012.

Then, almost a year ago, Nvidia launched the GTX 680. The GK104 (codename Kepler) at the heart of this new card was far more efficient than the GTX 400 and GTX 500 families based on the Fermi GPU. It was smaller than AMD’s Graphics Core Next, drew less power, and delivered a higher price/performance ratio.

The dual-GPU GTX 690 and the lower-end GTX 670 and GTX 660 followed not long after. AMD has kept itself competitive with price cuts and aggressive game bundles, but Nvidia has been in the driver’s seat. Now the company is launching its new GTX Titan graphics card. Today, we’re talking about the £830 card’s features and capabilities – benchmark data and reviews are being kept under wraps until Thursday morning.

Titan stands alone

First off, know this: The Titan has no numeric designation or model number. Nvidia’s contention is that it doesn’t need one.

Let’s talk about GK110 and how it compares to the existing GTX 680 (GK104). First, the Titan is bigger – a lot bigger. The GTX 680 packs 1,536 shader cores into eight Streaming Multiprocessor units, called SMXs. Like Nvidia’s K20X supercomputing GPU, the GTX Titan has 2,688 shader cores and 14 SMXs – 75 per cent more cores than the GTX 680. The simplest way to think of the GTX Titan’s gaming credentials is that it’s “more of what works.” More cores, more RAM bandwidth, more RAM – full stop.
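Those core counts are easy to sanity-check with a bit of arithmetic, given that each Kepler SMX carries 192 shader cores:

```python
# Shader core counts for GK104 (GTX 680) and GK110 (GTX Titan)
gk104_cores = 8 * 192   # 8 SMXs x 192 cores = 1,536
gk110_cores = 14 * 192  # 14 enabled SMXs x 192 cores = 2,688

increase = gk110_cores / gk104_cores - 1
print(f"{increase:.0%} more shader cores")  # 75% more shader cores
```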

The image below shows the 15 SMXs that Nvidia claims make up the GTX Titan/GK110; one group is disabled to improve yields.

The Titan’s non-gaming features are a bit more complex. GK110 is a supercomputing GPU first and foremost, and it includes support for features that games can’t take advantage of. The Tesla K20X is capable of spinning off its own work threads (Dynamic Parallelism), it allows multiple CPU cores or processes to issue GPU workloads simultaneously (Hyper-Q), includes a Grid Management Unit (GMU) to manage multi-threaded scenarios effectively, and includes new direct-transfer technology (GPU Direct) that lets other devices move data straight into and out of GPU memory.

The GTX Titan supports all these features, and is capable of running double-precision floating-point operations at full GPU clock speed, as shown above. GK104, in contrast, limits double-precision code to 1/24 of single-precision performance. Titan supports compute capability 3.5 (GK104 uses 3.0), and can dedicate up to 255 registers to a single thread, compared to 63 for GK104.
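The practical gap that 1/24 ratio creates can be sketched with some back-of-the-envelope arithmetic. The clock speed below is a placeholder, not an official specification – the point is the ratio, not the absolute numbers:

```python
def peak_gflops(cores, clock_ghz, flops_per_clock=2):
    """Theoretical peak, assuming one FMA (2 FLOPs) per core per clock."""
    return cores * clock_ghz * flops_per_clock

clock = 1.0  # GHz -- an illustrative placeholder, not GK104's real clock
sp = peak_gflops(1536, clock)  # GK104 single-precision peak
dp = sp / 24                   # GK104 caps DP at 1/24 of SP
print(f"SP ~{sp:.0f} GFLOPS, DP ~{dp:.0f} GFLOPS")
```

At the same nominal clock, the single-precision number looks healthy while double precision collapses to a small fraction – which is exactly why CUDA developers have had to buy Tesla cards until now.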

What do these features mean for gaming? Little to nothing. But that doesn’t make them irrelevant. CUDA programmers and developers who’ve wanted better double-precision performance from a GPU without paying for a Tesla could be very pleased with the £830 price tag on the GTX Titan.

Introducing GPU Boost 2.0

One of the major features Nvidia is introducing with Titan is a second generation of its GPU Boost technology. GPU Boost is Nvidia’s automatic overclocking technology: like Intel’s Turbo Boost, it raises a card’s clock speed when there’s sufficient TDP headroom to do so.

GPU Boost (GB) can also work in reverse. If board TDP rises past a certain point, the card will lower its own operating frequency and voltages to reduce power consumption. The problem with GB 1.0 is that its metric, board TDP, isn’t a perfect stand-in for either goal – maximising performance or protecting the hardware. GB 2.0 shifts the relevant metric from board TDP to GPU temperature. According to Nvidia, measuring temperature instead of TDP gives the company better visibility on whether a card can afford to throttle up to higher frequencies or not.


Measuring by temperature instead of TDP gives Nvidia better visibility on what maximum voltages should be allowed, and slightly increases the GPU’s operating range. For those customers who want even more control, the company is also going to allow deliberate over-volting – but warns that this will impact board longevity.

In the chart above, the yellow line is GPU Boost 2.0 and the white line is GPU Boost 1.0. It also demonstrates why GPU temperature is the better metric for controlling overclocking: GB 1.0’s maximum frequency was lower than 2.0’s, because its voltage/frequency throttles had to kick in earlier to prevent board damage.

Again, customers who want more control over their GPU thermals will have it. The Titan targets 80 degrees C by default, but users can change the number.

Set the thermal target to 90 degrees C using a tool like EVGA’s Precision, and the GPU will adjust its frequency headroom accordingly.
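The behaviour described above can be sketched as a simple feedback loop. This is a toy model, not Nvidia’s actual algorithm – the step size, clock limits, and base clock below are invented for illustration:

```python
def boost_step(clock_mhz, temp_c, target_c=80, step=13,
               base_clock=837, max_clock=1100):
    """One tick of a toy temperature-target boost loop.

    Below the thermal target the clock steps up one bin; above it,
    the clock steps back down. All constants here are illustrative,
    not Nvidia's real GPU Boost 2.0 parameters.
    """
    if temp_c < target_c and clock_mhz + step <= max_clock:
        return clock_mhz + step
    if temp_c > target_c and clock_mhz - step >= base_clock:
        return clock_mhz - step
    return clock_mhz

# Raising the target from 80 to 90 degrees C leaves more boost headroom:
print(boost_step(900, 85, target_c=80))  # 887 -- throttling back
print(boost_step(900, 85, target_c=90))  # 913 -- still boosting
```

The same GPU at the same temperature behaves differently depending on where the user has set the target – which is all “adjusting frequency headroom” really means.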

The Titan is the first GPU to feature GPU Boost 2.0, but it won’t be the last – Nvidia plans to bring this technology to market with its next generation of mainstream GeForce products.

Titan, GTX 690, and the £800 video card market

Titan doesn’t replace the GTX 690 – it complements it. We can’t show you performance figures yet, but you’re going to be impressed. So why did Nvidia build a second £800-odd GPU when such cards ship in very low volumes?

The technical answer to that question has to do with the nature of simultaneous multi-GPU operation (such as Nvidia’s SLI). The more GPUs in a system, the more difficult it is to deliver consistent, evenly paced frame times. This leads to the problem known as microstuttering – extremely short hiccups in what should be a smooth display.

Games already have to be optimised to take advantage of SLI, and the more GPUs you have, the more difficult that process becomes. As a result, multi-GPU scaling above two GPUs is dicey. Two GTX 680s or a GTX 690 might be twice as fast as a single card, but two GTX 690s (or four GTX 680s) won’t scale nearly as well.
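Microstuttering is easiest to see with a toy frame-pacing model. Suppose two GPUs alternate frames (alternate-frame rendering), but the second GPU consistently presents its frames late: the average frame rate looks healthy while the frame-to-frame gaps alternate badly. All timestamps here are invented for illustration:

```python
def frame_gaps(timestamps_ms):
    """Frame-to-frame deltas from a list of frame presentation times."""
    return [round(b - a, 1) for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Ideal single GPU: one frame every 16.7 ms
single = [0.0, 16.7, 33.4, 50.1, 66.8]

# Two GPUs in AFR: GPU B's frames arrive 8 ms late every time
afr = [0.0, 24.7, 33.4, 58.1, 66.8]

print(frame_gaps(single))  # [16.7, 16.7, 16.7, 16.7] -- smooth
print(frame_gaps(afr))     # [24.7, 8.7, 24.7, 8.7] -- microstutter
```

Both sequences average 60fps, but the second one alternates between long and short gaps – the hiccup a player actually perceives.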

Two Titans, on the other hand – well, that’s just standard SLI. And for gamers who don’t want to utilise SLI but are willing to pay top dollar for single-card performance, Titan is a huge step forward.

The other reason Nvidia is launching Titan is simple: Because it can. Because, when you’ve got an incredible piece of graphics silicon, people want to show it off. And Titan is well worth showing.