In gaming, the frame rate – measured in frames per second, or fps – is king. That’s been true for the 12 years I’ve been reviewing computer hardware and then some. Frames per second has ruled the roost, virtually unchallenged. Some sites now incorporate minimum frame rates or display the frame rate at each second to give gamers a better sense of what the range is – but the metric hasn’t really changed. We’re still talking about the number of frames produced in one second.
Now, that’s changing. It started in September 2011, when Scott Wasson over at Tech Report decided to dive into what was happening inside the second.
I’ll let him explain why: “The fundamental problem is that, in terms of both computer time and human visual perception, one second is a very long time. Averaging results over a single second can obscure some big and important performance differences between systems.”
There’s a direct relationship between the number of frames displayed in a single second and the frame latency (how long it takes to draw each successive frame): latency is simply 1,000 ms divided by the frame rate. A video running at a constant 60 fps has a frame latency of 16.7 milliseconds; a video at 30 fps has a frame latency of 33.3 milliseconds. In a video game, frame latencies vary significantly depending on what’s going on inside the engine and how many GPUs are in the system. The human eye picks up on these variations, but because they occur within one second, our humble fps metric can’t capture them. Frame time analysis, on the other hand, can – and what it shows has proven to be increasingly interesting.
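To see why an average can hide this, consider a toy example (the numbers below are invented for illustration, not taken from any benchmark): two frame-time traces with identical average frame rates but very different pacing, compared using both the traditional fps average and a 99th-percentile frame time of the kind Tech Report popularised.

```python
import math

def avg_fps(frame_times_ms):
    """The traditional metric: frames rendered divided by total time."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_ms(frame_times_ms, pct):
    """Frame time below which `pct` percent of frames fall (nearest-rank)."""
    ordered = sorted(frame_times_ms)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

smooth  = [16.7] * 60              # even pacing, like the single-GPU case
stutter = [10.0, 10.0, 30.1] * 20  # quick, quick, slow - like the multi-GPU case

# Both traces total the same time, so average fps is identical...
print(avg_fps(smooth), avg_fps(stutter))
# ...but the percentile exposes the stutter: 16.7 ms vs 30.1 ms.
print(percentile_ms(smooth, 99), percentile_ms(stutter, 99))
```

The averages come out identical, yet one trace would feel noticeably choppier in play – which is exactly the gap the fps metric can’t express.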
Here’s an example video that shows the same scene running on a single GPU versus a multi-GPU configuration. The frame rate has been locked to 30 fps in both cases, which means the update speed should be identical. As you can see, it isn’t. The multi-GPU video shows a pair of quick updates followed by a slower, laggy update. The single-GPU configuration shifts from frame to frame at a consistent speed.
This data is valuable to both AMD and Nvidia. Frame latency evaluation allowed Tech Report to demonstrate objectively why playing games on Nvidia cards used to result in a subjectively smoother experience. AMD, to its credit, has rapidly released drivers that substantially improve the situation on single GPUs. Multi-GPU configurations are still prone to more stuttering than single-GPU setups, and again, AMD is scrambling to improve the subjective experience on CrossFire configurations.
The question of how to best integrate frame latency results into benchmark analysis has become more complicated of late. Nvidia has developed a set of tools it calls FCAT (Frame Capture Analysis Tool). Unlike Fraps, which captures data when the game engine hands a frame over to DirectX, Nvidia’s FCAT captures the final frame buffer output. Nvidia has built a comprehensive set of tools to aid this capture process and from what I’ve seen thus far, it’s an extremely sophisticated product. There’s no question that FCAT gives us more data to play with. Is it better than Fraps? Not necessarily. Fraps shows us what’s happening early in the pipeline, FCAT shows what completed output looks like. Long-term, I suspect both tools will be extremely useful.
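For the Fraps side of that comparison, the raw material is a per-frame timestamp log. As a hedged sketch, assume a Fraps-style “frametimes” file: a two-column CSV of frame number and cumulative timestamp in milliseconds (the exact column names vary, and the sample data below is invented). Turning it into frame latencies is just a matter of differencing successive timestamps:

```python
import csv
import io

# Invented sample of a Fraps-style frametimes log: frame number plus a
# cumulative timestamp in milliseconds.
sample_log = """Frame,Time (ms)
1,0.000
2,16.583
3,33.404
4,63.917
5,80.401
"""

rows = list(csv.DictReader(io.StringIO(sample_log)))
stamps = [float(r["Time (ms)"]) for r in rows]

# Per-frame latency is the gap between successive timestamps.
latencies = [b - a for a, b in zip(stamps, stamps[1:])]
print(latencies)  # the ~30 ms gap between frames 3 and 4 stands out as a hitch
```

Because Fraps records these timestamps where the engine hands frames to DirectX, this measures pacing early in the pipeline; FCAT would apply the same kind of analysis to the captured final output instead.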
Frames per second isn’t going to go away. It’s still an effective and succinct way to show game performance. We’re going to start supplementing fps data with frame latency information via Fraps, and possibly add FCAT as well a little further down the line. As a reviewer, my principal concern is making sure that the data we gather from these new methods is leveraged in a way that provides gamers with a better representation of what gameplay “feels” like, as opposed to simply burying readers in raw data, or hitting them with graphs that distort the actual play experience.
Long term, these tools will, I think, lead to better game experiences. Both AMD and Nvidia are paying close attention to the results, and working to create optimised drivers that deliver smooth experiences within a second, as well as excelling in more traditional measures of performance.