A year ago, after attending the 2012 Intel Developer Forum (IDF), I asked a question inspired by things I'd been hearing from the various presenters and PR folks at the show: "How do you benchmark experiences?"
The notion people were floating then was that what really matters about a system is how well it functions, not how well it scores on synthetic performance tests. Today, apparently, not much has changed.
Since immersing myself once again in the world of IDF last week, I've found myself facing an ongoing barrage of insistence that benchmark tests are passé at best and deceptive at worst, and that focusing on the usage of the final product is what's most valuable to consumers – or at least it's what should be. One presentation was particularly vocal, even going so far as to dissect the code of some major pieces of benchmarking software (no, we were never told exactly which pieces of software) to analyse the reasons they couldn't be trusted in the first place.
Strangely, time after time during the IDF 2013 show, Intel representatives touted how this processor is so-and-so per cent faster than that processor, and how we should be seeing scores however much better this time around. And I was invited to an event at the Intel campus in Santa Clara specifically for the purpose of running tests on the company's new Bay Trail tablet processing platform. Apparently benchmarking results are still important once in a while.
On one level, Intel is absolutely correct in this line of thinking. No, benchmark scores don't tell you everything about a product, and they should never be the sole basis on which anyone decides to buy one device rather than another. No one, from tech companies to tech reviewers to tech consumers, should rely on them exclusively, even those who know how to properly interpret them.
But benchmark scores are useful, perhaps even vital, at the point where questions of "experiences" stop being relevant. Not everyone can tell whether the result they're seeing from one kind of activity is actually good or merely okay, or whether a certain game looks unreasonably jerky on a device, or if that's just the best they can expect for the money they want to pay.
Scores from an objective – or, heck, even an admittedly non-objective – third party provide the crucial final piece of the purchasing puzzle. If two tablets appear to play video in exactly the same way, and that's what you care about, which should you choose? If you know you want to play games but don't personally know FRAPS from a Frappuccino, isn't seeing a list of comparable frame rates the best way for you to get the best system for the best price?
Ultimately, experiences don't tell you everything, either. Focusing on those, just like focusing on benchmarks, provides an incomplete picture that may inspire more confusion than clarity. Paying attention to a finalised product makes sense for a lot of reasons, foremost among them being the fact that it makes a company's fairly esoteric products easier for the company to sell and easier for the consumer to understand.
Of course, what happens in a testing lab and what happens in your living room aren't always (if ever) the same thing; far more variables come into play once you get the system home, and no company can test for everything you may want to do with your computer. Intel's rigorous scientific approach is a good place to start but, as is the case with the test result numbers people at Intel so frequently decry, it's not a good place to stop.
In a computing landscape that is changing every day and, more importantly, becoming more and more mainstream with each passing generation, the move from benchmarks to experiences is a good idea. But until there's a repeatable way to literally benchmark those experiences, so that the interested consumer knows not only what matters but why it matters and how it can help, experiences and benchmarks must work together to help consumers get the information, the answers, and the systems they need.