Google Glass tech specs reveal next to nothing about end-user experience

Yesterday, Google released the specs (pun intended) for Google Glass. The company promises “all day” battery life; the device includes 16GB of flash storage (12GB usable), a 5-megapixel camera, a 720p video camera, and a display that Google says is equivalent to viewing a 25-inch “high definition” (presumably 1080p) image from eight feet away. The glasses support 802.11b/g Wi-Fi and Bluetooth, and will ship with a micro-USB cable for charging.
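To put that display claim in more familiar terms, it can be converted into an angular size. The sketch below uses only the figures quoted above (a 25-inch diagonal viewed from eight feet) plus elementary trigonometry; the function name is ours, not Google's.

```python
import math

def angular_size_deg(size_in: float, distance_in: float) -> float:
    """Visual angle, in degrees, subtended by an object `size_in` inches
    across when viewed from `distance_in` inches away."""
    return math.degrees(2 * math.atan((size_in / 2) / distance_in))

DIAGONAL_IN = 25.0      # claimed equivalent diagonal
DISTANCE_IN = 8 * 12.0  # eight feet, in inches

diag_deg = angular_size_deg(DIAGONAL_IN, DISTANCE_IN)
print(f"Claimed virtual image spans roughly {diag_deg:.1f} degrees of view")
```

That works out to roughly 15 degrees of the wearer's visual field, which squares with descriptions of the display as small and peripheral rather than immersive.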

Battery life is a little squirrely; Google notes that “some features, like Hangouts and video recording, are more battery intensive.” We suspect battery life will depend heavily on workload; wireless communication is a significant drain on battery power in mobile devices. Presumably, that’s why Google capped the radio at 802.11g rather than the faster but more power-hungry 802.11n.

Google has also updated its Glass-specific Mirror API documentation to specify that developers are not allowed to serve advertising on the device, though Google itself retains that option. Gathering user data through the API client and passing it on to third parties is likewise prohibited.
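For context on what those restrictions govern: Glass apps don't run on the device itself; they push "timeline cards" to it through the Mirror API's REST interface. Below is a minimal sketch of building such a card, based on the Mirror API documentation as published at launch. The helper function is ours, and the OAuth token shown in the comment is a placeholder, not a real credential.

```python
import json

# Timeline insert endpoint, per Google's Mirror API v1 documentation.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text: str) -> dict:
    """Build the JSON body for a plain-text timeline card.
    Per Google's developer terms, the card must not contain advertising."""
    return {"text": text}

card = build_timeline_card("Hello from a Glassware sketch")
body = json.dumps(card)

# The actual insert is an authenticated POST, roughly:
#   POST https://www.googleapis.com/mirror/v1/timeline
#   Authorization: Bearer <OAUTH_TOKEN>   (obtained via OAuth 2.0)
#   Content-Type: application/json
#   {"text": "Hello from a Glassware sketch"}
```

Because everything flows through Google's servers, the no-ads and no-data-resale rules are enforceable at the API level, not merely on the honor system.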

But what do the specs actually mean? No one really knows yet.

When specs break down

Specifications exist to tell us something about what using a device will be like. If your old phone had 802.11g and the new one uses 802.11n, you expect generally better network performance, provided you have an 802.11n router. 28nm chips deliver lower power consumption and higher clock speeds than the parts they replace. More storage means more room for images, music, or video. Google Glass shatters such assumptions because we have very little idea what the device will be used for.

Sit back and ask yourself: how much storage do your glasses need?

Up until now, that’s been a ridiculous question. Looking back over the past decade, it seems as though the companies most willing to ask ridiculous questions are the companies that have built the products that changed the way we interact with the Internet and each other.

“What if my phone was nothing but a touchscreen?” was a ridiculous question, once upon a time. While I’m not arguing that Google Glass will spark an equivalent smartphone revolution, the privacy questions and security concerns swirling around Google’s glasses are a sign that even the technological “elite” aren’t sure what to make of the hardware.

Screen resolution is an easy point for people to seize on, but keep in mind that the actual display is quite small.

Granted, looking at a video mock-up of something isn’t the same as seeing it in person, but the point of these images is to illustrate what the end-user sees. The shot above is the “Preview” image immediately after snapping a picture; the GUI is typically even more transparent than this. The point of Glass isn’t to shove a conventional LCD in your face, creating an enormous blind spot; it’s to create an augmented display that you can ignore when it isn’t showing pertinent data.

How well will this work in practice? That’s unclear. One of the oddities of human visual processing is that things that look like enormous distractions in still images aren’t issues in real life. When we covered multi-screen gaming a month ago, multiple readers wrote in to describe how the black bezels between displays would ruin the experience. Based on screenshots, I could see why; I used to say the same thing myself. Then I actually tried it and found the bezels were no obstacle. The eye glosses over them and pays attention to the moving parts of the screen. According to my colleague David Cardinal, who briefly tried a pair of Google’s glasses, the test videos were razor sharp, and the screen stayed neatly out of the wearer’s primary field of vision.

Google Glass challenges how we perceive our environment. The technology could have a profound impact on what we see as “reality.” Or it might just drive a further boom in cute cat videos. Tech specs are obviously important for developers who want to build Glass applications, but there’s a great deal of uncertainty over what’s going to be possible or how people will want to use the technology. While I’m dubious of some of the privacy implications, it’s admittedly fascinating to watch the development process.

For more on Google’s glasses, check out our look at Google Glass one year on, and also Google Glass: So what's the big fuss about a glorified camcorder?