My main point of scepticism concerning Google Glass is not whether it can do what it claims to (it can) but whether we care. Smartphones have had voice controls for a long time now, and headsets have been hands-free for even longer. I have never wanted to use Siri when she was in my hand, and I don’t think that would change if she was perched on my face.
So when a Kickstarter for a product called Meta started up, saddled with the inevitable label of “Glass killer,” the world didn’t quite care. It wasn’t until a few days ago that I came to pay attention to the product at all – that’s when Meta announced that it had hired Steve Mann as its chief scientist.
For those who don’t know, Steve Mann is sometimes referred to as the “father of wearable computing.” He’s been involved in the field of computational photography for as long as the field has existed, and his famous skull-grafted Digital Eye device has earned him the (disputed) title of the world’s first cyborg.
There is essentially no name to drop that could carry more weight for a fledgling cyborg technology. Mann’s street-cred means that, if nothing else, Meta must have at least some features worth getting excited about.
And it does. Far from being the pure Glass competitor many assumed, Meta shares only one obvious trait with Glass – it puts a screen into glasses – and that’s about where the similarities end. Glass is a whole computer on your face, while Meta is simply a connected display and a sensor bar.
If you gutted Glass of its computational power and super-sized the screen to cover the whole eye, then duct-taped a Kinect to the top, you might have something roughly comparable to Meta.
Though the technology has made remarkably quick progress in shedding bulk and improving its style, the unfortunate truth is that the device still looks quite a bit like that duct-taped-Kinect metaphor sounds.
To hear Mann tell it, Meta represents a real step forward for display and interaction technology. As seen in the video above, Meta uses its range-finding cameras to realise the first truly practical, consumer-level version of the interface that Steven Spielberg has so successfully planted deep in the collective nerd unconscious.
The video shows the device’s ability to infer occlusion and to hold both real and virtual images in the same 3D space. Importantly, this occlusion technology does not require so-called “fiducial” markers – pre-placed reference points the camera can lock onto. It’s a level of fidelity we haven’t seen before, though it is much easier to believe in the wake of all the alleged improvements to the Xbox One’s new version of Kinect. We seem to have crossed a threshold in real-time range-finding cameras, and past that threshold lie many a geek’s fondest dreams.
Since Meta outsources all its computational work to an external computer, software matters enormously – and it’s another area where Meta shines. The device has its roots with the DIY hacker crowd, originally billing itself as a way to turn any third-party glasses display into a working augmented reality device. Today Meta is a more packaged product, but the software side has remained very open. It uses the Unity engine for development, which is fast becoming a de facto standard for cross-platform work, and this should help it in terms of both versatility and accessibility. Unity is already replete with libraries and pre-written code, so even amateur developers should find significant help realising their Minority Report imaginings.
Meta is being geared mostly towards the business crowd for presentations and the like, but the whole point of its open software approach is to let users and developers figure out the best uses for the technology themselves. By bringing in Steve Mann, the inventors of Meta have made it clear that they mean business. Mann seems like the sort who would jump on board out of pure enthusiasm rather than for a lavish pay cheque, but if Meta takes off to the extent that it could, hiring him will end up looking like a very smart bet indeed.