Augmented Reality: Fact or Science Fiction?

For years, Augmented Reality has been a technology threatening to happen. Like its cousin, virtual reality, it’s something of a science-fiction staple, and a technology that has been hovering on the sidelines for at least a decade. In the last four years it has crept steadily out onto smartphones and games systems without ever really doing much to invade the mainstream consciousness. At times it’s looked like the future, and at other times like next year’s big damp squib. With new software and new devices, however, that situation is beginning to change. The time may have come for Augmented Reality to step out of the wings and into the limelight.

What is Augmented Reality?

Augmented Reality is best described as the combination of sensory experience of the real world with computer-generated visuals and audio to produce something that ‘augments’ reality with relevant imagery, speech, sound and/or text. In a way, it’s something many of us already do, for instance while looking at Google Maps on our smartphone as we navigate to a bar where we’re meeting friends. The difference is that instead of looking at the smartphone screen, looking up at the view in front of us and trying to relate the two in our heads, Augmented Reality presents us with the information as a visual overlay; the technology takes the directions from your navigation software and maps them directly onto the world in front of you.

Unlike virtual reality, it’s not about presenting us with a computer-generated alternate reality, where actions in the real world are mapped to actions in the virtual realm, but about giving us a ‘reality plus’, bringing electronic and sensory data together in the most immediate and logical way, whether to inform, guide, educate or simply entertain.

It’s a concept that goes back to Boeing in the early 1990s, where an engineer called Tom Caudell designed a system to help workers install cables into aircraft using computer-generated overlays. Further research was conducted by the US Air Force Research Laboratory over the next few years, and that fed into work at the US Department of Defense’s research wing, DARPA, looking for ways to personalise the kind of heads-up displays used in aircraft. In fiction, meanwhile, a pair of augmented reality glasses became the key prop in William Gibson’s 1993 novel, Virtual Light.

The first step towards widespread implementation came in 1999, with the creation of the ARToolKit by Hirokazu Kato of the Nara Institute of Science and Technology (the software was later released by the University of Washington’s HIT Lab). Designed originally for desktop PCs, this software library was eventually ported to the early smartphones, and it was with these that Augmented Reality began to take a more useful form. You see, to work, Augmented Reality requires certain key elements: a camera to take in a real-world video signal, a processor capable of analysing that signal and overlaying computer-generated imagery on it, and some form of display technology to show the results. Meanwhile, motion sensors, GPS, voice recognition and Internet connectivity all make it possible to gather more relevant data and enable a higher level of interaction. If all of that sounds familiar, it’s because smartphones have it all built in. Smartphones might be great for making calls, taking snaps and posting Facebook updates, but they might as well have been designed with augmented reality in mind.
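
As a rough illustration of that camera-processor-display pipeline, here is a minimal sketch in Python, assuming OpenCV with the optional aruco contrib module (the module’s API has shifted between OpenCV releases, so this follows the older free-function form). It grabs frames from a webcam, looks for the kind of square black-and-white markers that ARToolKit-style systems popularised, and draws a simple overlay on any it finds; it isn’t a reconstruction of any particular app.

```python
# Minimal marker-based AR loop: camera frame -> detect marker -> draw overlay -> display.
# Assumes opencv-contrib-python for the cv2.aruco module (older free-function API);
# an illustrative sketch, not a port of ARToolKit or any shipping app.
import cv2

def main():
    cap = cv2.VideoCapture(0)  # default webcam supplies the real-world video signal
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            # Stand-in for real computer-generated imagery: outline and label each
            # detected marker, where a full AR app would render a 3D model at that pose.
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imshow("AR sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```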

AR on your Phone

The most obvious form this takes is the AR browser, where Wikitude and Layar are the leading apps. AR browsers combine visual recognition systems with compass, GPS and motion sensor data to identify your location and your current field of view and overlay labels for any points of interest on the screen. These points of interest are drawn from online resources like Yelp or Wikipedia, or from ‘layers’ or ‘worlds’ created by developers, users and interested companies.
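
As a hedged sketch of what that overlay step involves, the Python below computes the compass bearing from the user to a point of interest and turns it into a horizontal screen position, given the phone’s heading and the camera’s field of view. The coordinates, heading, field of view and screen width are made-up values for illustration, and a real AR browser would also account for pitch, roll and distance.

```python
# Sketch: place a point-of-interest label on screen from GPS and compass data.
# All figures below are illustrative; a real AR browser would also use pitch, roll
# and the vertical field of view to position labels fully in 2D.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg, screen_width):
    """Horizontal pixel position of a POI label, or None if outside the field of view."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle from screen centre
    if abs(offset) > fov_deg / 2:
        return None
    return int((offset / fov_deg + 0.5) * screen_width)

# Hypothetical example: a user in central London, phone pointing north-east,
# labelling a pub a few hundred metres away.
user = (51.5080, -0.1281)
pub = (51.5101, -0.1240)
b = bearing_deg(user[0], user[1], pub[0], pub[1])
print(screen_x(b, heading=45.0, fov_deg=60.0, screen_width=1080))  # pixel column, or None
```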

Using Wikitude or Layar is an interesting experience, and your mileage will vary depending on where you are and what you’re actually looking at. The amount and quality of data available in the different layers or worlds varies wildly, and while in the centre of a major city there’s an overload, as pubs, landmarks, venues and eateries all jostle for attention, in more rural areas you might find yourself short of content. It’s not unknown for the browser to tell you that the pub you’re actually looking at is located on the other side of the street, but it can also tell you things about the world around you that you never knew, or didn’t know existed. The more you play, the more fascinating it becomes.

Other AR apps provide simple navigation for pedestrians (smartphone AR isn’t such a great idea for drivers), let you tag and later find your car in a busy car park, or help you find and rate local pubs and restaurants. You’ll also find some fascinating apps like Pocket Universe or Google Sky Map that turn a view of the sky at night into a navigable star chart with every planet, object and constellation clearly labelled.

The big new thing in AR is the use of real-world objects to trigger AR visuals. In the past, simple square black-and-white targets have been used to summon and position computer-generated imagery, but now Layar and another company, Aurasma, have systems that recognise, say, a poster, a newspaper ad or a magazine cover and launch a relevant, interactive, fully animated 3D object that can be seamlessly overlaid on the real-world view. Examples might include magazines with interactive elements that only come to life when viewed through a browser, movie posters that transform into trailers once within the browser window, or ads that reveal interactive surveys or special offers.

Aurasma can work with conventional print magazines to bring the pages to AR life.
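
Neither Layar nor Aurasma has published exactly how its recognition works, but the general idea of matching a camera frame against a stored reference image can be sketched with standard feature matching. The Python below, again assuming OpenCV, counts good ORB keypoint matches between a reference poster image and a camera frame and treats a high count as the trigger; the filenames and threshold are placeholders.

```python
# Sketch: decide whether a camera frame contains a known poster or ad, using ORB
# feature matching in OpenCV. Filenames and the match threshold are placeholders.
import cv2

def looks_like_trigger(reference_path, frame_path, min_matches=25):
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    if ref is None or frame is None:
        raise FileNotFoundError("could not read one of the input images")

    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_frame, des_frame = orb.detectAndCompute(frame, None)
    if des_ref is None or des_frame is None:
        return False

    # Brute-force Hamming matcher plus Lowe's ratio test to keep only good matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_ref, des_frame, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Enough consistent matches -> treat the frame as containing the trigger image;
    # a real system would go on to estimate a homography and anchor the 3D overlay.
    return len(good) >= min_matches

if __name__ == "__main__":
    print(looks_like_trigger("poster_reference.jpg", "camera_frame.jpg"))
```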

Understandably, there’s a certain level of excitement about AR in the advertising and marketing industries. A 2010 campaign by Tissot, for example, saw in-store sales of watches rise by 85 per cent, while a recent report by YouGov claimed that “Augmented reality will migrate on to the high street, and once there, it will certainly be used extensively in shopping.”

Games and Entertainment 

However, there’s another area where AR is finding a natural home: games. It certainly helps that the idea of mixing CG visuals with real-world video is nothing new in the gaming sphere. Sony was combining the two in 2003 with EyeToy on the PS2, and it has been a recurring theme with EyePet, The Eye of Judgment and the PlayStation Move, not to mention Invizimals on the PSP.

Reality Fighters on PS Vita combines real-life locations and CG fighters in one big AR beat-em-up.

However, AR has come closer to the mainstream with its adoption by the camera-packing, AR-friendly Nintendo 3DS and PlayStation Vita. The 3DS launched with a selection of AR games, triggered by target-sporting AR cards and bringing the worlds of Mario, Zelda and Kid Icarus to life on the table in front of you. With Vita, Sony has gone even further, mixing real-world video feeds with fully playable CG graphics in Little Deviants, Reality Fighters, Cliff Diving and Fireworks, while promising to take it out into the wider world with the GPS-powered virtual-graffiti game, Tag.

With all this going on, games could be the Trojan horse that takes AR into the mainstream. Even those who don’t want to invest in a new gaming handheld will enjoy the novelty of using a smartphone to blast ghosts or space invaders from their general vicinity, as they can in AR Invaders or SpecTrek, while Aurasma has been demonstrating a new 3D engine that combines detailed CGI creatures with real-world scenes, placing dinosaurs in the public spaces of Paris and compositing them with reasonable success into the view. Today’s AR games are simple and experimental; tomorrow’s could be far more compelling.

Google’s AR Revolution

All of this stuff is interesting and even sort-of cool, but it might not be enough to make AR the next must-have tech. What might is an entirely new class of device: augmented reality glasses. These aren’t a new concept to sci-fi fans, having appeared in Virtual Light, a swathe of other cyberpunk fiction and, more recently, Charles Stross’ fine Halting State. DARPA has been trialling various concepts with soldiers in the field over the last few years, and last year Zeal Optics was showcasing AR ski goggles that could overlay GPS, speed and altitude data on the glass while you skied. Eye-display specialist Vuzix was even demonstrating prototype augmented reality glasses at this year’s CES. However, what has really put Augmented Reality back on the map is Google’s April announcement of its own research project, codenamed Project Glass.

At present, details are sketchy: just a brief introduction, some photos and a two-and-a-half-minute concept video. The glasses themselves amount to little more than a streamlined light bar with an integrated, semi-transparent video screen, though it’s known that they incorporate a camera and audio, while a new patent reveals that they may be controlled with the aid of an infra-red-reflecting ring, bracelet or fingernail decal. Meanwhile, the video showcases video chat, social networking, GPS navigation and, oddly, ukulele training.

Not everyone is convinced. Blair MacIntyre, director of the Augmented Environments Lab at the Georgia Institute of Technology, told the US magazine Wired that “You could not do AR with a display like this. The small field of view, and placement off to the side, would result in an experience where the content is rarely on the display and hard to discover and interact with.” “In one simple fake video,” MacIntyre complained, “Google has created a level of over-hype and expectation that their hardware cannot possibly live up to.”

To be fair, Google has admitted that the video doesn’t accurately reflect the current state of research; where the video shows data appearing across the full visual frame, existing prototypes are restricted to a smaller display above the right eye, and at the moment features and functions are either limited or simply not working. All the same, Project Glass has proved a compelling vision of the future, and Project Glass researcher Sebastian Thrun has been able to post photos taken using a prototype to his Google+ account, including shots that would only be possible with a hands-free device.

Longer-term arguments against Project Glass persist, too. Not everyone wants their experience of life augmented by a constant flow of information, and there are obvious risks with any obstruction to vision, not to mention the distraction of using Project Glass as you try to navigate urban spaces. For some, a smartphone is a more realistic portal for data.

However, there are strong arguments in the concept’s favour. With data arriving hands-free, wherever and whenever you want it, and driven by natural speech and gesture interfaces, AR glasses could provide the ideal window onto the huge wealth of data out there on the Web. Imagine being able to look up information on a building as you pass it, or getting sat-nav instructions beamed into your field of view without having to wander around staring down at a mobile phone. Why struggle for a name when your AR glasses can recognise the face and feed you a Google+ or Facebook profile? And what could be simpler or more appealing than a camera that’s always there, and that snaps without you even touching a button?

In education the potential is immense. Imagine history field trips that transform ruins into gleaming AR cities or regress modern streets back through the ages before your eyes. And that’s before we even think of gaming, with convincing AR spook hunts, multiplayer war games and AR World of Warcraft-style MMORPGs all a distinct possibility.

A Window for the Cloud

The cloud is becoming indispensable to the way we work, learn, socialise and live. Social networking and messaging services, reference resources like Wikipedia or IMDb, business intelligence applications, and travel and navigation services all make it easier to find our way to meet a friend, track down somewhere nearby to eat, let friends and family know what we’re up to, get the information we need to make critical decisions, or simply find out whether a product we’re about to buy is any good.

For now the smartphone is the best and most convenient device for accessing this information, at least on the move. However, there are times when we need both hands, or when we don’t want to pull a phone out of our pocket and wave it around. If we want always-on access to data, then AR is the next, obvious step, and Google’s research now looks like a bid to make headway in this arena before the obvious rivals, like Apple, can crowd in.

With Project Glass, all the data from Google’s cloud-based services can be right there in your field of view.

Google is far from the only interested party. Beyond Vuzix, the leading eyewear manufacturer Oakley has revealed that it has been working on technology related to AR glasses for the last fifteen years, with over 600 patents in the bag. In an interview with Bloomberg, Oakley’s CEO described the technological barriers as “significant” but said that “Ultimately, everything happens through your eyes, and the closer we can bring it to your eyes, the quicker the consumer is going to adopt the platform.”

Glasses might not even be the final step on the road to AR; Innovega and the US Department of Defense have already talked about AR contact lenses with a heads-up display projected onto the lens. And for the true cyberpunk enthusiast, implants are the ultimate goal, sending data directly to the central nervous system. Whether the wider world is ready for that is something for a whole other feature.
