Thoughts after using Google Glass for almost a year: Is this the future of wearables?

I can say with some certainty that there has never been an experience quite like Google Glass before. And I don’t simply mean a computer with a glass prism that projects an image into your eye; I’m referring to the entire project to date. The journey from a fantastic, partially vapourware video on YouTube, through a crazy skydiving demonstration, to the tens-of-thousands-strong Explorer program today has been a wholly unique and almost entirely public experience.

It feels strange to be sitting here writing about a piece of technology that I have had in my possession for almost a year that comparatively few people have ever seen in person, let alone touched. All the same, this device has become a significant part of my day-to-day life, so it felt important to update my initial impressions of Google Glass with everything I have learned along the way.

Almost a year with Glass

When I first stepped out of the Google Glass space in NYC’s Chelsea Market, right across the street from Google’s New York office, I had a number of concerns. Google didn’t seem to have any plans for those of us with prescription lenses, and my attempts to bend the titanium nose pieces to fit over my existing frames had made the headset difficult for anyone else to wear comfortably.

The interface was almost entirely text-based, the Bluetooth connection with my HTC One was questionable on good days (occasionally requiring a total reset just to pair again), and the battery had dropped 35 per cent on the 20-minute walk to Penn Station.

On top of all that, I was walking around with a foreign piece of technology on my face, which initially caused a heightened sense of self-awareness that bordered on paranoia whenever someone looked at me. By the time I got home on that first day I was filled with equal parts excitement about this headfirst dive into something new, and dread that I had just spent $1600 (£960) on a big mistake.

If it wasn’t obvious by now, the dread wore off quickly and I began to enjoy an ever-evolving piece of technology.

Just as promised when the hardware was first announced, Google has released monthly updates to Glass. Each update brought new features, bug fixes, battery life improvements, and even a dramatic improvement in the quality of images the simple 5-megapixel camera takes. Everything about Glass has changed, even the physical hardware, now that version-two units have been shipped to nearly every Explorer in exchange for the originals.

There are prescription frames for Glass now, and the apps for both iOS and Android offer strong Bluetooth connections that even make solid video chats possible. Compared with the initial release of Google Glass, the device today may as well be an entirely different product, because every facet of what the user can see and do has changed since those early days.

Currently I am wearing Google Glass for the majority of my day. I used to take it off when I was writing because the combination of glasses, Glass, and headphones caused unnecessary trouble behind my right ear. Now that the prescription frames are a part of the experience, it’s much more comfortable to wear throughout the day. The frames rest on a hook by my bed where they charge overnight, and in the morning they are the first thing I reach for after turning the alarm off on my phone.

Because I am constantly trying out new apps from developers that haven’t yet made it to the Glass store, my battery life is often all over the place. In fact, the nature of apps like Google Hangouts makes it hard for me to issue a blanket statement about battery life. When Glass was first made available, the battery would last for roughly 45 minutes of screen-on time. Currently that is a lot closer to two hours, unless you’re trying to record video, watch YouTube, or participate in a video Hangout. It is dramatically improved over the initial release, but your mileage is going to vary considerably depending on usage patterns.

The primary user interface is split unevenly between touch gestures and voice commands. You can reach most of the important things with just your voice, meaning you can access your main menu of commands, issue them, and continue on with life. This is great for asking Glass to take a photo and then sharing it to your social network of choice (complete with voice transcription to add a note to the photo), or asking for turn-by-turn navigation to a specific location. It’s great for accessing a recipe when you’ve got both hands buried in a bowl of ground beef, and it’s even great for discreetly translating a menu from German to English.

What it’s not great for is checking your battery life, connecting to a wireless network, or going back to that card you dismissed 20 minutes ago that is now floating somewhere in your timeline. All of these things still require your fingers, and maybe that’s how it should be, since they demand a great deal more of your attention and probably shouldn’t be attempted when you’re driving, for example. Still, being able to ask Glass how much battery life is left and getting a verbal response seems like a no-brainer.

The camera for Glass is dramatically better than it was at launch, but the experience is still somewhat hit or miss for me. It takes some training to learn how to frame a shot without seeing what the camera can see. The idea behind the location of the Glass camera is that you are taking a photo of what you see, but in reality you’re taking a photo of just slightly to the right of what you see. Even with practice, if your head is tilted and you don’t notice, or you are on an uneven pavement, the photo winds up crooked and occasionally even useless. Video is a little easier, because you can see what is being recorded through the display. Keeping the display off while taking a photo is an awkward trade-off made to spare the battery, but being able to frame your shot would mean far fewer misfires, and fewer wasted shots would likely save battery in the end.

You can always edit photos after the fact, since Glass dumps every photo and video you have captured into your Google+ account as soon as your device is connected to power and the Internet at the same time. You don’t really have a choice about this; it just sort of happens. If you’ve taken a lot of photos or videos you don’t care about, you now have to go and delete them from your account to stop them from taking up space. Ultimately this saves people from having to dig through Glass’s on-board storage with a PC, which is better for a lot of people, but having no control over this particular function isn’t great if you don’t use Google+ or you don’t want the content eating up your storage space.

Glass Vignettes is a surprisingly useful feature for the platform. It lets you add context to a photo by embedding a screenshot of whatever is on your display at the time into the top-right corner of the shot. It’s a fun tool for adding information, and is frequently used by Explorers to enhance a photo taken with Glass. It requires using the physical camera button and navigating a menu or two to add the screenshot, but once you have it there’s a lot of fun to be had. When this feature was first added I didn’t see much use for it outside of an educational demonstration of the platform, but I find now that I use it at least once a day for something fun.

Driving with Glass

It’s probably important to talk a little more about using a car while wearing Glass. Driving while wearing Google Glass is the absolute safest way to use turn-by-turn navigation. It’s not even a conversation: Glass is by far the best possible experience for drivers who need directions. Before you start the car you can either ask Glass for navigation to a specific location, or start navigation from a location that is in Google Now thanks to a calendar invite or a previous search. The navigation starts, but the screen stays completely off unless you wake the display by tilting your head up slightly or you need to know the next set of directions. The display quickly goes back to sleep after delivering them, but continues to navigate until you have arrived at your destination. Importantly, there’s no need to ever look away from the road to glance at a window-mounted GPS unit, the in-car display system, or your phone as it sits in the cup holder.

While there’s currently a national conversation going on in the US regarding the safety of Google Glass in vehicles, and whether or not the technology should be allowed while driving, it cannot be overstated how vastly superior this experience is to every other navigation system I have ever used.

Google Glass apps

Glassware apps have made a huge difference in how Glass is used for most people. These apps behave more like native Android apps, just optimised for the Glass experience. Google has worked with small and large developers to fill the store with the current list of apps, which includes a tiny bit of everything right now. Unlike the free-for-all of the Google Play Store, Glassware is a significantly more difficult club to get into. In fact, some developers have opted to work in their own environments for the time being, like the Skybox app.
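For anyone curious what Glassware actually looks like under the bonnet, apps can be built either natively on the device or as cloud services that push cards into the wearer’s timeline through Google’s Mirror API. The Python sketch below shows the general shape of the latter; the OAuth access token is a placeholder you would obtain through Google’s standard OAuth 2.0 flow, and the whole thing is illustrative rather than a finished Glassware service.

```python
import requests

# Mirror API endpoint for inserting cards into the wearer's timeline.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

# Placeholder: a real service would obtain this via Google's OAuth 2.0 flow.
ACCESS_TOKEN = "ya29.placeholder-oauth-token"


def insert_card(text):
    """Push a simple text card onto the wearer's Glass timeline."""
    card = {
        "text": text,
        # Built-in menu actions: let the wearer have the card read aloud or dismiss it.
        "menuItems": [{"action": "READ_ALOUD"}, {"action": "DELETE"}],
    }
    resp = requests.post(
        MIRROR_TIMELINE_URL,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json=card,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    insert_card("Hello from a Glassware sketch")
```

The appeal of this model is that the heavy lifting happens in the cloud, so a Glass "app" can be little more than a web service that decides when a card is worth interrupting you for.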

On top of a difficult acceptance process, there’s no current way to monetise Glassware apps without Google stepping in and shutting you down. As a result, there are only a few Glassware apps. Those apps, however, are both impressively fun to use and visually stunning additions to your user experience. Google may have a heavy hand here, but the results can’t be argued with.

With these apps and a good pair of earbuds, the computer on my face becomes my streaming music player when I walk to the store, a quick glance into what is interesting in my RSS feeds, and even the controller for my Hue light bulbs thanks to some IFTTT magic, all with nothing more than my voice.
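To give a sense of what that IFTTT magic is doing on the other end, the Hue bridge simply accepts small HTTP requests on the local network. Below is a minimal sketch of the kind of call that ends up toggling the bulbs; the bridge IP, bridge username, and light ID are placeholders for your own setup, and the voice-to-IFTTT trigger itself is assumed rather than shown.

```python
import requests

# Placeholders: your Hue bridge's local IP, an authorised bridge username,
# and the numeric ID of the bulb you want to control.
BRIDGE_IP = "192.168.1.10"
BRIDGE_USERNAME = "your-bridge-username"
LIGHT_ID = 1


def set_light(on, brightness=254):
    """Turn a Hue bulb on or off, optionally setting brightness (1-254)."""
    url = "http://{}/api/{}/lights/{}/state".format(BRIDGE_IP, BRIDGE_USERNAME, LIGHT_ID)
    payload = {"on": on}
    if on:
        payload["bri"] = brightness
    resp = requests.put(url, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    set_light(True, brightness=200)   # roughly what a "lights on" trigger fires
    # set_light(False)                # and the matching "lights off" request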

Glass has a lot more to do in order to make this platform viable for most people. The first place to look is probably the timeline. Glass treats every single event that crosses your eye exactly the same, and once you have seen or dismissed it, that item gets placed in a single-strip timeline that goes all the way back to when you first turned the hardware on. This works well if your Glass unit gets fewer than 30 notifications in a day, but when you get five notifications an hour it causes problems.

In fact, if you use Google Hangouts on Glass and talk to more than two people in a day, you can forget about going back to those conversations after an hour or two without cranking away at the touch sensor on Glass. The timeline system is a great idea, but it feels like it needs to be a multiple lane highway instead of a single dirt path.
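For what it’s worth, that same timeline is exposed to Glassware through the Mirror API, so a cloud service can at least page back programmatically through the cards it has inserted, even when swiping to them on the device is painful. A minimal listing sketch, again with a placeholder OAuth token:

```python
import requests

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"
ACCESS_TOKEN = "ya29.placeholder-oauth-token"  # placeholder: obtained via OAuth 2.0


def list_timeline_cards(max_results=10):
    """Yield the service's timeline cards, following nextPageToken paging."""
    headers = {"Authorization": "Bearer " + ACCESS_TOKEN}
    page_token = None
    while True:
        params = {"maxResults": max_results}
        if page_token:
            params["pageToken"] = page_token
        resp = requests.get(MIRROR_TIMELINE_URL, headers=headers, params=params, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        for item in data.get("items", []):
            yield item
        page_token = data.get("nextPageToken")
        if not page_token:
            break


if __name__ == "__main__":
    for card in list_timeline_cards():
        print(card.get("id"), card.get("text", ""))
```

On the device itself, though, there is no equivalent shortcut, which is exactly why the single-strip design starts to buckle under a busy day of notifications.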

Google needs to take a huge step forward with its own apps and own this platform, as well as make the tools available for others to do so. I should be able to issue a voice search for my Gmail, or interact in real time with a to-do list on Google Keep that is synced to my phone. Ultimately it comes down to how well the device syncs with my phone. Google does a great job working through its cloud to make a lot of this work, but since I have a local connection to my phone, it just seems to make sense that I’d be able to use it. My phone should be able to push content to Glass, like launching navigation from the Google Maps app. Glass floats in between being an amazing companion device for my phone and existing as its own standalone piece of hardware. Unfortunately, as long as it continues to straddle that line, I don’t think it will truly excel as either.

Google Glass and other people

If there’s one early perception that seems to have happily fallen by the wayside, it’s the confusion surrounding whether or not Glass is an augmented reality device. I think it is possible for developers to create apps for Glass that offer basic AR features, but Glass was not built for this. Augmented reality headsets are going to exist in parallel to devices like Glass, but will ultimately serve other purposes. Wearables like the Epson BT-200 are designed with those experiences in mind, but aren’t designed to be worn all day, every day. It’s an important distinction to make as these two hardware markets continue to define themselves, especially since there’s some natural bleed-over between the two.

Perception is an important part of the Google Glass experience right now, and unfortunately that hasn’t changed much at all since people started wandering the planet with them almost a year ago. It’s a rarity that I encounter someone in public who asks about the hardware I am wearing and doesn’t immediately follow their initial inquiry with “are you recording everything right now?” or some variant thereof. Any time something negative or socially inappropriate happens surrounding Glass, be it a traffic violation in California or an incident at a cinema involving Homeland Security, it’s the first thing that comes to mind when someone starts up a conversation with me about the hardware. Fortunately, I don’t mind talking through those concerns and explaining why it’s so crazy that those things happened, but not every wearer will feel the same, and it’s likely to cause friction with some.

I’ve yet to meet anyone in person who has expressed a desire for me to remove the hardware, but I have been in work environments where it has been requested. Roughly half of the people I talk to about the hardware will ask to try them on, and most of the other half are excited to do so if invited. I’ve lost count of how many people have worn my Glass unit, but I can say for sure that I have never had a single person fail to be impressed with what was on the other side of the display.

The expressions range from “hey that’s really cool” to a total jaw drop as the display hovers in front of them, but it is always a positive experience once someone tries them on. If I’m in a public place, it spreads like wildfire. I’ve been in coffee shops where a single positive experience led to 20 more people wanting to try the hardware out for themselves. It’s exciting and futuristic and new, and watching the cynicism and doubt melt away into smiles is an incredible experience.

The future of Glass

Lots of people want to know about the future of Glass – when they will be able to buy one, what the retail cost will be, and whether or not it is really worth spending the money. At this point, I think it’s safe to say that no one really knows the answer to the first two questions. Google seems perfectly happy to continue using the Explorers to build new features and work with developers to push the limits of the hardware. When all is said and done, factoring in the cost of the stereo earbuds, frames, and prescription lenses, Google Glass is the most expensive piece of technology I own that doesn’t have four wheels and an engine. From a price standpoint it is clearly not as useful as my MacBook or my Nexus 5, but it cost me the same as those two things combined. This price is clearly not going to work in a retail environment, but if you’re in it for the ride then now is a great time to pick one up in the States.