When Apple introduced the iPhone with the touch-based iOS, it drastically changed the way people interact with their smartphones. When it extended the touch interface to the iPad, it set in motion an industry stampede to create PCs, laptops, tablets, and smartphones with touch-based interfaces.
This was a real milestone in the tech world. For decades we navigated our PCs with a keyboard, mouse, or trackpad. Apple was not the first to bring touch to tablets or smartphones, but it gets the credit for commercialising it and making it the de facto standard for today's generation of UIs.
Recently, however, I've tested two new products that I believe give us an early glimpse of the next evolution in user interfaces. They will be just as ground-breaking as the graphical user interfaces and touch UIs in the market today.
The first is the new Samsung Galaxy S4, and specifically two of its features – Air View and Air Gesture. If you are in the email application on the S4, Air View lets you "hover" your finger over an email to preview the subject line or body. It works only in the email app for now, but the software community will soon get the tools to apply it to other apps. This gesture alone is a game changer because it takes the limited information on a small screen and blows it up in context, so you can see more without opening the message.
The second feature, Air Gesture, is just as cool. Have you ever found yourself with messy hands while following a recipe, needing to turn (or swipe) to the next page to read the rest of the instructions? With Air Gesture, you can simply wave your hand in front of the phone's screen to proceed; you never need to touch it.
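Under the hood, a feature like this boils down to watching a stream of hand-position samples from a proximity or camera sensor and deciding whether they add up to a deliberate wave. A minimal sketch of that idea, in Python, might look like the following; the function name, thresholds, and sample data are all illustrative assumptions, not Samsung's actual implementation or any real SDK.

```python
# Hypothetical sketch: detecting a horizontal "air swipe" from successive
# hand x-coordinates (e.g. in millimetres) reported by a gesture sensor.
# All names and thresholds here are illustrative, not a real vendor API.

def detect_swipe(x_positions, min_distance=100, min_samples=3):
    """Return 'left', 'right', or None for a sequence of hand x-positions."""
    if len(x_positions) < min_samples:
        return None
    displacement = x_positions[-1] - x_positions[0]
    # Require consistent motion in one direction, so jitter doesn't count.
    steps = [b - a for a, b in zip(x_positions, x_positions[1:])]
    if displacement > min_distance and all(s >= 0 for s in steps):
        return "right"
    if displacement < -min_distance and all(s <= 0 for s in steps):
        return "left"
    return None

print(detect_swipe([-60, -20, 30, 80]))   # hand sweeping right -> "right"
print(detect_swipe([80, 30, -20, -60]))   # hand sweeping left -> "left"
print(detect_swipe([0, 5, 0, 5]))         # small jitter -> None
```

The interesting design problem, and the reason this is harder than it looks, is exactly that last case: filtering out incidental motion so the page only turns when you mean it to.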
To be fair, Microsoft has had gesture-based user interfaces on the Xbox Kinect for almost two years. To date though, this has only been designed for game consoles and has not yet migrated to PCs. While these two gestures are only on the S4 today, I understand they will eventually find their way to Samsung's Galaxy Tab later this year.
The second product that has inspired my confidence in gesture-based interfaces is the Leap Motion Controller. It sits in front of a PC monitor and, if the software supports it, turns Windows into a gesture-based UI. It can also be used with a laptop via a USB dongle. Leap Motion has seeded over 10,000 developers with SDKs to make their apps work with the device and after it ships this July, we should start to see a good amount of enabled apps. HP considers this so innovative that it has committed to using it in many of its future products.
As of now, Leap Motion lets you interact with a game in 3D, essentially using your hands as the controller. With eventual support from the software community though, it's easy to imagine being able to navigate web pages or even virtually mould pottery on the screen just by gesturing with your hands. The key thing here is that the Leap Motion technology is an enabler and once the software community gets behind it, it could greatly transform the user interfaces of today.
Apple, Microsoft, Intel, and others are also working on gesture-based UI technologies and they all believe gestures represent the next significant development in computing interfaces. In fact, Intel has a human factors project underway that focuses on gestures, and while not much is known about it, I would not be surprised to see the controller for gesture UIs as part of the SoC in the future.
While many had hoped voice would be the next big thing in user interfaces, there is still a lot of work to be done to bring it into the mainstream. I have no doubt that voice commands will someday be our primary means of computer interaction, but for now it's all about gestures.