The end of the hardware design battles in 2016 makes way for an AI-enabled mobile future

Interacting as little as possible with a device is becoming a key requirement for a successful technology experience.

In 2016, the mobile industry again witnessed major developments across both hardware and software. Looking at how the mobile market performed in general, the manufacturing issues that made negative headlines for some phone makers earlier in the year were balanced out by major new releases later on, such as the Google Pixel phone and the iPhone 7, both of which were received with enthusiasm.

Both the Google Pixel and the iPhone 7 are hardware devices of the highest spec. Whilst the Google Pixel phone is available in Quite Black or Very Silver and has a myriad of colourful cases available for individual customisation, the iPhone 7 comes in five colours from Jet Black to Rose Gold. But slick hardware design can only go so far to attract potential customers and new users; what sets new mobile devices apart from the rest nowadays is their software capabilities. For the last two decades, competition in the mobile market has been mainly about hardware innovations, from the brick-sized phones of the early nineties to the tiny Motorola PEBL, released in 2005 with a screen measuring just 1.8 inches.

Since the modern smartphone era began with the first iPhone in 2007, new releases have mainly centred around bigger screens and more powerful cameras. This year, however, core software innovation has finally moved back to centre stage, ringing in a new age for mobile device manufacturing that goes well beyond camera capabilities and screen sizes. Now is a good opportunity, therefore, to look at what happened in the mobile industry in 2016 and how the industry is set to evolve over the coming months.

2016, the year when Slippy overtook Sticky and AI moved in  

The days of designing apps around sticky experiences, built to keep people in apps (where success was measured by time spent in-app), are rapidly coming to an end. Interactive notifications, Bluetooth beacons, wearables, continuity and omni-device working mean that certain app experiences need to be designed for minimal interaction.

Mobile is meant to help users do things more easily than they can elsewhere, so creating experiences that are smart and slippy, as opposed to sticky, has been one of the biggest trends this year. From an enterprise mobility perspective, creating slippy experiences that require minimal interaction has always been a priority. Apps that help users complete tasks easily and efficiently make staff more productive than apps that have numerous features but require training because of their complex structure. For example, an app that offers contextual push notifications, where a user can accept or reject a suggested action, such as a safety alert, with one tap is more useful than an app that requires several steps before the user can reach the information most relevant to them at any given time; a rough sketch of this one-tap pattern follows below.
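As an illustration of that one-tap pattern, here is a minimal sketch of a contextual safety notification raised from a web app's service worker, using the standard Push and Notification APIs. The alert wording, tag and acknowledgement endpoint are all hypothetical; a production app would attach real payload data and authentication.

```typescript
// Minimal service-worker sketch: a contextual alert the user can act on
// with one tap, without opening the app. Endpoint and wording are
// hypothetical.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("push", (event: PushEvent) => {
  event.waitUntil(
    self.registration.showNotification("Safety alert", {
      body: "High winds reported at your site. Acknowledge?",
      tag: "safety-alerts",
      // Action buttons give the user a one-tap accept/reject choice.
      actions: [
        { action: "accept", title: "Acknowledge" },
        { action: "reject", title: "Dismiss" },
      ],
    })
  );
});

self.addEventListener("notificationclick", (event: NotificationEvent) => {
  event.notification.close();
  if (event.action === "accept") {
    // Record the acknowledgement against a hypothetical back-end route.
    event.waitUntil(fetch("/api/alerts/ack", { method: "POST" }));
  }
});
```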

It is advisable therefore to ensure that apps work in the most user-friendly way by making them intelligent and contextual. A basic example is Google Maps, where users always receive information based on their location: if you are in Cambridge and look up ‘London’, Google Maps will find London, England, rather than London, USA. In keeping with these UX principles, AI-powered software has also seen major investment across a range of devices this year, most notably mobile phones and home assistants. In October, Google introduced Google Assistant, built into both the Google Pixel phone and Google Home, an AI-enabled home speaker.

Google Assistant can answer questions, play music and control other smart devices around the home, for example the lights in your living room. Amazon offers its own AI-enabled device in the Amazon Echo, a voice-enabled wireless speaker for the home with a built-in AI assistant, Alexa. Both tech giants are now racing to provide users with the best AI-powered services and capabilities in order to dominate this relatively young market.

Will voice outgrow tap in 2017?

Over the coming months, one key difference that this year’s surge in AI-enabled assistants will make is that controlling your phone and any connected devices will no longer have to happen by tapping on a screen. AI-enabled assistants operate mostly through voice control, so the days when a user checks their phone 150 times a day on average are slowly coming to an end. In future, users will manage and control their devices simply by speaking to them. For these AI-enabled assistants to be useful, however, it is essential that they can complete direct actions as and when requested by the user, such as ordering a taxi or playing music.

However, this is only possible if these actions have been enabled on the assistant’s AI platform by the businesses that provide the service. For example, Amazon’s Alexa currently has the skill (Amazon’s term for the Echo speaker’s actionable commands) to track a flight, but will be unable to order flowers from a local florist if that skill has not been made available by the local business itself. The florist will therefore first have to create a skill specifically programmed for Alexa, for example via an API, so that the assistant can connect to its back-end systems; a rough sketch of such a skill follows below. In summary, this means that the functions of AI-enabled devices will be limited until commercial providers have caught up.
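To make this concrete, below is a minimal sketch of what such a skill’s fulfilment code might look like as an AWS Lambda-style handler, written against the documented JSON request/response shape for custom Alexa skills. The intent name, slot and florist endpoint are hypothetical, and error handling, account linking and payment are all omitted.

```typescript
// Hypothetical fulfilment handler for a florist's custom Alexa skill.
// Alexa sends an IntentRequest; the handler forwards the order to the
// florist's own back-end and returns a spoken response.

interface AlexaRequest {
  request: {
    type: string; // e.g. "LaunchRequest" or "IntentRequest"
    intent?: { name: string; slots?: Record<string, { value?: string }> };
  };
}

interface AlexaResponse {
  version: string;
  response: {
    outputSpeech: { type: "PlainText"; text: string };
    shouldEndSession: boolean;
  };
}

function speak(text: string, endSession = true): AlexaResponse {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: endSession,
    },
  };
}

export async function handler(event: AlexaRequest): Promise<AlexaResponse> {
  const { request } = event;
  if (request.type === "IntentRequest" && request.intent?.name === "OrderFlowersIntent") {
    const bouquet = request.intent.slots?.Bouquet?.value ?? "a standard bouquet";
    // Connect to the florist's (hypothetical) back-end ordering API.
    await fetch("https://api.example-florist.com/orders", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ item: bouquet }),
    });
    return speak(`Okay, I've asked the florist to prepare ${bouquet}.`);
  }
  return speak("Sorry, I can't help with that yet.", false);
}
```

Until a business publishes something along these lines, the assistant simply has no way to reach its services, which is exactly the catch-up problem described above.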

The platforms that connect the respective AI assistants to commercial extensions (so that service providers can plug in, for example via the aforementioned Amazon Echo ‘skills’) are the Alexa Skills Kit for Alexa, Actions on Google for Google Assistant (on both the Pixel and Google Home, Google’s Amazon Echo equivalent) and SiriKit for Apple’s Siri. With all three tech giants racing to provide the best AI experience, the question in 2017 will be which of their platforms becomes the most populated by businesses. Whether it’s Amazon, Google or Apple, whoever offers the assistant with the most resources to draw on (i.e. the most business services connected) will become the most popular choice for consumers.

Even with more resources becoming available, the technology supporting AI assistants still has a long way to go before it becomes truly human-like and conversational. Some AI assistants are only available in English, and actionable commands need to be vocalised step by step in order to be executed. For example, you can’t simply ask Amazon’s Alexa to ‘order the latest range of Domino’s pizzas’; instead, you have to ask Alexa to ‘open the Domino’s app’ and proceed from there. This issue has opened up the field of voice command architecture, in which programmers try to find the optimal set of voice commands needed to complete an action. Too many steps and the user will find it easier to keep tapping away on their phone. Too few, and an AI assistant might assume the wrong information because it hasn’t checked every detail with the user, potentially leading to security and privacy breaches; one way of framing this trade-off is sketched below. With online security concerns making headlines again in the last few weeks, it will be interesting to see how AI-enabled assistants approach these issues next year.
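One way to reason about that trade-off is as a slot-filling policy: prompt only for details the user has not supplied, and explicitly confirm only the sensitive ones before acting. The sketch below is illustrative only, with hypothetical slot names and no particular assistant’s SDK in mind.

```typescript
// Illustrative slot-filling policy for a voice ordering flow.
interface Slot {
  name: string;
  value?: string;     // filled in from the user's utterance, if present
  sensitive: boolean; // e.g. payment method or delivery address
  confirmed?: boolean;
}

// Returns the next prompt to speak, or null when it is safe to act.
function nextPrompt(slots: Slot[]): string | null {
  for (const slot of slots) {
    if (slot.value === undefined) {
      return `What ${slot.name} would you like?`; // ask for missing detail
    }
    if (slot.sensitive && !slot.confirmed) {
      return `Just to confirm, ${slot.name} is ${slot.value}?`; // verify
    }
  }
  return null; // every detail captured and verified
}

// A single utterance like "order a large pepperoni pizza" fills two
// slots at once, so only the sensitive payment slot needs a follow-up.
const order: Slot[] = [
  { name: "size", value: "large", sensitive: false },
  { name: "topping", value: "pepperoni", sensitive: false },
  { name: "payment method", sensitive: true },
];
console.log(nextPrompt(order)); // -> "What payment method would you like?"
```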

Whereas until recently the mobile environment aimed to keep users in-app for as long as possible, in 2016 principles such as Slippy UX and AI-enabled user support have set a trend towards a new kind of user experience. Today, interacting as little as possible with a device, whilst ensuring optimal functionality in line with security requirements, is becoming a key requirement for a successful technology experience. Software innovations that embrace this trend will therefore dominate the mobile landscape in the coming months.

Claudia Beaufort, marketing, Mubaloo

ABOUT THE AUTHOR

Claudia Beaufort works in marketing at Mubaloo, where she looks after content for Mubaloo’s website and social channels.