
Telling is selling for the car industry

(Image credit: SplitShire / Pexels)

Automakers have long claimed that engine capabilities and performance are the prime reasons people buy cars.  But as their marketing departments have always known, people’s choices are far more complex and emotional than that.

Drivers expect their vehicles to reflect their needs and lifestyles, and investment in in-car and connected-car technology is becoming an ever greater differentiator. Automation, and voice automation in particular, is likely to shape future developments in this sphere as it has elsewhere. The IoT market is predicted to reach $267bn by 2020, and there is no denying its pervasive expansion into all areas of life.

As with any growth area of technology, trends and user requirements change extremely quickly, and software developers need to be able to react to those trends in real time to ensure market leadership. The automotive industry is used to this fast-paced and fiercely competitive environment, and its leaders are already deploying considerable resources to give their brands the competitive edge. Last year Mercedes enabled support for Amazon Alexa; this year BMW revealed an even deeper Alexa integration, and Audi is following suit next year on its new line of E-Tron electric sports cars.

Key to first-mover advantage will be the ability to adjust and be agile, and central to that is the ability to rapidly test and capture feedback from stakeholders and customers. Given the extraordinary complexity of connected car environments (not to mention the significant price of obtaining and maintaining these vehicles), the only truly feasible mechanism for real-time, real-world testing is crowdtesting. By leveraging the crowd, developers can gain a true user perspective on quality, enabling them to focus on fixing the issues that would affect real customers.

The numbers are compelling. According to comScore, 50 per cent of all searches will be voice searches by 2020. The reasons? For many, voice is quicker; we can speak an average of 150 words per minute, compared to typing an average of just 40. Voice is also hands-free, which is particularly important when driving and will certainly lead to radical changes in the driving and riding experiences.

The challenges

Chief among the issues already encountered by automotive manufacturers looking to introduce in-car voice assistants is the influence of background noise. Standard voice models are trained under ideal conditions, and may not accurately represent the sound of a driver speaking in a moving, running vehicle. In addition, the presence of multiple passengers, and therefore multiple voices, adds both technological and UX complexity. If several people are speaking at once, the system needs to be intuitive enough to respond to the right person at the right time. This could involve voice assistants being configured to prioritise adult passengers’ commands over children’s, for example.
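The multi-speaker arbitration described above can be sketched as a simple priority policy. This is purely illustrative, assuming an upstream speaker-diarisation step has already labelled each overlapping utterance with a speaker profile; the profile names and function are hypothetical, not any real automotive API.

```python
# Toy arbitration policy for overlapping voice commands. Assumes
# diarisation has tagged each utterance with a speaker profile
# (all names here are hypothetical).
PRIORITY = {"driver": 3, "adult_passenger": 2, "child_passenger": 1}

def select_command(utterances):
    """Pick the command from the highest-priority speaker.

    utterances: list of (speaker_profile, command_text) tuples
    captured in the same listening window.
    """
    if not utterances:
        return None
    # Unknown profiles default to priority 0, i.e. lowest.
    return max(utterances, key=lambda u: PRIORITY.get(u[0], 0))[1]
```

In practice the policy would be far richer (wake-word ownership, seat-level microphones, per-feature permissions), but the core idea is the same: the system resolves contention deterministically rather than acting on whichever voice happens to be loudest.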

Similarly, voice activation must be able to understand different accents, ages, genders, and turns of phrase, in addition to deciphering contextual information. Much of this depends on the sophistication of Natural Language Understanding (NLU) and Automatic Speech Recognition (ASR). At its most basic, a voice assistant needs to be robust enough to respond properly if a sentence is interspersed with coughing or filler words. A more complex use case could be a voice assistant hearing “It’s hot today, isn’t it?” and responding “I think you want to do something with the climate control. Possible commands are...”
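A minimal sketch of the robustness described above: strip filler words, then map the remaining keywords to a likely intent. Real NLU stacks use statistical models trained on large corpora; the keyword tables here are a toy assumption purely for illustration.

```python
# Hypothetical, simplified intent detection: ignore filler words,
# then match remaining keywords against known intents.
FILLERS = {"um", "uh", "er", "like", "well"}

INTENT_KEYWORDS = {
    "climate_control": {"hot", "cold", "warm", "freezing", "temperature"},
    "navigation": {"navigate", "directions", "route", "home"},
}

def detect_intent(utterance):
    words = {w.strip("?,.!").lower() for w in utterance.split()}
    words -= FILLERS  # tolerate coughing transcribed as filler, "um", etc.
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "unknown"
```

Under this sketch, “It’s hot today, isn’t it?” maps to the climate-control intent even though it contains no explicit command, which is exactly the kind of contextual inference the article describes.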

Another potential area of concern for customers is privacy. While this issue is a general feature of the wider connected environment, it should still be considered by vehicle manufacturers and developers. Some people may be uncomfortable with the fact that their in-car sat nav, which stores location data, connects to their mobile phone, which in turn provides access to contacts and personal data. Such concerns need to be taken seriously and handled sensitively, for example by allowing manual override in certain instances, and such systems need to be thoroughly tested in the real world to ensure proper consumer protection.

In order to overcome these challenges and ensure their systems have the required level of sophistication, developers must work to understand exactly how their customers interact with voice assistants, in their specific use cases, on a daily (or ideally even more frequent) basis.

On safe ground

Currently, voice assistants in cars are focused mainly on the infotainment system; yet even simple tasks like these have been implicated in encouraging distracted driving. As voice commands become ever more sophisticated, the grounds for such an argument increase. This is particularly true considering the frustration that can be caused by ASR and NLU issues leading to incorrect execution of commands.

Ultimately, it’s up to car manufacturers to decide which features will enrich the overall driving experience while keeping drivers and passengers safe. In addition, skill developers and voice platforms have a nearly Hippocratic responsibility to only enable functionality that can be used safely and ethically. For an application like Pandora, embedding music streaming into vehicles makes perfect sense; for a video game developer, it may introduce an unsafe level of distraction. Given a lack of legislation in this burgeoning field (but no lack of risk from the litigious), auto manufacturers and voice developers must think clearly and carefully about how much responsibility and control they want to grant to the vehicle operator.

Of course, no discussion of vehicle technology can ignore the subject of safety. Safety is understandably paramount to vehicle manufacturers and, in a broader sense, is one of the driving forces behind the adoption of voice technology over display- and touch-based input.

Uncharted waters

Voice assistants in vehicles are currently in their infancy. Customers have not yet fully embraced the technology, data on general usage trends is still uncodified and informal, and the supporting technology, while evolving rapidly, has not yet reached the sophistication necessary to be fully commonplace. For this reason, many manufacturers and their partners are uncertain exactly what functionality will resonate with customers and increase vehicle purchases in the long run. Brands hoping to take advantage of these uncharted waters will need to offer a smart mix of what they know their customers need and what those customers don’t yet know they need.

To do either of these things effectively, it is vital that organisations understand their target demographic and its needs. Only then will they be able to identify which use cases will actually make a difference to these consumers, the optimal level of functionality needed, and the level of quality required to delight users.

By combining in-lab manual and automated testing to validate ‘the known’ with real-world testing and feedback to help identify and validate ‘the unknown’, many of the world’s most renowned auto manufacturers and voice developers are already making exciting discoveries and differentiating themselves in a competitive market. As with any disruptive technology, this window of opportunity is not open for long. Ultimately, having the flexibility and agility to act on this information will be just as important as the information itself.

Emerson Sklar, Senior Solutions Consultant, Applause

Emerson Sklar is a Senior Solutions Consultant at Applause.