Artificial Intelligence: is it the future of customer experience?

AFTER graduating from the pages of Sci-Fi novels to the screens of computer scientists in mere decades, Artificial Intelligence (AI) has morphed from concept to reality quicker than you can say “buzzword”.

Despite protestations, we have moved swiftly past the point where AI is merely a buzzword in the corporate lexicon. Artificial Intelligence brings with it a confluence of fear and excitement, of expectations and radical change. But questions about its capacity to solve real-world problems linger.

We brought together a panel of AI, robotics, and customer experience (CX) experts to deliberate on the current state of AI, the challenges for brands and consumers, and what the future holds.

Success is in the AI of the beholder

Chief executive of re:infer, Edward Challis, fired the starting gun, saying that the term ‘Artificial Intelligence’ can be an “unhelpful” one, preferring instead to use the term “new capabilities”. 

Humans are inherently both excited and wary of technological advancements, and AI is no different. To dispel our current fear, “we need to recognise that at the moment we’re only developing AI software that helps us solve perceptual problems (such as image recognition and self-driving cars) not conceptual problems, such as forward planning that’s outside of our existing knowledge”.

“Thinking of AI purely as ‘consciousness’ is just too broad and flabby” added Tribal’s chief strategy officer, Darren Savage. In the eCommerce space, “AI is already helping to free up a lot of our time without us having to think about it. Customers are able to make quick decisions because of the data and software available, meaning that rather than having to trawl through thousands of options for a suitable item, the choices are partly navigated and curated for them.”

Kortical chief executive, Alex Allan, used chatbots as an example. “They’re already in regular use, and consumers have accepted them readily. Machine learning is already anticipating our behaviour and helping to solve problems you might be experiencing in real-time. The next stage is creating chatbots that truly represent your brand.”

When does interacting with AI matter to consumers?

Questions about the ethical implications of AI dominate discourse in the tech space. As the machines learn, interaction by interaction, so their ability to emulate human behaviour will grow. UBS last week launched an eerily lifelike digital clone of its chief economist, powered by IBM’s Watson AI, to interact with private banking clients. Should there be a legal requirement to ensure humans know when they are dealing with a machine, rather than another human?

“We’re a long way away from people not knowing that they’re talking to a robot,” insisted Allan. “It’s only uncomfortable for customers when you try and pass it off as a real person.”

Savage added that a global protocol should be put in place so that consumers are told when they’re dealing with a machine. “AI needs to be transparent and clear,” he said. “There is software out there that has the ability to manipulate huge populations by changing language and sentiment without anyone realising. In an era of fake news, we need to make sure decision-making is done morally”.

Do people care who they’re dealing with?

This, believes Savage, is the crux of the discussion. The real crunch point, he argued, comes when you ask AI software to go out into the world and make important decisions for you. “We do such a poor job of explaining how it all works. And especially in the advertising industry, trust is hard to build.”

Equating this lack of transparency to an “avoidance in reading the instruction manual”, he argued that developing consumer trust in AI requires two things to be done well: “mitigate risk (and the feeling of deception) and do the job you set out to do.”

This year marked a turning point for how firms deal with personal data, in the form of the EU’s GDPR. Trust and transparency must be ingrained by design into every product or service a brand builds. It’s no different for AI bias. While cold calculations uncoloured by inherent human prejudice may seem a panacea, algorithmic decision-making already stands accused of malpractice from racism to sexism, and much in between. 

Understanding why a system harnessing AI made a decision, and how it came to that decision, is crucial, said moderator Michael Nutley. Demonstrating what he meant with a simple anecdote, Nutley told the audience about asking Siri what the best Rolling Stones album was, and receiving a different answer each time. “What is really going on behind the scenes to make these decisions? Doesn’t this feel manipulative? Doesn’t this create suspicion?” he asked.

Challis believes this is something we already experience with Google. “How do we know that Google isn’t already choosing what it wants us to see?” The difference, he believes, “is being able to clearly see which posts are sponsored ads, and which aren’t. However, with Siri and Alexa, this isn’t immediately obvious, and can create a dilemma: what is serving a corporate interest, and what is answering a genuine query?”

Where do we go from here?

The panel agreed that technology will always outpace regulation, as it has done since time immemorial. The pace of change is such that governments struggle to stay ahead of the curve. The UK government has recognised that world-beating AI is being born here daily, pouring millions into the AI space race, and insisting on a hands-off approach to allow innovation to breathe. 

As such, our panel believe that regulations around AI will be publicly policed, in much the same way as climate change. Allan believes that “enough general paranoia should kick governments into action when it’s really needed.” In the meantime, Savage believes that many brands and businesses will opt for short-term benefits over long-term value. He was clear: firms must break the curse of short-termism.

“Reputations take years to build and seconds to destroy,” he posited. “Those that shift their goal from profits to customer satisfaction in the long-run will be the winners. O2 recently took this route and are outselling all of their competitors”.

Despite agreeing, Allan also argued that the long-term view might create situations where we lose control. “AI is used to trade on the stock market in complete self-interest, yet we see these flash crashes that we can’t explain…if we leave this software to its own devices in every aspect of commerce, how can we regulate it? Our biggest fear is that we cease to control this software. We can’t end up in a situation where small groups of people have the largest impact.”

Despite the potential for misuse and mismanagement, all agree that AI has the ability to revolutionise customer experience over the next three to five years. “I’m excited to see virtual assistants work completely in an ecosystem of customer management, looking after everything from shopping to banking, freeing up people’s time for other things,” said Allan.

It's clear that the road to our AI future will be long, and laden with hurdles, from the ethical to the practical and much in between. Humans will need to constantly adapt, push back and evolve to remain one step ahead of technology. And whilst a light-touch approach to regulation is welcome in the name of innovation, governments will need to ensure they listen to consumers’ concerns. AI isn’t a buzzword. It will completely change the way people interact with brands, with machines and with one another. We’re through the looking glass, and there’s no going back.

Tom Roberts, CEO of Tribal Worldwide UK 

Image Credit: Jirsak / Shutterstock