Despite its pervasiveness, artificial intelligence is still not met with broad acceptance. Many members of the general public are wary of AI, mistrusting the technology even as they increasingly come to rely on it.
AI incites fears of job loss, spying, and a robotic takeover – to name but a few propagated concerns. Crucially, these concerns reside in the technophobe and the technologist alike. With these misgivings so universal, will AI ever shed its suspect skin?
Here, Howard Williams from UK-based software house Parker Software explores the ethical side of artificial intelligence, and how it relates to wider acceptance. How can AI gain acceptance from the man on the Clapham omnibus?
Meet the man on the Clapham omnibus
The ‘man on the Clapham omnibus’ is a legal term for a hypothetical average, reasonable person. The actions of another are weighed against this judicious person to determine negligence or liability.
More broadly, the reasonable person is a way to understand whether a behaviour, viewpoint or action is considered ‘normal’ and understandable. Would a reasonable, average person have acted the same way?
Here, the reasonable person will act as a guide, to see what uses of AI are most likely to garner acceptance. In other words, what uses of AI would the average Joe — the man on the Clapham omnibus — deem acceptable?
The ethics of AI
The concerns raised by influential names in tech revolve around the ethics of AI. So, it makes sense that, for a reasonable person to accept AI, we need to address these concerns. Otherwise, the costs of accepting AI are often perceived as much higher than the benefits.
The issues of ethics in AI revolve around four key tenets: security, transparency, fairness and liability. Each of these concerns, when addressed, promotes trust in the application of artificial intelligence. In other words, the trust generated by ethical use can be viewed as a driving factor in AI acceptance by consumers.
There are plenty of products and services that incorporate AI to streamline their offering. By looking at those that have succeeded, and those that have failed, you get a clear view of what it takes for the reasonable consumer to embrace artificial intelligence.
Security
The Facebook–Cambridge Analytica scandal brought our data into the spotlight, and GDPR made us think about where our personal data is going. As Stephen Parker writes, ‘Businesses are being targeted by cybercriminals on a scale not seen since the launch of the world wide web’.
With all this going on, is it any wonder that the man on the Clapham omnibus is concerned about digital security?
This concern extends to AI. Firstly, there’s the security of the training data to consider. Then, there’s the question of what happens if an AI tool faces a cyber-attack or has easy-to-bypass security. Fooling AI could result in driverless cars mistaking stop signs for speed limit signs. Or it could allow criminals to bypass facial recognition security, for example.
A reasonable person is going to question the security of AI before they can accept it.
Transparency
The reasonable person is less likely to embrace AI if they feel uncertain about when and how it’s used.
Google Duplex is a perfect example of the role that transparency plays in the reasonable person’s acceptance of an AI tool. Initially, Google Duplex faced rejection from many due to a lack of transparency. Concerns were raised that it was unethical and uncanny to trick people into thinking they were speaking with a human rather than AI.
In response, the company promised to have Google Duplex always introduce itself as AI before a conversation. So, the reasonable person is willing to interact with AI, but it needs to be clearly differentiated from humans to gain acceptance.
Fairness
Fairness for artificial intelligence means a lack of bias. Artificial intelligence is trained on data generated by humans and past examples. Unfortunately, that data can carry human bias. This creates another ethical concern for AI.
And, as Amazon’s recruitment AI program shows, the reasonable person cares about this bias as well. The ecommerce giant rejected its AI recruitment tool after it taught itself to discriminate against women. The result was a widespread backlash against the use of AI in recruitment.
This example shows that when ethics are breached, the reasonable person is at risk of losing trust in AI technology. This is true even where the use of the AI may still prove valuable.
Liability
As AI continues to improve, it stands to infiltrate tasks and areas of decision-making that impact individual lives. When something goes wrong and AI makes a mistake, who is liable?
This question of responsibility is yet another ethical concern to address if the wider public is to accept AI.
Take, for instance, the issues around AI in healthcare. Artificial intelligence is now making its way into oncology. Here, an incorrect recommendation has a serious impact on the individual involved. When doctors viewed AI recommendations as unsafe, many hospitals pulled the plug on the program.
The AI, while not fully rejected, isn’t yet trusted either. When it comes to important decisions like this, a reasonable person wants someone to take responsibility — they need to know that they’re in safe hands.
AI acceptance beyond ethics
But there’s more to AI acceptance than addressing ethics — AI needs to be useful. It needs to offer value. Why, after all, would a reasonable person adopt anything with artificial intelligence, if it isn’t helpful?
Navigation systems, machine learning chatbots, and home assistants like Alexa and Google Assistant are all examples of AI that consumers have accepted. Each is a helpful, sensible use of artificial intelligence.
Chatbots improve the accessibility of businesses and services. Navigation systems help us avoid traffic on our usual routes. And our smart speakers run a host of services that help us look after our homes, enjoy our free time, and keep up to date with news. In other words, they all provide value.
As such, these technologies are in common use — a use that’s built up over time. This creates a layer of familiarity. These AI tools have built trust through perceived ethical practice and utility. As a result, they’ve earned acceptance from the reasonable person.
AI acceptance: generate an aura of trust
The man on the Clapham omnibus isn’t exactly against AI. Rather, AI acceptance by the reasonable person is dependent on helpful and sensible applications.
Unethical, intrusive or pointless AI uses result in AI rejection. But when AI is transparent, carefully managed and demonstrably valuable, the man on the Clapham omnibus is unlikely to object.
There’s still a long road ahead for artificial intelligence technology. One thing is certain: AI acceptance from the reasonable people that make up the public relies on AI tools building trust.
Howard Williams, customer experience, Parker Software
Image Credit: Razum / Shutterstock