Why trust in AI could become a casualty of this pandemic

(Image credit: Geralt / Pixabay)

Authorities soon discovered that a lack of quality data severely hampered their ability to make accurate predictions. And as the socioeconomic impact of lockdowns has worsened, a growing cacophony of voices has begun to question the accuracy and effectiveness of the models. If the modelling and data continue to fall short, people’s confidence in AI could be left in tatters in a post-pandemic world.

The Covid-19 pandemic and the resulting lockdowns have spurred businesses to accelerate their automation and AI plans. There’s work to be done, however, on one key element of AI that is often overlooked: trust. A study last year found that only one in four people trust the decisions made by AI. But that was when the going was good. When the going gets tough, society becomes more reliant on algorithmic modelling. We have now reached a point where trust in AI is going to be crucial. On its current trajectory, that trust could become a casualty of this pandemic.

Not surprisingly, early in the crisis, governments and healthcare authorities turned to AI platforms and algorithmic models to start shaping their Covid-19 response policies. Different countries have adopted different models, and even the all-important R0 value (the virus’s reproduction number) and its predicted value have been calculated in different ways.

Garbage in, garbage out

Sadly, there are plenty of high-profile examples of AI failures. And what do many of these have in common? Bad data. Even the most expertly designed models and algorithms can only perform as well as the data being fed into them.

To build trust in the future, it is important to learn lessons from the past. Amazon’s now infamous recruitment tool favoured men. Why? The data used for the scanning tool was taken from 10 years’ worth of CVs, most of which had been submitted by men, so the model learned to replicate that imbalance. The root cause of the problem was not poor AI, but bias in the data.

In another example, an AI model to predict premature births was trained on a data set of 300 patient records drawn from 24 published studies. Researchers recently found that 11 of those studies had created artificial records in an attempt to make their training data more comprehensive, and then made the critical error of mixing that artificial training data with the real testing data used to validate the AI’s predictions. Once the flawed data set was discovered, the accuracy scores of the software fell from over 94 per cent to about 50 per cent.
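
One common way this kind of leakage happens is sketched below. The code is purely illustrative (hypothetical Python using scikit-learn, not the study’s own code): it shows how "augmenting" a data set before splitting it lets near-duplicate records slip into the test set and inflate the accuracy score, while evaluating on untouched real records gives a far more modest number.

```python
# Illustrative only: how mixing artificial records into the evaluation data
# inflates accuracy. The data and model choices are assumptions, not taken
# from the original study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A small, noisy "real" data set: the signal is deliberately weak.
X_real = rng.normal(size=(300, 10))
y_real = (X_real[:, 0] + rng.normal(scale=3.0, size=300) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.5, random_state=0)

def augment(X, y, copies=3, noise=0.01):
    """Create artificial records by copying real ones with tiny jitter."""
    X_aug = np.vstack([X + rng.normal(scale=noise, size=X.shape)
                       for _ in range(copies)])
    return X_aug, np.tile(y, copies)

# Wrong: augment first, split second. Near-duplicates of the same underlying
# record end up on both sides of the train/test boundary.
X_all, y_all = augment(X_real, y_real)
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(
    X_all, y_all, test_size=0.5, random_state=0)
leaky = RandomForestClassifier(random_state=0).fit(Xa_tr, ya_tr)
print("Leaky evaluation:", accuracy_score(ya_te, leaky.predict(Xa_te)))

# Right: augment only the training split, evaluate on untouched real records.
Xc_tr, yc_tr = augment(X_train, y_train)
clean = RandomForestClassifier(random_state=0).fit(Xc_tr, yc_tr)
print("Clean evaluation:", accuracy_score(y_test, clean.predict(X_test)))
```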

Five steps to build trust in AI

Fundamental mistakes are often made at the start of projects, during the scoping and design phase. Because those errors are baked in from the outset, a robust strategy is needed from day one. Here are five parameters to strengthen AI modelling. Some may seem patently obvious, but the industry needs to go back to basics on some aspects of design.

  • Keep it accurate: First, put in place rigorous processes around data collection and curation. Examine the AI inputs and ask a few questions to separate the wheat from the chaff: Does this data accurately represent the system to be modelled? Are the assumptions about data collection biased? Can you trust the source, and what is the quality of the underlying data? (A minimal sketch of such checks follows this list.)
  • Keep it truthful: AI decision making should not be shrouded in secrecy. Transparency strengthens trust. When there is clear visibility into the algorithms and the data being fed into them, developers can spot errors early rather than let problems fester. AI designers should ask: Can the AI explain why it decided to offer the user this piece of information rather than another? Does too much transparency make the AI vulnerable to attack?
  • Keep it simple: AI needs to be intuitive and valuable for the end user. Think of Netflix’s recommendations: simple for the viewer, yet built on highly complex AI. The time it takes someone to fully trust the AI will scale with the complexity and the risk of failure. Questions to consider: What training and support are needed for different users? How can we retain confidence if the AI gets it wrong? How do we make human users feel the AI is accountable?
  • Keep it fair: Like it or not, AI may conclude that certain groups are more likely to reoffend or default on loans. This may be true based on broader socioeconomic reasons. AI must be trained to be impartial. To achieve this, anyone designing AI to be ethically compliant should ask: Are we aligned with prevailing ethical standards? Is it proportionate? Are we transparent about what it is doing, or is it doing something else in the background?
  • Keep it real: As we have seen with the AI to predict premature births, some AI works well in the lab but fails in the real world. That is usually because the design does not account for how the system will integrate, technically and practically, once deployed. To survive real-world conditions, the design must combine accuracy and security with the ability to balance raw predictive power against transparent analysis of the data. Designers should ask: What safety barriers are needed if the AI makes a mistake? How robust is the testing, validation and verification policy? Do we have a plan to continuously assess and improve in-service performance?
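
To make the first of these points concrete, here is a minimal, hypothetical sketch of the kind of automated data-quality checks a team might run before records ever reach a model. The checks, column names and toy data are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical data-quality gate: a few automated checks to review and sign
# off before any records reach the training pipeline. Columns are made up.
import pandas as pd

def data_quality_report(df: pd.DataFrame, target: str) -> dict:
    """Return simple quality metrics a reviewer can inspect before training."""
    return {
        # Completeness: share of missing values per column.
        "missing_share": df.isna().mean().round(2).to_dict(),
        # Duplicates: identical records can quietly inflate apparent accuracy.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representation: does the target variable reflect the population
        # the model is supposed to serve?
        "target_balance": df[target].value_counts(normalize=True).to_dict(),
    }

# Toy loans data set with the kinds of flaws the checks should surface.
records = pd.DataFrame({
    "income":    [42_000, 55_000, None, 61_000, 61_000],
    "age":       [34, 29, 51, 46, 46],
    "defaulted": [0, 0, 1, 0, 0],
})
print(data_quality_report(records, target="defaulted"))
```

In practice, numbers like these would feed a review with subject matter experts rather than act as a fully automated gate; the point is that data quality becomes something that is measured and signed off, not assumed.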

The power of now

We have reached a pivotal moment in the fight against the coronavirus pandemic. To beat Covid-19, trust is critical. For all the jigsaw pieces to fall into place, individuals need to be willing to give up some of their information – albeit temporarily. Locations need to be tracked, people traced and tests conducted. Their data needs to be gathered for the science to do its work. This can’t happen without the consent and trust of people. If we get this part right, we are far more likely to retain that trust throughout the rollout of something like a contact tracing app that governments are relying on.

The willingness to share this information also varies from country to country. One study found that 41 per cent of Singaporeans are comfortable sharing a positive Covid-19 test result with an app, while Americans, Britons and Germans were found to be the least likely to do so.

Whether it is an app or an AI model, organisations and governments must form a pact with the public, levelling with them about how and where their data will be used. Issues with data integrity must be ironed out at the outset, during the design phase, and must involve subject matter experts, data scientists and software engineers. Developers need to be willing to pull back the curtain and reveal – at least partly – what’s going on behind the scenes if they are to gain, and more importantly retain, the public’s trust. Trust should not become collateral damage in the midst of a pandemic.

Matt Jones, PhD, Lead Analytics Strategist, Tessella, part of the Altran Group