Execs need to know how AI makes decisions before mass adoption


If businesses want to use artificial intelligence (AI), especially in regulated markets such as accounting or finance, they need to know how those systems reach their decisions. They need to understand, and be able to explain, the decision-making process behind AI algorithms.

This is according to a new IBM report, for which more than 5,000 executives were polled. It found that 60 per cent of execs are worried about being able to explain how their AI systems make decisions, up from 26 per cent in 2016.

So, explainability is a challenge, and multiple organisations are working to tackle it. IBM recently announced new cloud-based AI tools that can show users the major factors that led to an AI-based recommendation.
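To make the idea concrete, here is a minimal sketch of what "surfacing the major factors behind a recommendation" can look like in practice. This is not IBM's actual tooling; it is a generic illustration using a simple linear model, where each feature's contribution to a single prediction is its coefficient multiplied by the feature value. The feature names and data are invented for the example.

```python
# Illustrative sketch only: surfacing the "major factors" behind one
# model recommendation via per-feature contributions of a linear model.
# Feature names and data are hypothetical, not from any vendor's product.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["credit_utilisation", "payment_delays", "account_age_years"]

# Toy training data: each row is an applicant, label 1 = "flag for review".
X = np.array([
    [0.9, 4, 1.0],
    [0.2, 0, 8.0],
    [0.7, 2, 3.0],
    [0.1, 0, 12.0],
    [0.8, 5, 2.0],
    [0.3, 1, 6.0],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Explain one recommendation: each feature's contribution to the log-odds.
applicant = np.array([0.85, 3, 2.0])
contributions = model.coef_[0] * applicant
score = float(model.predict_proba(applicant.reshape(1, -1))[0, 1])

print(f"Probability of 'flag for review': {score:.2f}")
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda pair: abs(pair[1]), reverse=True):
    print(f"  {name}: {contrib:+.2f} (log-odds contribution)")
```

Commercial explainability tools generally rely on more general attribution methods (such as LIME or SHAP) that also work with non-linear models, but the output is the same in spirit: a ranked list of the inputs that pushed a particular decision one way or the other.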

KPMG is building its own explainability tools in-house, in addition to using some of IBM's, according to the Wall Street Journal.

Capital One Financial, Bank of America, Google and Microsoft are all researching ways to deliver explainability. Vinodh Swaminathan, principal of intelligent automation, cognitive and AI at KPMG's innovation and enterprise solutions, believes AI can't scale without this feature.

“Until you have confidence in explainability, you have to be cautious about the algorithms you use,” said Rob Alexander, chief information officer of Capital One.

David Kenny, senior vice president of cognitive solutions at IBM, said: “Being able to unpack those models and understand where everything came from helps you understand how you’re reaching the decision.”

Image Credit: John Williams RUS / Shutterstock