Explainable AI keeps a human in the loop of the AI process: the machine provides transparent, reliable explanations, and the human can correct the machine when its decisions are wrong. The sooner such an AI strategy is explored, the sooner an organisation can start reaping AI’s incredible rewards.
There is growing concern in the business community about the implications of artificial intelligence and its automated decision-making. AI is a set of techniques that allows computer software to learn from historical data and make sense of new data in a way that resembles human intelligence.
We tend to hold machines to a higher bar than we hold other humans when we hand them control to perform tasks on our behalf. As machines start making decisions, such as whether someone gets a loan or whether a patient has cancer, it becomes important for humans to know why and how those decisions are being made in order to trust the machine. In all the talk around AI, it’s easy to forget that it’s a business tool like any other – only more holistic and powerful than most.
Thankfully, a new way of approaching AI and machine-decision making is quietly emerging.
An adoption strategy is best developed in five steps.
Introduce AI Learning: Harnessing the power of AI and building trustworthy products for end users is not as easy as asking a consulting firm to deploy an AI solution that optimises your profits. Your organisation needs to own the process of building AI, and that ownership needs to start from the top. This does not mean everyone in your company needs to be an expert, but it is important to know the best practices for building and deploying AI. Here are a few concrete steps to take:
- With Explainable AI, you need people who can build or at least understand AI models. Start with building some minimum AI expertise, and if hiring data scientists is hard, start with a team of analysts.
- Everyone in the company should have some knowledge of AI, so train them with one of the many online courses available today.
- If you’re in a regulated industry, make sure your AI solutions are compliant with proper checks and reviews of the AI models.
Identify the Business Problem: AI projects should be oriented around core business problems, opportunities or challenges. A good first step is to list as many of your current business challenges as you can, to help identify where AI can help. Consider the following:
- What parts of the business generate revenue, but currently have low profit margins? These revenue streams could provide fertile ground for automation and acceleration via AI.
- Where would we like to cut costs? Review your costs and pinpoint the ones you’d like to reduce. AI can help you better understand what generates costs and identify areas that could be optimised or changed to reduce them.
- Where do errors most often occur? A well-trained AI model can operate with a far smaller margin of error than humans.
- What work do our employees do that they don’t particularly like? If it’s repetitive or annoying for a human to do, there might be a component of the task better done by AI.
Organise Your Data: Data is the fuel for any AI solution and can turn up in unexpected places. Look for possible sources of relevant data for each problem on your list. Here are a few questions to consider as you start to organise the data you already have:
- What data is associated with the problems your business has now?
- How is the data structured and formatted? Is it scattered across your company?
- Do the right people have access to the data?
- Is the data updated continuously?
- Who is responsible for our data?
- And if there isn’t much data there or none at all – why?
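As a first pass at these questions, a lightweight data audit can reveal structure, gaps and freshness. The sketch below, which assumes pandas and uses illustrative column names rather than any particular system’s schema, profiles a dataset’s completeness:

```python
# A quick data audit: for each column, report its type, the share of
# missing values, and how many distinct values it holds.
# Column names below (loan_amount, region) are illustrative assumptions.
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise each column: dtype, missing ratio, distinct values."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_ratio": df.isna().mean().round(2),
        "n_unique": df.nunique(),
    })

if __name__ == "__main__":
    sample = pd.DataFrame({
        "loan_amount": [1000, 2500, None, 4000],
        "region": ["N", "S", "S", None],
    })
    print(audit(sample))
```

A summary like this is a starting point for the conversation about ownership and freshness, not a substitute for it; knowing who maintains each source still requires asking people.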
Choose the Right Platform & Tools: AI tools used to be reserved for academic research and proofs of concept, but a new generation is emerging that lets any organisation build reliable AI quickly and at reasonable cost. Since the software and hardware used for AI are developing rapidly, make sure the solutions you choose are scalable and future-proof, to avoid costly maintenance. Choose tools based on your goals, budget, available in-house competencies, time to market and total cost of ownership.
Ensure a Transparent AI Workflow: Your AI workflow will have multiple stakeholders, from the engineers and scientists building AI solutions, to the IT personnel operating them, to the business executives delivering them to your end users. At the end of the day, your AI workflow must provide visibility and insights for all of these teams so that trust is built into the process. Ask yourself these questions to test whether your AI workflow is transparent:
- Do you know why your AI solutions make the decisions they do?
- Can someone in your organisation explain how the AI solution works?
- Do you have a way to handle customer issues when they receive decisions from AI that they don’t like?
- Is there a quick and easy way to get AI related insights for your organisation?
Explainable AI brings a fundamental change to the AI workflow by providing a lens into how the AI process works. People need to see the positive benefits of AI, but they shouldn’t have to accept them at the cost of transparency and trust.
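What that lens can look like in practice varies by technique, but even a simple model illustrates the idea. The sketch below is a minimal, hypothetical example: it assumes a logistic scoring model for loan decisions with made-up feature names and weights, and is not Fiddler’s method or a real credit model. It reports each feature’s signed contribution to the score so a human can see what drove the decision:

```python
import math

# Illustrative, made-up model weights; not a real credit model.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def score(applicant: dict) -> float:
    """Logistic score in (0, 1): the model's loan-approval confidence."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> dict:
    """Signed per-feature contributions, largest magnitude first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return dict(sorted(contribs.items(), key=lambda kv: -abs(kv[1])))
```

For a linear model these contributions are exact; for more complex models, attribution techniques such as SHAP approximate the same kind of per-feature explanation.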
With Explainable AI, we can achieve the most powerful form of AI, one in which machines and humans work in tandem, while helping companies satisfy upcoming regulations and reduce business risk.
Krishna Gade, Co-Founder and CEO, Fiddler Labs