Unlocking the black box: Governance encourages the responsible use of AI


Companies are beginning to entrust many of their business-critical operations and decisions not to senior executives or top-performing staff members, but to artificial intelligence (AI). Now, rather than relying on traditional, rule-based programming, users can feed a machine data, define the desired outcomes, and let it work out its own algorithms and provide recommendations to the business.

With AI arriving at conclusions on its own, governance over these systems is critical – for the sake of business executives and customers alike. After all, the decisions guided by these machines can have a significant impact on company assets as well as customers’ lives. With the right measures in place, organizations can ensure they are using these tools responsibly to the benefit of all parties.

Trace the machine’s decision

In a recent Genpact study of C-suite and other senior executives, 63 per cent of respondents said that they find it important to be able to trace an AI-enabled machine’s reasoning path. There are established AI methods that enable companies to do this. For example, with computational linguistics, users can easily trace the system’s reasoning down to the specific words that spurred a decision. Traceability also helps with articulating decisions to customers, such as in a loan approval process. If the system recommends denying the customer a loan, then the loan officer should be able to follow the decision back to a specific form or document to explain the reasoning to the customer, rather than angering them with a flat denial. 
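To make that idea concrete, here is a minimal sketch of word-level traceability, assuming a simple bag-of-words classifier (the scikit-learn model, the toy loan notes, and the outcomes are illustrative assumptions, not details from the study): because the model is linear, each word's contribution is just its count multiplied by its learned weight, which is what makes the reasoning path easy to follow back to specific words.

```python
# Illustrative sketch: tracing a text-based decision back to the
# specific words that drove it, using a bag-of-words logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-note snippets and historical outcomes (1 = approved).
notes = [
    "stable income, long employment history, low existing debt",
    "missed payments, high existing debt, short employment history",
    "low existing debt, strong savings, stable income",
    "missed payments, no savings, high existing debt",
]
approved = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(notes)
model = LogisticRegression().fit(X, approved)

# Trace a new decision down to the words that spurred it.
new_note = "high existing debt and missed payments"
x = vectorizer.transform([new_note])
decision = model.predict(x)[0]

words = vectorizer.get_feature_names_out()
contributions = x.toarray()[0] * model.coef_[0]
trail = sorted(
    ((w, c) for w, c in zip(words, contributions) if c != 0),
    key=lambda wc: wc[1],
)

print("decision:", "approve" if decision == 1 else "deny")
for word, weight in trail:
    print(f"  {word:12s} contribution {weight:+.2f}")
```

Deeper models do not offer this for free, but the goal is the same: a loan officer should be able to see which inputs pushed the recommendation one way or the other before explaining it to the customer.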

Traceability is also critical for compliance and meeting regulatory requirements, especially as enforcement of the General Data Protection Regulation (GDPR) gets underway. The new regulation aims to give citizens of the European Union control over the information that businesses worldwide collect, store, and use about them. One critical component of the GDPR is that it requires any organization using automated decision-making to disclose to the data subject the logic involved in the processing. Without traceability, companies can struggle to articulate these decisions and may face hefty penalties from regulatory bodies.

Assess and govern the technology

While its logic may at times be difficult to explain, there is no doubting the power of AI. By design, it enables enterprises to sift through large amounts of information and deliver the intelligence needed to make decisions at far greater scale and speed than is humanly possible. However, organizations cannot leave these systems to run on autopilot. There needs to be some form of command and control by humans.

For example, a social media or online discussion platform can use natural language processing to review users’ posts for warning signs of violence or suicidal thoughts. The system can comb through billions of posts and connect the dots–something impossible for even the largest team of staff–and alert customer agents to a potential danger to human life. But not every post it picks up will be a legitimate concern, so it is up to a team member to review the machine’s decision. These types of cases highlight why people will remain important in an AI-driven future, as only we possess the domain knowledge–business, industry, and customer intelligence acquired through experience–needed to validate the machines and make the final call.
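One common way to keep that human command and control in place is to route the machine's flags through a review queue rather than letting it act on its own. The sketch below is a simplified, hypothetical illustration of that pattern (the keyword weights and threshold are assumptions for the example, not how any real platform scores posts), and the deliberate false positive shows why a person still has to make the call.

```python
# Illustrative sketch of human-in-the-loop review: the machine scores
# posts at scale, but a person reviews every flag before any action.
from dataclasses import dataclass

# Hypothetical warning phrases and weights; a real system would use a
# trained NLP model rather than a keyword list.
WARNING_TERMS = {"hurt myself": 0.6, "no reason to live": 0.8, "end it all": 0.7}

@dataclass
class Flag:
    post: str
    score: float

def score_post(post: str) -> float:
    """Crude risk score: capped sum of weights for warning phrases found."""
    text = post.lower()
    return min(1.0, sum(w for term, w in WARNING_TERMS.items() if term in text))

def triage(posts, flag_threshold=0.5):
    """The machine combs through every post; only flagged ones reach a human."""
    return [Flag(p, s) for p in posts if (s := score_post(p)) >= flag_threshold]

posts = [
    "There's no reason to live anywhere else, this city is great!",
    "I feel like there is no reason to live anymore",
    "Weekend plans: hiking and a movie",
]

for flag in triage(posts):
    # The final decision stays with a trained team member.
    print(f"ROUTE TO HUMAN REVIEW (score {flag.score:.2f}): {flag.post}")
```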

Even in cases where machines have achieved very high levels of accuracy and the decisions are not as critical as saving lives, command and control is necessary to ensure algorithms are not being fooled or malfunctioning. For example, machines trained to identify certain types of images can be fooled by feeding them completely different images that share the same underlying pixel patterns. Why? Because the machine is analysing patterns of pixels for matching purposes, not looking at the image the way human beings do.
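As a toy illustration of that failure mode (not a description of any specific system), the sketch below compares images by their pixel-value histograms alone: a randomly scrambled image that a person would never recognise is judged a perfect match for the original, because the naive "pattern" being compared ignores how the pixels are arranged.

```python
# Illustrative sketch: a matcher that only looks at pixel statistics
# cannot tell a recognisable image from a scrambled one.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a recognisable image: a smooth 8-bit gradient.
photo = np.tile(np.arange(0, 256, 4, dtype=np.uint8), (64, 1))

# A completely different image built from the same pixel values,
# just rearranged at random: unrecognisable to a person, yet with
# identical pixel statistics.
scrambled = photo.flatten()
rng.shuffle(scrambled)
scrambled = scrambled.reshape(photo.shape)

def pixel_signature(img):
    """Naive 'pattern': the image's pixel-value histogram, ignoring layout."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return hist / hist.sum()

# A matcher comparing only these signatures calls the two images identical.
distance = np.abs(pixel_signature(photo) - pixel_signature(scrambled)).sum()
print("histogram distance:", distance)  # 0.0, i.e. treated as the same image
```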

Manage biases from the start

Since AI-enabled machines constantly absorb data and information, it is highly likely that biases or unwanted outcomes will emerge, such as a chatbot that picks up inappropriate language from interactions over time. However, biases can also be present from the very beginning of an AI implementation. After all, if there is bias in the data going in, there will be bias in what the system puts out. For example, a lender may have data that historically shows more approvals for Caucasian applicants than for minorities. Race may not be the fundamental reason for this pattern, but the system may pick up on the correlation and start denying minority applicants. A bias like this in the lending decision process would not sit well with customers or regulators.
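A minimal sketch of the kind of up-front check this calls for, assuming nothing more than a table of historical applications (the group labels, numbers, and the 80 per cent screening threshold are illustrative assumptions): compare approval rates across groups and flag large gaps for human review before the data is ever used to train a model.

```python
# Illustrative sketch: screening historical lending data for group-level
# disparities before it is used to train a decision-making model.
from collections import defaultdict

# Hypothetical historical records: (applicant_group, was_approved).
history = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, was_approved in history:
    totals[group] += 1
    approvals[group] += int(was_approved)

rates = {group: approvals[group] / totals[group] for group in totals}
print("approval rates:", rates)

# One common screening convention (the "four-fifths" rule): flag the data
# if any group's approval rate is below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"review needed: {group} approved at {rate:.0%} vs best {best:.0%}")
```

A flag like this does not prove discrimination; it simply tells the team, before rollout, that domain experts need to look at why the gap exists and whether the pattern should be allowed to influence the model.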

Individual users have to review the data that goes into these machines to prevent possible bias, and then maintain governance to make sure none emerges over time–these are areas where, again, it is essential to apply domain knowledge. With greater visibility into and understanding of their data, and governance over AI, companies can proactively assess a machine’s business rules or acquired patterns before they are adopted and rolled out across the enterprise and to customers.

Responsible use of AI boils down to trust. Companies, customers, and regulatory agencies want to trust that these new, powerful systems are doing what they are supposed to do. They want to be confident that the basis for the outcomes of these AI models is in everyone’s best interest. By applying the techniques discussed above, organizations can strengthen this trust through a better understanding of the AI’s reasoning path, clearer communication of decisions to customers, compliance, and governance that prevents biases and ensures the best decisions.

Vikram Mahidhar, business leader, Genpact
Image Credit: Razum / Shutterstock