It is no secret that Artificial Intelligence is having a big moment, and it has the potential to transform businesses in every industry. AI systems are forecast to rake in $77.6 billion by 2022 – a 200 per cent increase on the $24 billion projected for 2018. With this colossal rise to prominence, however, comes growing scrutiny of the ethical implications of AI. As with all innovative technologies, AI has its downsides: most pressingly, the potential for bias. Just this month, 600,000 images were removed from the AI database ImageNet after an art project showcasing just how prejudiced the technology can be went viral on social media.
Without the right controls in place, bias can trickle into the machine learning process and quickly twist an objective analysis into a discriminatory one. Managing the accuracy and morality of AI systems is often overlooked, but it is essential that companies combat bias within data and algorithms that could skew insights and influence customers.
Typically, bias seeps into the AI process in one of three ways: in design, data, or selection.
Creating the right design
Deep learning models are designed and built around a company’s goals. Problems normally arise in the design process when those goals are not framed to guarantee fairness, since a narrowly defined objective can set parameters that encourage bias. Companies can work towards eliminating bias by avoiding a framework that is too focused on a single company goal, and by making sure to build fairness into the algorithm itself.
Discriminatory or unethical practices can also penetrate the design process when a company scales up. Templates that were originally meant for a specific business unit or geographical region are often distributed more widely. Having the correct controls in place will effectively prevent those applications from losing context: what is fair in one scenario could easily be discriminatory in another.
Feeding in the right data
At its core, AI involves training a machine by feeding it large amounts of data. But if those datasets under- or over-represent certain groups, or reflect out-of-date or skewed historical records and societal norms, then any outcomes will necessarily be biased. For example, if a machine is trained to identify the best college recruits based on the backgrounds of its current top students, good candidates who fall outside those criteria could be excluded. Similarly, algorithms that use historical hiring data to vet candidates could unfairly eliminate qualified applicants of a certain age.
It is often difficult to tell that you are using the wrong datasets before they begin to do harm, and because machines learn at such speed, any mistakes are quickly amplified: a small flaw can turn into a huge liability in no time at all. To combat this, it is critical to feed the machine high-quality data, and to create a Know Your Customer tool that uses separate AI and machine learning models. AI can also be part of the solution – a ‘policing’ algorithm based on ethical and societal norms can run in the background as a ‘moral compass’ for the model. Not only does this ensure everyone is assessed fairly, it also enables continual fine-tuning of the AI framework.
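To make the ‘policing’ idea concrete, here is a minimal sketch – a hypothetical illustration, not EXL’s actual tooling. A background monitor compares the model’s rate of positive decisions across demographic groups; a large gap between groups is exactly the kind of anomaly such a check would flag for human review.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from the live model."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Gap between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: group "A" is approved far more often.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
print(approval_rates(decisions))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(decisions))      # 0.5 – well above a 0.2 tolerance, so flag for review
```

In practice the tolerance threshold and the choice of fairness metric (this one is known as demographic parity) are policy decisions, which is why the article argues for human oversight alongside the technology.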
Bias in selection
Algorithms are built by analysing data across selected parameters or qualities; in healthcare, for example, they may look at weight, age, and medical history. Bias can easily infiltrate the selection process if companies put too much emphasis on certain attributes, or on how those attributes interact with other data fields. Separate factors might appear objective on their own, but when combined they could favour one group over another.
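The danger of combined factors can be shown with a small, entirely hypothetical example. Below, two selection factors look perfectly balanced across groups when checked individually, yet a rule that requires both at once selects one group and excludes the other – the kind of interaction effect an individual-attribute audit would miss.

```python
def rate(records, predicate):
    """Fraction of records satisfying the selection rule."""
    return sum(1 for r in records if predicate(r)) / len(records)

# Hypothetical applicant records: (group, factor_x, factor_y).
# Each factor alone is present in 50% of both groups.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1),
]

for g in ("A", "B"):
    group = [r for r in records if r[0] == g]
    print(g,
          rate(group, lambda r: r[1] == 1),                 # factor_x alone: 0.5 for both
          rate(group, lambda r: r[2] == 1),                 # factor_y alone: 0.5 for both
          rate(group, lambda r: r[1] == 1 and r[2] == 1))   # combined rule: 0.5 vs 0.0
```

Checking each attribute in isolation would pass this model as fair; only auditing the combined selection rule reveals that group B is never chosen.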
So, what can a company do to benefit from AI while still maintaining ethical business practices? Businesses are increasingly relying on Digital Intelligence to achieve breakthrough business outcomes from AI. Digital Intelligence relies on the right combination of domain, data, and knowledge, and requires a tight orchestration of both people and technology.
Retaining the human decision
Much reporting suggests that robots will steal people’s jobs, with many fearing that AI will perform tasks more efficiently than human workers. However, only 17 per cent of business leaders surveyed believe AI will reduce headcount, and in fact a large majority, 77 per cent, believe AI will actually create jobs. This contradicts the popular narrative that AI will eliminate jobs on a mass scale, and reminds us of the essential paradox at the heart of AI: it depends on humans! Robots require people to build, train, and manage them; their promise lies in augmenting human workers, not replacing them.
Ethical issues tend to arise because businesses find it hard to integrate AI with their human workforce – from getting started, to scaling up, through to actually attaining value. 57 per cent of companies surveyed cite change management as a challenge, and 48 per cent struggle to find talent to build or work alongside AI. The agility to adapt to rapidly changing work environments, and the capacity to re-train workforces to match the rapid rate of innovation, will be key for businesses looking to realise AI’s potential.
Just as successful AI systems require machines and humans to work together seamlessly, their oversight should include people and technology as well. The creation of a Quality Assurance Committee to review and test the data, monitor the training process, and audit the outcomes would be the best framework to eliminate bias. Another, perhaps obvious, method of prevention is to provide company-wide training. Just as businesses organise diversity training, so should they educate their staff on the potential issue of bias so that every single employee knows how to detect anomalies. Appointing a dedicated Chief Ethics Officer or adding these responsibilities to the Chief Compliance Officer’s role would also be a valuable step in the right direction.
Mapping AI projects
A recent report conducted by EXL, ‘Orchestrating AI’, found that over half (57 per cent) of companies did not have a defined governance strategy to combat bias and errors, and only 28 per cent agreed or strongly agreed that they had an effective process to validate, train, and retrain algorithms. When assessing how well companies are handling these ethical dilemmas, it is important to remember that most are just beginning their AI journeys, and few have managed to crack the code – literally! 90 per cent of businesses have plans in place to use AI, but nearly 60 per cent are still in either the planning or pilot stage, and just 18 per cent report having achieved significant results so far.
The fundamental reason why AI implementation as part of a digital transformation strategy tends to fail is the gap between capabilities and expectations. This expectation gap arises from a lack of understanding of how to leverage AI for the particular problem the business is trying to solve; the resulting disconnect between the business and the technology means the technology inevitably falls short of what the company expected of it.
Don’t rush in
Rather than rushing in out of fear of being left behind in the AI scramble, businesses should view AI-driven digital transformation as a journey of small steps. Lofty ambitions lead to escalating costs, longer timeframes, failure to deliver on promises, and unfair, biased results. Smaller steps are easier to manage, require less investment, and deliver results sooner – a strategy that works far better than a “big bang” approach. It may be a less attractive, less exciting way to implement AI, but it is much more achievable, more budget-friendly, and a lot safer in the long run.
Nigel Edwards, Senior Vice President of Insurance and Head of UK and Europe, EXL