
AI’s bias problem: the importance of returning humanity to AI


When we talk about AI, it’s often shrouded in an air of mystery. We read about algorithms which can do the seemingly impossible – even predict when we’ll die. However, the majority of the artificial intelligence we encounter on a daily basis helps us with far more mundane tasks. AI is much more likely to be used to check your credit score, approve a loan application, monitor your driving, or vet your CV.

AI has been touted as the great equaliser: a tool which allows decisions previously skewed by racial, gender, or ideological prejudices to be made by an impartial judge. In truth, however, we’re still a long way from achieving the level of impartiality needed to eradicate bias from the systems which govern our lives.

This is because AI can only be as good as the data we feed it, a truth best summarised by the age-old adage “garbage in, garbage out”. When not addressed, poor-quality data can actually amplify the very problems that tools such as AI are trying to solve.

The first step to tackling this problem is to understand how algorithms become biased in the first place. To make decisions and judgements, AI relies on training data gathered from either private or public databases. Developers feed large quantities of this data into the machine learning algorithm, which uses it to spot patterns based on precedent and probability. However, if the data is unrepresentative of individual members of society, or the patterns reflect historical patterns of prejudice, then the decisions the AI makes will also be prejudiced. This is the phenomenon known as ‘bias’. Its origins can be nuanced and hard to spot, ranging from historic prejudices based on race and gender to a lack of diversity within training sets, which leaves certain groups under-represented.
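To make “garbage in, garbage out” concrete, here is a minimal, deliberately toy sketch: a “model” that does nothing more than learn approval rates from past decisions. All of the data, group labels, and numbers below are invented for illustration; real machine learning systems are far more complex, but the underlying dynamic is the same — skewed history in, skewed predictions out.

```python
# A toy illustration of "garbage in, garbage out": a "model" that simply
# learns per-group approval rates from historical decisions.
# All data here is invented for illustration only.

from collections import defaultdict

# Hypothetical historical loan decisions, already skewed against group "B".
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: 75% approved
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: 25% approved
]

def train(records):
    """Learn each group's historical approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Approve whenever the learned group rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                 # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))   # True  -> the historical skew is reproduced
print(predict(model, "B"))   # False
```

Nothing in the training code is “prejudiced”; the skew lives entirely in the historical records it was fed, which is precisely why gathering the right data matters.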

Gathering the right data

Once a bias has been built into the algorithm, it can become challenging to root out. Algorithms which are designed to be impartial can end up reinforcing the very biases they were introduced to remove, whilst subtly undermining decisions and eroding trust.

The consequences of biased algorithms can be severe, both in terms of reputational damage for businesses and the possibility of long-lasting damage to societal progress. Take, for example, Apple’s credit card: the algorithm gave women a lower credit limit than men with extremely similar, or practically identical, credit histories. This was despite (or perhaps because of) the fact that the AI did not include gender in its decision-making process. In this case, the “gender blind” algorithm drew on other correlating factors which could be used to infer gender, such as the applicant’s address, where they shop, or what they do for a living.
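The proxy effect described above can be sketched in a few lines. In this invented example, the scorer never reads the gender field, but a shopping-category feature (with hypothetical weights standing in for what a model might learn from skewed training data) correlates with gender, so a gendered gap in credit limits reappears anyway.

```python
# Sketch: a "gender blind" scorer can still encode gender through proxies.
# All applicants, features, and weights below are invented for illustration.

applicants = [
    {"gender": "F", "shop": "boutique", "score_base": 700},
    {"gender": "F", "shop": "boutique", "score_base": 710},
    {"gender": "M", "shop": "hardware", "score_base": 700},
    {"gender": "M", "shop": "hardware", "score_base": 705},
]

# Hypothetical learned weights: imagine training data that historically
# favoured "hardware" shoppers, so the proxy carries the gender skew.
shop_bonus = {"hardware": 50, "boutique": 0}

def credit_limit(applicant):
    # Note: gender is never read here, only the proxy feature.
    return 10 * (applicant["score_base"] + shop_bonus[applicant["shop"]])

# Yet the outcomes still split along gender lines:
by_gender = {}
for a in applicants:
    by_gender.setdefault(a["gender"], []).append(credit_limit(a))
averages = {g: sum(v) / len(v) for g, v in by_gender.items()}
print(averages)  # women receive lower average limits despite
                 # near-identical base scores
```

Dropping the sensitive attribute from the inputs is therefore not enough on its own; the correlated features have to be examined too.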

Tackling biases like this begins with gathering the right training data. Many AI issues stem from a lack of diversity within both the development and training pools, and companies must take a proactive approach to ensure that their data sets encompass people of all races, genders, ideologies, and interests. This will help to prevent the algorithm from further perpetuating societal inequalities, and can go a long way towards creating a product which will deliver value to the end user.

When it comes to training AI, there is no such thing as too much data. More important still, that data must be relevant and representative of users in the real world, which can only be achieved by putting humans back at the centre of the process.

There are several ways to ensure this, including using publicly available databases, or drawing on a global community of real-world testers who can provide data inputs representing multiple cross-sections of society. Businesses should also look closely at their internal HR functions and teams: the diversity sought in training data should be replicated within the developer team behind the algorithm. A diverse team can provide a wealth of different insights and views, all of which can be fed into the algorithm’s development. By preventing bias at its source, companies can avoid significant issues further down the line.

Only as good as human programmers

Once the issue of training data has been addressed, it’s advisable to implement a structure which allows for continual feedback and modification. It may be that some users report difficulties with certain aspects of the product, for example with voice or facial recognition technology. This feedback should be monitored and incorporated into the next version of the algorithm to improve it for future users.

Despite its technical prowess, AI can only ever be as good as the humans who programme it. This raises considerable issues when we factor in all of the conscious and unconscious biases that every person carries to some degree. As we reach a point where AI has the power to influence the decisions which govern the individual and collective future of our society, it’s vital that the companies developing these algorithms take an active role in making AI fairer for all.

Kristin Simonini, VP Product, Applause

Kristin Simonini has over 20 years of product leadership experience and currently leads the product organisation at Applause, the industry-leading crowdsourced testing platform.