Bias busting – Who must plug the AI accountability gap?

The last ten years have seen an explosion in funding and focus on artificial intelligence (AI) research and technical development. AI is automating an increasingly vast array of tasks, from fraud prevention to patient scheduling in healthcare. It’s also augmenting human decisions on everything from investment strategies to customer retention and go-to-market plans for new products.

Add to that an expected $15.7 trillion contribution to the global economy by 2030 and $9.3 billion in venture capital funding in 2019 alone, and it’s clear AI is not only here to stay, but is becoming the foundational technology with which we’ll solve problems in business and the world at large.

However, while the insights offered by AI are invaluable, it’s important to realise that AI isn’t the faultless system that always provides a perfect answer, as technologists would sometimes have us believe. This is partly because the AI systems we create are built from algorithms and data that, however inadvertently, are imbued with many of the biases held by the people who created them.

The result is powerful AI systems that aren’t always fair or free from prejudice and that, when coupled with the vast power of modern computing, inadvertently help to perpetuate and accelerate gross inequality.

No silver bullet

An example of how human input can corrupt benign AI systems could be seen in the rollout of Microsoft’s ill-fated digital assistant Tay, which was quickly removed from Twitter after its interactions with humans led it to start tweeting racist, sexist and xenophobic comments.

Another example could be seen in Amazon’s aborted AI hiring and recruitment system, developed to score job candidates based on their applications. After learning patterns from applications submitted over ten years, most of which came from men, the system began to penalise women: applications that included the word "women's" or mentioned all-women colleges received lower scores.
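The mechanics of this kind of failure are easy to reproduce in miniature. The sketch below is a hypothetical toy, not Amazon’s actual system: it trains a naive word-frequency scorer on a small, skewed synthetic hiring history. Because the word "womens" never co-occurs with a hire in that history, the scorer learns to mark down any application containing it, with no explicit rule about gender anywhere in the code.

```python
from collections import Counter

# Synthetic, invented history: past applications (as word lists)
# paired with hire outcomes (1 = hired, 0 = rejected).
history = [
    (["chess", "club", "captain"], 1),
    (["software", "engineer"], 1),
    (["womens", "chess", "club", "captain"], 0),
    (["womens", "college", "software"], 0),
    (["engineer", "lead"], 1),
    (["womens", "college", "engineer"], 0),
]

def train_word_scores(history):
    """Score each word by the hire rate of applications containing it."""
    hired, seen = Counter(), Counter()
    for words, outcome in history:
        for w in set(words):
            seen[w] += 1
            hired[w] += outcome
    return {w: hired[w] / seen[w] for w in seen}

def score(words, word_scores, default=0.5):
    """Average the learned per-word scores over an application."""
    return sum(word_scores.get(w, default) for w in words) / len(words)

scores = train_word_scores(history)

# "womens" appears only in rejected applications, so the model
# penalises every application that contains it.
print(score(["engineer", "lead"], scores))
print(score(["womens", "college", "engineer"], scores))
```

The point of the toy is that the bias lives entirely in the training data: nothing in the scorer mentions gender, yet the skewed history is enough to produce a discriminatory model.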

These are but two high-profile examples of AI going wrong. Yet it’s important to note that the actual damage done by bias in AI can be far more subtle and insidious. When one group of people is marginally preferred over another, given enough time that small edge can compound into massive divergences in where the two groups finally end up.
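To see how a marginal preference compounds, here is a minimal and entirely hypothetical simulation: each round, candidates are drawn in proportion to the groups’ current representation, group A’s scores get a small nudge (the 0.05 bias value is arbitrary), and the top half is selected. Because the selected share feeds the next round’s candidate pool, a tiny per-round edge snowballs.

```python
import random

random.seed(42)

def simulate(rounds=20, bias=0.05, pool=1000):
    """Repeated selection with a small scoring advantage for group A.
    Illustrative only: the bias value and selection model are invented."""
    share_a = 0.5  # both groups start equally represented
    history = [share_a]
    for _ in range(rounds):
        n_a = int(pool * share_a)
        # Scores are uniform noise; group A gets a constant small boost.
        a_scores = [random.random() + bias for _ in range(n_a)]
        b_scores = [random.random() for _ in range(pool - n_a)]
        # Select the top half of the combined pool.
        selected = sorted(a_scores + b_scores, reverse=True)[: pool // 2]
        cutoff = selected[-1]
        a_selected = sum(1 for s in a_scores if s >= cutoff)
        # Next round's pool mirrors this round's selection.
        share_a = a_selected / (pool // 2)
        history.append(share_a)
    return history

hist = simulate()
print(f"group A share at round 0:  {hist[0]:.2f}")
print(f"group A share at round 20: {hist[-1]:.2f}")
```

Even though the per-round advantage is tiny, the feedback loop between who gets selected and who makes up the next pool drives group A’s share steadily upward.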

That’s why we recently conducted research exploring this issue, and found that two in five (41 per cent) UK respondents thought that AI in its current state is biased, with 38 per cent blaming inaccurate data for this bias. As AI implementation quickens pace, there clearly needs to be a robust response to the social and ethical dilemmas AI poses in the form of implicit racial, gender or ideological biases.

Who watches the watchers?

Unfortunately, efforts to make sure this new technology is fit for purpose have not received the same funding or attention in our ‘move fast and break things’ technology sector culture. That’s why there needs to be clarity over who is ultimately responsible for making sure that this powerful technology is safe and fair.

In its 2018 report, the AI Now Institute, a New York University think tank, summed up one of the key issues: the accountability gap is growing. Recent scandals demonstrate that the gap “between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller.”

Interestingly, our research also found that this is the space the UK public feels the government is best placed to step into, with almost a third (31 per cent) wanting the UK government to take on this responsibility and be more accountable for tackling bias in AI in future.

There has been progress here, with the emergence of numerous ethics committees, government reports and guidelines for the creation and deployment of AI technologies. However, these efforts are not without their difficulties, as evidenced by the collapse of Google’s ‘Advanced Technology External Advisory Council’ following outcry over the makeup of its membership.

The human touch

Currently, we do not have a mature global standards body to help shape the governance of AI. That’s why it’s important that data analytics be combined with human intuition, particularly as the capabilities of AI grow in complexity. After all, we humans bring awareness, perception and, ultimately, decision-making. Rather than replacing business intelligence tools or teams, augmenting users will expand adoption by helping them become more data literate and allowing them to uncover insights in an easier and more ‘governed’ manner.

When developing autonomous technologies, we must keep humans in mind: not just in terms of the end goal, where we visualise how an AI innovation could assist with our day-to-day lives, but also in how a particular AI system or algorithm can be built to function alongside human intuition. Autonomous AI technologies must always be developed as assistive technologies, not ones that amplify our differences.

Elif Tutuk, R&D, Qlik

At the helm of Qlik’s R&D function, Elif leads a team of engineers and scientists whose only role is to explore the latest innovations in data science and other emerging technologies.