Practicing ethical AI: How data scientists and business leaders can eliminate bias from machine learning algorithms


In early April, the EU announced a host of ethical guidelines on artificial intelligence (AI), with the aim of addressing whether AI is actually good for society, an important question that has already led to the closure of a number of AI applications. The announcement made me think back to a Reuters report published at the end of last year (1), which detailed the series of events that led Amazon to shut down a proprietary machine learning algorithm built to identify the top candidates in its job applicant pool. Despite the company’s best efforts to surface the most qualified job-seekers, Amazon found that the tool consistently penalised CVs that included the word “women’s” and the names of two all-women’s universities.

For data scientists, the story and others like it serve as a reminder that, if we’re not careful, our pursuit of objective innovation may instead perpetuate some of the pernicious biases that permeate our society. Indeed, in this instance, the structural problem is easy to spot! Machine learning-driven algorithms used to predict future successes are generally based on past ones — and since the majority of Amazon’s past software development hires have been men, its algorithm reached the conclusion that future hires ought to be as well.
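To see how easily this happens, consider a minimal sketch, using entirely synthetic data and scikit-learn, in which the only pattern available to the model is the historical skew baked into its labels. The feature, the hiring rates, and the setup below are illustrative assumptions, not a reconstruction of Amazon’s actual system.

```python
# A minimal sketch with synthetic data: the only "signal" available to the
# model is the historical bias baked into the labels. The feature and the
# rates are invented for illustration, not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical feature: does the CV mention the word "women's"
# (e.g. "captain of the women's chess club")?
mentions_womens = rng.binomial(1, 0.5, size=n)

# Historical hiring labels skew against CVs containing that word, purely
# because most past hires were men, not because of qualification.
hired = rng.binomial(1, np.where(mentions_womens == 1, 0.05, 0.30))

model = LogisticRegression().fit(mentions_womens.reshape(-1, 1), hired)

# The learned coefficient comes out strongly negative: the model has
# "concluded" that the word "women's" predicts rejection.
print(model.coef_)   # roughly [[-2.0]] with these synthetic rates
```

Nothing about the word itself says anything about job performance; the model simply reproduces the pattern in its training labels, which is exactly the dynamic that caught Amazon out.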

Amazon’s creation of a structurally-biased algorithm is far from an isolated incident. In an industry that prizes innovation and a “move fast and break things” ethos, we often fail to consider all the implications of the technology we build. When left unchecked, biased algorithms have done real harm to women, people of colour, and members of other marginalised groups – preventing them from accessing job opportunities (2), portraying them negatively in the media (3), and perpetuating the ugliest of racial stereotypes.

Ironically, it is precisely the attempt to automate away the fallibility of individual human judgment that leads to this algorithmic amplification of our collective societal biases. From data science practitioners to AI team leads and all the way up to the C-suite, the solution is to bring a thoughtful, ethical human perspective back into the picture.

Building a culture that cares

Ethical application of data science begins at the very top. When a company’s products or services rely on algorithmic use of data, its executives — whether or not their role entails any data-related expertise — should make a practice of inquiring into the potential for bias in the underlying data and whether the algorithms can create biased feedback loops.

Asking these questions, and following up on problematic answers, tells data science practitioners that ethical algorithms are an important part of their job, and that they’ll be held accountable for providing substantive, honest responses.

With this information at hand, it’s incumbent upon leaders to make ethical business decisions — especially when their algorithms prioritise sexist, racist, or otherwise inappropriate feedback. Google, for instance, quietly adjusted its search algorithm after the social scientist Dr. Safiya U. Noble reported that her searches for the term “black girls” returned pornographic content at the top of the very first page (4). Better still, leaders can catch and address issues before they reach the public, and even shut down irremediably biased tools, as Amazon did.

Diversity matters

Functional leaders in data, machine learning, and AI organisations have an additional responsibility at the people level: namely, to build diverse teams of practitioners. At a minimum, they should strive to ensure that the people building their algorithms are representative of the customers who will interact with the finished products and services. A demographically diverse team brings a more robust perspective whenever demographic data is being used in a sensitive manner.

Really, no matter the industry, this is just good business. Consider: B2B companies, whose products the public may never see, nonetheless need employees who are familiar with the various aspects of their own business, as well as those who understand their clients’ perspectives and can assess whether potentially harmful algorithmic feedback loops are being created.

Good statistical hygiene protects customers

Those who work directly with the data – machine learning engineers, data scientists – have tremendous power to ensure that the tools they build create equitable, ethical results, and a personal responsibility to report potential concerns to leadership. What is most important is that they temper the “move fast and break things” ethos with steadfast attention to good statistical hygiene and an eye to the context in which their models operate.

Before putting an algorithm to use, data practitioners must stop to ask whether the data sets on which their models train appropriately reflect the populations from which they’re drawn, and if not, what bias that might introduce into model results. Such biases can lead to embarrassing and offensive consequences, as when Google’s image recognition algorithm infamously mislabelled black people as gorillas because it had not been fed an adequately diverse set of images when taught to identify humans. That algorithmic flaw was avoidable; a more thoughtful and rigorous approach by a diverse group of data scientists could almost certainly have prevented it.
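As one illustration of what that check might look like in practice, here is a minimal sketch in Python with pandas; the column names, reference proportions, and helper names are hypothetical, and a real audit would go much further (intersectional groups, calibration, false-positive and false-negative rates, and so on).

```python
# A minimal, hypothetical representativeness and disparity check.
# Column names ("group", "label", "prediction") and the reference shares
# are assumptions for illustration; adapt them to the data actually in use.
import pandas as pd

def representation_gap(train_df: pd.DataFrame,
                       reference_shares: dict,
                       group_col: str = "group") -> pd.Series:
    """Each group's share of the training data minus its share of the
    population the model will serve (negative = under-represented)."""
    observed = train_df[group_col].value_counts(normalize=True)
    expected = pd.Series(reference_shares, dtype=float)
    return observed.reindex(expected.index, fill_value=0.0) - expected

def per_group_accuracy(scored_df: pd.DataFrame,
                       label_col: str = "label",
                       pred_col: str = "prediction",
                       group_col: str = "group") -> pd.Series:
    """Accuracy computed separately for each group, to surface disparities
    that a single overall accuracy figure would hide."""
    correct = scored_df[label_col] == scored_df[pred_col]
    return correct.groupby(scored_df[group_col]).mean()
```

Large negative gaps or wide accuracy differences between groups are exactly the kind of problematic answers worth escalating to leadership before a model ships.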

---

As astute minds have noted, AI is transforming the 21st century the way electricity did the 20th. Among the many parallels is this one: we have breakthrough technology on our hands, but we desperately need to make sure we’re harnessing it correctly. In its early days, electricity caused harm and destruction through fires and electric shocks. Over time, electricians’ organisations and government agencies developed installation codes that kept people safe.

A similar approach is needed to craft best practices for effective, unbiased, and especially non-discriminatory AI. I am encouraged that these guidelines are emerging through research and experimentation; only then will our industry start taking the problem of algorithmic bias seriously, so that we can begin moving along the path to solving it.

Catherine Williams, Chief Data Scientist, Xandr
Image Credit: PHOTOCREO Michal Bednarek / Shutterstock