AI bias: blame the workman, not his tools

(Image credit: Geralt / Pixabay)

First it was taking our jobs, now it’s discriminating against our customers and causing some major crises for brands. Artificial intelligence (AI) has brought efficiencies and improvements to customer experience and engagement. However, a number of headline-grabbing stories have revealed that a significant and pervasive issue remains: AI bias.

Big data’s big value

Customer data is a highly valuable asset to any brand. It allows personalization of online experiences, something that’s particularly valuable to e-commerce. After all, if you have data on a customer’s previous browsing and buying behavior and their demographic sector, you can predict what they’re likely to buy – plus when they’re likely to buy and how much they’re likely to pay – and serve them relevant information. It enables brands to target and deliver that holy trinity of ‘right customer, right time, right message’. The shopper gets a seamless and expedited journey from browse to buy, while the brand converts sales and builds customer loyalty. Win-win.

The endless online ecosystem requires brands to be able to do this accurately and at scale. And this is where AI comes in. In addition to personalizing customer engagement, brands are using AI tools for supply chain management, inventory management, programmatic advertising, smart recommendations, and chatbots.

Like any new technology, there are barriers to early adoption, including the need to upskill workforces to use AI platforms, the fear of ‘humans replaced by robots’ job losses, and the initial outlay on the technology. We’re seeing a tailing off of the first two challenges as AI moves into the mainstream. As for the third, while outlay is required, AI is swiftly becoming the only way to manage vast amounts of customer data and serve the best possible digital experiences.

Fortunately, the results of adoption are being acknowledged. A 2019 Deloitte survey of 1,100 US executives from advertising and marketing firms considered early AI adopters found that 82 percent reported a positive return on their investment in AI initiatives. Adoption is increasing too: 58 percent of respondents to a 2019 McKinsey survey said that their organization had embedded at least one AI capability into a process or product in at least one function or business unit, up from 47 percent in 2018.

AI’s dark side?

That’s the good of AI. But how about the other side of artificial intelligence that is often a focus of media stories? This narrative thread seems to demonize AI by exposing it as racist, sexist, and discriminatory. AI tools are trained on data, and the way this is collected, ingested, analyzed and utilized will impact the results. If AI is trained on biased data sets, the outcomes may be biased too. If a facial recognition tool is fed photos of predominantly white men, for instance, it may be unable to accurately recognize and analyze the faces of non-white women, resulting in major errors.
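The mechanism is easy to demonstrate with a toy model. In this sketch (all numbers and group labels are illustrative, not drawn from any real system), a naive nearest-neighbor “recognizer” is trained on a set that is 90 percent group A; its error rate on the underrepresented group B is noticeably higher:

```python
import random

random.seed(0)

# Each toy "face" is a single feature value drawn from a group-specific
# distribution; the skew (900 vs 100) mirrors a biased photo collection.
def sample(group, n):
    mean = 0.0 if group == "A" else 3.0
    return [(random.gauss(mean, 1.0), group) for _ in range(n)]

train = sample("A", 900) + sample("B", 100)  # heavily skewed training set

def predict(x, k=5):
    # Naive k-nearest-neighbor vote over the single toy feature.
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    groups = [g for _, g in nearest]
    return max(set(groups), key=groups.count)

def accuracy(group, n=200):
    test = sample(group, n)
    return sum(predict(x) == g for x, g in test) / n

acc_a = accuracy("A")
acc_b = accuracy("B")
print(f"group A accuracy: {acc_a:.2f}, group B accuracy: {acc_b:.2f}")
```

Because group A dominates the training data, the decision boundary shifts against group B: the model is measurably less accurate for the group it saw least, even though nothing in the algorithm itself is “biased”.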

This is a problem not only for inexperienced developers; some of the biggest names in tech have come under fire over accusations of AI bias. Amazon, for instance, was accused of using a sexist AI-enabled tool that automatically sorted CVs. The algorithm self-learnt to favor male candidates over female ones, because it had been trained on data sets made up predominantly of men’s CVs. A 2015 study, meanwhile, found that only 11 percent of the results of a Google image search for the term ‘CEO’ showed photos of women, despite the fact that 27 percent of CEOs in the US are female. Science and tech research university MIT has also fallen into the AI bias trap. It was forced to take down its 80 Million Tiny Images database – which is used to train image-recognition AI – after accusations that it trained other AI systems to describe people using offensive terms. It labelled some photos of women with sexist and misogynistic words, and some of those of non-white people with racist and derogatory language.

The price (discrimination) isn’t right

In the marketing and e-commerce sector, there are also potential pitfalls to using AI platforms. These include price discrimination. Sometimes called dynamic or personalized pricing, the practice involves charging different people different amounts for goods on ecommerce sites. Prices will automatically adjust based on the data a brand has on the customer. This could include things like their IP address, location, browsing history and past purchases, through to more personal information like their occupation, level of education, age, race and gender. And this is where problems arise.

An AI might recommend a product for a certain shopper to buy online. If that shopper hesitates before purchasing, the algorithm may infer that they are price-sensitive – and poorer – compared with someone who buys quickly and is therefore assumed to be more affluent. The algorithm doesn’t actually know whether the shopper is rich or poor, but because its purpose is to increase conversion, it may offer larger discounts to those it assumes are poorer and charge those it assumes are richer more. This is, in effect, price discrimination in the service of sales.
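A minimal sketch of that pricing logic might look like the following. The field names, thresholds, and adjustments are all hypothetical; the point is that the model never sees income directly, only behavioral proxies for it:

```python
BASE_PRICE = 100.0

def quoted_price(profile: dict) -> float:
    """Adjust a quote using behavioral signals as affluence proxies (illustrative)."""
    price = BASE_PRICE
    # Shoppers who hesitate over many sessions are assumed price-sensitive:
    # offer them a discount to push conversion.
    if profile.get("sessions_before_purchase", 0) > 3:
        price *= 0.85
    # Fast checkouts are read as a sign of affluence: charge more.
    if profile.get("avg_checkout_seconds", 999) < 60:
        price *= 1.10
    return round(price, 2)

hesitant = {"sessions_before_purchase": 5, "avg_checkout_seconds": 300}
decisive = {"sessions_before_purchase": 1, "avg_checkout_seconds": 30}

print(quoted_price(hesitant))  # discounted quote
print(quoted_price(decisive))  # marked-up quote
```

Two shoppers see different prices for the same product based purely on inferred traits – exactly the kind of outcome that can shade into discrimination when those proxies correlate with demographics.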

The only way to avoid this is to avoid using demographic patterns altogether. There are examples from the past where a brand may have used gender, or the prominent ethnicity in a given area, to shape the kind of contract it offered a prospective client. AI might provide highly accurate results, but the problem remains that the practice is unethical. While data can be used for things like classifying product lines, using demographic data puts retailers at risk of discriminating, even when the data provides accurate results.
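One basic mitigation is to strip demographic fields from customer records before they reach any pricing or scoring model. A minimal sketch, with illustrative field names:

```python
# Fields that should never feed a pricing or scoring model (illustrative list).
PROTECTED_FIELDS = {"age", "race", "gender", "occupation", "education"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record with demographic fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

customer = {
    "customer_id": "c-101",
    "browsing_history": ["shoes", "jackets"],
    "age": 42,
    "gender": "F",
}
cleaned = strip_protected(customer)
print(cleaned)
```

Note that removal alone is not a complete fix: seemingly neutral fields such as location or browsing history can act as proxies for the demographics you removed, so the remaining features still need auditing.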

We’re talking here just about people shopping online and buying products. Even bigger challenges arise – and potentially even more harmful impacts – when AI uses demographic data to make decisions about things like providing credit and financial services to an individual, or deciding on insurance payments.

Wield it wisely

Delivering customized digital experiences based on a visitor's unique characteristics and browsing behavior has been shown to improve engagement metrics and increase conversion rates. As tools have evolved, the way we personalize content and the type of content that we personalize has also changed. The only way of doing this at the kind of scale (and with the kind of accuracy) required by brands today is via AI tools.

That word, ‘tools’, is critical here. Bad AI makes for attention-grabbing headlines, but amidst all the headlines calling out AI for being racist, sexist and discriminatory, it’s important to remember that AI is simply a tool. It can therefore only ever be as biased as the people who produce the data upon which it relies. As with any tool, AI can be used in a ‘good’ or ‘bad’ way, wielded for positive or negative ends. The key for brands is to have the right underlying infrastructure: one that enables in-depth analysis of data and transparent, open means of data collection.

Data platforms must stitch together once-siloed databases to create a holistic – but anonymized – view of every customer. This can then be used for analysis and engagement, to engage new customers and optimize the experience for existing ones.

Omer Artun, Chief Science Officer, Acquia

Omer is Chief Science Officer at Acquia and the founder of AgilOne. He holds a Ph.D. in Computational Neuroscience and was a consultant with McKinsey & Company, consulting high-tech and retail companies on strategy development.