AI is everywhere and its potential is incredible, but as the saying goes, with great power comes great responsibility. Unfortunately, the likes of Uber’s “God” mode or Deliveroo’s rider “Hunger Games”, with its algorithm since deemed discriminatory, have become infamous examples of what not to do.
Across all sectors, organizations are increasingly turning to AI to overcome business challenges and propel themselves forward – but how do they avoid becoming the next to grab headlines for the wrong reasons? Where do the obstacles and pitfalls lie, and how can organizations get it right? The problems lie not with AI itself but with how it is developed and used.
First things first, organizations need to understand why ethical AI is important – beyond simply avoiding bad press.
A huge benefit of AI is that it makes an impact at scale – but that means getting it wrong, as well as getting it right, has widespread consequences. Returning to the examples of Deliveroo and Uber, those algorithms ultimately affected many jobs and many people. It is vital not to forget that AI has human consequences.
Incorporating ethical notions into AI is in fact the best way to help prevent bias and other risks. If the same traditional approaches and processes continue to be used, the same problems will continue to arise, and discrimination (albeit unconscious) will certainly continue too. Carefully developed AI can remove that human bias – but only if the AI is ethical. This is not just a theoretical statement or study (which, don’t get me wrong, are very important too), but a tool for conducting everyday business in the right way, for the organization and for its clients and users. It is as simple as that.
Pinpointing the pain points
One of the biggest initial problems with ethical AI is that the very notion of what is ‘ethical’ is vague and open to interpretation. Before organizations can hope to create ethical AI, their understanding of ‘ethical’ needs to be clearly defined to ensure everyone is on the same page. Everyone also needs to be clear about why they want to develop the AI technology and what its intended purpose is. Clearly stating why and how the technology is to be used, and laying down rules, should also help safeguard against possible misuse, as in Uber’s case.
Turning these ethical concepts into practical applications of AI is the next big challenge companies face. It is here, when businesses start creating and developing an algorithm, that issues can arise: how we build an algorithm is as important as how we use it. The initial data input, and the computational model formed from it, will determine whether organizations get their intended outcome or create an ethical dilemma instead. Many algorithms, for example, have been designed on historical data, but this often means old patterns are simply repeated – continuing the same problems and potentially the same discrimination or bias.
Progressing past problems
Organizations need to stop this repetition of the same patterns and the same problems in their tracks. A big part of that boils down to data.
AI forms the patterns it uses to perform tasks from the data it is fed. As such, an algorithm is only as good as its data. If that data is skewed in some way, it will affect the eventual output, and once patterns have been formed, AI will simply continue to follow them. Consequently, quality data is of the utmost importance, as is understanding where that data comes from. Organizations must use current, clean data – and, if needed, clean the data up before taking any further steps. In the end, the algorithm essentially implements the patterns hidden in the data; it is the data that does the heavy lifting.
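As a loose illustration of how skewed data reproduces itself, one simple check data teams sometimes run before training on historical decisions is the “four-fifths rule” disparate-impact ratio. This is only a sketch: the groups, numbers and the 0.8 threshold are illustrative assumptions, not the method of any company named in this article.

```python
# Sketch: disparate-impact check on hypothetical historical hiring data.
# Group labels, outcomes and the 0.8 cut-off (the "four-fifths rule")
# are illustrative assumptions for this example only.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical past decisions (1 = hired, 0 = rejected).
men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 hired -> 0.75
women = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 hired -> 0.375

ratio = disparate_impact(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50

# A common rule of thumb: ratios below 0.8 warrant investigation
# before this data is allowed to teach an algorithm its patterns.
if ratio < 0.8:
    print("Warning: training on this data risks encoding the same bias.")
```

A model trained naively on data like this would simply implement the 0.50 ratio as a pattern – which is the point of auditing the data before, not after, the algorithm is built.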
There is also a key element of possible human error to be aware of at this stage. Even where the data has been cleaned up to protect against possible bias or discrimination, those responsible for inputting that data and creating the algorithm can unconsciously project their own prejudices. The development of AI needs to be approached with an open mind, free of bias, to guard against the formation of discriminatory patterns. We often talk about diversity and inclusion in the workplace; we need to start thinking about diversity and inclusion in the algorithm!
Even if organizations have taken these precautions with the data initially fed into the algorithm, issues may still occur. Organizations need to build models to continually test their data, and applications of AI must be analyzed and reviewed regularly to make sure the algorithms are not repeating incorrect patterns or producing misleading results. Any new data going in also has to be analyzed, because as more people engage with the technology, it will begin to learn behaviors from them. No one wishes to repeat Microsoft’s mistake with ‘TayTweets’, so organizations must constantly check what new data is being inputted and what the algorithm is learning. If organizations are not conscious of this and patterns go wrong, they may be blind to the issues – until they develop into a bigger problem. Beyond a shared vision of what is ethical, everyone developing, working with or using the AI technology, from data scientists to end users, needs a solid understanding of the algorithm itself, so they can spot problems and identify when it is going against its intended principles and purpose.
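One lightweight way to operationalize this ongoing review is to gate each new batch of incoming data against a vetted baseline before it is used for retraining. The following is purely a sketch: the baseline rate and drift threshold are assumed values, not a prescribed standard.

```python
# Sketch: flag incoming data batches whose positive-outcome rate drifts
# far from the baseline the model was originally audited against.
# Both constants below are illustrative assumptions.

BASELINE_POSITIVE_RATE = 0.40  # rate observed in the audited training set
DRIFT_THRESHOLD = 0.10         # max acceptable absolute deviation

def check_batch(batch):
    """Return (rate, ok) for a batch of 0/1 labels from new interactions."""
    rate = sum(batch) / len(batch)
    ok = abs(rate - BASELINE_POSITIVE_RATE) <= DRIFT_THRESHOLD
    return rate, ok

new_batch = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]  # 2/10 -> 0.20
rate, ok = check_batch(new_batch)
if not ok:
    print(f"Batch rate {rate:.2f} drifted from baseline "
          f"{BASELINE_POSITIVE_RATE:.2f}; hold for human review.")
```

The design choice here is that a drifting batch is held for human review rather than silently absorbed – exactly the safeguard that was missing in the ‘TayTweets’ case, where the system learned freely from whatever users fed it.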
Organizations, and their business hopes and aspirations, are only going to be as good as their AI algorithms. As such, these algorithms must be developed carefully and thoughtfully, tested regularly and not used in haste – how we use them is as important as how we build them. Luckily for businesses, there is increasing external support for creating ethical AI, such as the EU’s seven principles for AI, which give data scientists and AI experts a framework for making ethical AI applications operational. Ultimately, however, responsibility for the success and impact of the technology – and whether it is praised for innovation or condemned for discrimination – lies with individual organizations and how they develop and use AI.
With great power comes great responsibility, and no, this quote is not originally from the Spiderman series!
José Alberto Rodríguez Ruiz, Chief Data Protection Officer, Cornerstone