Avoiding AI's darkest future - is ethical AI beyond our control?

(Image credit: Geralt / Pixabay)

Over the past few decades, the idea of Artificial Intelligence (AI) and advanced robotics usurping and eliminating the human race has become a popular science fiction trope. You need only marvel at Will Smith leaping across self-driving vehicles on a fully automated highway, battling out-of-control – or very much in control – machines in I, Robot, to see how AI could (theoretically) decide that the only way to preserve human life is to master it.

Despite the hyperbole, the issues that titles such as I, Robot, Ex Machina and Avengers: Age of Ultron raise around AI ethics are very real, and they represent one of the most monumental and imminent changes ever to confront humankind.

Differentiating between AI and AGI

Before we get buried in the ethics, we need to understand what we mean by AI. We often confuse the term Artificial Intelligence (AI) with Artificial General Intelligence (AGI), the latter being the version exaggerated across popular culture. AI functions are preprogrammed: the decisions an AI system makes through machine learning are based on existing empirical data. AGI, by contrast, is sentient and self-aware, able to understand its position in the world and the actions of others.

Today our most pressing problem centres on the use of AI rather than AGI, as it becomes ever more invasive in our world without us even realising. Highly specialised AIs are already used to control decision processes and reveal hidden relationships in data, and this is having a dramatic impact on society, shaping everyday outcomes from the adverts we see to the level of healthcare we receive.

AI is becoming responsible for our personal view of the world, and what's worrying is that those responsible for these decisions cannot guarantee the process is fair and unbiased. Yes, AI is being used for good: to discover new medicines, better treatments and higher crop yields. It is also being leveraged for commercial advantage, such as providing threat detection within a firewall. That's fine too, but what about more nefarious objectives, such as manipulating voters in a parliamentary election? Is that a competitive edge, or is it illegal?

How far do we go?

Perhaps one of the most difficult questions to answer now is at what point we allow AI to decide on our behalf. Today there are some obvious areas where, as a society, we would naturally draw the line, whether through an understanding of the risk or a fear of the unknown, but there are plenty of examples where it is not that simple. Using science fiction again to demonstrate my point, take the decision of the machine in I, Robot to save Will Smith's character over a girl because his chance of survival is higher. Is this ethical? It is in the eyes of the creator.

Enterprises sit at the heart of the AI revolution and will be the decision makers on what is and is not ethical. The problem is that business leaders are likely to leap repeatedly at AI as a great problem solver or competitive edge without fully considering the ethical dilemmas first.

Driverless cars are a great example. We cannot script the outcome for every eventuality and we have no direct control over the AI decision process, so we must rely on something that can assess its environment and decide for itself. However, we need to put boundaries on those decisions, and one such boundary is avoiding any action that may endanger human life inside or outside the vehicle, as sketched below. This is a clear and extreme ethical problem to solve. Whom would we blame for a fatal collision? The car company? Fine, but who is personally prosecuted: the CEO? The designer of the car? The AI itself? If we choose the last, what are the consequences of attributing blame to something that isn't sentient? Will it become a scapegoat for enterprises?
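
To make the idea of a "boundary" concrete, here is a minimal sketch of one way it could work: a hard, human-set safety limit that filters whatever the AI proposes before it reaches the vehicle. Everything here is hypothetical and illustrative (the Action type, the risk estimates and the choose_action function are assumptions, not any real autonomous-driving API).

```python
# A hypothetical safety layer between an AI driving policy and the vehicle.
# It vetoes any candidate action whose predicted risk to human life exceeds
# a limit that humans, not the AI, have chosen.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    predicted_risk_to_humans: float  # 0.0 = no risk, 1.0 = certain harm
    utility: float                   # how well the action serves the journey

RISK_LIMIT = 0.01  # an assumed hard boundary set by people, not learned

def choose_action(candidates: list[Action]) -> Action:
    """Pick the highest-utility action inside the human-set safety
    boundary; if nothing qualifies, minimise harm instead."""
    safe = [a for a in candidates if a.predicted_risk_to_humans <= RISK_LIMIT]
    if safe:
        return max(safe, key=lambda a: a.utility)
    # No action is within bounds: fall back to the least dangerous option.
    return min(candidates, key=lambda a: a.predicted_risk_to_humans)

if __name__ == "__main__":
    options = [
        Action("maintain speed", 0.20, 0.9),
        Action("brake hard", 0.005, 0.4),
        Action("swerve", 0.05, 0.6),
    ]
    print(choose_action(options).name)  # -> "brake hard"
```

The point of the sketch is that the risk threshold is set by people rather than learned by the machine; whether that threshold, and the risk estimates feeding it, can ever be guaranteed fair is precisely the ethical problem described above.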

An Uber-issue

It took over a year for prosecutors in the US to determine that Uber would not face criminal charges for a fatal crash in Arizona involving one of its self-driving cars, and that the back-up driver is instead liable and likely to be prosecuted. Indeed, it's a lot easier to blame a person than an enterprise – but what happens when there's no back-up driver? Are a few deaths here and there caused by autonomous vehicles 'ethical' in the bigger picture, given how many fatal road traffic accidents there are globally?

The more we delve into AI decision making, the more we find ourselves faced with the same problem: is there a direct or indirect detrimental effect on a human being that could be considered unfair, unethical or uncivilised? Bigger still, how do we even detect such a problem?

Laws and guidelines

The current overarching conundrum surrounding AI ethics is really who decides what is ethical. AI is developing in a global economy, and there is a high likelihood of data exchange between multiple AI solutions. Without clear testing guidelines, or in most cases even the ability to test, we cannot know whether a system has been intentionally corrupted or simply built from a flawed set of principles. Take the Uber case. The incident is unprecedented: there are no legal guidelines, global or local, for the state of Arizona to work from, so the appropriate punishment is determined solely by the principles of the judges in Arizona.

Some believe that if we make an ethical miscalculation in the use of AI, we can always go back and try again. This is incorrect: once a technology is woven into markets and social habits, it is almost impossible to unwind. Just look at our dependency on social media, which lacks many necessary controls and yet which people still use religiously. Consider, too, the Internet of Things and "smart" devices, which consumers have lapped up despite there being no clear regulation around their security.

Although we are still some way from sophisticated AI solutions becoming commonplace, this much is clear: we need to establish minimum global guidelines for AI security and ethics before the enterprises leading the charge drive us down a route we cannot back out of. If we place the same blind faith in AI solutions and platforms as we have in companies like Facebook, perhaps science fiction won't be fiction after all.

Colin Truran