
If ethics are inherently human, how can machines be ethical?

(Image credit: Alex Knight / Unsplash)

The Association for Computing Machinery (ACM), the world's largest educational and scientific computing society, has its own Code of Ethics, as well as a set of ethical principles jointly approved with the IEEE as the standard for teaching and practising software engineering. In essence, these codes outline how and why the systems programmers build must contribute positively to society and human well-being. They state that programmers should work to develop computer systems that reduce negative consequences to society, such as threats to safety and health, and that make everyday activities and work easier. With recent progress in the field of artificial intelligence (AI), these ethical codes are coming under more scrutiny.

In order to understand the scale of the issue around AI and ethics, we first need to understand what an objective function is. Take, for example, a football team. We might say the team's objective function is to score as many goals as possible (though I would argue that its true objective function is to maximise the probability of winning the match). Under the goal-scoring objective, if the team scores 12 goals, the maximum it had the opportunity to score, then it doesn't matter that the opposition goes on to score 50: the objective has been met, yet the match has been lost. Similarly, most businesses have an objective function to maximise profits over the lifetime of the business, rather than just the revenue they generate or the profit made in any one year.
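To make the distinction concrete, here is a minimal sketch in Python, my own illustration rather than anything from a real analytics system, of the two competing objectives described above; the function names and numbers are purely hypothetical.

```python
# A minimal sketch contrasting two possible objective functions for a football
# team. Everything here is illustrative; no real system is being described.

def goals_scored_objective(our_goals: int, their_goals: int) -> float:
    # Naive objective: only our own goals count, so conceding heavily is "fine".
    return float(our_goals)

def win_probability_objective(our_goals: int, their_goals: int) -> float:
    # Closer to what we actually want: did we win the match?
    return 1.0 if our_goals > their_goals else 0.0

# The naive objective looks maximised (12 goals) even though the match is lost 12-50.
print(goals_scored_objective(12, 50))     # 12.0
print(win_probability_objective(12, 50))  # 0.0
```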

Objective functions have evolved rapidly with the advances in technology of the last few decades, from linear programming to neural networks that make it possible to optimise non-linear objective functions. Even with machine learning, engineers must write the objective function down in code; they can't simply instruct a computer to maximise the probability of winning a football match. They must express it systematically and analytically, establishing the rules by which the machine will judge whether the match is being won.
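As a rough illustration of what "writing an objective function down in code" means, here is a tiny, self-contained sketch of my own, assuming a single-variable objective, in which a hand-coded non-linear objective is handed to a simple gradient-ascent optimiser.

```python
# A minimal sketch of optimising a hand-coded, non-linear objective function.
# The objective, its gradient and the step size are all illustrative choices.

def objective(x: float) -> float:
    # A non-linear objective with a single peak at x = 2.
    return -(x - 2.0) ** 2

def gradient(x: float) -> float:
    # Hand-derived gradient of the objective above.
    return -2.0 * (x - 2.0)

x = 0.0                      # initial guess
for _ in range(100):
    x += 0.1 * gradient(x)   # step uphill on the objective

print(round(x, 3))           # approaches 2.0: the optimiser finds the peak we coded
```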

The systems we use to optimise these objective functions are getting more and more powerful. So what has this got to do with ethics?

A common objective-function theme of the last few years lies with many media and internet companies. What these businesses are trying to do is maximise engagement, which is another term for how much time people spend on their website or using their programme. Engagement itself can't be written directly in code, so the techies at these businesses tend to use a proxy measure for it: maximise the probability of someone clicking on a link. One way of achieving this is to publish ‘clickbait’ - content that isn't actually worth the audience's time but encourages the reader or viewer to click on it anyway. The main issue with watching auto-play YouTube videos? It's likely to be a waste of time.
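As a hypothetical sketch of how such a proxy objective plays out, not drawn from any real platform's code, the ranker below simply surfaces whichever item a model predicts is most clickable, regardless of whether that item is worth the viewer's time.

```python
# A hypothetical sketch of ranking by a proxy objective: predicted click
# probability stands in for "engagement". The data and scores are made up.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_click_probability: float  # assumed output of some trained model

def choose_up_next(candidates: list[Video]) -> Video:
    # The proxy objective: pick the item most likely to be clicked.
    return max(candidates, key=lambda v: v.predicted_click_probability)

candidates = [
    Video("In-depth documentary", 0.08),
    Video("YOU WON'T BELIEVE WHAT HAPPENS NEXT", 0.31),
]
print(choose_up_next(candidates).title)  # the clickbait wins the slot
```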

Showing people videos that they care about and find interesting is a worthwhile task if those videos are legitimate and do not cause any harm. But you can't write that down in code. So the programmers instead write code that maximises the probability users click on “Up Next”, regardless of the content or the psychological or political message the videos are pushing.

An example of how businesses can programme machines towards ethical or unethical ends lies with the videos posted online (by both Republicans and Democrats) during the 2016 US presidential election campaign. Viewers were systematically exploited by an algorithm that could tell they were vulnerable to such messaging. If code is deliberately misinforming people, or preying on vulnerability to deliver a party-political message, then this becomes an ethical issue.

Facebook became so popular in Myanmar that the United Nations published a report looking into the role the social media site had played in the spread of misinformation that fuelled the 2017 genocide. Some 400,000 Muslims had to flee Myanmar to avoid persecution after a group used the popularity of the site to share slanderous and false stories about the threat Muslims supposedly posed in the country. Facebook had no intention of inciting hatred or provoking genocide in Myanmar. But it did have an objective function to push content that users would click on, and to spread the content with the highest hit-rates more widely. What the algorithm pushed was the most popular content - which happened to be inflammatory - with terrible consequences.

So what can we do? We come back to the programmers and the ethical requirement to “positively contribute to society”. One thing that can be done is to keep a human in the loop: if a human is overseeing the content being spread, they can prevent situations such as the one in Myanmar. It's not always possible to keep a human involved when millions of posts and videos are being shared every minute, so, without doubt, better and stronger algorithms are also needed.
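As a rough, hypothetical sketch of what keeping a human in the loop can mean in practice, my own illustration rather than any platform's actual pipeline, content that a model scores as potentially harmful is held back for a human moderator instead of being published automatically.

```python
# A hypothetical human-in-the-loop publishing gate. The threshold, the scoring
# function and the example posts are all illustrative assumptions.

REVIEW_THRESHOLD = 0.2  # assumed cut-off for predicted harm

def predicted_harm(post: str) -> float:
    # Stand-in for a trained classifier returning a harm score between 0 and 1.
    return 0.9 if "rumour" in post.lower() else 0.05

def publish_or_review(post: str) -> str:
    # Low-risk content flows through automatically; anything above the
    # threshold is held back for a human moderator to inspect.
    if predicted_harm(post) >= REVIEW_THRESHOLD:
        return "held for human review"
    return "published automatically"

print(publish_or_review("A recipe for noodle soup"))              # published automatically
print(publish_or_review("An inflammatory rumour about a group"))  # held for human review
```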

Technology will keep improving and our systems will get smarter and smarter, though this relies on a few assumptions: that we can keep making technological improvements, and that we want to. A further presumption, if we assume our technology will continue to develop, is that the scale of human or machine intelligence is open-ended, or at least that its peak lies far beyond current human-level intelligence.

Biggest challenge - ever

If we look at the development of the human brain, evolution reaches a cap at some point. “Super intelligence” is a concept our brains are simply not designed to think about; we haven't evolved to a point where we can truly understand it.

How long until we reach super intelligence? When asked, many researchers in the industry guess somewhere between 50 and 100 years. The bottom line is that no one really knows, but this seems a sensible estimation.

Imagine, for a moment, that we are at some point in the future and have successfully managed to create a super intelligence. We must now give it an objective function. Perhaps we are the owners of a paperclip company and our super intelligence is given the task of creating more paperclips. It starts off doing a great job - churning out paperclips faster than we could have possibly imagined. The system is designed to make itself better and improve its machines. It realises that the warehouse it is housed in contains some steel, so it strips the metal from the warehouse building and turns it into paperclips. Then it realises that humans have small amounts of metallic atoms in their bodies...you can see where this is going. The super intelligence would not stop until it had wiped out the entire human race and used up all the resources on the planet. This may sound like some dystopian future, but the machine was simply doing what it was asked to do.

If we can programme a super intelligence to simply “do what I would do”, then the ethical standard depends very much on who is doing the programming. Assuming we can build a super intelligence, we will have one solitary chance to give it an objective function that is 100 per cent correct. The people responsible for implementing this are likely to cut corners - they are likely to be governments in a race with one another to be the first to achieve something extraordinary.

There are huge incentives for being the one who did it first, rather than the one who did it right. All governments must take responsibility for working together, because it only takes one government to get it wrong - and the negative consequences for everyone will be very, very serious. We need an unprecedented level of international cooperation to make sure that everyone does this correctly. Perhaps we need a Hippocratic oath for programming that all governments are bound to adhere to.

If we accept the idea that creating super intelligence is likely to be the biggest challenge humanity has ever faced, resulting in the most powerful weapon ever invented, then it’s also likely to be the hardest to get right.

We must start thinking about the solutions around regulation, licensing and monitoring - about laying the groundwork for success - because if we get it right then the upsides are extraordinary. If we get it wrong, it could mean the end of everything we know.

Sam Ringer, Machine Learning Engineer, Speechmatics