
Q&A: Understanding the role of ethics in AI

(Image credit: Geralt / Pixabay)

Q: What will human/machine partnerships look like in the future workplace and in our everyday lives?

A: The human/machine partnership is essentially how we accelerate the progress of human thought and creation. The modern history of humanity is intrinsically linked with machines. Just look at the industrial revolution, a period defined by our use of machines to accelerate our growth. Of course, there has always been significant tension in this relationship – even one of the terms used today for technology’s detractors, Luddites, is a throwback to the labourers who opposed labour-saving machinery in the 19th century. The current anxiety around technology, especially automation-based technologies, may seem fresh, but it’s a tension we’ve been grappling with for centuries.

We have many problems facing humanity today that need to be tackled quickly – AI has the potential to accelerate our response. One example is agriculture. Technology has long helped address the increasingly difficult task of feeding the world, and now we’re using AI and automation to continue that evolution, helping the industry increase crop yields and move us closer to meeting global demand for food.

As with the technologies that have preceded it, we have a long history of using our tools for both good and ill. The future of our partnership with technology ultimately falls into our hands, and we all have a responsibility to ensure that the scales tip in favour of good.

Q: Are there any ethical implications that businesses need to consider when introducing AI applications into their businesses?

A: As an industry, we’ve talked breathlessly about the threat of AI and how powerful it is. That narrative has become pervasive, and it’s popular to write about the technology in polar terms – it’s either going to save the world or lead us to ruin. This leaves little room for the grey area in between, which is far closer to the reality.

I think that what businesses need to be mindful of is what data they feed into their AI algorithms. As we’ve seen in several unfortunate examples, if you don’t train your AI on a broad, representative set of data, it can significantly amplify bias in the end product. For example, if your facial recognition programme is trained only on white men, then you’re going to see some unbalanced outcomes.
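To make the point concrete, one simple way to surface this kind of bias is to break a model’s accuracy down by demographic group rather than reporting a single overall number. The Python sketch below uses entirely hypothetical labels, predictions and group assignments – it is not drawn from any system mentioned here – but it illustrates the idea: a large gap between groups is a warning sign that some populations are under-represented in the training data.

from collections import defaultdict

# Hypothetical test-set results: ground truth, model predictions,
# and a demographic group label for each sample.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "b", "b", "a", "b", "b", "a"]

correct = defaultdict(int)
total = defaultdict(int)
for truth, pred, group in zip(y_true, y_pred, groups):
    total[group] += 1
    correct[group] += int(truth == pred)

# Report accuracy per group; a large gap between groups suggests
# the model has inherited bias from unrepresentative training data.
for group in sorted(total):
    print(f"group {group}: accuracy = {correct[group] / total[group]:.2f}")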

If businesses are going to implement these technologies, their leaders have a responsibility to ensure that the algorithms they’re creating are reflective of the world at large. This goes beyond technology and reflects the need for wider diversity across the business, from developer teams right up to the leadership team.

Q: Do you think that AI should be regulated? If so, what should this look like?

A: Regulation clearly has a role to play in the application of technology. In the last year, we’ve already seen significant change to the regulatory landscape in Europe from a privacy perspective. However, I firmly believe that the regulation of technology should primarily focus on outcomes rather than the nuts and bolts of the technology itself. All technology, including AI, has the potential for incredible good in the right hands, but like any powerful tool, it can also be used for things we consider morally wrong.

I think that when we consider regulation, we should view it through a lens of purpose and intent. We need to ask whether the end use of a technology is “good” or not, and then build an ethical framework out from there. I see this as a more sustainable approach than regulating the development of technology that could have a profoundly positive effect on society and the progression of humanity.

In recent years the conversation around AI has become much more nationalistic in nature, because people see it as a critical tool for competitiveness. Personally, I do not see a need for global regulation of AI; given the universal nature of technology, it’s impossible and irresponsible to apply regulation in a “one-size-fits-all” fashion. Technology is being applied in different ways across the globe, so the question of regulation often has to be answered on a country-by-country basis. However, that absolutely isn’t to say we shouldn’t have common moral imperatives at the heart of how technology is applied. As a global community, I agree that we need to seriously consider the end results. For example, there are some unsettling use cases for weaponised AI that absolutely should be addressed, potentially through international treaties.

Q: Democratisation of AI is a question that often comes up in discussions around the ethics of AI. Do you think it’s possible for it to become a technology that benefits everyone?

A: I think that, in theory, AI is already a tool like any other – one that anyone with access to a computer and an internet connection can use and ultimately monetise. Right now there are scores of free frameworks online that people can download, complete with built-in, open-source algorithms. The real problem is education – how do we make everyone aware that these tools are available, and how do we equip them with the skills to use them?
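As a rough illustration of how low the technical barrier has become, the snippet below trains a working classifier in a handful of lines using scikit-learn – one freely available open-source framework, chosen here purely as an example rather than one named in the interview.

# Train and evaluate a simple classifier with an off-the-shelf,
# open-source framework (scikit-learn) and a bundled sample dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")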

For me the answer lies in the strength of the educational curriculum and how well it prepares today’s learners for tomorrow’s work. That isn’t to say we should focus purely on STEM to the detriment of the liberal arts. On the contrary, to ensure that the technology we create is used responsibly and for good, we cannot lose sight of the subjects that reflect our humanity. In the next era of human-machine partnerships, at the same time as encouraging our children to count and to read, we must also encourage diversity in their thinking. That means recognising the importance of the arts, humanities, and social sciences in nurturing creative, critical thinkers. Core skills like emotional intelligence and moral reasoning are vital if we are to train out the bias and single-minded thinking that exists in our industry and in our AI programmes. As forecast in a report we recently conducted with The Institute for the Future, “Future of Work 2030,” understanding the value of responsibility, transparency and accountability will be critical for developing true AI fluency.

Thankfully, we have educators out there today like Susan Etlinger, who are actively researching the ethics of AI and the use of data, which makes me feel significantly more positive about how the future of this technology will develop, and the great things we will be able to achieve as a result.

Matt Baker, Senior Vice President of Strategy and Planning, Dell EMC