
Armed with AI, but time to pull the trigger?

(Image credit: Alex Knight / Unsplash)

While artificial intelligence is often hyped up as a business saviour and derided as a job killer, questions of AI ethics also arise around military uses of the technology, particularly in the wake of a report that Apple has cancelled Xnor.ai’s Pentagon contract for military drone work following its acquisition of the AI company.

Xnor.ai was reportedly working on the controversial Project Maven, which uses AI to identify people and objects in drone videos and photos. The episode once again brings the relationship between the world's armed forces and technology into sharp focus.

More and more militaries using AI

Around the world, AI is already seen as the next big military advantage. Last year, the US announced a strategy for harnessing artificial intelligence in many areas of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry. The Department of Defense’s budget for 2020 allocated $927 million for AI and machine learning.

Earlier this month, the US Army announced that it is to deploy its Aided Threat Recognition from Mobile Cooperative and Autonomous Sensors (ATR-MCAS) system to transform how it plans and conducts operations. The technology comprises a network of air and ground vehicles equipped with sensors that identify potential threats and autonomously notify soldiers. The information collected is then analysed by an AI-enabled decision support agent that can recommend responses, such as which threats to prioritise.
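To make the idea of a decision support agent more concrete, here is a minimal, hypothetical sketch of how sensor reports from multiple vehicles might be fused and ranked. The ATR-MCAS internals are not public, so the data fields and scoring rule below are illustrative assumptions only:

```python
# Hypothetical sketch: not the actual ATR-MCAS implementation.
# Illustrates fusing sensor reports and ranking threats for a human operator.
from dataclasses import dataclass


@dataclass
class SensorReport:
    vehicle_id: str      # air or ground vehicle that produced the report
    threat_type: str     # e.g. "vehicle", "dismount", "unknown"
    confidence: float    # classifier confidence, 0..1
    distance_km: float   # distance from the protected unit


def priority(report: SensorReport) -> float:
    """Illustrative weighting: closer, higher-confidence detections rank first."""
    proximity = 1.0 / (1.0 + report.distance_km)
    return report.confidence * proximity


def recommend(reports: list[SensorReport], top_n: int = 3) -> list[SensorReport]:
    """Return the top-N reports for a human operator to review."""
    return sorted(reports, key=priority, reverse=True)[:top_n]


if __name__ == "__main__":
    reports = [
        SensorReport("uav-1", "vehicle", 0.92, 4.0),
        SensorReport("ugv-2", "dismount", 0.60, 0.8),
        SensorReport("uav-3", "unknown", 0.40, 9.5),
    ]
    for r in recommend(reports):
        print(f"{r.vehicle_id}: {r.threat_type} (score={priority(r):.2f})")
```

The important design point, reflected even in this toy version, is that the agent only ranks and recommends; the decision to act remains with a human.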

AI for good

While AI weapons are a stark reality, many deployments involve uses of the tech in automated diagnostics, defensive cybersecurity and hardware maintenance assistance. Examples include testing whether AI can predict when tanks and trucks need maintenance, or simply improving logistical and administrative processes. In fact, the use cases for AI in defence are plentiful.

For instance, NASA uses AI to ensure all the systems in the Orion spacecraft's digital cockpit are behaving correctly: that the instruments are showing the correct information, and that entering information into an instrument has the correct effect, something that is clearly critical to mission success. The Federal Aviation Administration also uses AI testing technology to ensure its digital displays are correct: i.e. if an aircraft comes into the monitored airspace, it shows on the appropriate screen in the appropriate way.
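As a purely illustrative example of what such a check looks like in principle (not NASA's, the FAA's or Eggplant's actual tooling, and using a simulated instrument rather than real hardware), display verification boils down to driving an input and asserting that the display reflects it:

```python
# Hypothetical illustration of display verification; a simulated cockpit
# instrument stands in for real hardware and real test tooling.

class SimulatedAltimeter:
    """Stand-in for a digital cockpit instrument."""

    def __init__(self) -> None:
        self._setting_hpa = 1013

    def enter_setting(self, hpa: int) -> None:
        # Entering a value into the instrument should change what it displays.
        self._setting_hpa = hpa

    def displayed_setting(self) -> int:
        return self._setting_hpa


def test_altimeter_setting_round_trip() -> None:
    """Entering a barometric setting must be reflected on the display."""
    panel = SimulatedAltimeter()
    panel.enter_setting(1020)
    assert panel.displayed_setting() == 1020, "display does not match the entered value"


if __name__ == "__main__":
    test_altimeter_setting_round_trip()
    print("display check passed")
```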

How should AI be approached?

According to an Electronic Frontier Foundation (EFF) white paper geared towards militaries, there are certain things that can be done to approach AI in a thoughtful way. These include supporting civilian leadership of AI research, supporting international agreements and institutions on the issues, focusing on predictability and robustness, encouraging open research and dialogue between nations, and placing a higher priority on defensive cybersecurity measures.

Even with such guidance, many leading human rights organisations argue that the use of weapons such as armed drones will lead to an increase in civilian deaths and unlawful killings. Others are concerned that unregulated AI will lead to an international arms race. And as with any technology, as soon as the criminals and bad actors of the world start to use it, the hand of militaries that don’t want to be caught out or left behind is almost forced.

Greater awareness of AI among software acquirers is also key. AI breaks many of the assumptions that people make about software and its potential negative impacts, so anyone acquiring a product that includes AI must understand what that AI is doing, how it works, and how it is going to affect the behaviour of the software. They must also understand what safety mechanisms have been built in to protect against errant algorithms.

AI ethics can be unclear

AI ethics are complex at a global level precisely because different cultures have different values. However, because the fine lines of war and peace are at stake, the military arena can actually be where a global consensus is found if and when the international community comes together around a table. 

From treaties such as the Geneva Conventions to the Good Friday Agreement, there is a long history of creating good-faith agreements about the rules of war. However, there will also be rogue states that openly disavow any agreed AI ethical framework, and those who choose not to act in its spirit. What is critical is the willingness of nations to discuss these issues and to continue researching the benefits and pitfalls of widespread AI application within military usage, so as to further inform the ethics of AI.

Who makes the call?

Ultimately, these are ethical quandaries that will likely take years to answer, if such a feat is even possible.

The next few years will be a critical period in determining how militaries will use AI. Either the defence community will figure out how to contribute to the complex problem of building safe and controllable AI systems, or it will buy into the hype and build AI into vulnerable systems and processes that we may come to regret in the decades to come.

Dr. John Bates, technologist, Eggplant Software

John is a visionary technologist and highly accomplished business leader. He pioneered the space of streaming analytics as Co-founder, President and CTO of Apama (acquired by Progress Software in 2005 and now part of Software AG).