AI could be misused by hackers, experts warn


While artificial intelligence has tremendous potential to change the way we live and work, a new report warns that this emerging technology could easily be exploited by criminals, terrorists and rogue states. 

The 100-page “Malicious Use of Artificial Intelligence” report was written by 26 authors from academia, civil society and industry to explore how the world could be transformed by AI in the next five to 10 years.

To prevent this new technology from being exploited, those creating AI systems must do everything within their power to reduce possible misuse, while governments must also consider passing new laws to protect their citizens and organisations.

The report urges policymakers and technical researchers to work together to determine how AI could be used maliciously and what safeguards could be put in place to prevent this. The authors also noted that AI is a dual-use technology and that researchers and engineers ought to be both mindful and proactive about its potential for misuse.

One area considered particularly troubling was reinforcement learning, in which AIs are trained by trial and error rather than from human examples or guidance, and can reach superhuman levels of performance. The report highlighted a number of ways AI could turn rogue, such as hackers using speech synthesis to impersonate high-profile targets, drones being trained with facial recognition software to target individuals, and hackers using AI to search for exploits in code, along with other scenarios that could realistically occur within the next five to 10 years.
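The report itself contains no code; purely as a rough illustration of what "learning without human examples" means, the sketch below shows a minimal tabular Q-learning loop on a toy corridor, where an agent improves from reward feedback alone rather than from labelled demonstrations. The environment, parameters and names are illustrative assumptions, not taken from the report.

    import random

    # Toy 1-D corridor: states 0..4, goal at state 4.
    # The agent learns purely from reward feedback, with no human examples,
    # which is the core idea behind reinforcement learning.
    N_STATES = 5
    ACTIONS = [-1, +1]               # move left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    # Q-table: estimated return for each (state, action) pair.
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Apply an action and return (next_state, reward, done)."""
        next_state = min(max(state + action, 0), N_STATES - 1)
        if next_state == N_STATES - 1:
            return next_state, 1.0, True    # reached the goal
        return next_state, 0.0, False

    for episode in range(500):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # After training, the greedy policy should always move right toward the goal.
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
    print(policy)

The same feedback-driven loop, scaled up with modern compute and richer reward signals, is what allows such systems to keep improving without a human in the loop, which is precisely why the report's authors flag it as hard to anticipate and constrain.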

Miles Brundage, a research fellow at Oxford University's Future of Humanity Institute, offered further details on the security implications of AI, saying:

"AI will alter the landscape of risk for citizens, organisations and states - whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression - the full range of impacts on security is vast. It is often the case that AI systems don't merely reach human levels of performance but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour." 

The full report provides a great deal of insight into how AI could be exploited, and anyone currently working with AI or planning to implement an AI system should read it to better understand how this new technology could be used against them in the not-too-distant future.

Image Credit: Welcomia / Shutterstock