
Weaponising AI – Elon Musk has a point

(Image credit: Computerizer / Pixabay)

Last week Elon Musk hit the headlines when he said, and I paraphrase, we should be more worried about AI than North Korea. That really isn’t something you want to read as you tuck into your cornflakes.   

The truth is that, right now, no one knows definitively what AI can do for mankind. It is a great unknown. Elon Musk’s comments follow demonstrations of robots playing games and beating humans, which show that the human brain simply isn’t quick enough to compete with certain kinds of automation.

But, as a species, we don’t need robots to play games. We need robots to change life for the better. We believe we need smart cities to make life cleaner, safer and easier. We think we need to replace mundane tasks with automation and free knowledge workers to improve the quality of service they provide to customers. We need AI to help beat cancer.

That said, we have to start somewhere and games test concepts and theories that help us on the road to making the unknown known.   

Experiments and research will inevitably inspire debate and question our ethics. When is it right to use a robot and who should decide when a robot is used?   

These are big questions and they must be answered. For if Mr Musk is right, then weaponised AI is a truth waiting to emerge, and that raises the question: is this a world we want?

Enter the cyber security specialists, who will argue it is. We are already facing a barrage of bad bots fighting good bots. Anyone responsible for network or application security in an organisation will be seeing just how automated cyber attacks have become - the black market for off-the-shelf attacks is starting to mature. Keeping up to date with the threats is getting harder for researchers, and because the human brain simply can’t process information quickly enough to beat the bots, our only hope is to turn to AI.

The numbers underline the threat and how our values will be challenged - research shows that 62% of cyber security professionals think AI will be weaponised, and in use, within the next 12 months.

It’s a prediction we should expect to materialise. Why? Machine learning (ML) and deep learning (DL) are both specific approaches to AI. With supervised and/or unsupervised learning, ML and DL are built on pattern analysis and classification, and they have been used successfully to solve a wide range of problems, including many in the field of cyber security.

In fact, most of today’s AI applications in cyber security are based on detecting attack patterns and detecting anomalies. The techniques are applied across domains, from host-based security (malware) to network security (intrusion and DoS detection). Bottom line: it’s mainly about finding meaningful information in huge pools of data and exploiting it.
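To make the idea of anomaly detection concrete, here is a minimal sketch of one of the simplest statistical approaches: flagging data points whose deviation from the median is extreme. The function name, the traffic figures and the threshold are all illustrative assumptions, not any particular product’s method; real systems use far richer models.

```python
import statistics

def detect_anomalies(samples, threshold=3.5):
    """Flag samples whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD), which stays robust
    even when the outliers we are hunting skew the data.
    """
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    if mad == 0:  # all samples (nearly) identical: nothing stands out
        return []
    return [i for i, x in enumerate(samples)
            if 0.6745 * abs(x - med) / mad > threshold]

# Steady request rates with one sudden spike (e.g. a DoS burst).
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 5000]
print(detect_anomalies(traffic))  # → [8], the index of the spike
```

The point of the sketch is the principle the article describes: the defence is not a list of known signatures but a model of what “normal” looks like, so anything sufficiently far from normal gets flagged.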

One area of research shared by white and black hats is the hunt for vulnerabilities, and with them 0-days. ML/DL can be leveraged to collect information and use it to fix a weakness or, in the hands of unethical hackers, to exploit it.

Another approach in the hunt for 0-days is searching for patterns in source code, reversed code or binary code and identifying suspect pieces of code that might lead to the discovery of new 0-days - a task that can easily be automated.
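A trivially simple version of that automated pattern search is a scanner that flags calls which often precede memory-safety bugs. The patterns and the C snippet below are illustrative assumptions; serious tooling works on parsed or disassembled code, not regexes, but the principle of machine-scale triage is the same.

```python
import re

# Illustrative patterns: C library calls that frequently appear
# near buffer-overflow vulnerabilities.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def scan_source(source: str):
    """Return (line_number, line) pairs containing a risky call."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if RISKY_CALLS.search(line)]

snippet = """
int greet(char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
    return 0;
}
"""
for lineno, line in scan_source(snippet):
    print(lineno, line)  # flags the unchecked strcpy on line 4
```

Run across millions of lines of code, even a crude scan like this surfaces candidates for closer inspection far faster than a human reviewer could - which is exactly why both defenders and attackers automate it.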

There is also the exploitation of genetic algorithms (GAs), which hackers can use to devise attack methods - a kind of evolutionary ‘brute force’.

That’s because a GA does not limit itself to a predictable decision tree of known attack methods. GAs can find new ways to devise an attack - ways never thought of before. Defending against them relies on ML/DL to detect and counter these fast-paced, complex attacks; people simply can’t do it.
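The mechanics of a genetic algorithm fit in a few lines: keep a population of candidates, select the fittest, recombine and mutate them, repeat. In the toy sketch below the fitness function (just counting 1-bits) stands in for “how well a candidate attack succeeds”; in a real misuse scenario the genome would encode mutated protocol fields or payloads. All names and parameters here are illustrative assumptions.

```python
import random

def evolve(bits=20, pop_size=30, generations=40, seed=1):
    """Toy genetic algorithm: evolve bit-strings toward higher fitness.

    Returns the best fitness seen at each generation.
    """
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # stand-in for "attack success"
    pop = [[rng.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        parents = pop[: pop_size // 2]       # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)     # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(bits)] ^= 1  # mutation: flip one bit
            children.append(child)
        pop = parents + children             # elitism keeps the best
    return history

hist = evolve()
print(hist[0], "->", hist[-1])  # fitness climbs across generations
```

Nothing in the loop enumerates known attacks: novelty comes from random recombination and mutation, filtered by whatever the fitness function rewards - which is precisely why the search can wander into methods nobody anticipated.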

Now it feels like a race. Will the white or black hats find the vulnerabilities first?   

Security specialists who discover vulnerabilities will inevitably ask: what happens if the same weaknesses are found by criminals, and AI is used to turn them into an offensive weapon? Some think this risk could slow the development of AI for defensive use. Why so?

Take, for example, poisoning tactics, where attackers seed false information into young deployments of ML solutions and, by doing so, skew the way they classify or recognise good and bad patterns. The goal of poisoning is to seed the environment first and then perform the attack: because the poisoned model no longer works effectively, it will not be able to detect the attack. To be effective, attackers need to know how the ML behaves - so naturally, any research on the topic is welcome information to criminal minds.
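A deliberately tiny demonstration of the principle: a nearest-neighbour classifier that labels traffic by its closest training example. The single “packets per second” feature and the numbers are illustrative assumptions, but the mechanism is the one described above - inject mislabelled points first, then the real attack slips through.

```python
def classify_1nn(training, sample):
    """Label a sample with the class of its nearest training point.

    `training` is a list of (feature, label) pairs; the feature is a
    single illustrative number, say packets per second.
    """
    return min(training, key=lambda pair: abs(pair[0] - sample))[1]

# Clean training data: low rates are benign, high rates are attacks.
clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (10, "attack"), (11, "attack"), (12, "attack")]
print(classify_1nn(clean, 9.5))     # → attack (correctly detected)

# Poisoning: the attacker seeds mislabelled points into the training
# data *before* striking, so attack-like traffic now looks benign.
poisoned = clean + [(9.5, "benign"), (10.5, "benign")]
print(classify_1nn(poisoned, 9.5))  # → benign (attack goes undetected)
```

Two planted points are enough to blind this toy model, which is why young, still-learning deployments are the attractive target the article describes.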

Moreover, advancements and research in defence only serve to advance the capabilities of attackers. If there is one thing researchers have learnt from past attacks and exploits, it’s that criminals hacking for fun and profit mostly leverage white hat security research in their attacks.

How many times have we seen attacks based on vulnerabilities that were disclosed weeks or even months before? Look at WannaCry as a recent example: it exploited the fact that people do not patch often or in a timely manner. Why would that category of hacker, running massive, untargeted attack campaigns, resort to original research when the vulnerability is handed to them on a plate?

We can’t ignore state-sponsored attacks, though, and this is where Elon Musk could be right. Here, research is paramount; the economics of this type of hack are completely different from a WannaCry-style campaign.

If a state is hell-bent on asserting its authority and presence over another jurisdiction, even the world, then this is the smartest way to do it.

As we said at the start, the full potential of AI is unknown, so how does a country prepare for that scenario? How do you plan for the unthinkable? What would happen if AI were used to jam communication links, plunge cities into darkness, set oil rigs alight or disable emergency services?


That’s why we should all take note of the warning and ensure that governments, academia and private companies work together to invest in skills, technology and ethical frameworks. If we don’t, we can’t ensure that the development of AI stays firmly in the domain of doing good. And if we fail in that, we fail mankind.

Pascal Geenens, EMEA Security Evangelist, Radware  

