Is AI being over-hyped in the security industry?


Will AI completely take over from human defenders and deliver a sci-fi-like experience defending against advanced cyber-attacks? Or is it still some way off, meaning we should instead look to the advances made in machine learning to boost our defences in the near future?

These questions and more have been a hot discussion point in the cyber security industry recently, with many commentators concluding that true AI security has been significantly over-hyped.

And in many ways, they are right.

For example, I have yet to see a security application or system that can intelligently adapt and evolve to different situations, rather than continuously performing a single, repetitive task.

It’s often a case of blurred lines, as many fail to understand the difference between AI and machine learning. Machine learning, which technically is a subdomain of AI, is more than just neural networks and deep learning. The latter, all the rage in the industry, are but one class of algorithms within a large domain. Traditional machine learning, without neural networks and deep learning, has been battle-proven in the field, but it is used for smaller, less complex classification tasks such as anomaly detection.

The emphasis for machine learning is on modelling expected behaviour at low to medium complexity, then autonomously improving the model in small increments over time as it ‘learns’ the specifics of its environment from data. While proven and successfully applied, machine learning focuses on a very specific task and performs it over and over; the only improvement over time is that it gets better at predicting outcomes.
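The idea of incrementally modelling expected behaviour and flagging deviations can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it learns a running baseline of a metric (the traffic numbers are invented for the example) and flags values that deviate strongly from it.

```python
# A minimal sketch of low-complexity anomaly detection: model "expected
# behaviour" as a running mean and variance of some metric, improve the
# model incrementally with each observation, and flag large deviations.
# The metric values and threshold below are illustrative assumptions.

class StreamingAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.n = 0                   # observations seen so far
        self.mean = 0.0              # running mean (Welford's algorithm)
        self.m2 = 0.0                # running sum of squared deviations
        self.threshold = threshold   # z-score beyond which we alert

    def update(self, x):
        """Incrementally fold a new observation into the baseline model."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        """True if x deviates more than `threshold` std-devs from the baseline."""
        if self.n < 2:
            return False  # not enough history to judge
        std = (self.m2 / (self.n - 1)) ** 0.5
        if std == 0:
            return x != self.mean
        return abs(x - self.mean) / std > self.threshold


detector = StreamingAnomalyDetector(threshold=3.0)
for rate in [100, 102, 98, 101, 99, 103, 97, 100]:  # normal request rates
    detector.update(rate)

print(detector.is_anomaly(101))  # False: within the learned baseline
print(detector.is_anomaly(500))  # True: sudden spike
```

The model only ever gets better at predicting the same narrow outcome, which is exactly the limitation the article describes: it performs one specific task, over and over.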

Meanwhile, neural networks, and their more deeply layered branch, deep learning, are just one family among the many machine learning algorithms. While the ‘traditional’ (non-neural-network, non-deep-learning) machine learning algorithms are modelled and coded by humans to work on low- to medium-complexity problems, neural networks and deep learning can be applied to highly complex problems.

In general, you can look at deep learning as a way to program using data instead of programming languages or state machines. If the data is good and there is a sufficient amount of it, the resulting model will be able to classify and, in the case of security, distinguish anomalies from legitimate behaviour.
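‘Programming with data’ can be made concrete with a toy example. A single logistic neuron is a far cry from a deep network, but the principle is the same: instead of writing a rule to separate the two classes, we hand labelled examples (invented here for illustration) to gradient descent and let it fit the parameters.

```python
# A toy illustration of "programming with data": no rule is coded to
# separate the classes; the parameters w and b are learned from the
# labelled examples below, which are made up for this sketch.
import math

# Labelled training data: (feature, label), label 1 above x = 5, else 0.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 0),
        (6.0, 1), (7.0, 1), (8.0, 1), (9.0, 1)]

w, b = 0.0, 0.0   # the "program" we are learning
lr = 0.5          # learning rate

def predict(x):
    """Sigmoid output: probability that x belongs to class 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for _ in range(2000):     # gradient descent on cross-entropy loss
    for x, y in data:
        grad = predict(x) - y   # gradient w.r.t. the pre-activation
        w -= lr * grad * x
        b -= lr * grad

print(predict(2.0) < 0.5)  # True: low score in the "0" region
print(predict(8.0) > 0.5)  # True: high score in the "1" region
```

A deep network stacks many such units into layers, which is what lets it absorb far more complex structure from the data than any hand-coded rule set.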

Though it still has its challenges, deep learning is able to find associations in data that we humans would never find, helping us reach levels of detection we could not achieve with traditional models and machine learning.

Most successful applications in use today are based on supervised learning neural nets. The idea behind supervised learning is very simple: a rather generic model is trained on a set of labelled data, meaning data for which the correct outcome for a given input is known. Once trained, the model can take any input and predict the output as a probability across the fixed set of labels. Email spam filtering is a common example. It works because the sheer volume of historically labelled emails provides enough data to ‘learn’ and ‘understand’ which messages are spam, and given enough data, a deep learning neural net will be able to ‘generalise’ its understanding in such a way that it can classify new messages it has never seen before with fair accuracy.
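The spam-filter idea can be sketched with a naive Bayes classifier rather than a neural net, to keep the example self-contained; the supervised-learning principle is the same: learn from labelled messages, then score unseen ones. The toy corpus below is invented for illustration.

```python
# Minimal supervised spam classifier: counts word frequencies per label
# in a labelled training set, then scores unseen messages. Naive Bayes
# stands in here for the supervised neural nets the article discusses.
import math
from collections import Counter

train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("cheap money win prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon today", "ham"),
    ("project status meeting today", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}   # word counts per label
totals = {"spam": 0, "ham": 0}                   # total words per label
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1
vocab = {w for c in counts.values() for w in c}

def score(text, label):
    """Log-probability of `text` under `label`, with add-one smoothing."""
    logp = math.log(sum(1 for _, l in train if l == label) / len(train))
    for word in text.split():
        p = (counts[label][word] + 1) / (totals[label] + len(vocab))
        logp += math.log(p)
    return logp

def classify(text):
    return "spam" if score(text, "spam") > score(text, "ham") else "ham"

print(classify("claim your free prize"))      # spam: unseen, but generalises
print(classify("status update for meeting"))  # ham
```

With only six training messages the generalisation is crude; the point of the sketch is that the classification behaviour comes entirely from the labelled data, not from hand-written rules.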

Given lots of historical data, the neural net will make the right ‘decision’ most of the time. These sorts of supervised nets can be considered an advanced form of automation. Instead of coding rules into the automation, the automation is coded through data samples and learns by example. They are highly efficient, and we shouldn’t underestimate them: they provide a solution for many domains where coding rules would be virtually impossible because of the complexity and our limited ability as humans to understand and maintain such complex code. As such, supervised learning opened the door to new applications that were deemed too complex for traditional algorithmic coding.

But several challenges limit deep learning’s applicability:

  1. Deep learning needs a lot of data: in practice, this means deep learning requires significant resources for training. Once trained, however, the model can run on limited resources, making predictions on never-before-seen inputs in near real time.
  2. Deep learning needs GOOD data! Data must be labelled correctly and be free of any potential bias. In practice, this becomes a problem when you deploy a new model in a real-world scenario and have it learn from its environment. In security, this means a model will have to train in an adversarial environment, one where attacks are a reality. Making deep learning resistant to learning in the presence of adversaries is still an area of ongoing research.
  3. Relating to 1 and 2, once a model is trained, it has to perform in a real-life environment, yet deep learning models only perform well in static environments. Real networks are continuously changing and evolving. Deep learning cannot work fully autonomously in such environments, at least not without humans continuously improving the training sets, re-training and evaluating the model, resizing and re-architecting the neural networks, all the while sanitising the outputs.

There are new breakthroughs and ongoing research into improved learning and models that can adapt in small increments to dynamic environments, such that they become partly autonomous and can customise themselves on-premises within an organisation.

A universal neural network that can match the low false-positive rates we are accustomed to from the more traditional machine learning algorithms remains a holy grail. Until then, deep learning is a tool that helps security experts filter out the noise and focus on the really important events, while feeding information back into the model to improve it and adapt it to new situations.

Although I paint a darker picture of AI in security than most people would like to see, I am convinced that this technology and the innovations it brings, and the automation of cyber security in a broader sense, will be a requirement for keeping ahead of future attacks, as attackers are maturing and their attacks are growing more complex every day.

Whether it comes through small incremental advancements in deep learning or breakthrough innovation in AI remains an open question, but we are on the right path. The only way to fight automation is with automation.

Pascal Geenens, EMEA Security Evangelist at Radware 

Image Credit: Geralt / Pixabay