Q&A with Balabit: Artificial Intelligence - the future of IT Security

(Image credit: Deepadesigns / Shutterstock)

Why is AI such a focus for the security space at present? 

I think two things happened more-or-less at the same time that contributed to the increased focus on Artificial Intelligence (AI) in the security space. 

Most importantly, big data became mainstream and widely usable. It's no longer just the huge tech companies or research institutes that can afford to crunch numbers. Increasing computing capacity, especially via affordable cloud computing and easy-to-use tools, made it possible for a much wider range of users to apply sophisticated machine learning and AI algorithms to solve their problems. 

Around the same time, people realised that it was extremely hard to keep up with attackers who kept finding new, stealthy ways to infiltrate enterprise networks. For IT teams, updating predefined rules and writing new ones was proving extremely expensive and no longer a feasible way of dealing with specific threats. According to recent research by the Ponemon Institute, users of SIEM systems spend an average of $1.78 million annually on labour costs associated with the implementation and ongoing maintenance of the system. As a result, IT teams have become frustrated and want a solution that requires less customisation and fine-tuning and simply learns what it should do. 

What are the main benefits of AI technologies? 

There are two main benefits associated with AI. 

The first is that most AI and machine learning solutions are self-adapting and require little customisation and maintenance. They are able to learn how things happen in a given environment and then adapt to that. This also lowers maintenance costs significantly. 

Secondly, AI has the potential to discover problems and attacks that it was not explicitly programmed to find; we call these the "unknown unknowns". This enables the defenders to stay ahead of the attackers in the cat-and-mouse game of security. 

What are the main concerns surrounding the adoption of AI? 

These algorithms make more nuanced decisions than the rules we are all used to. It's no longer about whether something is allowed or not, or whether we consider an action to be malicious or benign. Rather, we are moving into the realm of 'probabilities' and 'thresholds'. 
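To make the shift from binary rules to probabilities concrete, here is a minimal, hypothetical sketch (not any vendor's actual product logic; the scores, threshold values, and function name are invented for illustration) in which an action receives a risk score from a model and the response depends on configurable thresholds rather than a single allow/deny rule:

```python
# Hypothetical illustration: graded, threshold-based decisions on a
# model's risk score, instead of a binary allow/deny rule.
# All scores and threshold values here are invented examples.

def respond(risk_score, alert_threshold=0.6, block_threshold=0.9):
    """Map a risk score in [0.0, 1.0] to a graded response."""
    if risk_score >= block_threshold:
        return "block"   # high confidence: stop the action outright
    if risk_score >= alert_threshold:
        return "alert"   # uncertain: flag for a human analyst
    return "allow"       # looks benign: let it through

print(respond(0.95))  # block
print(respond(0.70))  # alert
print(respond(0.20))  # allow
```

Tuning the two thresholds is exactly the trade-off described above: lowering them catches more attacks but triggers more costly investigations of benign activity.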

Also, there's very often a stark contrast between how well an algorithm performs and the ease with which we can understand how it came to a conclusion. Quite frequently, the algorithm that yields the best results does so in a way that is practically impossible to explain or completely understand. If that decision has important consequences, such as cancelling a transaction, suspending an account or starting a costly investigation process, then people get uncomfortable very quickly about not being able to understand 100 per cent why it happened. 

An even harder-to-grasp but very real problem is the fact that an AI has no conscience or inherent ethics. It can simply learn and mimic how people make decisions, or optimise parameters towards a desired optimum that is not always what we really want. Applied naively, algorithms can amplify our existing biases and create systems that discriminate against certain people or make decisions humans would consider ethically unacceptable. The emergence of self-driving cars brought about a quite heated debate on this topic, but the same problems apply in other areas, such as cyber security. 

What do you see happening in the future in terms of the use of AI in the cyber security industry? Do you see AI being the ‘next big thing’ in the cyber security industry? 

It is already the 'next big thing'. It's most certainly hyped at the moment and quite possibly overhyped, but it is something everybody is talking about and a lot of people are experimenting with. I believe as we progress with the use of AI, the industry will stop treating AI as a silver bullet and a catchy marketing technique and eventually find the right application for these algorithms. 

We will still need traditional structures and control measures, just as we need both doors with locks and a police force for physical security, but we'll probably be able to ease up on control as we can start relying on advanced analytics more. 

With the rising prevalence of insider threats, is AI the best way to mitigate this type of threat? 

It is certainly a very important part of the defensive arsenal. The biggest challenge when it comes to insider threats is that the people committing the malicious acts are using the very privileges they need in order to do their jobs. Minimising access, close and detailed audit logging, and monitoring can lower the risk, but ultimately there will always be employees who need access to valuable data and who, being human, can turn malicious or be blackmailed. AI, especially behaviour analysis, can be used to recognise changes in work patterns and warn the security team in real time. 
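The idea of recognising changes in work patterns can be sketched very simply. The toy example below (my own illustration; real user behaviour analytics products such as Blindspotter model far richer features than a single login hour) builds a per-user baseline and flags activity that deviates strongly from it using a z-score:

```python
# Hypothetical sketch of behaviour analysis: build a baseline of a
# user's usual login hours and flag logins far outside that pattern.
# A real product would model many features; this z-score check is
# only for illustration.
from statistics import mean, stdev

def is_anomalous(baseline_hours, login_hour, z_threshold=3.0):
    """Return True if login_hour deviates strongly from the baseline."""
    mu = mean(baseline_hours)
    sigma = stdev(baseline_hours)
    if sigma == 0:
        return login_hour != mu  # no variation: any change is unusual
    return abs(login_hour - mu) / sigma > z_threshold

# A user who normally logs in around 09:00 suddenly logs in at 03:00.
history = [9, 9, 10, 8, 9, 10, 9, 8]
print(is_anomalous(history, 3))  # True  - unusual, worth an alert
print(is_anomalous(history, 9))  # False - matches the usual pattern
```

The key property, as described above, is that nothing here is a hand-written rule about 3 a.m. logins: the baseline is learned from the user's own history, so the same code adapts to a night-shift worker automatically.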

How is it possible to manage the synergy between AI and the human element of operations? 

Our goal should not be replacing humans in the equation, but allowing them to focus their resources on activities that are truly important. Computers are fantastic at rapidly processing huge amounts of data, and we should use them to do just that. Humans, on the other hand, are fantastic at understanding other humans, understanding intent and communicating with each other. The best AI tools free humans from tedious, menial tasks and allow us to solve higher-level problems. Of course, we must keep in mind that these tools are a means and not an end: we should always set the goals we want to achieve first, and use them to choose the right tools for the job. 

Recently, the NSA Chief said that without AI, cyber "is a losing strategy". Do you agree? To what extent? 

I completely agree. Several arguments can be made to support this stance, but the fact of the matter is that the genie is already out of the bottle. Security is always a kind of arms race, and attackers will not shy away from creating stealthier, more sophisticated malware and other hacking tools to evade detection and penetrate networks. If the "blue team" does not want to lose, we must keep up. 

Péter Gyöngyösi, Product Manager, Balabit 

Péter Gyöngyösi
Péter Gyöngyösi is the product manager responsible for Blindspotter, the user behaviour analytics technology of Balabit's Contextual Security Intelligence solution. His main job is to bridge the gap between technology and business and to find the best ways data science and advanced analytics can help customers solve real-life security problems.