
Debunking the myths of AI cybersecurity


Artificial Intelligence is widely perceived as ‘the next big thing’ in cybersecurity. But with many providers jumping on board and jostling to use the latest industry buzzword, services are being incorrectly marketed as ‘AI-based’, leading to much confusion.

In this article, Neil Kell, Director of Evolve Secure Solutions (part of the CSI group), debunks the numerous myths around AI-based security to leave a clearer picture of current capabilities, as well as where AI will take security in the not-too-distant future.

Myth one: Many providers are already offering AI-based cybersecurity services

Some security providers are genuinely leading the way with AI-based services, especially those commonly referred to as native AI companies. The reality, however, is that many are still using traditional security techniques and presenting them as AI in a bid to exploit the latest marketing buzzword.

Many providers are actually presenting traditional rule-based analysis as AI. Genuine AI improves continually, based on the data sets it learns from. A good way to judge a product is to look at how often you receive an update on that learning. Updates should be regular, so that you see the benefits of what has been learned elsewhere. If there is no evidence of updates, the chances are that the product is a standard rule-based analysis offering of the kind that has been available for the past 15 years or so.
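The distinction above can be made concrete with a minimal sketch. This is an invented illustration, not any vendor's product: a static rule whose threshold never changes, next to a toy anomaly detector that re-fits its baseline as new observations arrive. The names (`rule_based_alert`, `LearningDetector`) and the three-sigma test are assumptions chosen for clarity.

```python
from statistics import mean, stdev

def rule_based_alert(login_failures: int) -> bool:
    """Static rule: alert on more than 10 failed logins. The threshold
    is fixed at build time and never learns from new data."""
    return login_failures > 10

class LearningDetector:
    """Toy learning-based detector: flags values that sit far outside
    a baseline it continually re-estimates from observed data."""

    def __init__(self) -> None:
        self.history: list[float] = []

    def update(self, value: float) -> None:
        # The "regular updates" the article describes: each new
        # observation reshapes what the detector considers normal.
        self.history.append(value)

    def is_anomalous(self, value: float, k: float = 3.0) -> bool:
        if len(self.history) < 2:
            return False  # not enough data to form a baseline yet
        mu, sigma = mean(self.history), stdev(self.history)
        return abs(value - mu) > k * max(sigma, 1e-9)
```

The key contrast is that the rule-based check behaves identically forever, whereas the detector's verdict for the same input can change as its history grows, which is why regular learning updates are a reasonable signal that a product is genuinely learning.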

As an industry product set, in truth, we’re still at an early stage on our AI journey. At this point, AI is narrow; it is effective at behavioural analysis in the technical environment and at endpoint security, but we’re not yet at the stage of AI being fully integrated across the enterprise, and we are a long way from it being autonomous and requiring no human intervention.

There is also the issue of the number of false positives flagged by AI, but this is mainly down to the datasets available for AI to learn from. As the quality of the data improves, so too does the accuracy of AI.

Myth two: Traditional anti-virus protection will no longer keep your organisation safe

Malware can still be mitigated by established anti-virus signature-based mechanisms. In fact, thousands of signature-based technologies are still being sold worldwide. A thorough risk assessment of the organisation will determine whether you need to invest in AI now or not. Eventually, all malware will be detected by AI-based analysis, but we’re a long way off this yet. However, many companies are currently using AI-based security to varying degrees.

The starkest comparison between AI-based malware protection and traditional signature-based anti-virus can be seen in the 2017 WannaCry ransomware attack. Organisations using an AI-based threat solution were protected days before the attack took place; organisations using traditional anti-virus protection were not so well prepared. This is because the AI solution was able to identify tell-tale patterns in malware characteristics before they became attacks, meaning that security was on the front foot and the enterprise was already protected.

Traditional anti-virus products are still keeping organisations safe from thousands of malware files, but it is becoming hard to keep up. AI will eventually offer a far more sophisticated solution for organisations. If a system stays just behind this curve, sticking to traditional anti-virus methods, then you can guarantee that security exploits will occur.
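The difference the WannaCry example turns on can be sketched in a few lines. This is a deliberately simplified illustration, not real detection logic: the sample hash and the feature weights are invented. A signature check only catches exact matches against known samples, while a characteristic-based score (standing in for the learned models the article describes) can flag a never-before-seen variant that exhibits tell-tale traits.

```python
import hashlib

# Invented "database" of one known-bad sample, for illustration only.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-malware-sample").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Traditional AV: exact match against known signatures. A brand-new
    variant produces a new hash and slips straight through."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

def characteristic_score(features: dict[str, bool]) -> float:
    """AI-style approach (toy stand-in): score tell-tale malware traits,
    so unseen samples that behave like ransomware can still be flagged.
    The trait names and weights below are assumptions for the sketch."""
    weights = {
        "encrypts_user_files": 0.5,
        "deletes_shadow_copies": 0.3,
        "scans_smb_network": 0.2,
    }
    return sum(w for name, w in weights.items() if features.get(name))
```

A new ransomware variant scores highly on `characteristic_score` even though `signature_match` has never seen its bytes before, which is the sense in which behaviour-led detection was "on the front foot" ahead of WannaCry.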

Myth three: Artificial Intelligence will take jobs away from humans

Rather than replacing people, AI is augmenting what people can do by taking away some of the heavy lifting that is better suited to a machine. Not only does AI offer improved mitigation by analysing thousands of malware characteristics; it also reduces reliance on old remediation techniques. This actually improves the working lives of security professionals, who can focus on engaging activity such as root cause analysis rather than endless log reviews.

In the event of a successful attack, humans spend a lot of time firefighting. Patches need to be updated, and the integrity of backups needs to be checked to ensure they have not been compromised before they are restored. This remedial work is costly and unproductive. AI assists with the processing part of malware detection, ensuring that patches are applied and eliminating the daily updates that drain productivity. The CISO’s quality of work improves, leaving them free to focus on tasks that require human intelligence.

Security shouldn’t be in your face; it should be business as usual. AI provides an improved level of efficiency, but people can’t abdicate responsibility entirely to AI. To be effective, AI needs to directly support a clear cybersecurity strategy and be part of an integrated approach to risk if we are to move to a mature cybersecurity operation. At its optimum level, it will be a very powerful tool, but we’re not there yet.

Myth four: Investing in AI-based security will instantly make me unhackable

It’s important to realise that AI can be the solution but it can also be the threat. As the technology advances for protection, cybercriminals will use it to find vulnerabilities.

Hacking-as-a-service will advance, widening the pool of cybercriminals with access to hacking tools that will increasingly deploy AI. There is also the element of human error, which is more difficult to prevent as attacks become increasingly sophisticated, targeting our connected devices and deploying personally tailored phishing emails.

Advancing threats don’t just come from cybercriminals looking to steal company data. Organisations are also at risk of becoming collateral damage from cyberwarfare between nation states, such as the 2017 Russian cyberattack on Ukraine, whose fallout affected hundreds of organisations across 64 countries. As a result, we need greater assurances that our cybersecurity can stand up to morphing threats.

The future of AI-based cybersecurity

AI deployment is well ahead of other avenues in terms of protection, but if you haven’t invested yet, this won’t necessarily leave you wide open to threat. What we have to look forward to is proper integration of end-to-end, enterprise-wide AI solutions that will take us forward from where we are now. As it advances, AI will reduce false positives through better-quality data and provide better contextualisation of threat protection to the operating environment and the uniqueness of what individual organisations do.

The AI system will learn with the organisation and will have to adapt to the multiple ways that we connect, hopping from one network to another through interconnectivity. This will also advance beyond our work-based environment into our wider connected world. AI needs to adapt, and we must be ready to stay ahead of the curve if we are to survive the widening security risks.

Neil Kell, Director of Evolve Secure Solutions, CSI group