A change in behaviour for intrusion detection


After a spate of successful cyberattacks against even the biggest enterprises, it may seem as though the hackers hold all the cards. In fact, we believe these breaches have helped highlight how organisations can protect themselves against a new generation of threats, by focusing on attackers’ one key vulnerability – their inability to hide their nefarious behaviour once they gain access to the network.

Attacks such as WannaCry, and well-publicised breaches like the one at Equifax, have made organisations realise that it is not enough to rely solely on robust perimeter defences. Instead, they must focus on what happens when – not if – attackers break through.

Earlier this year, Gartner released its 2018 Magic Quadrant examining the world’s best Intrusion Detection and Prevention Systems (IDPS). The report predicted that by 2020, new technologies and methodologies such as analytics, machine learning and behaviour-based detection would be incorporated into the majority of IDPS deployments, fundamentally changing the way we understand and combat such threats.

Behind the castle wall

Cybercrime has always been a battle of escalation and innovation on both sides, of move and counter-move. As with any insurgency, the bad guys only have to get lucky once; the rest of us have to be successful every time.

How then can we load the dice in our favour? One of the most significant shifts in cyber security thinking is the acceptance that no perimeter security can ever be guaranteed to keep out all attackers. To protect ourselves from the impact of successful attacks, we must learn to recognise and neutralise threats quickly once they have breached the bastions we erect to keep out the barbarians.

We are too quick to attribute omnipotence to cyber attackers, and too slow to recognise that they too have their vulnerabilities. The key to understanding, and thus exploiting, these weaknesses is to recognise that attacks usually have a lifecycle; they go through multiple stages, from establishing an operating foothold within the organisation through to playing out end-game objectives such as digital denial, disruption, deletion or theft.

To achieve their goals, attackers exhibit subtle but predictable behaviours which can give the game away: for example, conducting “reconnaissance” from within their established beachhead, moving laterally to other hosts, or attempting to escalate privileges through authentication infrastructure scans and credential theft. If we can spot these behaviours early and accurately enough, we can detect and remove the threat after it has penetrated the castle wall but before it reaches its nefarious final objectives. In fact, there are often multiple opportunities to do so. Traditionally, this has been the hardest task to perform at any meaningful speed or scale, but new technologies and methodologies, such as applying AI to automated behavioural threat detection models, show that it can be achieved.
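To make the idea concrete, here is a minimal sketch, in Python, of one such behavioural check: flagging an account that authenticates to far more previously-unseen hosts within a short window than its historical baseline would suggest, a crude proxy for lateral movement. The event fields (timestamp, user, destination host) and the thresholds are hypothetical, chosen purely for illustration; real IDPS products learn baselines and combine many such signals.

from collections import defaultdict
from datetime import timedelta

def detect_lateral_movement(events, baseline_hosts,
                            window=timedelta(minutes=30), threshold=5):
    """Flag users who reach unusually many new hosts within a short window.

    events: iterable of (timestamp, user, dst_host) authentication records.
    baseline_hosts: dict mapping user -> set of hosts they normally access.
    Returns a list of (user, window_start, new_hosts) findings.
    """
    findings = []
    by_user = defaultdict(list)
    for ts, user, dst in sorted(events):
        by_user[user].append((ts, dst))

    for user, touches in by_user.items():
        normal = baseline_hosts.get(user, set())
        start = 0
        for end in range(len(touches)):
            # keep the window no wider than the configured duration
            while touches[end][0] - touches[start][0] > window:
                start += 1
            new_hosts = {dst for ts, dst in touches[start:end + 1] if dst not in normal}
            if len(new_hosts) >= threshold:
                findings.append((user, touches[start][0], sorted(new_hosts)))
                break  # one finding per user is enough for this sketch
    return findings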

A new approach to intrusion detection

Enterprises work hard to protect themselves against constantly shifting threats, but many of them share the same critical weakness in their approach to intrusion detection: a continued reliance on systems that merely monitor or control traffic into and out of the network.

These systems are proficient at identifying known threats through relatively simple techniques such as heuristics, pattern matching and hash signatures. One of the problems is that they fire off an alert whenever they find an anomaly, making them very noisy and increasing the chance that those monitoring them will experience “alert fatigue” under a tsunami of false positives.
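As a point of contrast, signature-based detection at its simplest amounts to comparing an artefact’s hash against a list of known-bad digests. The sketch below, with a placeholder blocklist, shows how little such a check can say about anything it has not seen before.

import hashlib

# Placeholder blocklist: in practice this would hold digests of previously-seen malware.
KNOWN_BAD_HASHES = {hashlib.sha256(b"example known-bad payload").hexdigest()}

def is_known_malware(path):
    """Return True if the file's SHA-256 digest matches a known-bad signature."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_HASHES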

This legacy approach is no longer sufficient (if it ever was) to afford meaningful protection against the effects of an attack that has successfully breached perimeter defences. This is why Gartner’s latest Magic Quadrant for IDPS added a focus on technologies that take a behavioural approach, and a use case for detecting attacker lateral movement inside the network.

By applying technologies and techniques such as artificial intelligence (AI), automation and machine learning, a new breed of IDPS is able to search for the tell-tale behaviours that identify a bad actor. Instead of a barrage of alerts that point to possible intrusions, this new approach targets the attacker’s vulnerability: an inability to carry out their crimes without undertaking certain actions, such as moving between machines in search of powerful credentials. However, some behaviours are not simply “good” or “bad”; that judgement can only be made by understanding the context of the behaviour – knowledge of the systems and legitimate tools the organisation uses, and of who exhibited the suspect behaviour, where, when and on which host. Contextualisation is key.
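As a rough illustration of that contextualisation step, the sketch below scores a hypothetical detection by combining the behaviour’s base severity with contextual factors: whether the account involved is privileged, whether the tool is one the organisation sanctions, and whether the host is a critical asset. The categories and weights are invented for illustration and are not drawn from any particular product.

from dataclasses import dataclass

@dataclass
class Detection:
    behaviour: str            # e.g. "internal_recon", "lateral_movement"
    user_is_privileged: bool
    tool_is_sanctioned: bool  # an approved admin tool vs. an unknown binary
    host_is_critical: bool    # e.g. a domain controller or finance server

# Illustrative base severities per behaviour type.
BASE_SEVERITY = {"internal_recon": 0.4, "lateral_movement": 0.6, "credential_theft": 0.8}

def prioritise(d):
    """Combine behaviour severity with context into a 0-1 priority score."""
    score = BASE_SEVERITY.get(d.behaviour, 0.3)
    if d.user_is_privileged:
        score += 0.2   # abuse of powerful credentials matters more
    if not d.tool_is_sanctioned:
        score += 0.1   # unknown tooling is more suspicious
    if d.host_is_critical:
        score += 0.2   # crown-jewel assets raise the stakes
    return min(score, 1.0)

print(prioritise(Detection("lateral_movement", True, False, True)))  # 1.0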

By automating significant parts of this process, behaviourally-based IDPS can act as multiple tripwires without generating thousands of speculative alerts. These technologies should augment rather than replace humans, who in most cases will make the final decision on what constitutes a threat, and how it should be remedied, by reviewing and understanding the context of the detection. Response actions can be partially or fully automated depending on the threat and the risk appetite of the organisation. Our own experience shows that using AI to detect and analyse attacker behaviour improves response times by a factor of around 30 compared with traditional alert-based methods.
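To illustrate how that automation can hold down alert volume, the sketch below (an invented data model, not any product’s API) groups individual behavioural detections by the entity involved into a single incident, and only proposes an automated response when the organisation’s configured risk appetite allows it.

from collections import defaultdict

def build_incidents(detections):
    """Group per-behaviour detections by entity (host or account) into incidents."""
    incidents = defaultdict(list)
    for d in detections:  # each d is {"entity": ..., "behaviour": ..., "score": ...}
        incidents[d["entity"]].append(d)
    return incidents

def respond(incident, risk_appetite="manual"):
    """Decide whether to act automatically or hand the incident to an analyst."""
    max_score = max(d["score"] for d in incident)
    if risk_appetite == "automated" and max_score >= 0.9:
        return "isolate_host"          # fully automated containment
    if risk_appetite == "assisted" and max_score >= 0.7:
        return "propose_isolation"     # analyst confirms the action
    return "queue_for_review"          # human makes the final call

detections = [
    {"entity": "finance-01", "behaviour": "internal_recon", "score": 0.5},
    {"entity": "finance-01", "behaviour": "lateral_movement", "score": 0.95},
]
for entity, incident in build_incidents(detections).items():
    print(entity, respond(incident, risk_appetite="assisted"))  # finance-01 propose_isolation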

This approach has only been made possible by innovation in AI and machine learning, married with deep insights into attacker behaviours, but it has also been driven by factors such as the global cybersecurity talent shortfall and tumbling compute costs. With cybersecurity teams and budgets stretched to breaking point, new IDPS technologies and methodologies promise to support human analysts, giving them the knowledge to take a pan-enterprise view of intrusions and to act appropriately.

As is always the case in security, enterprises must beware of seeing these technologies as a panacea for the problem of new, ever-more sophisticated threats. Nor should these solutions be treated as a sticking plaster: they must be applied with the mindset that compromise is inevitable, and the assumption that the perimeter has already been breached. Automation, powered by AI, will make a key contribution towards the adaptive security architectures and processes that digital enterprises must embrace to reduce their cyber-risk.

We think that wise enterprises will want the ability to monitor for malicious behaviour today. In the endless game of move and counter-move, new technologies such as automation and AI already promise a way to take a decisive step forwards in the fight against cybercrime.

Matt Walmsley, EMEA Director, Vectra