Should more protection really equal more false positives?


False positives (FPs) occur when a security solution raises an alert about an issue (e.g., malware, anomalous behaviour) that doesn’t actually exist. Considering all the challenges security professionals face every day, reducing false positives may not rank very high on their priority lists. But it should.

“One false positive is not a big deal. On a cumulative basis, however, they cause alert fatigue and divert the security team’s already limited resources away from real issues.”

This doesn’t just waste time and money (the time spent responding to false malware alerts is estimated to cost $1.37 million annually); it also leaves the organisation more vulnerable to a real attack.

One infamous example is the 2013 Target breach, which resulted in the theft of the private data of about 70 million customers. The company’s security monitoring software reportedly alerted staff in Bangalore, India, that the attack was underway, and they in turn notified Target staffers in Minneapolis. But no one took action, because those warnings were buried among many other alerts, most of them likely false.

Security tools that leverage predictive algorithms (e.g., machine learning, artificial intelligence) are especially susceptible. That’s not surprising, considering their primary shortcoming is right there in the name: rather than being deterministic, they are predictive, and it’s rare that one of these solutions comes anywhere close to 100 per cent accuracy in its predictions. Whether the product is UEBA (User and Entity Behaviour Analytics) or NGAV (Next-Generation Antivirus), it is not uncommon to see FP rates over 20 per cent.

“Security professionals should not have to accept that their efforts to harden their organisations’ security postures inevitably create a constant, ever-growing deluge of false positives. Yet too many are resigned to the mistaken belief that there’s nothing they can do to prevent them.”

About one-third of respondents to the 2019 Endpoint Security Report ranked high rates of false positives among their top three endpoint security issues. Moreover, a majority (53 per cent) estimate that between 10 per cent and 49 per cent of the endpoint security alerts their systems generate are false positives, and a further 17 per cent of respondents estimate that over half of their alerts are false positives.

What are security administrators to do? Faced with hundreds or thousands of false alarms, they seem to have only two choices, neither of them good: keep the settings strict and resign themselves to being overwhelmed by FPs, or loosen the defensive settings, which will likely result in more intrusions into the company’s systems.

Some try to put a positive spin on these approaches with terms like “reaching an equilibrium” or “finding a balance.” One NGAV vendor states that “to provide maximum value while reducing the pressure on overworked staff, security … must balance blocking malicious software with avoiding impact on the regular use of business applications. This requires a robust understanding of an organisation’s good software, in addition to identifying and training on malicious software.”

“The industry should not force organisations to pick between being overwhelmed by useless alerts and not having strong protection. Fortunately, there is another way.”

Unpleasantries expected

If we approach the problem of malware using signatures, we get very few FPs because each signature is very specific. The trouble is, you can’t create and store signatures for the billions of pieces of malware prowling the world. Nor can you create signatures quickly enough to thwart the roughly one million new pieces of malware created every day. So, this overly specific approach doesn’t work.
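For illustration only, a minimal sketch of what signature-based detection boils down to: hash a file and look it up in a set of known-bad fingerprints. The signature set and function name here are hypothetical, not any vendor’s actual implementation.

```python
# Minimal sketch of signature-based detection (hypothetical signature store).
import hashlib

KNOWN_BAD_SHA256 = {
    "0f43c1884a1a0b4b80ed5edc9662fd05cf5e1e8b0f6a1f0a3f3b8f1c2d4e5a6b",  # placeholder entry
}

def is_known_malware(path: str) -> bool:
    """Return True only if the file's hash exactly matches a stored signature."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_SHA256
```

Because the match must be exact, false positives are rare; but any sample that has not already been catalogued sails straight through.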

On the other end of the spectrum is a generic approach. This can take the form of a rule-based engine (probably blacklist-based) that analyses system calls that malware tends to leverage (e.g., NtFileDelete for file deletion).

Being too strict, this approach will likely result in too many FPs because these system calls can be used by both malicious and legitimate applications.
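To see why, consider a minimal sketch of such a blacklist rule. The event fields, process names and the alerting helper are hypothetical illustrations, not a real monitoring API; the call names follow the example above.

```python
# Sketch of a generic, blacklist-style rule: flag any process that invokes
# a "dangerous" system call, regardless of context.
SUSPICIOUS_CALLS = {"NtFileDelete", "NtTerminateProcess", "NtWriteVirtualMemory"}

def rule_based_alert(event: dict) -> bool:
    # Fires on routine cleanup and installers just as readily as on ransomware,
    # which is exactly where the false positives come from.
    return event.get("syscall") in SUSPICIOUS_CALLS

# Both of these trigger the same alert:
print(rule_based_alert({"process": "ransomware.exe", "syscall": "NtFileDelete"}))  # True
print(rule_based_alert({"process": "explorer.exe", "syscall": "NtFileDelete"}))    # True
```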

There is also the “application whitelisting” approach that reduces the attack surface and minimises false positives. However, this typically comes with substantial management overhead. At a time when so many organisations across all industries are struggling to hire adequate numbers of skilled security professionals, adding more labour-intensive defence systems is a recipe for failure.

“What if we apply the whitelisting approach to OS system calls, but instead of looking at just the system call in question (e.g., file deletion), we look at the way a process arrived at that system call? In other words, what if you could ‘rewind’ the full sequence of calls made before reaching this potentially dangerous activity? You could see who initiated the activity, and whether there was any user interaction.”

When you can create a “map” of how a process is supposed to arrive at this activity, you know the finite number of good paths. By definition, all other paths are malicious, and so those activities and processes will be detected and blocked. What is especially interesting is that the more conditions we add and the longer we make those paths of OS call sequences, the lower the chances of FPs.

The way to describe those “maps” is through Indicators of Integrity (IOIs). An IOI is a piece of evidence (e.g., a sequence of system calls) that consistently precedes a legitimate activity (e.g., file deletion). Instead of trying to identify and prevent the infinite amount of “badness” that’s out there, you focus on the far more finite set of legitimate (i.e., “good”) system activities to improve protection and eliminate false positives.
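A minimal sketch of the idea, under stated assumptions: before allowing a sensitive operation, compare the sequence of calls that led to it against a small, finite set of known-good paths. The path names and rule set below are hypothetical illustrations of the concept, not Nyotron’s actual IOIs.

```python
# Sketch of call-path whitelisting: only deletions reached via a known-good
# sequence of calls are allowed; everything else is blocked by default.
LEGITIMATE_DELETE_PATHS = {
    # user-driven deletion via the shell (hypothetical path)
    ("user_click", "ShellExecute", "NtOpenFile", "NtFileDelete"),
    # an application cleaning up its own temporary file (hypothetical path)
    ("app_start", "NtCreateFile", "NtClose", "NtFileDelete"),
}

def allow_file_delete(call_path: tuple) -> bool:
    """Permit the deletion only if the full preceding call sequence is whitelisted."""
    return call_path in LEGITIMATE_DELETE_PATHS

# A process that jumps straight to deletion, with no legitimate preceding
# sequence, is denied:
print(allow_file_delete(("remote_thread", "NtFileDelete")))                        # False
print(allow_file_delete(("user_click", "ShellExecute", "NtOpenFile", "NtFileDelete")))  # True
```

The longer and more specific these whitelisted sequences are, the harder they are to trigger accidentally, which is why adding conditions tends to reduce, rather than increase, false positives.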

With IOIs we can describe legitimate OS behaviour, creating a kind of OS behaviour whitelist. One additional benefit is that OS behaviour is universal and exactly the same at every organisation irrespective of its industry, user behaviour and applications, eliminating the need for the baselining and constant management traditionally associated with whitelisting technologies. Moreover, because the OS’s core functionality rarely changes, especially in the way it works with the file system and networking, the need for regular updates is dramatically reduced.

“Finally, the more precisely you can describe a specific behaviour, the higher the level of security you can achieve while simultaneously reducing the rate of false positives. The result is stronger security, without constantly chasing ever-evolving ‘badness’, and at the same time fewer false positives.”

Cybersecurity professionals accept that some unpleasant aspects of their jobs are inevitable. For example, attackers don’t take days (or nights) off; end users will make a stink whenever a security scan slows their devices or interrupts their workflows for even just a few minutes; and adding more layers to the security stack increases the number of false positives. There really isn’t anything that can be done to change adversaries’ behaviour or users’ high expectations. But no one should have to accept that more protection always equals more false positives.

Nir Gaist, Founder, CTO, Nyotron
Image Credit: Den Rise / Shutterstock