The impact of false positives on breach detection system accuracy

False positives, those alarming notifications that turn out to be nothing at all, might initially seem like minor inconveniences, but they dramatically reduce the accuracy of security tools and create serious impediments for security analysts. When staff cannot cut through the noise generated by numerous false positives, it becomes extremely difficult to determine and set correct breach response priorities.

Why false positives are so significant

Before we drill deeper into how false positives impact cybersecurity, let’s use a medical analogy to help understand why the effects of false positives are so significant.

Imagine that there’s a medical screening test designed to scan a large population for a serious disease. One per cent of the population actually has the disease, and doctors use the test to determine who has the disease and who doesn’t.

  • The test has a 10 per cent false negative rate. That means that for individuals who really do have the disease, the test says they do not 10 per cent of the time.
  • The test has a 5 per cent false positive rate. That means that for individuals who do not have the disease, the test says they do have the disease 5 per cent of the time.

The question becomes, if your doctor tells you your test result was positive, should you be worried?

On the surface, the test seems quite useful. The problem, however, is that a large percentage of its results are incorrect. Some people will be told that they don't have the disease when they really do (false negatives), and others will be told that they do have the disease when they really don't (false positives). So how accurate is our test really?

The simplest way to gauge the test's accuracy is to imagine a large group of people and work out how many correct and incorrect results the test produces for that group. For our scenario, let's look at a thousand people:

  • Of the 1,000 people, only 10 really have the disease (1 per cent of 1,000).
  • The test is 90 per cent correct for people who have the disease, so it will get 9 of those 10 correct, and report that 9 people have the disease.
  • But 990 people do not have the disease. Unfortunately, because of the test's false positive rate, it will flag 5 per cent of them, or about 49 people, as having the disease even though they don't (5 per cent of 990 is 49.5; call it 49).
  • So, out of 1,000 people, the test will say that 58 people have the disease, even though only 10 of them really do (9 plus 49 = 58).

Of the 58 people who are told they have the disease, only 9 actually do. For anyone the test flags as positive, there is only about a 15 per cent chance that they really have the disease:

9 / 58 = about 15 per cent

Why are the odds of the test being correct so small even though the false positive rate seems relatively low? Because the odds of actually having the disease are so low that the people who genuinely have it are greatly outnumbered by those with a false positive.
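To make the arithmetic concrete, here is a minimal Python sketch of the same calculation (the function and its name are ours, for illustration, not something from the article):

    def positive_predictive_value(prevalence, false_negative_rate, false_positive_rate):
        """Probability that a positive result is a true positive (Bayes' rule)."""
        true_positives = prevalence * (1 - false_negative_rate)
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # The screening example above: 1 per cent prevalence, 10 per cent
    # false negative rate, 5 per cent false positive rate.
    ppv = positive_predictive_value(0.01, 0.10, 0.05)
    print(f"Chance a positive result is real: {ppv:.1%}")  # about 15.4%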

False positives thus have a dramatic effect on the accuracy of the test and on how far a positive result can be trusted. For instance, if the false positive rate were improved to 1 per cent, the test would identify 19 people as having the disease: 9 who actually do and 10 who are false positives (1 per cent of 990 is 9.9; call it 10), improving the odds that a positive result is correct to roughly 50-50.
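Rerunning the hypothetical function above with the improved 1 per cent false positive rate reproduces that roughly 50-50 result:

    ppv_improved = positive_predictive_value(0.01, 0.10, 0.01)
    print(f"With a 1 per cent false positive rate: {ppv_improved:.1%}")  # about 47.6%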

Impact of false positives on breach detection system accuracy

As one would expect, the lower the false positive rate of a breach detection system, the better. And as in the medical world, small differences in false positive rates make a huge difference in a product's ability to accurately detect a data breach.

The false positive rates presented in the following table are the actual values calculated by NSS Labs for five leading breach detection systems in their 2016 Breach Detection Systems Group Test.

Assuming that one in a thousand events is actually malicious, the table above shows the impact that different false positive rates have on the validity of the alerts a system generates. Because the vast majority of objects tested are harmless, even a breach detection system with a relatively low false positive rate of, say, 1 per cent will generate alerts that are incorrect in over 90 per cent of cases.

For example, the third row of the table shows the accuracy of alerts for a system with a false positive rate of just 0.99 per cent. This means that of the millions of objects the system evaluates, it will flag roughly 1 per cent as dangerous when they are actually harmless. The effect is shown in the second and third columns: only 9.1 per cent of the alerts generated are correct, and 90.9 per cent are incorrect.
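The same arithmetic can be sketched in Python for the breach detection case. The function below is again illustrative, and it assumes a 100 per cent detection rate for genuine breaches, which is our simplification rather than a figure from the NSS Labs report:

    def alert_precision(malicious_rate, detection_rate, false_positive_rate):
        """Fraction of a system's alerts that flag genuinely malicious events."""
        true_alerts = malicious_rate * detection_rate
        false_alerts = (1 - malicious_rate) * false_positive_rate
        return true_alerts / (true_alerts + false_alerts)

    # One malicious event per thousand, a 0.99 per cent false positive rate,
    # and (our simplifying assumption) a 100 per cent detection rate.
    precision = alert_precision(0.001, 1.0, 0.0099)
    print(f"Alerts that are correct: {precision:.1%}")  # about 9.2%; the table's
                                                        # 9.1% presumably reflects rounding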

As the table shows, unless the false positive rate is virtually zero, most of the alerts generated by the system are invalid. This forces security managers and analysts to engage their incident response and SOC teams, who then squander valuable time hunting down these ghosts. Because of the potential for damage, they have to investigate every alert. Unfortunately, having done so, they will find that there is nothing there, wasting minutes, hours, or even days.

Low false positives enable your security team to be effective

Since false positives directly and dramatically impact the effectiveness of your security team, it’s critical that organisations understand the false positive rates of each security product they implement.

Even a small number of false positives will create far more unproductive and distracting work for your analysts than you might initially expect.

Christopher Kruegel, CEO and co-founder, Lastline
Image Credit: Balefire / Shutterstock