Recently, I’ve been thinking: where are we all going wrong in cybersecurity? Why is it that even though companies spend huge amounts of money on various security tools, they still get hacked? Even organisations that have all the money in the world to spend on IT security still experience data breaches. Why is this happening? The sceptics amongst you are probably thinking: “Oh no, he is going to talk about the skills gap again!” I agree that is part of the problem, but it is not the whole story.
I’ve witnessed some bad deployments of critical security infrastructure in my time as a cybersecurity professional, so I started to ask myself a question: “Are businesses using their cybersecurity tools effectively?” The more I thought about it, the clearer it became. I believe we’re all getting lost in the features and promised benefits of tools. In addition, all too often companies are looking for silver bullets rather than the right tools for the right jobs.
Poor breach detection is one indicator that security—even at well-established and well-funded businesses—is not working. According to incident statistics from the Verizon 2018 Data Breach Investigations Report, it takes over 200 days on average to detect a breach. We have all seen significant data leaks in the news recently. The reason for this, in my opinion, can be summed up in one simple statement: “The wrong tools for the wrong job and expecting too much of the right tools.”
To demonstrate this, I am going to take you through my thoughts on some of the critical components of businesses’ security infrastructure and how businesses are misusing them. This is not an exhaustive list, but I believe these aspects are the most critical:
Intrusion Detection Systems (IDS)
This security tool really is the phoenix rising from the ashes and probably the best example of what I am trying to convey. Why is a tool that has been around for years and almost went extinct now on the list of top IT security priorities? Let’s start with why businesses stopped investing and lost faith in Intrusion Detection Systems in the first place. For many, IDS became the proverbial “boy who cried wolf.” I’ve witnessed this more times than I wish to remember. The story goes like this: the deployment engineers would come in, deploy the tool, perform some basic tuning and say they’d come back once there was enough traffic to tune further. The problem is that tuning is an ongoing process that requires the expertise of people who know how to extract the real value and remove the noise. After some time and many false positives, the IT team becomes desensitised to the alarms and ignores them, until someone comes to renew the product and “throws” some tuning back at them.
Then there is the issue with the constant upkeep. Finding time to investigate and continually tune is tough. Who actually has the time to maintain the system, while keeping up with new IT projects and the speed at which they move today? The other question businesses must ask themselves is, “Are traditional security tools built to handle today’s challenges and do you have the right people in order to get the best out of them?”
Why do we need IDS solutions at all? Many businesses are starting to realise that simply relying on blocking tools is not the right approach: businesses still get hacked, even with the best blocking tools and the biggest budgets. It’s the lack of visibility, in many cases, that leads to businesses missing the obvious signs of a cyber attack and worse still, a successful cyber attack. Back in the day, organisations bought security tools but had no expertise in-house to service them, and so ended up with shelfware. In order to solve their challenges, they really needed a set of expertly tuned tools, investigating 24x7, with the ability to continually improve detection with every new threat, all while incorporating product improvements. The reality however was that they found themselves servicing the IDS in between the other 101 things on their to-do list.
The next issue with IDS is the unknown: how do we handle traffic that looks suspicious but is not currently known to be a threat? You can’t build rules for an unknown threat. This problem is compounded by the sheer volume of suspicious traffic, which made it almost impossible for IDS alone to keep up. Now, we have supervised machine learning, which combines new data science, human security expertise and analytics technologies.
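To make the supervised-learning idea concrete, here is a toy sketch in Python. It trains a nearest-centroid classifier on labelled connection summaries and then scores a new event. The feature names (outbound megabytes, failed logins, distinct ports contacted) and the labelled examples are illustrative assumptions; real detection pipelines use far richer features and proper ML tooling.

```python
# Toy supervised classifier: label new network events by whichever
# labelled-example centroid they sit closest to. Features and labels
# here are made up for illustration only.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(labelled):
    """labelled: list of (features, label); returns label -> centroid."""
    by_label = {}
    for feats, label in labelled:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def classify(model, feats):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(feats, model[label]))
    return min(model, key=dist)

# Features: [bytes_out_mb, failed_logins, distinct_ports_contacted]
training = [
    ([1.0, 0, 3], "benign"),
    ([2.0, 1, 5], "benign"),
    ([50.0, 20, 200], "malicious"),
    ([80.0, 35, 400], "malicious"),
]
model = train(training)
print(classify(model, [60.0, 25, 250]))  # near the malicious centroid
```

The point is not the algorithm, which is deliberately trivial, but that a model trained on labelled examples can score traffic no hand-written IDS rule would ever match.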
Monitoring File Change
In my opinion, File Integrity Monitoring (FIM) is usually selected to address specific compliance requirements, but businesses rarely think about it strategically. Instead, they think that monitoring every single file change is the best approach. This is just not manageable and leads to confusion and missed attacks. In my years working closely with forensic investigators, I often heard them say that the victim could have mitigated the cyber threat if they had just monitored the root directory of the webserver. They would have spotted the cyber attacker dropping tools in the critical directory and could have shut down the attack in minutes or hours rather than days. In some alarming cases in which the victims hadn’t managed the logs, it was impossible to categorically explain what happened during the attack, severely limiting the forensic investigators’ ability to deduce what had taken place.
FIM is one of those tools that typifies my pet hate, with businesses collecting everything with little thought for how they will actually use the data. It’s not realistic and just adds more hay to the haystack—making it more challenging to find the needles. This collect-everything approach only becomes useful once you have already been breached.
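The targeted alternative described above—watching one critical directory rather than every file on the estate—can be sketched in a few lines of Python: baseline the hashes of the webserver root, then diff a later snapshot to see what was added, removed or modified. The directory path is whatever your deployment considers critical; everything here is a minimal illustration, not a production FIM agent.

```python
# Minimal targeted file integrity check: hash every file under one
# critical directory (e.g. the webserver root) and diff two snapshots.

import hashlib
import os

def snapshot(directory):
    """Map relative file path -> SHA-256 digest for files under directory."""
    state = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            state[os.path.relpath(path, directory)] = digest
    return state

def diff(before, after):
    """Return (added, removed, modified) lists between two snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return added, removed, modified
```

Run `snapshot()` on a schedule and alert on any non-empty `diff()`: an attacker dropping a webshell into the document root shows up in the `added` list within one polling interval, which is exactly the early warning the forensic investigators wished their victims had.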
Log Management
Firstly, I would like to share my opinion on SIEM software, as I have seen very few SIEM deployments that I would consider done well. When using SIEM, organisations collect a load of logs and consolidate security events from a variety of tools, then switch on correlations that help identify what to triage. If you have a mix of assets in a hybrid environment, it’s highly unlikely that your SIEM provides single-pane-of-glass visibility into all of those assets.
I believe that unified log management is a much simpler and better approach to threat detection. You should be able to track user activity and suspicious behaviour in real time across all your environments, and you can only do so with tools that spot malicious activity through consolidation of the logs, log review and correlation. If you get hacked and the forensic investigators ask where the logs are, they typically want access to raw, not parsed, log data so they can use their command-line tools on it. Forensic investigators need to be able to prove without doubt where things went wrong. They won’t typically ask for a SIEM UI unless it can actually export the raw logs.
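One practical consequence of the point above is that a log pipeline should keep the untouched raw line alongside whatever it manages to parse, rather than discarding lines the parser cannot handle. A minimal sketch, assuming a made-up one-line log format:

```python
# Sketch: consolidate logs while always retaining the raw line, so
# analysts get a parsed view but investigators can still export the
# untouched originals. The log format below is an assumption.

import re

LINE_RE = re.compile(
    r"^(?P<ts>\S+) (?P<host>\S+) (?P<event>\w+) user=(?P<user>\S+)$"
)

def ingest(lines):
    """Pair parsed fields (or None) with the raw line; drop nothing."""
    records = []
    for raw in lines:
        match = LINE_RE.match(raw)
        parsed = match.groupdict() if match else None
        records.append({"raw": raw, "parsed": parsed})
    return records

logs = [
    "2024-01-01T00:00:01Z web01 login_failed user=bob",
    "garbled !! line that the parser cannot handle",
]
for rec in ingest(logs):
    status = "parsed" if rec["parsed"] else "unparsed"
    print(status, "|", rec["raw"])
```

The design choice is the `"raw"` key: even when parsing fails, the evidence survives intact, which is precisely what an investigator will ask for.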
Another common scenario is that an organisation collects logs specifically for auditors. I always believed that IT compliance and regulations in cybersecurity exist to drive better practices and improve security, and not to make us jump through hoops only to satisfy an auditor.
Daily Log Review & the SIEM Myth
Daily log review enables you to spot suspicious activity and triage it in order to identify how it would affect the security of your customers’ data and the integrity of your systems and users.
I must come back to the subject of quality. Are you collecting the right data and does the person reviewing the logs really know what “good” and “bad” looks like? This is where correlation rules are supposed to help you focus on the threats and dig into the logs with specific focus.
Why have correlation rules not solved all our problems then? The first reason is linked to what we collect, as correlation rules are created based on specific log types and their relationship to other logs. It may seem great that a SIEM platform has hundreds of correlations, but if they are not relevant to the logs you have, they are never going to be turned on and are basically useless.
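To show what a single relevant correlation actually looks like, here is a sketch of a classic one: several failed logins by the same user followed by a success within a short window, a common brute-force indicator. The event shape, threshold and window are illustrative assumptions, not any vendor’s rule syntax.

```python
# Sketch correlation rule: flag a user whose successful login was
# preceded by >= `threshold` failures within `window` seconds.
# Event tuples and thresholds are illustrative assumptions.

from collections import defaultdict

def correlate_bruteforce(events, threshold=3, window=300):
    """events: iterable of (timestamp, user, outcome) where outcome
    is 'fail' or 'success'. Returns [(user, success_ts), ...]."""
    flagged = []
    fails = defaultdict(list)  # user -> timestamps of recent failures
    for ts, user, outcome in sorted(events):
        if outcome == "fail":
            fails[user].append(ts)
        elif outcome == "success":
            recent = [t for t in fails[user] if ts - t <= window]
            if len(recent) >= threshold:
                flagged.append((user, ts))
            fails[user].clear()
    return flagged

events = [
    (100, "alice", "fail"), (130, "alice", "fail"),
    (160, "alice", "fail"), (200, "alice", "success"),
    (150, "bob", "fail"), (500, "bob", "success"),
]
print(correlate_bruteforce(events))  # alice is flagged, bob is not
```

Notice that the rule only works if both the failure and success events are actually being collected and parsed, which is exactly the dependency on log coverage described above.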
The next issue is third-party security tool log ingestion. With many SIEM tools, you spend a huge amount of time updating parsers, because vendors constantly change the log formats their tools produce as they add new content. This becomes a massive overhead on security teams, or it means those critical logs fail to parse, which leads to correlations not working, potentially missed cyber threats, and companies being blinded by noise. Searching becomes a nightmare if you don’t know what to look for in raw logs. You can also run out of storage as you ingest more data and retain it for intelligence.
SIEMs can be used to ingest and consolidate logs for an expert to review, and to create specific correlations that address concerns and help spot suspicious activity, but they are not a silver bullet. With SIEM you can expect decent reporting and a way to search through large volumes of logs by “asking” sensible questions with the tool. Don’t try to explore what happened for the whole year, as most of the time it will take forever to collect that data and when you receive it, it will hurt your eyes. Start thinking about what you are putting in and what the expected output should be, remembering that “bad data in means bad data out,” as you could spend way too much time analyzing the noise and not enough time on the actual threats.
Privilege Management
OK, you got me, privilege management is not a tool as such. The problems I see here relate to giving away too many rights. In the case of a new recruit, the usual (wrong) logic is to copy Bob’s profile, as he has the correct permissions, rather than working out the correct rights for the job. Or in the case of service accounts, they are often granted admin rights because it is too hard to figure out how to get services to run without high permissions.
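The alternative to copying Bob’s profile is to grant from a role definition, so a new starter gets only what the role requires. The role names and permission strings below are illustrative assumptions; the shape of the fix is the point.

```python
# Sketch of role-based grants instead of copying an existing user's
# profile. Role names and permissions are illustrative assumptions.

ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_tickets"},
    "dba": {"read_db", "write_db"},
    "finance": {"read_ledger"},
}

def grant_for_role(role):
    """Return the permission set a role defines; unknown roles get nothing."""
    return set(ROLE_PERMISSIONS.get(role, set()))

# The wrong way: Bob has quietly accumulated extra rights over the years,
# so copying his profile would silently carry "admin_console" along.
bob = {"reset_password", "view_tickets", "admin_console", "read_db"}

# The right way: grant from the role definition, not from Bob.
new_starter = grant_for_role("helpdesk")
print(sorted(new_starter))
```

The copied-profile problem is visible in the data: `bob` holds two permissions the helpdesk role never defined, and the role-based grant simply never hands them out.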
Is Vulnerability Scanning the Same as Pen Testing?
Vulnerability scanning is the same as pen testing, right? Sorry, but that’s incorrect! Vulnerability scanning is automated with no human intervention; it’s designed to yield a prioritised list of vulnerabilities and is generally for clients who already understand they need to improve their IT security posture. Penetration tests involve manual effort and someone thinking outside the box; they are designed to achieve a specific, attacker-simulated aim and should be requested by businesses that are already at their desired IT security posture.
A modern approach to vulnerability scanning is that it shouldn’t be a once-a-quarter task; it should be an ongoing process. Another approach I advocate is: don’t try to boil the ocean unless you have a manageable environment. Due to the large networks they operate, many organisations have a never-ending list of things to remediate. Focus on the critical infrastructure connected to risky assets: systems that store, process and transmit sensitive data, or that are directly connected to those assets. For other, less critical assets, put strict controls in place and deal with them in order of priority by risk. Be mindful, however, that this is a risk-based approach and, if you’re not careful, you’ll end up with insecure assets, so be ultra-careful about your critical workloads. Think like a hacker!
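The risk-based triage above can be sketched as a simple ranking: combine a finding’s severity with the criticality of the asset it sits on, so critical workloads surface first even when a lab machine has the scarier CVSS score. The weighting scheme and the sample findings are assumptions for illustration, not a standard.

```python
# Sketch of risk-based vulnerability triage: rank findings by
# severity x asset criticality. Weights and data are illustrative.

def risk_score(finding):
    """severity: CVSS-like 0-10; criticality: 1 (low) to 3 (stores,
    processes or transmits sensitive data, or connects to assets that do)."""
    return finding["severity"] * finding["criticality"]

def prioritise(findings):
    """Highest combined risk first."""
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    {"host": "dev-box", "severity": 9.8, "criticality": 1},
    {"host": "payments-db", "severity": 7.5, "criticality": 3},
    {"host": "web-frontend", "severity": 6.1, "criticality": 3},
]
for f in prioritise(findings):
    print(f["host"], risk_score(f))
```

Note how the dev box’s 9.8 severity drops below both critical assets once asset criticality is factored in, which is exactly the “focus on critical infrastructure first” trade-off described above, and also why the low-criticality tail still needs strict controls rather than neglect.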
Cybersecurity Tools in Summary
All of the tools mentioned above are the right tools; they are just used in the wrong way, and they work better together, with experts using them for tuning, spotting new cyber threats and improving how the core tools are used for threat detection. Think defence in depth, but just as importantly think integration—integration in the sense that when new attacks and vulnerabilities are discovered, your experts know how each of the tools can be used to detect threats. All these tools play their part, but without an integrated view of the content, and the expertise to match, you will fail to get the value, as they will work against each other. This is how great security service providers should work.
James Brown, VP EMEA at Alert Logic