People are no longer surprised by data breaches. Many breaches are perpetrated by malicious actors. Others are the result of lapses in internal security protocols. The one common thread today is that every organisation, whether it is a retailer, a healthcare provider, a financial institution or a government agency, is becoming more dependent on the network and its data. The traffic flowing across the network is what keeps the organisation in business, so the data has enormous intrinsic value to the enterprise. Although breaches at companies like Yahoo have garnered a lot of attention recently, the sophisticated and automated tools used by hackers put every company, regardless of size, at serious risk of being hacked.
According to an annual security report produced by the SANS Institute in 2016, most companies spend up to 12 percent of their annual IT budget on various security defences, either to protect sensitive data or to comply with local regulations. These traditional security defences, such as firewalls, SIEMs and intrusion detection and prevention appliances (IDS/IPS), have not changed much in the last decade. In the meantime, hackers have outpaced security technology by developing smart, automated tools that circumvent current security protection and “fly under the radar,” exploiting weaknesses slowly and carefully, whether those weaknesses stem from technology vulnerabilities, poor management processes, or human carelessness. Modern enterprise networks have so many potential attack vectors that a hacker needs only one entry point to surreptitiously gain access and sit undetected for months while exploring the network and preparing to exfiltrate valuable data.
Security analysts are overwhelmed with data
Hackers have also become experts at disguising their tracks, allowing many breaches to go unnoticed or undiscovered for long periods of time. This was highlighted in an IBM and Ponemon 2016 Cost of Data Breach Study, which found that malicious and criminal attacks took an average of 229 days for organisations to discover, and an additional 82 days to resolve. Although some organisations are better prepared to respond to breaches than others, these results reveal how difficult it is to adequately safeguard enterprise data security.
A primary reason for this weakness is that the volume of alerts has become so overwhelming that security teams typically only have the bandwidth to investigate a small percentage of the highest priority alerts each day. It’s very easy to find the bottleneck in this process; current security solutions typically require a disjointed, multi-step process that forces security analysts to manually correlate aggregated data from alerts with the corresponding network logs or traffic. Picture multiple screens and a lot of manual cross-checking, which is certainly not ideal when time is such a valuable commodity.
The introduction of the EU’s General Data Protection Regulation (GDPR) will further complicate the job of the CSO within the enterprise. Data breaches will need to be reported to the authorities within 72 hours of discovery, or companies will face hefty fines.
Best of both worlds
With the right approach, network forensics tools can be used to make network data more readily available and useful to both network and security professionals. Security analysts often find network diagnostics tools intimidating, or at least awkward and time-consuming to use in security investigations. However, these tools can bridge the divide by providing customised views that help both network and security analysts pinpoint the precise information they need, such as for possible breach investigations. As network forensics tools get smarter, they may also become more useful by incorporating security alerts from popular open-source IDS platforms like Suricata and Snort, providing network and security data in a single view.
Next, it will be important to replace manual processes with automated ones. Data collection and correlation is a perfect example. As alerts are received from a firewall, IDS, IPS or SIEM, software can parse them and store them together with any related network packet data. This can happen in the background, making it easier for an analyst to come back later, access the data and evaluate whether an issue needs further investigation.
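To make the idea concrete, here is a minimal sketch of that background correlation step. It assumes alerts arrive as Suricata-style EVE JSON lines and that captured packets are already indexed by flow 5-tuple; the field names follow Suricata’s EVE format, but the `packet_index` structure and helper names are illustrative, not any particular product’s API.

```python
import json

def parse_eve_alert(line):
    """Parse one Suricata-style EVE JSON event; return the signature and
    the flow 5-tuple used to look up matching packet data, or None if the
    event is not an alert."""
    event = json.loads(line)
    if event.get("event_type") != "alert":
        return None
    return {
        "signature": event["alert"]["signature"],
        "key": (event["src_ip"], event["src_port"],
                event["dest_ip"], event["dest_port"], event["proto"]),
    }

def correlate(alert_lines, packet_index):
    """Join each alert with the packets captured for the same flow, so an
    analyst can later review the alert and its traffic together."""
    results = []
    for line in alert_lines:
        alert = parse_eve_alert(line)
        if alert and alert["key"] in packet_index:
            results.append((alert["signature"], packet_index[alert["key"]]))
    return results
```

Run continuously, a job like this leaves each alert pre-joined with its traffic, so the analyst’s later investigation starts from one record instead of several tools.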
With packet data in hand, file reconstruction is possible, and can greatly benefit both network and security professionals. Having access to security-relevant network packets makes it possible for software to automatically reconstruct files, typically those transmitted via HTTP, to see exactly what kinds of files were sent over the network. In addition to file reconstruction, software can be used to automatically reassemble packet payloads for any network transmission, making payloads easily searchable for suspicious strings.
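A simplified sketch of the payload-reassembly idea follows: order TCP segments by sequence number, concatenate them into one byte stream, and scan that stream for suspicious strings. The segment representation and the marker list are assumptions for illustration; real reassembly must also handle overlaps, gaps and retransmissions.

```python
def reassemble_payload(segments):
    """Reassemble (sequence_number, payload_bytes) TCP segments into one
    contiguous byte stream. Naive sketch: sorts by sequence number and
    drops exact duplicates; assumes no overlap or loss."""
    stream = b""
    for seq, payload in sorted(set(segments)):
        stream += payload
    return stream

# Illustrative markers an analyst might search for; a real deployment
# would use a maintained ruleset, not a hard-coded list.
SUSPICIOUS = [b"cmd.exe", b"/etc/passwd", b"SELECT * FROM"]

def find_suspicious(stream):
    """Return every suspicious marker present in a reassembled payload."""
    return [marker for marker in SUSPICIOUS if marker in stream]
```

Because the reassembled stream is just bytes, the same search can be run across every stored flow, which is what makes captured payloads usefully searchable at scale.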
Automation is key
As you’ve no doubt realised, the key to reducing security alert fatigue is automation. If alert-related network traffic can be captured automatically, an enterprise can store it in a central location and keep it for months. Without such a capability, analysts typically turn to disparate tools such as a SIEM dashboard, or they log into a host machine or other UI, and have to switch between multiple software applications or devices to access and review each alert when performing a forensic analysis.
The security industry sorely needs automated tools and processes that work in the background to collect suspicious network traffic, making it readily available to analysts whenever needed. By centralising and automating much of this process, analysts have much more time to investigate alerts each day, which in turn greatly increases the likelihood that they will find and limit the impact of a breach.
Jay Botelho, director of products at Savvius
Image Credit: Vasin Lee / Shutterstock