Cybercriminals are more prolific than ever. Everyone is familiar with brand-name breaches, like those affecting Marriott, Equifax and Facebook. But with attacks occurring every 39 seconds, it’s becoming increasingly clear that every business’s IT infrastructure is susceptible to bombardment. The growth of everything from cloud deployments, SaaS applications and remote locations to mobile workers, connected devices and encrypted data is expanding the network perimeter, exacerbating network exposure and making security virtually impossible to ensure.
For network security providers, much of the challenge stems from the application layer. Also known as Layer 7 of the OSI (Open Systems Interconnection) model, it’s the layer most accessible to the outside world. To boost protection, network security devices must be attuned to applications. However, a severe lack of application visibility and control is hampering providers’ efforts and putting their customers at greater risk. When network security vendors can’t see what users are viewing, or look inside networks and applications, they’re flying blind, missing critical data needed to combat attacks.
Networks are plagued by browser attacks, secure sockets layer (SSL)/transport layer security (TLS) man-in-the-middle (MITM) attacks, denial-of-service, brute-force and DNS amplification attacks, and more. It’s enormously difficult to get ahead of these attacks because of the realities of today’s increasingly digital world, which include:
- Data Growth: Global Internet Protocol (IP) data traffic is expected to increase threefold from 2017 to 2022, to an annual run rate of 4.8 zettabytes. Mobile, connected devices and cloud computing are all contributing to orders-of-magnitude more traffic on the network. The huge increase in sessions, packets, flows, applications and protocols overwhelms many cybersecurity devices, making it difficult to understand and process this traffic.
- Web-Based Applications: The web browser is now the most commonly used application user interface. In the past, setting and enforcing policies at the host level was relatively straightforward. Now, network security solutions operating exclusively on basic IP-layer information can’t distinguish between permitted and malicious activity. Malicious code can masquerade as valid client requests and normal application data, making application-layer attacks easier and more frequent.
- Protocol Complexity: New web browser protocols, whether proprietary or not, introduce features like binary header compression and 0-RTT handshakes. These features create blind spots and, therefore, vulnerabilities. For instance, the QUIC web protocol is used by default on sites like YouTube, but a typical firewall doesn’t understand QUIC.
- Encryption and Obfuscation: Gartner says that more than 80 per cent of enterprise web traffic will be encrypted by 2019. In fact, many of the most popular apps, like Skype, WhatsApp, BitTorrent, Facebook, Twitter, Office 365 and Gmail, encrypt their traffic.
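The QUIC blind spot mentioned above can be made concrete. A minimal sketch, assuming only the publicly documented wire format from RFC 9000 (this is an illustrative heuristic, not a production classifier):

```python
def looks_like_quic(udp_payload: bytes) -> bool:
    """Heuristic: does this UDP payload look like a QUIC long-header packet?"""
    if len(udp_payload) < 5:
        return False
    if udp_payload[0] & 0x80 == 0:
        # Short-header packets carry no version field; classifying them
        # reliably requires connection state we don't keep here.
        return False
    version = int.from_bytes(udp_payload[1:5], "big")
    # 0x00000001 is QUIC v1 (RFC 9000); 0x00000000 signals version
    # negotiation; 0x?a?a?a?a values are reserved "greased" versions.
    return (
        version in (0x00000000, 0x00000001)
        or (version & 0x0F0F0F0F) == 0x0A0A0A0A
    )
```

A real DPI engine would additionally check the destination port (QUIC typically runs over UDP/443) and track the handshake across packets; a firewall that does none of this simply sees opaque UDP.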
Hackers, too, are taking advantage of data encryption to conceal delivery, command and control activity and data exfiltration. Standard security measures, such as input filters, output encoding mechanisms used in web-based intrusion detection systems (IDS) and firewalls, can be easily bypassed. Obfuscation also gets in the way. For example, SQL obfuscation can help an attacker bypass web and database application firewalls or an application’s input validation controls.
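The SQL obfuscation point can be illustrated with a toy blacklist filter. Both the filter and the payloads below are invented for illustration; most SQL engines treat the inline comment `/**/` as whitespace, so the obfuscated query would execute identically while evading the naive check:

```python
def naive_filter(user_input: str) -> bool:
    """A deliberately weak blacklist filter: True means 'block the request'."""
    banned = ["union select", "or 1=1"]
    lowered = user_input.lower()
    return any(term in lowered for term in banned)

# The literal payload is caught...
assert naive_filter("id=1 UNION SELECT password FROM users")

# ...but inline-comment obfuscation slips through, even though the SQL
# engine would execute the obfuscated query exactly like the original.
assert not naive_filter("id=1 UNION/**/SELECT password FROM users")
```

This is why signature-style input filters alone are easy to bypass: the defender must normalise every syntactic variant the database accepts, while the attacker needs only one variant the filter misses.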
The visibility imperative
Visibility is essential for determining potential network infrastructure and application-related weak spots. A lack of visibility limits a provider’s ability to deliver network diagnostics and prove compliance. It also makes it harder to protect the interior of the network from internal and external threats.
So how do network security vendors ensure visibility in the face of ever-expanding network perimeters? For many, Deep Packet Inspection (DPI), and in particular its embedded Encrypted Traffic Intelligence (ETI), provides the missing visibility they need.
DPI brings more capabilities than legacy Shallow Packet Inspection (SPI), which examines only header information. In network security environments, MITM SSL/TLS inspection is often used alongside DPI so that the payload of encrypted connections can be examined for malicious content or application-specific issues. DPI also adds IP traffic analytics to network security devices, providing granular information on applications and protocols for proper classification. It looks into every packet crossing the network, which is critical as more traffic is handled via IP, comparing packet timings, sizes and other parameters and correlating them across the different flows of a network connection. It also performs behavioural analysis, checking packet sizes and their order within IP flows while tracking information about the subscriber and host.
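The behavioural analysis described above can be sketched as a toy flow classifier. The thresholds and labels here are invented purely for illustration; production DPI engines combine far more features (timings, direction, byte distributions, packet order) and many more traffic classes:

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev


@dataclass
class Flow:
    """Per-flow state a DPI engine might keep (here: packet sizes only)."""
    packet_sizes: list = field(default_factory=list)

    def add_packet(self, size: int) -> None:
        self.packet_sizes.append(size)


def guess_traffic_type(flow: Flow) -> str:
    """Label a flow from packet-size statistics alone.

    Thresholds are hypothetical, chosen only to show the idea.
    """
    sizes = flow.packet_sizes
    if len(sizes) < 10:
        return "unknown"             # not enough evidence yet
    avg, spread = mean(sizes), pstdev(sizes)
    if avg < 300 and spread < 50:
        return "voip-like"           # small, uniformly sized packets
    if avg > 1000:
        return "bulk-transfer-like"  # large, MTU-filling packets
    return "interactive-or-other"
```

Crucially, nothing here requires decrypting payloads: size and order statistics survive encryption, which is what makes behavioural classification valuable for encrypted traffic.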
When it comes to encrypted traffic intelligence, DPI provides a distinct advantage:
- Reliably classifies encrypted applications and categories, such as voice call, chat and video streams, by leveraging pattern matching, heuristics and behavioural analysis.
- Enables fine-grained application control to block malicious file transfers.
- Detects suspicious protocol usage in user profiles to prevent data theft and leakage.
- Identifies applications and connections that are obfuscated or actively hiding, such as BitTorrent file sharing, anonymisers and virtual private networks (VPNs).
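One concrete way DPI classifies encrypted applications without decrypting anything is to read the Server Name Indication (SNI), which a TLS ClientHello sends in the clear. The parser below is a minimal sketch: it assumes a single, well-formed, unfragmented record and omits the defensive bounds-checking a real engine needs (and newer encrypted ClientHello proposals would hide even this signal):

```python
from typing import Optional


def extract_sni(record: bytes) -> Optional[str]:
    """Pull the cleartext SNI hostname out of a raw TLS ClientHello record."""
    if len(record) < 44 or record[0] != 0x16 or record[5] != 0x01:
        return None  # not a handshake record carrying a ClientHello
    # Skip record header (5) + handshake header (4) + version (2) + random (32).
    pos = 43
    pos += 1 + record[pos]                                 # session ID
    pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # cipher suites
    pos += 1 + record[pos]                                 # compression methods
    ext_end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= ext_end:
        ext_type = int.from_bytes(record[pos:pos + 2], "big")
        ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:  # server_name extension (RFC 6066)
            # list length (2 bytes) + name type (1) + name length (2) + name
            name_len = int.from_bytes(record[pos + 3:pos + 5], "big")
            return record[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
        pos += ext_len
    return None
```

A classifier can then map the hostname to an application category and apply policy, all without MITM decryption, which is exactly the kind of metadata-level intelligence ETI relies on.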
Build or buy?
Once network security providers have decided to embed DPI into their network security solution, the big question becomes whether to build the capability themselves or buy it from a specialised vendor. To determine the best approach, network security vendors should consider these factors:
- It takes about one year to build a basic DPI engine from scratch, and around 25 years to build an advanced DPI engine that covers thousands of applications and protocols.
- With the network perimeter expanding by leaps and bounds, and new applications and protocols constantly coming to market, network traffic is constantly evolving.
- Signatures for the latest applications and protocols must be continually updated to provide comprehensive visibility into network traffic and ensure all applications are detected. Even small changes can cause classification problems, and since details of these changes are generally not announced publicly, vendors must continually test the accuracy of all classification signatures.
- Some basic protocols are stable and easy to manage. But proprietary and fast-evolving protocols for things like social networking, video streaming and VPN applications add complexity.
Perhaps the best way to determine whether to invest a company’s R&D resources in a DPI engine or license it from a vendor is to look at whether DPI technology is core to the network security provider’s business model. Do the expected returns justify the costs of in-house development and, therefore, a ‘build’ decision? Do the speed and accuracy of obtaining DPI from an external vendor provide the assurance and ROI needed to differentiate the network security solution?
Whatever network security vendors decide, it’s clear that the visibility afforded by DPI has never been more critical. Embedding DPI’s encrypted traffic intelligence into network security products is essential to unmasking hackers, stopping cyberattacks and providing the protection customers need.
Stephan Klokow, director DPI, ipoque