Gone are the days when attackers had to breach your perimeter to get inside your network and steal data. Businesses are expanding to cloud, SaaS, mobile, and social channels to boost visibility and interact with customers, dramatically increasing the attack surface available to hackers and giving rise to an entirely new threat landscape outside the firewall.
A CEO's social media presence, meant to build trust and visibility with customers, can spawn hundreds of rogue accounts impersonating them. A third-party plugin that makes a website more interactive can pose a significant vulnerability if compromised (remember the Panama Papers breach?).
Today, InfoSec must go beyond setting up and protecting a firewall, into places where traditional security tools provide little to no visibility or protection. Security teams must now monitor their organisations from the outside in, seeing them the same way their customers, and those targeting them, do. But how do you protect or monitor what you don't know about? How do you go about discovering the "unknown unknowns" that could hurt you?
Adapting to this fundamental shift in attack tactics requires a new approach to security known as “External Threat Management” (ETM), which provides the same visibility and management capabilities for assets on the open internet that agents and firewalls provide for the corporate network. An ETM program allows security teams to discover, inventory, and monitor their external-facing assets.
How an organisation looks online is always changing, driven by a wide range of factors both legitimate and malicious. As companies grow, it’s increasingly challenging for security teams to stay on top of the day-to-day activities of far-flung partners, vendors, and internal teams and business units.
These groups continuously create and alter digital assets, many of which are outside official protocol. At the same time, threat actors create fake branded websites, mobile apps, and social media accounts intended to fool customers and prospects and steal sensitive information or distribute malware. But through an intelligent platform that analyses and contextualises enormous datasets to survey the full breadth of the internet, security teams can have a real-time view of their internet-exposed attack surface as it appears to hackers.
With a catalog of everything that's part of their infrastructure, teams can bring previously unknown assets under management, verify the security and compliance of what belongs to them, and identify what may be fraudulent.
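The triage described above can be sketched in a few lines. This is a minimal illustration, not RiskIQ's actual method; the hostnames, the `examplecorp` brand keyword, and the bucket names are all hypothetical.

```python
def triage(discovered, inventory, org_domain="examplecorp.com", brand="examplecorp"):
    """Sort crawl-discovered hostnames into three buckets: assets already
    under management, the organisation's own but previously unknown assets,
    and brand lookalikes that may be fraudulent."""
    managed, unmanaged, suspect = [], [], []
    for host in sorted(discovered):
        if host in inventory:
            managed.append(host)           # already catalogued and managed
        elif host == org_domain or host.endswith("." + org_domain):
            unmanaged.append(host)         # belongs to the org, not yet managed
        elif brand in host:
            suspect.append(host)           # brand name on third-party infrastructure
    return managed, unmanaged, suspect

inventory = {"www.examplecorp.com", "mail.examplecorp.com"}
discovered = {
    "www.examplecorp.com",                # known asset
    "staging.examplecorp.com",            # shadow IT candidate
    "examplecorp-login.phish.example",    # possible impersonation
}
managed, unmanaged, suspect = triage(discovered, inventory)
```

In practice the "discovered" set would come from continuous crawling, passive DNS, and certificate transparency data rather than a hard-coded list.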
When you hear the term web crawling, you probably think of indexing (like Google). But its role in monitoring organisations' internet-facing assets for security risks—and building out vital security datasets in the process—is core to ETM.
This network of crawlers, sensors, and proxy users emulates human users with a fully instrumented browser and algorithms that simulate human-like mouse movements and click behaviour. The automation browses a page much the way a person reading an article online would, only far faster, all while storing the entire chain of events that may have led to an attack, such as a redirect sequence on a phishing page. When these crawlers process web pages, they take note of details like links, images, and dependent content so that security teams can reconstruct an event and what led to it, just as a detective might do at a crime scene.
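To make the "take note of details" step concrete, here is a stdlib-only sketch of the kind of evidence a crawler might record from a fetched page. Real ETM crawlers use a full instrumented browser; this toy parser and the sample markup are purely illustrative.

```python
from html.parser import HTMLParser

class PageRecorder(HTMLParser):
    """Record the links, images, and dependent scripts a page references,
    the raw material for later reconstructing what the page did."""
    def __init__(self):
        super().__init__()
        self.links, self.images, self.scripts = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])
        elif tag == "script" and "src" in attrs:
            self.scripts.append(attrs["src"])

# Hypothetical page content captured during a crawl
page = ('<a href="https://landing.example/next"><img src="logo.png"></a>'
        '<script src="tracker.js"></script>')
rec = PageRecorder()
rec.feed(page)
```

Stored alongside the HTTP redirect history and a timestamp, these artifacts let an analyst replay the chain of events after the fact.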
When it comes to malicious campaigns, it's not uncommon for attackers to keep their infrastructure up only briefly to avoid attracting attention. ETM crawling technology should capture changes in a page's document object model (DOM), so security teams can fully recall or recreate the web page as it was when it was crawled. Being able to see and interact with a page that may no longer exist is invaluable in understanding the nature of the attack.
Because attackers are always trying to avoid detection, virtual users are paramount to crawling infrastructure. The same diversity of user agents and behaviour profiles that enables crawlers to detect online fraud also keeps ETM monitoring invisible to adversaries. If attackers cannot tell the difference between virtual users and real victims, their malicious behaviour is more likely to be served to, and observed by, the crawler.
A proxy network of virtual users provides the perfect cover because it's comprised of a combination of standard servers and mobile cell providers that act as egress points deployed all over the world. In other words, pretending to be a victim’s mobile phone in the same region in which it’s targeted means crawling infrastructure has a higher likelihood of going undetected, observing and logging a full exploitation chain without spooking the threat actor into changing their infrastructure.
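The profile selection described above might look something like the following sketch. The user-agent strings (truncated), proxy hostnames, and region codes are all invented for illustration; a real network would draw from thousands of profiles and egress points.

```python
import random

# Hypothetical pools of browser profiles and regional egress points
USER_AGENTS = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) ...",
    "Mozilla/5.0 (Linux; Android 13; Pixel 7) ...",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
]
EGRESS = {
    "us": ["mobile-us-1.proxy.example", "dc-us-2.proxy.example"],
    "de": ["mobile-de-1.proxy.example"],
}

def pick_profile(target_region, rng=random):
    """Pair a rotating user agent with an egress point in the campaign's
    target region, so the crawl resembles a local victim's device."""
    return {
        "user_agent": rng.choice(USER_AGENTS),
        "egress": rng.choice(EGRESS[target_region]),
    }

# A campaign targeting German users gets a German egress point
profile = pick_profile("de")
```

Matching the egress region to the campaign's target audience is the key design choice: geo-fenced phishing kits often serve benign content to visitors from the wrong country.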
The same technological advances that empower internet users, services, and businesses also enable cybercrime to thrive at an unprecedented scale and velocity. Attackers can create massive amounts of these digital accounts at little or no cost and leverage a huge network of black markets to maximise profit and reduce the level of technical skill required to carry out sophisticated attacks.
Organisations must be able to scale at the same pace. As crawlers deploy, they aggregate key data that helps security platforms leverage the internet itself as a detection system, automatically defending networks from cyber attacks.
Attackers use automation to launch sophisticated attacks cheaply by rotating and reusing undetected infrastructure. But defenders with access to internet data collected by crawlers can detect unknown threats at the source and track how they change and spread.
Correlating threat data extracted from a broad set of data sources across channels reveals the risk posed to an organisation by a single piece of infrastructure—and how it's being used within a larger context.
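A toy version of that correlation step: group sightings from different channels by the infrastructure they share, then flag infrastructure backing multiple indicators. The sightings, hostnames, and IPs below are fabricated examples, and real pivoting would also use WHOIS, SSL certificates, and trackers, not just resolved IPs.

```python
from collections import defaultdict

# Hypothetical sightings gathered across channels: (indicator, resolved_ip, channel)
sightings = [
    ("login-examplecorp.example",   "203.0.113.7",  "web"),
    ("examplecorp-support.example", "203.0.113.7",  "web"),
    ("ExampleCorp Helpdesk",        "203.0.113.7",  "social"),
    ("examplecorp.com",             "198.51.100.2", "web"),
]

def correlate_by_ip(sightings):
    """Group indicators by shared IP; one IP backing several indicators
    across channels suggests a single campaign's infrastructure."""
    by_ip = defaultdict(list)
    for indicator, ip, channel in sightings:
        by_ip[ip].append((indicator, channel))
    # Keep only infrastructure that links more than one indicator
    return {ip: hits for ip, hits in by_ip.items() if len(hits) > 1}

campaigns = correlate_by_ip(sightings)
```

Here the single IP ties two lookalike domains and a fake social account into one cluster, turning three isolated alerts into a view of a coordinated campaign.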
To keep up with attacks, advanced analytics are necessary to automatically triage and address security events and track changes in threat infrastructure to predict new attack vectors as they emerge.
Arian Evans, VP Product Strategy, RiskIQ