Off the hook: How AI catches phishing emails even if we take the bait

(Image credit: Shutterstock/wk1003mike)

From social media apps to collaborative cloud services, new methods of communication emerge daily. Yet workplaces around the world still rely on good old-fashioned email, with more than 100 trillion messages sent in 2018 alone. The average office worker receives 121 emails per day and, as most of us can attest, has only a moment to decide whether each one merits a reply. Given this barrage, it is hardly surprising that 90 per cent of malware originates in the inbox, disguised within phishing emails whose senders impersonate trusted colleagues.

Of course, long-time internet users have learned to be wary of messages from foreign princes asking for help transporting their gold. Yet nearly three-quarters of targeted cyberattacks today involve “spear-phishing” emails: a personalised form of phishing in which attackers use online reconnaissance or physical eavesdropping to produce convincing forgeries. Both humans and conventional email security tools have proven ineffective at spotting such subtle threats. One prominent study found that, among 150,000 phishing emails sent for the experiment, almost half of recipients clicked the scam link within the first hour.

Detecting spear-phishing campaigns requires a platform approach to cyberdefence, as opposed to siloed, email-specific solutions. Powered by unsupervised machine learning, cyber-AI platforms come to understand how individual users work and collaborate across the digital infrastructure, from the email service to the cloud, to the on-premises network. This contextualising knowledge is imperative when looking for the slightest signs of something “phishy,” since activity that is malicious for one user under one circumstance could well be benign in other cases. And crucially — because motivated attackers may still find a way inside an organisation’s protective skin — such all-encompassing AI platforms can autonomously respond to minimise the damage, no matter where the infection occurs.
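The point that identical activity can be benign for one user and malicious for another can be illustrated with a toy sketch. The class below is purely hypothetical (the names `SenderBaseline`, `observe` and `is_anomalous` are inventions for illustration, not any vendor's API): it learns, per user, which sender domains are normal, so the same email scores differently for different recipients.

```python
# Toy illustration of per-user behavioural baselining: the same sender can be
# normal for one employee and anomalous for another. All names and the
# "new sender = anomalous" rule are simplifying assumptions, not a real product.

from collections import defaultdict


class SenderBaseline:
    """Remembers which sender domains each user normally receives mail from."""

    def __init__(self):
        self.seen = defaultdict(set)  # user -> set of familiar sender domains

    def observe(self, user: str, sender_domain: str) -> None:
        """Record a legitimate email, growing that user's baseline."""
        self.seen[user].add(sender_domain)

    def is_anomalous(self, user: str, sender_domain: str) -> bool:
        """Flag an email whose sender domain is new for this particular user."""
        return sender_domain not in self.seen[user]


baseline = SenderBaseline()
baseline.observe("alice", "partner.example")

# The same sender is unremarkable for alice but anomalous for bob:
print(baseline.is_anomalous("alice", "partner.example"))  # False
print(baseline.is_anomalous("bob", "partner.example"))    # True
```

A real platform would of course weigh many more signals (login locations, attachment types, timing), but the design point survives even in this sketch: the verdict is a function of the user's history, not of the email alone.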

Learning from patient zero

Consider a sophisticated but nevertheless commonplace attack against a global enterprise. The attack begins, unsurprisingly, with a spear-phishing campaign targeting employees across the business. The emails use a phishing tactic called domain spoofing, which involves registering a seemingly legitimate domain that resembles the sender address of a familiar contact. More often than not, the attacker will seek to impersonate a high-level executive and make an urgent request — hoping the employee will comply before spotting the forged domain.
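The lookalike-domain trick described above can be caught mechanically when the impersonated domain is known. Below is a minimal sketch using plain Levenshtein edit distance against a hypothetical allow-list (`KNOWN_DOMAINS` is an assumption for illustration); production filters add homoglyph handling, registration age, DMARC checks and much more.

```python
# Minimal sketch of lookalike-domain (domain spoofing) detection, assuming a
# small allow-list of trusted sender domains. Edit distance alone is crude,
# but it shows why "examp1e.com" looks suspicious while "example.com" does not.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
    # (prev becomes the previous row for the next character of `a`)
        prev = curr
    return prev[-1]


KNOWN_DOMAINS = {"example.com"}  # hypothetical trusted-sender allow-list


def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    if sender_domain in KNOWN_DOMAINS:
        return False
    return any(edit_distance(sender_domain, d) <= max_distance
               for d in KNOWN_DOMAINS)


print(is_lookalike("example.com"))   # False: exact match, trusted
print(is_lookalike("examp1e.com"))   # True: one character swapped
print(is_lookalike("unrelated.org")) # False: not imitating anything known
```

This is precisely the kind of check that a freshly registered spoof domain can slip past when defences rely on blacklists of already-reported domains rather than similarity to the domains a recipient actually trusts.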

In this instance, the attackers, having studied the CEO’s tweets, emulate the executive’s writing style in order to trick recipients into opening the emails’ attachment. Because the spoofed domain does not appear on the blacklists used by the company’s native email controls, the emails make their way into the inboxes of more than 200 employees, ready to infect the firm with a fast-acting strain of ransomware after a single click. To make matters worse, the multinational firm has offices on four continents. Thus, when “patient zero”, a salesperson in London, gets to the email first, the firm’s US-based security team is still asleep halfway around the world.

The company’s cyber-AI platform, meanwhile, analysed the emails and correlated their attributes with each employee’s typical online behaviour, leveraging its knowledge of the entire digital infrastructure. This analysis revealed the emails to be suspicious, and although the AI did not yet intervene, it primed its autonomous response capability to take immediate action. Back in London, patient zero skims the email and inadvertently downloads its ransomware payload, which begins to move laterally, identify file shares, and encrypt company documents at machine speed. For most organisations, it is already too late.

But within seconds, the cyber-AI platform flags the unusual nature of the ransomware’s activity and, given the urgency of the threat, determines that an autonomous response is necessary. It surgically neutralises just the anomalous lateral movement and encryption, restricting infected devices to their normal behaviour. However, the platform doesn’t stop there. After performing a root cause analysis, the AI traces the attack to the phishing email — information that prompts it to sanitise the other emails in the campaign before they deceive additional victims. The salesperson continues working, unaware that the AI is also hard at work behind the scenes, saving the company from a major compromise.

AI attacks the inbox

It isn’t just defenders who have artificial intelligence at their disposal. AI also promises to supercharge spear-phishing by rendering these emails more realistic and far more scalable, automating what is, for human attackers, quite a labour-intensive process. One notable experiment in 2016 found that an AI-powered toolkit, which studied the social media behaviours of its targets in order to send them personalised spear-phishing tweets, was able to put a human attacker to shame by luring 275 victims into its trap in a mere two hours. The human, over that same duration, made only 129 attempts.

Compared to large-scale, standard phishing campaigns, which have compromise rates of 5-14 per cent, such automated spear-phishing has been found to succeed between 30 per cent and 66 per cent of the time, and the underlying AI technology continues to improve rapidly. There is no silver bullet for countering this next wave of AI attacks, regardless of how robust perimeter-oriented protections become. Rather, we must employ our own AI platforms to secure our digital assets from the inside out. By uniting email security with enterprise security in this way, we can autonomously fight back against phishing attacks, even those we fall for hook, line, and sinker.

Dave Palmer, Director of Technology, Darktrace

Dave Palmer, Director of Technology at Darktrace, has over ten years' experience at the forefront of government intelligence operations, working across GCHQ and MI5. At Darktrace, Dave oversees the mathematics and engineering teams.