Can artificial intelligence replace humans in the fight against spam?


Artificial Intelligence (AI) is designed to recreate human intelligence as closely as possible, but machines can only think and act like humans if they have as much information about the world as humans do, enabling them to make the same nuanced decisions.

AI is currently at the forefront of the fight against spam, but the question remains: can it ever take over from humans in combating spam completely?

Spam filters

While the first “spammer” in 1978 earned about US$14 million, spammers today can only dream of achieving such a high return for a single spam mail. They typically receive only one response for every 12.5 million emails that they send, although they can still earn around $3.5 million over the course of a year.

Since its first occurrence, spam has become increasingly sophisticated in design and subject matter, but for all that, the vast majority of spam emails today have far less chance of making it into an email user’s inbox simply because spam filters are constantly evolving.

In their purest form, basic rules filter out messages that contain suspect ‘trigger’ words such as ‘shipping’, ‘today’ or ‘100 per cent’, or that come from unknown or blacklisted IP addresses. Machine Learning (ML), a branch of AI, allows computers to process data and learn for themselves without being manually programmed.
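As a simple illustration of such rule-based filtering, the sketch below flags a message if it comes from a blacklisted address or contains a trigger word. The word list, the IP address and the function name are made up for the example; they are not any provider’s actual rules.

```python
# Minimal sketch of a rule-based spam filter. The trigger words and the
# blacklisted IP address are illustrative placeholders only.
TRIGGER_WORDS = {"shipping", "today", "100 per cent"}
BLACKLISTED_IPS = {"203.0.113.45"}  # example address from a documentation range

def is_spam(sender_ip: str, subject: str, body: str) -> bool:
    """Flag a message if it comes from a blacklisted IP or contains a trigger word."""
    if sender_ip in BLACKLISTED_IPS:
        return True
    text = f"{subject} {body}".lower()
    return any(word in text for word in TRIGGER_WORDS)

print(is_spam("198.51.100.7", "Shipping today!", "100 per cent discount"))  # True
```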

One way to train a filter is to feed it a large amount of data from already recognised spam emails. ML establishes a baseline of ‘normal’ mail and looks for repeating patterns that are highly likely to indicate spam. The ML algorithm then automatically creates a new rule for the spam filter.
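The sketch below shows the general idea of learning from already-labelled mail, using a bag-of-words representation and a naive Bayes classifier from scikit-learn. The four messages, their labels and the choice of library are assumptions made purely for illustration, not a description of any real provider’s pipeline.

```python
# Minimal sketch: learn a spam/ham baseline from pre-labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Your shipping discount expires today",   # spam
    "100 per cent free, claim now",           # spam
    "Minutes from today's project meeting",   # ham
    "Invoice for your recent order",          # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

model = MultinomialNB()
model.fit(features, labels)

# Repeating spam patterns push a new, similar mail towards the spam class.
new_mail = vectorizer.transform(["Free shipping on your order today"])
print(model.predict(new_mail))
```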

Another way to train spam filters with the help of ML is through user feedback. If enough users mark emails containing the word “Shipping!” as unwanted, the filter learns that this word is a new criterion for spam and automatically creates a new rule for it.
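A minimal sketch of that feedback loop is below: each word from a mail that a user reports as spam is counted, and once a word crosses a threshold it is promoted to a new rule. The threshold of 1,000 reports is an arbitrary value chosen for the example.

```python
# Sketch of a user-feedback loop: words from reported mails are tallied, and
# any word reported often enough becomes a new spam rule.
from collections import Counter

REPORT_THRESHOLD = 1000
report_counts: Counter[str] = Counter()
spam_rules: set[str] = set()

def record_user_report(words_in_reported_mail: list[str]) -> None:
    """Count each word from a mail the user marked as spam; add a rule for any
    word that crosses the report threshold."""
    for word in words_in_reported_mail:
        key = word.lower()
        report_counts[key] += 1
        if report_counts[key] >= REPORT_THRESHOLD:
            spam_rules.add(key)
```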

So far so good, but does this mean that AI can ever replace humans altogether? The technology is very advanced and improving all the time, but the best possible spam filter at the moment still relies on human beings and machines working together, rather than in isolation.

Human discretion

The reason is that machines cannot currently recreate the entire human experience and make decisions based on context, abstract thinking or assumptions. Experienced email security experts - let’s call them ‘spam cops’ - can easily ‘join the dots’ between various pieces of evidence and use their discretion to judge when a spam rule applies and when it doesn’t.

The issue for AI is that the more you broaden the problem environment, the more limited AI's capability becomes and the more training it needs. Humans may be slower at processing data but they have common sense and reasoning capabilities that enable them to plan and make decisions without having all the information.

One way that humans dig deeper is to cross-reference with other activities. So, for example, “Was there a major data breach recently where private data could have been hacked – maybe from a well-known company with millions of subscribers?” Another example is phishing trends. Recently there have been EU GDPR-related phishing scams and a rash of sextortion emails where recipients were threatened with their contacts being sent compromising videos unless a ransom was paid.

Reasoning

In addition to the anti-spam specialists, there is a second human factor in the evaluation of spam: the user. From the user’s point of view, spam can be classified into three categories. Black spam is either not accepted by the provider’s email servers (because it is delivered by servers on blacklists) or is detected as unwanted by spam filters. Red spam contains malicious links (e.g. phishing) or even malware. For both of these categories, the recognition rate is very good across all major email providers, so users hardly ever see these emails in their inbox.

But then there is a third category: “Graymail”. Users currently have an edge over machines when assessing this category. It is called “gray” because it is neither on the blacklist of blocked senders nor on the user’s whitelist of approved senders; it is email that the spam filter isn’t quite sure what to do with until it has learned a bit more about it. Some users mark it as spam but others don’t, which makes its status ambiguous.

Over time, the spam filter will learn what the recipient considers to be “graymail” based on these actions, as well as on the actions of all other recipients of emails sent from that particular domain name. AI may in the future be able to adjust and improve its reaction to this sort of spam proactively, based on such continuous feedback.
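One way such per-domain feedback could resolve graymail over time is sketched below: every recipient’s “spam” / “not spam” decision is aggregated for the sending domain, and the verdict stays “graymail” until the signal is strong enough one way or the other. The thresholds and the minimum sample size are assumptions chosen for the example.

```python
# Sketch: aggregate recipient decisions per sending domain and classify mail
# from that domain as spam, inbox, or still-ambiguous graymail.
from collections import defaultdict

feedback = defaultdict(lambda: {"spam": 0, "ok": 0})

def record_decision(domain: str, marked_as_spam: bool) -> None:
    feedback[domain]["spam" if marked_as_spam else "ok"] += 1

def classify(domain: str) -> str:
    counts = feedback[domain]
    total = counts["spam"] + counts["ok"]
    if total < 50:                 # not enough signal yet
        return "graymail"
    spam_share = counts["spam"] / total
    if spam_share > 0.8:
        return "spam"
    if spam_share < 0.2:
        return "inbox"
    return "graymail"              # recipients themselves disagree
```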

Cooperation not competition

AI accelerates spam detection through the sheer force of its processing power, evaluating huge amounts of data almost in real time. Where AI is focused on one narrow task, with all the rules and scenarios available to it, it can certainly outrun humans in speed and problem management. But this is not the same as thinking and performing tasks autonomously like a human, and therein lies the difference.

Many developers in fact prefer the term ‘augmented intelligence’, which conveys more clearly that AI will be able to improve products and services, not replicate the humans who use them.

Currently AI doesn’t exist in this advanced form although maybe it will one day. Humans and AI can achieve far more when they're cooperating instead of competing.

 Jan Oetjen, MD, GMX