Studies show that most security executives now have first-hand experience of dealing with “bad” bots. In fact, executives aren’t just dealing with them; such is the impact on business that bots are now a topic for discussion in board meetings. 53 per cent say they’ve encountered reduced website revenue due to inventory hold-ups by bots, 51 per cent report bots skewing marketing analytics, and 36 per cent have discussed abuse of user accounts or payment information.
Talking to execs, I’ve found that most start by deploying an in-house solution as their first response. However, the majority admit it wasn’t good enough, barely made a dent in the attack surface, and have since replaced it with a dedicated solution.
So why are in-house bot management solutions failing and why don’t businesses deploy a dedicated bot management solution as soon as they determine the threat?
To uncover why, you need to look at the various options for in-house bot management solutions and their pitfalls.
Types of in-house bot management
There are four types of in-house bot management solutions that organisations deploy.
1. Manual Log Analysis
Security teams manually prepare a list of suspected IP addresses from server logs. Suspected IPs are then blocked through the access control lists of WAFs or SIEM tools to prevent them from accessing web applications.
2. Rate Limiting
Organisations limit the number of visits from any one IP address. Rate-limiting solutions operate on predefined rules.
3. Basic Fingerprinting
A basic fingerprinting-based solution collects IP- and header-centric information to identify and block malicious bots.
4. Advanced In-house Bot Management
Advanced in-house bot management solutions are built using in-house data while leveraging basic machine-learning models.
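To make the first three approaches concrete, here is a minimal Python sketch of a per-IP rate limiter combined with basic header fingerprinting. The thresholds, the signature list and the function names are all hypothetical illustrations for this article, not any vendor's implementation.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Hypothetical thresholds and signatures, chosen purely for illustration.
WINDOW_SECONDS = 60          # rate-limit window
MAX_HITS_PER_WINDOW = 100    # per-IP ceiling within the window
KNOWN_BOT_AGENTS = ("curl", "python-requests", "scrapy")  # crude signature list

hits = defaultdict(deque)    # ip -> timestamps of recent requests

def is_blocked(ip: str, user_agent: str, now: Optional[float] = None) -> bool:
    """Return True if a request should be blocked by fingerprint or rate limit."""
    now = time.time() if now is None else now

    # Basic fingerprinting: block obvious automation by User-Agent substring.
    if any(sig in user_agent.lower() for sig in KNOWN_BOT_AGENTS):
        return True

    # Rate limiting: record the hit, drop timestamps outside the window, count.
    window = hits[ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_HITS_PER_WINDOW
```

As the rest of this article argues, rules this simple are exactly what distributed, human-like bots are built to slip past.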
Security researchers from ShieldSquare studied the traffic of organisations that had deployed these solutions and found that they do more harm than good, failing to detect bots at a rate of at least two to one. Against 22.39 per cent of actual bad bot traffic, advanced in-house bot management solutions were able to detect only 11.54 per cent of bad bots, and half of those detections were false positives.
So what’s going on? Why are in-house approaches not working when AI is becoming so accessible?
There are two factors. Firstly, the design inhibits sound decision-making and you end up with a poor user experience; secondly, there’s no way to feed in any global threat intelligence.
1. Poor User Experience
In-house bot management solutions struggle to understand distinctive user behaviour, leading to high rates of false positives and false negatives and, ultimately, a poor user experience. To explain this we need to look under the bonnet at how in-house systems are designed.
Higher false negatives
In-house bot management solutions are not optimised to consider the various factors at play when analysing traffic on a website, such as sudden surges in traffic, low-and-slow hits, and mutating bots. This is largely due to the way attacks are constructed. Firstly, they are sophisticated and complex; secondly, they are often large-scale. Credential stuffing is now the norm, whereby a combination of different techniques is used to bypass security measures while masquerading as genuine users.
Attackers can create tens of thousands of IPs distributed across tens of domains and geographical locations, using numerous ISPs. The attack is then deliberately designed to evade detection. With these methods, attackers can carry out thousands of unique URL hits on login pages to perform a credential stuffing attack. It’s this scale and pervasiveness that makes the problem so hard to detect.
Then there are the low-and-slow attacks. These use sophisticated bots across thousands of IP addresses in a single attack instance, played out over a long period of time. Using this technique, attackers can scrape product information and pricing details for millions of products from e-commerce portals.
In effect, such traffic looks no different from day-to-day genuine traffic, and it stays under the rules of even advanced in-house bot management.
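The arithmetic of such a distributed campaign shows why per-IP thresholds never trigger. The numbers below are invented purely for illustration:

```python
# Hypothetical campaign numbers, chosen only to illustrate the scale problem.
total_requests = 100_000      # credential-stuffing attempts in one campaign
attacker_ips = 20_000         # IPs spread across ISPs and geographies
campaign_hours = 24           # low and slow: the campaign runs for a day

per_ip_total = total_requests / attacker_ips     # 5 requests per IP in total
per_ip_per_hour = per_ip_total / campaign_hours  # roughly 0.21 per IP-hour

# A typical per-IP rule such as "block above 100 requests per minute" would
# need each IP to make 6,000 requests an hour before it fires.
rate_limit_threshold_per_hour = 100 * 60

rule_fires = per_ip_per_hour > rate_limit_threshold_per_hour  # never True here
```

Each IP in this toy campaign averages five requests a day, orders of magnitude below any sane per-IP limit, yet the campaign as a whole hammers the login page a hundred thousand times.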
There’s also the inability to get intent analysis right, i.e. determining the risk score of a visitor, categorising traffic (between good bots, bad bots, and humans), and then mitigating the bad bots. This is becoming more and more crucial as bot attacks grow in sophistication.
In-house bot management solutions can't analyse intent, and therefore generate massive numbers of false positives. As a result, companies that have failed to stop bad bots and been the victims of scraping, fraud or theft recognise they need Intent-based Deep Behaviour Analysis (IDBA) to fight the crime.
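As an illustration only, intent analysis can be thought of as scoring behavioural signals and bucketing each visitor. The signals, weights and cut-offs below are invented for this sketch; real intent analysis draws on far richer data.

```python
# Hypothetical intent-analysis sketch: score a session on a few behavioural
# signals and categorise it as human, good bot, or bad bot.
# All feature names, weights, and cut-offs are illustrative only.

def risk_score(session: dict) -> float:
    score = 0.0
    if session.get("mouse_events", 0) == 0:
        score += 0.4            # no pointer activity at all
    if session.get("pages_per_minute", 0) > 30:
        score += 0.4            # paging far faster than a human reads
    if session.get("failed_logins", 0) > 5:
        score += 0.2            # repeated credential failures
    return min(score, 1.0)

def categorise(session: dict) -> str:
    if session.get("verified_crawler"):  # e.g. a reverse-DNS-validated crawler
        return "good bot"
    return "bad bot" if risk_score(session) >= 0.6 else "human"
```

The point of the sketch is the three-way split: mitigation differs for good bots, bad bots and humans, so a binary block/allow decision is not enough.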
Higher false positives
The biggest contributor here is the inability to determine domain-specific user behaviour. For example, on portals with live content, such as news sites and social media, some users spend long stretches scrolling through their feed or browsing the website, and so have comparatively higher session times than users of websites with relatively static content. In-house systems flag them as bots regardless, which makes the decision on how to deal with them precarious.
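A toy example of the problem: a session-length rule tuned for a static site (the threshold below is entirely hypothetical) will wrongly flag a human who spends 45 minutes scrolling a news feed.

```python
# Hypothetical rule: any session longer than ten minutes is "a bot".
# Plausible for a static brochure site, disastrous for a feed-driven one.
MAX_HUMAN_SESSION_SECONDS = 600

def naive_is_bot(session_seconds: float) -> bool:
    return session_seconds > MAX_HUMAN_SESSION_SECONDS

static_site_visit = 240    # four minutes on a static page: passes
feed_reader_visit = 2_700  # 45 minutes scrolling a news feed: flagged

# The second visitor is a genuine human, so the flag is a false positive.
```

A rule that cannot be conditioned on the kind of site it protects will always trade false positives on one domain for false negatives on another.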
2. Lack of Global Threat Intelligence
In-house bot management solutions lack global threat intelligence and rely solely on in-house data, which hampers their ability to identify sophisticated, human-like bad bots. Dedicated bot management vendors, by contrast, serve thousands of global customers. These firms collect data from end users’ devices to build a comprehensive database of bot (both good and bad) and human fingerprints, and leverage it to improve their bad bot detection and mitigation capabilities dynamically. It’s easy to see why CIOs are now investing rather than trying to replicate this themselves.
So why not go for a specialist solution straight off?
Companies that deploy in-house bot management solutions often encounter a clash of commitments: pressure to reduce risk and prevent any impact to customer experience and profit, yet deliver solutions on a shoestring with little to no dedicated skill. Their hands are tied.
Let’s face it: with so many threats emerging, it’s unlikely that a company will have an expert in every field at its disposal, especially bots. Bot management is a very niche space and requires comprehensive understanding and continuous research to keep up with cybercriminals. I’m sure CIOs will be telling their boardroom counterparts this. Still, the pressures of cost pervade.
However, the evidence is now abundant. Companies cannot afford to get this wrong: it will affect brand perception and revenue. CIOs embarking on the strategy, or thinking of turning back, are now in a stronger position to learn from their forerunners and persuade the board to make a different investment, one which will deliver an ROI and free them up for a more strategic use of their budget and skill. Competitive advantage depends on it.
Pavan Theda, Head of Bot Management Solutions, Radware