Understanding bots and security: Why not all bots are beneficial

Does your company have a bad bot problem? And if it does, what are you doing to address the issue? If you’re not sure it has a problem, what are you planning to do to find out?

These are not trivial questions. Bad bots can cause significant damage to a business if they're left unaddressed. Bad bots are automated programs that can target specific holes in company websites or simply abuse a site while impersonating legitimate human traffic.

In the past, bad bots were limited to activities such as web scraping and brute force attacks. But in recent years bots have evolved: they can now perform much more sophisticated actions such as online fraud, account takeover and scanning for vulnerabilities within an organisation's IT infrastructure.

A recent study by research firm Aberdeen showed that the likely impact of bad bots ranges from 1.8 to 7.6 per cent of a website's contribution to annual revenue, with a median of about 4 per cent. If your company site helps generate $10 million in revenue a year, then bad bots are likely to cost you around $400,000 annually. The risk scales up or down with the site's revenue, Aberdeen stated, meaning that companies of all sizes can be affected by this class of attacks.
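The arithmetic behind those figures can be sketched in a few lines. This is a back-of-the-envelope estimate only; the percentages come from the Aberdeen range quoted above, and the $10 million revenue figure is the article's own illustrative example.

```python
# Back-of-the-envelope estimate of annual bad-bot cost, using the
# risk range reported by Aberdeen (1.8-7.6 per cent, median ~4 per cent).
def bad_bot_cost(annual_revenue, low=0.018, median=0.04, high=0.076):
    """Return (low, median, high) estimated annual cost of bad bots."""
    return tuple(round(annual_revenue * pct) for pct in (low, median, high))

lo, med, hi = bad_bot_cost(10_000_000)
print(f"Estimated annual cost: ${lo:,} - ${hi:,} (median ${med:,})")
# The median works out to $400,000, matching the article's figure.
```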

Issues caused by bots can include the following:

  • High server load and bandwidth usage caused by unpredictable traffic spikes
  • Skewed marketing analytics typically manifesting in poor decisions on how to optimise the conversion funnel
  • Declining search engine optimisation rank because traffic is being diverted to imitation sites posting the same content
  • Account takeover problems, particularly after large-scale password thefts like those that took place with MySpace and LinkedIn
  • Unusually high account sign-ups and a spike in fraudulent transactions, driven by bots testing stolen logins and credit card information

Before looking at bots and how to deal with them, it's important to acknowledge that there are good bots as well as bad bots. According to our own research in the 2016 Bad Bot Landscape Report, nearly half of all web traffic (46 per cent) now originates from bots, and 19 per cent of traffic comes from bad bots.

While this represents an overall decrease in bad bots compared to previous years, advanced persistent bot (APB) activity is increasing. Most bad bot traffic (88 per cent) features one or more characteristics of an APB, and just over half of bad bots are able to load external resources such as JavaScript. As a result, these bots end up being falsely attributed as humans in Google Analytics and other tools.

Nearly 40 per cent of bad bots can now mimic human behaviour. Because of this, tools such as web log analysis and firewalls that perform less detailed analysis of client behaviour will likely result in a huge number of false negatives.

Furthermore, more than one third of bad bots disguise themselves using two or more user agents, and the worst APBs are capable of changing their identities more than 100 times. About three in four bad bots rotate or distribute their attacks over multiple IP addresses. Of those, one in five surpassed 100 IP addresses.
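The user-agent rotation and IP distribution described above leave a footprint that simple log analysis can surface. The sketch below is illustrative only: the session key, record format and thresholds are assumptions for the example, not values from the report, and real APBs work hard to evade exactly this kind of per-client counting.

```python
from collections import defaultdict

def flag_rotators(records, ua_limit=2, ip_limit=5):
    """Flag clients (keyed here by a session identifier) that present
    multiple user agents or spread requests across many IP addresses.
    Thresholds are arbitrary examples."""
    uas = defaultdict(set)
    ips = defaultdict(set)
    for session, ip, user_agent in records:
        uas[session].add(user_agent)
        ips[session].add(ip)
    return {s for s in uas
            if len(uas[s]) >= ua_limit or len(ips[s]) >= ip_limit}

sample = [
    ("s1", "10.0.0.1", "Mozilla/5.0"),
    ("s1", "10.0.0.2", "curl/7.68"),   # same session, new UA and new IP
    ("s2", "10.0.0.3", "Mozilla/5.0"),
]
print(flag_rotators(sample))  # {'s1'}
```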

Bad bots, including APBs, are affecting companies in industries such as real estate, travel, ecommerce, financial services, healthcare and others. The biggest sources of bad bots remain China and the U.S., according to the Distil Networks report. Six of the top 20 Internet service providers (ISPs) with the highest share of bad bot traffic were based in China.

All of this presents significant challenges for IT and security teams at organisations experiencing bad bot attacks. Addressing the problem, however, can deliver a significant financial return. By detecting bad bot activity earlier and stopping attacks, the cost associated with bad bots drops sharply. According to Aberdeen, mitigating bad bot attacks with an advanced anti-bot system can bring the risk down to between 0.1 and 0.2 per cent of revenue. Compare this to the estimated median cost of around 4 per cent of revenue, and the benefit is considerable.

When approaching bot protection, it's important to know that not all available approaches are equally effective. For example, web-based fraud detection products can report on automated traffic patterns, but they only spot the patterns of the less sophisticated bots and don't help organisations take action against bad bots.

Some organisations try to address bad bots with web application firewalls (WAFs). But WAFs were not designed to manage the sheer number, variety and sophistication of today's bots. WAFs rely on attack signatures to identify and block exploits targeting coding vulnerabilities. Bots, by contrast, involve no signatures; they're not limited to launching website attacks, and they can pose as humans to programmatically abuse and misuse websites.

Teams can also sift through their logs and parse out bad traffic manually. Depending on how much visibility they have into their traffic, companies either see bots and recognise the impact the bots are having - usually manifesting as fraud or customer data theft - or they're not sure why they're having to deal with such a wide variety of issues all at the same time.
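That manual sifting typically starts with something like the sketch below: parsing standard web server access-log lines and ranking clients by request volume. This is a minimal, illustrative example only; the log lines are invented, and real triage would also inspect request paths, timing patterns and user agents.

```python
import re
from collections import Counter

# Matches the start of an Apache/Nginx "combined" access-log line:
# client IP, identity, user, timestamp, then the quoted request line.
LOG_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)"')

def top_talkers(lines, n=3):
    """Count requests per client IP and return the n busiest clients."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            counts[m.group("ip")] += 1
    return counts.most_common(n)

logs = [
    '203.0.113.9 - - [10/Oct/2016:13:55:36 +0000] "GET /login HTTP/1.1" 200 512',
    '203.0.113.9 - - [10/Oct/2016:13:55:37 +0000] "POST /login HTTP/1.1" 401 87',
    '198.51.100.4 - - [10/Oct/2016:13:55:40 +0000] "GET / HTTP/1.1" 200 1024',
]
print(top_talkers(logs))  # [('203.0.113.9', 2), ('198.51.100.4', 1)]
```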

Regardless of the approaches they take, it’s clear that companies need to assess the real risks of bad bots to their business and what tools they should invest in to address the issue - and they need to do this sooner rather than later.

Rami Essaid, CEO at Distil Networks

Photo credit: Gunnar Assmy / Shutterstock