The human vulnerability in cybersecurity

Over the past 20 years, organisations have spent billions of pounds and countless hours protecting their most critical information assets. Yet large-scale breaches happen almost daily, and despite advances in technology, the threat landscape has only grown worse.

These incidents occur in every industry on every continent, targeting every type of data that conceivably holds monetary value. The question is no longer ‘if I get breached’ but ‘when I get breached.’ Whatever the cybersecurity industry has been doing to combat this has, very simply, failed.

Why have so many organisations spent so much money and effort and made so little progress? They have been fighting the wrong battle with the wrong weapons. Data breaches are almost always framed as technical failures, but the evidence suggests that cybersecurity is not a technology problem; it is a human one.

Internalising success, externalising blame

So who is to blame for the rapid increase in data breaches? According to Ponemon Institute research, hackers and criminal insiders cause the most breaches: 47 per cent result from malicious or criminal attacks, 25 per cent are attributed to human error, and the remaining 28 per cent are blamed on system glitches. These are defined as failures of IT and business processes, alongside events such as natural disasters, power outages and internet service provider issues. While these unpreventable events can have unpredictable effects on a computing environment, it is important to consider the nuance behind a ‘process failure.’ Since these processes are designed and implemented by humans, such failures are really cases of humans failing to tell computers to do the right thing. Admitting that the system hasn’t worked would mean admitting personal failure, something humans are often reluctant to do. This tendency is known as externalisation, which in turn is a component of causal attribution.

The natural human response to something bad happening is to project blame onto somebody or something else. In cybersecurity, the initial reaction is to blame the technology or systems meant to prevent a breach, rather than somebody’s poor decision-making. The role of human error in data breaches is reflected in research reports across the board.

For example, the 2015 Defending Data Report shows that 93 per cent of respondents thought human behaviour was the biggest threat to their organisation’s security. The Experian Data Breach Industry Forecast echoes this sentiment, finding that employee negligence is the leading cause of security incidents, yet remains the least reported issue. As a result, it’s safe to say that cognitive biases in the human mind – such as externalisation – are at the heart of the cybersecurity battle.

The failure of the human system

So what exactly do we mean by cognitive biases, and how do they affect the war against cybercriminals? A cognitive bias is a limitation in the brain’s ability to process information well enough to make sound decisions. It is a systematic pattern of deviation from rational judgement, which can lead to perceptual distortion, inaccurate judgement, illogical interpretation or irrationality.

This phenomenon appears in other industries, such as manufacturing. The natural reaction of employees involved in an accident is often to blame the equipment, even though 88 per cent of accidents are in fact caused by unsafe acts.

Behaviour Based Safety (BBS) and many other manufacturing safety programmes address this bias in the human mind. These programmes apply the science of behaviour change to real-world problems: changing workers’ behaviour is the key to reducing the number and severity of workplace accidents.

BBS analyses what people do and why, then uses an intervention strategy to change that behaviour. These changes require buy-in from every employee, from the CEO to frontline workers. Much like BBS, a successful cybersecurity programme requires everyone to be on the same page, with broad support and acceptance. An effective security programme must be holistic, since security affects the whole business, not just IT.

The human solution

Given the impact human error has on cybersecurity incidents, what is the solution? Reducing the number of decisions a human has to make can minimise the number of mistakes, but we still need humans to analyse complex data and assess the findings. Technology can do 90 per cent of the legwork, freeing people to focus on pattern recognition during attack scenarios. This means a more efficient marriage of human intelligence and technology to spot malicious activity as it occurs. Rather than spending vast amounts of money on tools and technology, organisations should focus on training individuals to correlate the alerts they see with actual human activity.
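To make the idea of correlating alerts with human activity concrete, here is a minimal, purely illustrative sketch. The record shapes, names and one-hour window are assumptions for the example, not a real SIEM integration; in practice the alerts would come from monitoring tools and the activity records from sources such as authentication or badge logs.

```python
from datetime import datetime, timedelta

# Hypothetical alert and activity records (illustrative data only).
alerts = [
    {"user": "alice", "time": datetime(2017, 5, 2, 3, 14), "event": "bulk file download"},
    {"user": "bob", "time": datetime(2017, 5, 2, 10, 5), "event": "bulk file download"},
]
activity = [
    {"user": "bob", "time": datetime(2017, 5, 2, 10, 0), "action": "badge-in, main office"},
]

def needs_review(alert, activity, window=timedelta(hours=1)):
    """Flag an alert for human review when no corroborating activity by
    the same user falls within the given time window."""
    return not any(
        rec["user"] == alert["user"]
        and abs(rec["time"] - alert["time"]) <= window
        for rec in activity
    )

flagged = [a for a in alerts if needs_review(a, activity)]
# Bob's download coincides with his badge-in, so only the 03:14
# out-of-hours download with no corroborating activity is surfaced
# for a human analyst to investigate.
```

The point of the sketch is the division of labour the article describes: the code does the mechanical filtering, while the judgement about whether a flagged alert is genuinely malicious stays with a trained person.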

The cybersecurity industry has historically treated data breaches as a technology problem, but the evidence indicates that they are a human problem. The lessons of the manufacturing industry suggest that preventing breaches requires changing behaviour and reducing the opportunities for people to make mistakes. Doing so will leave cybersecurity programmes far better prepared and, in turn, more successful at preventing breaches.

Chris Pogue, Chief Information Security Officer, Nuix
