
Harnessing the power of probability to improve cybersecurity job satisfaction


Today’s security analysts are under constant pressure and expected to be on their “A game” 24/7/365. They serve as the gatekeepers to the nation’s networks, faced with the overwhelming task of defending against relentless attempts to breach their organisations’ defences. Their hours are long, and the work is sometimes tedious, but security analysts play a vital role as an organisation’s most powerful defence against data breaches, ransomware, hacking and a host of other malicious acts. Their knowledge, expertise and diligence keep sensitive customer data, valuable intellectual property and coveted financial information safe.

While many were attracted to this profession with the promise of evolving into a seasoned detective, they frequently find themselves serving in a role akin to a virtual security guard – inundated with a constant influx of alerts day in and day out. These mundane tasks take their toll on analysts, lengthening their to-do lists and raising their stress levels. It’s no wonder that the average analyst’s job tenure is a mere 12 to 18 months.

To cope with the sheer volume of tasks thrown at them, analysts must meticulously manage their workload, striving for perfection to ensure no threat slips through the cracks.

Adopting this perfectionist persona has created a level of unease between analysts and machines, as analysts harbour concerns about relinquishing some control. Security analysts may question the legitimacy of incoming alerts or fear an automated solution may have missed something. This can lead to duplicative efforts as analysts double back to fact-check the work of automated security solutions.

Though analysts may feel they are doing their due diligence by going the extra mile to investigate these alerts, in reality they are shouldering a burden they can never reasonably carry.

The changing role of probability in cybersecurity

We all rely on probabilistic judgement more than we realise. Whether it’s predicting which sports team will win or what tomorrow’s weather will bring, every decision we make in daily life rests on an informal estimate of probability, shaped by historical knowledge, context, our current view of a situation and experience.

Security analysts apply the same practice in their day-to-day work. For years, analysts have relied on their own probability judgements to gauge the severity of a threat. Yet however knowledgeable an analyst may be, the threat landscape is evolving at a speed no human can keep up with. This means that even the most proficient security analyst may struggle to use their own judgement to accurately determine the severity of a threat and decide on the best course of action.

Having said this, probability does need to remain an important part of cybersecurity strategies moving forward. The difference is that security analysts must shift a portion of this manual analysis from themselves to intelligent security automation tools with probabilistic reasoning embedded at their core. 
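As an illustration of the probabilistic reasoning described above (this sketch is mine, not from the article, and every prior and likelihood in it is a hypothetical number), a Bayesian update over a single alert can be written in a few lines of Python:

```python
# Minimal sketch of probabilistic alert triage. All values are illustrative;
# a real tool would learn priors and likelihoods from its own environment.

def posterior_malicious(prior, p_obs_given_malicious, p_obs_given_benign):
    """Bayes' rule: P(malicious | observed evidence)."""
    evidence = (p_obs_given_malicious * prior
                + p_obs_given_benign * (1 - prior))
    return p_obs_given_malicious * prior / evidence

# Hypothetical numbers: 1 in 1,000 alerts is a true intrusion; the signature
# that fired catches 90% of intrusions but also fires on 5% of benign traffic.
p = posterior_malicious(prior=0.001,
                        p_obs_given_malicious=0.9,
                        p_obs_given_benign=0.05)
print(f"P(malicious | evidence) = {p:.4f}")
```

Even with a strong signature, the low base rate keeps the posterior under 2 per cent here, which is exactly the kind of calibration a human triaging hundreds of alerts an hour cannot perform reliably by gut feel.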

Combining human judgement with machines

So, how can a security analyst trust that the machine will make the right decision? The answer is Robotic Decision Automation (RDA)—the next evolution of Robotic Process Automation built upon human expertise and judgement. RDA combines expert judgement with self-adaptability (driven by artificial intelligence and machine learning) to support security analysts in making decisions more accurately and quickly.

RDA executes probabilistic decisions at a rate no human can match by blending human capability with technology. With machine learning built-in, RDA can tap into the knowledge of any environment. RDA gives frontline analysts a fuller view of what’s going on across the whole IT environment. It enables them to see each alert, not as a discrete event—or a piece of data streaming across a console—but instead as part of the story that’s taking place.

It can decide whether to escalate or simply monitor a problem based on what it has learned so far, adjusting to new inputs and context as monitoring continues. This helps security teams analyse data and make decisions in a fraction of the time it would take a team unaided by intelligent solutions, while also lowering the risk of human error.
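The escalate-or-monitor decision described above can be sketched as a sequential belief update: each new observation strengthens or weakens the case, and the alert is escalated only once confidence crosses a threshold. The threshold, starting belief and likelihood ratios below are hypothetical illustrations, not values from any real product:

```python
# Toy escalate-or-monitor loop: belief that an incident is real is updated
# with each observation and escalated past a confidence threshold.

def update(prior, lr):
    """Update a belief with a likelihood ratio lr = P(obs|malicious) / P(obs|benign)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

ESCALATE_AT = 0.95   # hypothetical confidence required before paging an analyst

belief = 0.01        # initial suspicion for this alert
# Each observation carries a likelihood ratio; values > 1 support "malicious".
for lr in [8.0, 12.0, 20.0]:   # e.g. odd port, known-bad hash, lateral movement
    belief = update(belief, lr)
    action = "escalate" if belief >= ESCALATE_AT else "monitor"
    print(f"belief={belief:.3f} -> {action}")
```

No single observation is decisive, but the accumulated evidence pushes the belief past the threshold by the third update; the machine carries that running tally for every alert simultaneously, which is what frees the analyst from doing it by hand.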

When RDA takes over the mundane aspects of the job, human security analysts can focus on the more interesting, complex, and “advanced” aspects of their role—spending a greater percentage of their time on higher-value tasks that tend to be varied and exciting. For any analyst, detecting real intrusions is rewarding and validating. When employees are able to do so regularly and often, they’re more likely to feel that they’re making a difference for their organisations and in their careers.

A win-win security scenario

For too long, the responsibilities placed on security teams have been unrealistic. Security analysts have tried to meet unreasonable expectations and need the support of intelligent security automation to free them for higher-value work. Because RDA can consider every one of the hundreds of thousands of daily IDS events, there are fewer gaps in coverage, and significant anomalies are far less likely to be missed.

RDA can transform the day-to-day life of a security analyst, automating frontline security operations and providing analysts with an opportunity to focus their time on other, more investigative work that allows them to make the most of their innate creativity, curiosity and skill. In addition, RDA can help analysts develop advanced, specialised analytical skills faster and grow into roles such as threat hunter, forensic analyst or malware expert. This is a win for the analyst and the organisation alike, creating both a better work environment and a safer network.

Chris Calvert, Co-Founder, Respond Software

Chris is Co-Founder and VP of Product Strategy at Respond Software.