Cyber security AI is almost here – but where does that leave us humans?

Whatever the industry or application, it seems that any attempt at future-gazing around technology today will inevitably turn to the impact of artificial intelligence (AI). From automated customer support and targeted marketing to self-driving vehicles and even the field of war, AI has huge potential in almost every aspect of our professional and personal lives.

In the security industry, AI is being looked to as the ultimate answer to advanced cyber-attacks. This vision of security nirvana involves the arrival of a system so sophisticated that it can detect and shut down an attack before us mere humans are even aware there was a threat at all.

In many applications, the impending advent of AI has proven to be a controversial idea, with debate frequently turning to nightmarish outcomes like the malevolent and murderous Skynet or HAL 9000. Indeed, even before it became a modern sci-fi staple, the idea of a created helper running amok was deeply ingrained in human culture, dating back hundreds of years to tales like the Golem of Prague and The Sorcerer’s Apprentice.

Tesla and SpaceX founder Elon Musk even went as far as to state that rogue AI would be the most likely cause of World War 3, and helped found the OpenAI organisation to try to steer AI research towards outcomes that will benefit humanity.

The future of security?

In the security world however, AI has a very clear-cut potential for good. The industry is notoriously unbalanced, with the black hats getting to pick from thousands of vulnerabilities to launch their attacks, along with deploying an ever-increasing arsenal of tools to evade detection once they have breached a system. While they only have to be successful once, the white hats tasked with defending a system have to stop every attack, every time.

High-level attackers bring the resources, intelligence and motivation needed to see an attack through, and with the sheer number of attacks happening every day, victory eventually becomes impossible for the defenders.

The analytical speed and power of our dream security AI would be able to tip these scales at last, levelling the playing field for the security practitioners who currently have to constantly defend at scale against attackers who can pick a weak spot at their leisure. Instead, even the most well-planned and concealed attacks could be quickly found and defeated.

Of course, such a perfect security AI is some way off. Not only would this AI need to be a bona fide simulated mind that can pass the Turing Test, it would also need to be a fully trained cyber security professional, capable of replicating the decisions made by the most experienced security engineer, but on a vast scale.

Before we reach the brilliant AI seen in sci-fi, we need to go through some fairly dumb stages – although these still have huge value in themselves. Some truly astounding breakthroughs are happening all the time, with Google alone undertaking thousands of AI-based projects as we speak. When it matures as a technology, it will be one of the most significant developments in history, changing the human condition in ways similar to, and bigger than, flight, the Internet and Big Data.

How can AI help security professionals?

Technology develops in such an unpredictable way that such world-changing breakthroughs may happen next year, in the next five, or take a decade to really set in. However, it’s worth remembering that AI is not an all-or-nothing technology. While it continues to develop, the principles behind it are already being used every day in the form of machine learning.

Machine learning is often described as a computer that is able to learn without being explicitly programmed to do so. While lacking the self-actualised decision-making abilities of popular sci-fi AI, machine learning programmes are extremely valuable for their ability to handle vast amounts of data and identify patterns and trends.
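To make that concrete, a minimal sketch of the "identify patterns in data" idea is simple statistical anomaly detection: rather than being explicitly programmed with a rule for each attack, the program learns what normal looks like from the data itself and flags outliers. The login-count figures and the threshold below are invented purely for illustration, not drawn from any real tool.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return values lying more than `threshold` standard deviations
    from the mean of the series - a crude model of 'normal' behaviour."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > threshold]

# Daily login counts for one account: a sudden spike stands out
# statistically, with no hand-written rule describing the attack.
logins = [12, 9, 11, 10, 13, 8, 11, 10, 9, 12, 300]
print(flag_anomalies(logins))  # only the 300-login day is flagged
```

Real machine learning systems use far richer models than a z-score, but the principle is the same: the notion of "suspicious" is derived from the data rather than coded in advance.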

This capability has helped to address a challenge from the earliest days of forensic criminal investigations - Locard’s Exchange Principle. This is the idea that all crime scenes involve the criminal taking something away, but also leaving something behind, which forms the clues for forensic investigators to follow. In the modern world, the principle has become skewed by the vast amount of potential evidence involved in cybercrime, granting the criminals a huge advantage over the investigators.

Aided by a machine learning based analytical tool however, it is possible for security teams to catch up. The analytics tool can take care of the heavy lifting of sorting through the vast piles of digital evidence and then breaking them down into the key data points that require human attention. This not only gives the investigators a much better chance of tracking down crucial information about the attack, but also means they are free to focus on using their intuition and experience, rather than wasting time tediously crunching data.
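One hedged illustration of that "heavy lifting" is triage by rarity: routine events dominate the evidence pile, so surfacing the rarest event types first is a quick way to direct human attention. This is a toy sketch of the idea only; the event names are hypothetical and no real forensic product's logic is implied.

```python
from collections import Counter

def triage(events, top=3):
    """Rank distinct event types rarest-first, so investigators see the
    unusual entries before the routine noise."""
    counts = Counter(events)
    return sorted(set(events), key=lambda e: counts[e])[:top]

# A flood of routine telemetry hides a handful of interesting events.
log = (["heartbeat"] * 500 + ["login_ok"] * 120 +
       ["login_fail"] * 4 + ["new_admin_created"])
print(triage(log))  # -> ['new_admin_created', 'login_fail', 'login_ok']
```

The sorting step does the tedious crunching; deciding whether `new_admin_created` is legitimate still calls on the investigator's intuition and experience.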

What about human security workers?

The fact that machine learning and eventually AI can improve the experience of security practitioners as well as enhancing their capabilities is an important asset for the industry. Going back through the centuries, the advent of a significant new technology has usually led to social and economic upheaval as parts of the existing workforce are made redundant.

For security at least, AI will have the opposite impact, and actually help to create more jobs at the same time as enhancing existing ones. With existing practitioners now gifted with far more time to engage in analytical work rather than sifting through endless data, there is more capacity to bring in new employees. We’ll also see these new starters gain skills and experience more quickly than is currently possible. We have already seen machine learning begin this change, and the advent of fully-fledged artificial intelligence will accelerate things even more.

One of the main negative outcomes theorised around security AI is the fact that the black hats will also be able to use it to enhance the complexity and scope of their attacks. Indeed, we have already seen attackers implementing machine learning techniques to improve their attacks, particularly when it comes to creating more advanced bots. As the technology advances, many believe the ultimate outcome will be a scenario of pure machine vs machine, with attacking and defending AIs battling it out without human intervention.

However, we believe that human intelligence will always have a vital role to play in cyber security, no matter how advanced AI becomes. Our unique creativity and capacity to make intuitive leaps means we will always have an edge in spotting patterns and trends that the more direct approach of a machine mind is likely to miss. While artificial cyber defenders will certainly be transforming the security industry within the next few years, flesh-and-blood experts will always be at the centre of cyber security, no matter how far into the realms of sci-fi we travel.

Sam Curry, Chief Security Officer, Cybereason
Image Credit: John Williams RUS / Shutterstock