
The onset of Artificial Intelligence is a lot closer than you think

Artificial Intelligence is a term the public is increasingly familiar with.

After all, in 2003, a computer programme was causing mass mayhem. It was rapidly and successfully overwriting core programmes within a global network and almost achieved complete control – well, in the film The Matrix Reloaded at least!

Far-fetched and futuristic as that film seemed all those years ago, and whilst we are not yet seeing 'Agent Smith' levels of Artificial Intelligence or a post-apocalyptic future where machines rule humans, machines taking over networks are no longer a figment of the imagination.


Last December, ‘BlackEnergy’ malware targeted power companies in western Ukraine with great speed and precision, causing a blackout that affected more than 225,000 civilians. Before that, ‘BlackEnergy’ had overwritten file extensions within Ukrainian media companies, rendering their operating systems unbootable. In January this year, it was also detected on the IT network of Boryspil, Kiev's main airport, which included air traffic control systems.

These attacks characterise today’s danger. It is no longer just the classic scenario of information being stolen or websites being defaced, but the unseen attacks – enemies that creep in and change systems at will or install kill switches, quietly and carefully. These attackers use previously unseen and customised code, cross boundary defences only once and do not send out information. They may only be active for a few seconds a year but when commanded to act, they prove fatal. Despite the high stakes, many firms and CEOs have yet to tackle these kinds of attacks, which can seem almost impossible to fight back against.

Machines – 3, Humans – 0

The harsh realities of being human: unavailability, incapability, and fallibility


It is a reality that we are outnumbered. As cyberattacks increase in frequency and severity, organisations find it difficult to implement desired security projects due to a lack of staff and expertise. We are facing a global cybersecurity talent shortage that is only going to get worse. By 2020, there will be an estimated shortfall of 1.5 million information security workers worldwide.

Some firms have started cross-training IT workers and converting them into security specialists. Others are partnering with academic institutions to provide scholarships for students undertaking cyber security certification. These are definitely steps in the right direction. However, they lead us to the second reality – we can be easily outmanoeuvred and it’s not just about throwing more bodies at the problem.


Organisations have embraced digital transformation and will continue to do so. It would be impossible not to – it’s what customers want, from on-demand TV across multiple devices in their personal lives to the adoption of virtualisation and cloud at work, which is changing the way we work. The number of connected devices will hit 6.4 billion in 2016 and rise to 20.8 billion by 2020, with each connection representing a potential point of entry.

Given the proliferation of data in today’s online business environment, and the familiarity the majority of workers have with technology, it is not just unproductive but impossible for humans to sift through the vast amount of information and identify potential threats passing through networks in real-time.

As attackers increasingly obtain credentials from employees, customers, suppliers or contractors, and use these cloaks of legitimacy to exploit points of entry in ways that are difficult to predict, it’s unsurprising that humans are often blindsided or bypassed.


Herein lies the third reality – we are a part of the problem we are desperately trying to solve. The ‘BlackEnergy’ incidents traded on human curiosity. These spear-phishing attacks targeted specific individuals within the organisation and compelled them to open an email. That email contained either an attachment or a link to a website that appeared legitimate but resulted in the malware’s installation – evidence of an obvious lack of security education among employees across many different industries.

New breeds of machines doing the heavy lifting on behalf of humans

We cannot continue to rely on traditional approaches, which are destined to fail against today's new breed of cyberthreats.

‘BlackEnergy’ found its way into networks despite the presence of firewalls, anti-viruses, and sandboxes. These traditional tools failed because they attempted to pre-define the threat by writing rules based on previously known attacks. Hackers now preemptively change just enough of an attack’s code to elude established defences. They also use machine intelligence that can observe and learn how to behave as authentically as real devices, servers and users. For rule-based or signature-based approaches to work, what is required is a perfect archive of previous threats and complete clairvoyance to predict future threats. This is not achievable.
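The weakness described above can be illustrated with a minimal sketch (not any real vendor's engine): a signature-based scanner matches a file's hash against a blocklist of previously seen malware, so changing even a single byte of the payload produces a new hash and the scan passes. The payload strings and blocklist here are purely hypothetical.

```python
import hashlib

# Hypothetical blocklist of hashes of previously seen malware samples.
known_signatures = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_scan(payload: bytes) -> bool:
    """Return True only if the payload exactly matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in known_signatures

print(signature_scan(b"malicious payload v1"))   # known sample: caught
print(signature_scan(b"malicious payload v1!"))  # one byte changed: missed
```

This is why a "perfect archive of previous threats" is required for such approaches to work: the slightest mutation of an attack's code falls outside the archive.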

So what can we do?

We need to make machines work for us, in the same way they can work for the attacker. Using complex algorithms and a mathematical framework, unsupervised machine learning technology can process and make sense of today’s deluge of data, before making logical, probability-based decisions against cyberthreats on behalf of humans.

The technology, when applied, automatically studies a network’s so-called ‘pattern of life’ – everything from the devices that usually ‘talk’ to one another to what sort of data they normally transmit, to whom, and when. Once a baseline has been established, the programme acts as an ‘immune system’ of sorts, alerting systems administrators to behavioural irregularities, with each alert highlighting how serious a threat might be. This means that previously unidentified threats can be detected and countered, while the right person is alerted. This saves the organisation time – the machine can begin to fight back while the organisation decides how to deal with the breach.
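In the simplest terms, the 'pattern of life' idea can be sketched as learning what normal looks like and flagging deviations. The toy below (an illustrative simplification, not Darktrace's actual method; the traffic figures and the 3-sigma threshold are assumptions) builds a baseline from a device's hourly byte counts and flags readings that stray far from it.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a device's 'pattern of life' from observed hourly byte counts."""
    return mean(history), stdev(history)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Illustrative traffic history for one device, in bytes per hour.
history = [1200, 1350, 1100, 1250, 1300, 1280, 1190, 1220]
baseline = build_baseline(history)

print(is_anomalous(1260, baseline))    # within the pattern of life: False
print(is_anomalous(250000, baseline))  # exfiltration-scale spike: True
```

Crucially, nothing in this approach needs a signature of the attack in advance – the spike is suspicious simply because it is abnormal for that device.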

Since its introduction two years ago, unsupervised machine learning technology has addressed more than 5,500 serious and sophisticated attacks, including network breaches via Facebook, iMessage and storage devices used at work and at home, as well as network-connected coffee machines and biometric sensors.

Unsupervised machine learning could be the one thing that gives us a chance against advanced and automated adversaries.

Humans are still an essential part of the process – this should go without saying. But we need to work with the machines, not try to outwit them. By working together, the threat can be detected and dealt with through a human’s skillset and a machine's intelligence.

Dave Palmer, Director of Technology, Darktrace

Image Credit: Razum / Shutterstock

Dave Palmer
Dave Palmer, Director of Technology at Darktrace, has over ten years' experience at the forefront of government intelligence operations, working across GCHQ and MI5. At Darktrace, Dave oversees the mathematics and engineering teams.