Preparing for AI cybercrime before it’s too late

Artificial Intelligence (AI) is currently used by IT professionals to manage cybersecurity threats, protecting organisations from ongoing cybercrime. However, this advantage won't last forever. With its ability to ingest large volumes of information and find clusters of similarity within it, it won't be long before AI is turned against us.

AI - a formidable enemy?

Another quality AI exhibits is an ability to mimic humans to a worryingly accurate degree. It can draw pictures, age photographs of people and, just recently, has been shown to impersonate human voices.

This means that AI could replicate human hacking tactics, which are currently the most damaging form of attack and the most time-consuming for hackers to carry out. Hacks performed by humans involve digging into systems, watching user behaviour and finding or installing backdoors, and they are much harder to detect than hacks performed by automated tools.

AI could be used to build an independent, patient, intelligent and targeted attacker that waits and watches: an automated APT, if you will. That would be far more difficult to defend against than automated 'splash' tactics, and it could be executed and industrialised on a very large scale.

AI cybercrimes – not now, but let’s prepare

The good news is that any such automated APTs will arrive slowly, because AI is complicated: the algorithms are not simple, they require data science expertise, and those skills are currently in short supply across the industry. If automated APTs do arrive, we are likely to see them built first by nation-states rather than hobbyists, and the first likely targets will be organisations of national interest.

A while ago there were hacks on Anthem, Premera and CareFirst, major healthcare providers in the US, all of which worked with large numbers of federal employees. Around the same time, Lockheed and the Office of Personnel Management, which handles security clearance background checks, were hacked, losing fingerprint and personal data for millions of people.

One theory about these hacks was that a nation state stole the data. It never turned up for sale on the dark web, so where did it end up? If that nation now possesses it, it has terabytes of healthcare, HR, federal background check and contractor data at its command. The sheer volume of that data would make relating one set to another very difficult and time-consuming if done by hand.

But an AI program could find clusters and patterns in the data set and use them to work out who could be a good target for a future attack. You could connect their families, their health problems, their usernames, their federal projects – there are lots of ways to use that information. Nation states steal data for a reason - they want to achieve something. So as AI matures, we could see far more highly-targeted attacks taking place.
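
To make that clustering idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. It groups similar records in a merged table of per-person features - the same basic technique that could let an attacker (or a defender modelling the threat) relate one stolen data set to another. All data and field names below are entirely made up for illustration.

```python
# Hypothetical sketch: clustering merged records to find groups of similar
# people. All data is synthetic; field names are illustrative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row: [clearance_level, num_health_conditions,
#            num_federal_projects, years_of_service]
records = np.array([
    [5, 0, 3, 12],
    [5, 2, 4, 15],
    [1, 1, 0, 2],
    [2, 0, 0, 3],
    [4, 3, 2, 20],
    [1, 0, 1, 1],
])

# Scale features so no single column dominates the distance metric.
X = StandardScaler().fit_transform(records)

# Group the records into clusters of similar profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for row, label in zip(records, labels):
    print(f"cluster {label}: {row}")
```

At this toy scale the clusters are obvious by eye; the point is that the same few lines work unchanged on millions of merged records, which is exactly what makes stolen data sets so much more dangerous in combination.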

AI phishing – faster and more effective

While AI-powered hacking is likely to begin life as the preserve of nation-states, it is only a matter of time before this sort of attack filters down to the broader criminal market.

At the moment, it is often easy to tell that an email is a phishing attempt from the way it is written: misspelled words and odd grammar give it away. AI could eliminate those tells. Say an AI can write better than 60 per cent of people, using colloquialisms and idiomatic phrasing - its messages would be very hard to spot. And even if AI is only 'as good' as humans, it can be much faster, and therefore more effective.

Phishing is one of the most lucrative forms of hacking - if AI can raise the success rate from 12 per cent to 15 per cent, say, with half the human effort, it could be well worth it for attackers. We haven't seen any truly malicious, AI-crafted spearphishing attempts yet, but phishing is likely to be a very effective first step for AI cybercrime.
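
A quick back-of-the-envelope check shows why those numbers matter. The figures below are the illustrative ones from the paragraph above, not real measurements:

```python
# Back-of-the-envelope economics of AI-assisted phishing, using the
# illustrative figures from the text (not real measurements).
human_success, human_effort = 0.12, 1.0  # baseline success rate, unit effort
ai_success, ai_effort = 0.15, 0.5        # slightly better rate, half the effort

human_yield = human_success / human_effort  # successes per unit of effort
ai_yield = ai_success / ai_effort

print(f"human: {human_yield:.2f} successes per unit effort")
print(f"AI:    {ai_yield:.2f} successes per unit effort")
print(f"improvement: {ai_yield / human_yield:.1f}x")  # 2.5x
```

A modest three-point bump in success rate, combined with halved effort, multiplies the return per unit of attacker effort by 2.5 - which is why even marginally better AI would be adopted quickly.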

Have the right people and tools to protect your organisation

An effective defence comes down to having the right people and the right tools in place. Organisations have been working for several years to solve the information-overload problem in cybersecurity, yet most security teams still struggle to weed out genuine data-theft incidents from the chaff.

Organisations have realised that recording how users and applications access data is a cybersecurity responsibility. Now security teams are feeling the pain of trying to make sense of that vast record. The most successful teams are using AI and machine learning to perform this analysis, meeting both the organisation's needs and any regulatory requirements.
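
As a minimal sketch of the kind of machine-learning triage described above, the Python snippet below uses scikit-learn's IsolationForest to flag outliers in data-access activity. It assumes access logs have already been aggregated into per-user daily counters; the features and numbers are hypothetical.

```python
# Minimal sketch of ML-based triage on data-access records. Assumes access
# logs are already aggregated into per-user daily counters; the feature
# names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [tables_touched, rows_read, after_hours_queries]
daily_access = np.array([
    [3, 1_200, 0],
    [4, 900, 1],
    [2, 1_500, 0],
    [3, 1_100, 0],
    [45, 2_000_000, 12],  # bulk read at odd hours: candidate data theft
])

# IsolationForest flags points that are easy to isolate, i.e. outliers.
model = IsolationForest(contamination=0.2, random_state=0).fit(daily_access)
flags = model.predict(daily_access)  # -1 = anomalous, 1 = normal

for row, flag in zip(daily_access, flags):
    status = "INVESTIGATE" if flag == -1 else "ok"
    print(f"{status}: {row}")
```

The value of this approach is that the model learns what "normal" looks like from the data itself, so analysts review a handful of flagged events rather than the entire access log.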

Companies should remember that not every attack can be prevented. The focus should shift to discovering where your critical resources are and what you can do to mitigate the risk to those resources specifically. If data is your most critical resource, what do you know about it?

Your databases are where your most valuable data resides, making them a prime target for hackers. It is therefore crucial for organisations to have visibility into their databases and files, and to have appropriate security around key applications.
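
Gaining that visibility starts with knowing which files actually contain sensitive values. Below is a toy Python sketch of the idea: it walks a directory tree and flags files matching simple patterns. The regexes and the path are illustrative only; real data-discovery tooling validates matches and covers far more data types.

```python
# Toy data-discovery sketch: walk a directory tree and flag files that
# appear to contain sensitive values. The regexes and path are
# illustrative; real discovery tools validate matches and cover far more.
import os
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # skip unreadable files
            hits = {label for label, rx in PATTERNS.items() if rx.search(text)}
            if hits:
                print(f"{path}: {sorted(hits)}")

scan("/data/shares")  # hypothetical file share to inventory
```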

If you have been breached, it is vital that you can tell a regulator exactly what was taken; otherwise, the breach could end up costing the organisation hundreds of millions. AI cybercrime is coming. Make sure you can protect your data by knowing where it is.

Terry Ray, Senior Vice President and Fellow, Imperva