
An AI cyber-security sauce to repel IT threats?

Since cybersecurity threats have become a topic of nightly newscasts, no one is shocked any longer by their scope and severity. What is shocking is the financial damage the attacks are predicted to cause as they reverberate throughout the economy. (I know how terrible this type of crime can be: I have myself been the victim of a data theft by hackers who stole my deceased father's medical files and ran up more than $300,000 in false charges. I am still disputing ongoing bills that have been accruing for the last 15 years.)

Cybersecurity Ventures predicts global annual cyber-crime costs will grow from $3 trillion in 2015 to $6 trillion annually by 2021. That figure includes damage and destruction of data, stolen money, lost productivity, theft of intellectual property and of personal and financial data, embezzlement and fraud. It doesn't even include post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm.

While traditional security filters like firewalls and reputation lists are good practice, they are no longer enough. Hackers increasingly bypass perimeter security, enabling cyber thieves to pose as authorised users with access to corporate networks for unlimited periods of time.

Insufficient detection and alert fatigue

Organisational threats manifest themselves through changing and complex signals that are difficult to detect with traditional signature-based and rule-based monitoring solutions. These threats include external attacks that evade perimeter defences and internal attacks by malicious insiders or negligent employees.  

Along with insufficient threat detection, traditional tools can contribute to "alert fatigue" by excessively warning about activities that may not be indicative of a real security incident. Skilled security analysts are required to identify and investigate these alerts, and there is already a shortage of such professionals. CIOs and CISOs need to pick up where those traditional security tools end and realise that it's the data that is ultimately at risk.

Cloud-deployed security shields need to be placed where the data resides, as opposed to merely monitoring data travelling across the network. Safeguarding the data is as important as, if not more imperative than, protecting the network or the perimeter. CIOs need to pick up where those traditional security tools end and investigate AI cyber-security digital safety nets. IDC forecasts global spending on cognitive systems will reach nearly $31.3 billion in 2019.

Some cyber-security sleuths deploy a variety of traps, such as identifying an offensive file with a threat-intelligence platform that uses signature-based detection and blacklists to scan a computer for known offenders. This identifies whether those types of files exist in the system, a process driven by human decisions. However, millions of files would need to be uploaded to cloud-based threat-intelligence platforms, and scanning a computer for all of them would slow the machine to a crawl or make it inoperable. Threats also develop so fast that these techniques cannot keep up with the bad guys. And why wait until you are hacked?

The mix of forensics and machine learning

Instead of signature and reputation-based detection methods, smart CSOs and CISOs are moving from post-incident to pre-incident threat intelligence. They are looking at artificial intelligence innovations that use machine learning algorithms to drive superior forensics results.  

In the past, humans had to look at large sets of data to try to distinguish the good characteristics from the bad ones. With machine learning, the computer is trained to find those differences, but much faster with multidimensional signatures that detect problems and examine patterns to identify anomalies that trigger a mitigation response.
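The idea of a learned "multidimensional signature" can be sketched in a few lines. This is a toy illustration, not any vendor's actual product: the feature names, training values and nearest-centroid classifier are all assumptions chosen for clarity.

```python
# Toy sketch: sessions are reduced to multidimensional feature vectors,
# and a model trained on labelled examples separates good from bad.
# Feature vector per session (illustrative): [logins/hour, MB downloaded,
# failed-auth count].

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

benign    = [[2, 10, 0], [3, 12, 1], [1, 8, 0]]      # labelled normal sessions
malicious = [[40, 900, 7], [55, 1200, 9], [35, 700, 5]]  # labelled attacks

good_sig, bad_sig = centroid(benign), centroid(malicious)  # the "signatures"

def classify(session):
    """Label a new session by whichever learned signature it sits closer to."""
    return "bad" if distance(session, bad_sig) < distance(session, good_sig) else "good"

print(classify([50, 1000, 8]))  # resembles the malicious training data
print(classify([2, 9, 0]))      # resembles the benign training data
```

A real system would use far richer features and a proper learning algorithm, but the shape is the same: humans no longer eyeball the data; the model generalises from examples and triggers a response when a new session lands near the "bad" signature.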

The good, the bad and the ugly

Machine learning generally works in two ways: supervised and unsupervised. With supervised learning, humans tell the machines which behaviours are good and bad (ugly), and the machines figure out the commonalities to develop multidimensional signatures. With unsupervised learning, the machines develop the algorithms without having the data labelled, so they analyse the clusters to figure out what's normal and what's an anomaly. The obvious approach is to implement an unsupervised, machine learning protective shield that delivers a defensive layer to fortify IT security.
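The unsupervised case can be sketched with nothing more than the standard library. Assume, hypothetically, a stream of per-hour event counts for one account: no human labels anything, and "normal" is inferred from the data itself, with outliers flagged by their distance from the learned baseline.

```python
import statistics

# Hypothetical unlabelled event counts per hour for one account.
# No good/bad labels exist -- the model infers "normal" from the data.
events = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 97]

mean = statistics.mean(events)
stdev = statistics.stdev(events)

def is_anomaly(x, threshold=2.5):
    """Flag points more than `threshold` standard deviations from the mean."""
    return abs(x - mean) / stdev > threshold

anomalies = [x for x in events if is_anomaly(x)]
print(anomalies)  # the burst of 97 events stands out from the cluster
```

Production systems replace this single z-score with clustering over many behavioural dimensions, but the principle is identical: the anomaly is defined by the data's own structure, not by a human-written rule.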

What is needed is a self-learning system with the flexibility to cast a rapidly scalable safety net across an organisation's information ecosystem, distributed or centralised, local or global, cloud or on-premises. Whether the data resides in a large health system, the ERP system of a large energy company or a financial institution, rogue users are identified instantly. By applying machine learning techniques across a diverse set of data sources, systems become increasingly intelligent as they absorb more and more relevant data. These systems can then help optimise the efficiency of security personnel, enabling organisations to identify threats more effectively. With multiple machine learning modules scrutinising security data, organisations can identify and connect otherwise unnoticeable, subtle security signals.

Security analysts of all experience levels can also be empowered with machine learning through pre-analysed context for investigations, making it easier for them to discover threats.  

This enables CISOs to proactively combat sophisticated attacks by accelerating detection efforts, reducing the time for investigation and response.    

The digital eye sees all

Once a machine learning system is in place, organisations need to identify solutions that employ behavioural analytics which will baseline normal behaviours and identify irregularities. While the technology is advanced, the concept is simple.  

A pattern of user behaviour is established and stored in the system. To adequately address the threat, CISOs should consider using solutions which are ambient, completely surrounding an intrusion while harnessing the cognitive nature of the machine-learning system. This combination creates an evolving "virtual intelligent eye" defence shield that provides real-time behaviour analysis and anomalous user access monitoring.

This type of solution provides an eye that learns, understands, recognises and remembers normal user habits and behaviour as they use applications in their daily work. The eye generates a digital “fingerprint” based on behaviour for every single login, by every user, in every single application and database across the organisation. 
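A behavioural "fingerprint" of this kind can be sketched as follows. The log records, field names and the simple "new app or new hour is suspicious" rule are illustrative assumptions, not the article's actual product logic.

```python
from collections import defaultdict

# Hypothetical login records: (user, application, hour-of-day).
history = [
    ("alice", "ehr",     9), ("alice", "ehr",    10), ("alice", "email", 11),
    ("alice", "ehr",     9), ("bob",   "billing", 8), ("bob",   "billing", 9),
]

# Learn a per-user "fingerprint": the applications and login hours
# previously observed for that user.
fingerprint = defaultdict(lambda: {"apps": set(), "hours": set()})
for user, app, hour in history:
    fingerprint[user]["apps"].add(app)
    fingerprint[user]["hours"].add(hour)

def is_suspicious(user, app, hour):
    """Flag a login if the application or the hour is new for this user."""
    known = fingerprint[user]
    return app not in known["apps"] or hour not in known["hours"]

print(is_suspicious("alice", "ehr", 9))      # matches her usual pattern
print(is_suspicious("alice", "billing", 3))  # unfamiliar app at an odd hour
```

A deployed system would score deviations probabilistically and keep updating each fingerprint, rather than using hard set membership, but the mechanism is the same: every login is compared against what that specific user normally does.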

If your organisation deploys this type of comprehensive cybersecurity system, a gloomy doomsday scenario offered up by many cybersecurity ventures will no longer be a concern.

Santosh Varughese, CIO, Cognetyx

Santosh Varughese is President of Cognetyx, the world’s first “Ambient Cognitive Cyber Surveillance” solution to safeguard medical information. Cognetyx uses advanced machine-learning artificial intelligence to detect rogue users.