Q: 2018 was a roller-coaster year for the tech industry – lots of big court cases and high-profile data privacy disagreements. What impact do you think this has had on the security industry?
A: Last year put the importance of trust front and centre for all businesses in all industries. Businesses can rise or fall based on trust—companies abusing their customers’ trust face millions or billions of dollars in regulatory fines and lost market value, as in the case of Facebook and Cambridge Analytica. But the intersection between end users and data is also the point of greatest vulnerability for an enterprise, and the primary source of the breaches driving cyber risk to all-time highs.
How can security professionals know whether an end-user login comes from an employee on coffee-shop WiFi or from an attacker abusing authorised credentials? How do they know whether a user identity is behaving consistently or erratically on the network compared with its established routine? Knowing and acting on the difference between an individual legitimately trying to get their job done and a compromised identity is the difference between innovation and intellectual property (IP) loss, between an organisation’s success and failure.
Q: Do you think that is going to become harder for the cybersecurity industry to identify with the introduction of new technologies like AI?
A: The buzz around cybersecurity AI is palpable. Over the past two years, the promise of machine learning and AI has enthralled marketers and media alike, with many falling victim to misconceptions about features and muddy product differentiation. Today, cybersecurity AI in the purest sense does not exist, and we predict it will not emerge through 2019. While AI is about reproducing cognition, today’s solutions are really machine learning, requiring humans to supply new training datasets and expert knowledge. These tools increase analyst efficiency, but the process still depends on human input—and high-quality input at that: if a machine is fed poor data, its results will be equally poor. Machines also need significant user feedback to fine-tune their monitoring; without it, analysts cannot draw new conclusions.
Q: So, AI is out – what about other hyped-up technologies, like IoT?
A: The industry is already very well aware of, and working on, the vulnerabilities created by the influx of consumer IoT devices to the market. This year, however, we think that the focus will shift to larger-scale attacks on industrial IoT devices by targeting the underlying cloud infrastructure. This target is more desirable for an attacker: access to the underlying systems of these multi-tenanted, multi-customer environments represents a much bigger payday.
Q: What’s the problem then? What makes it so attractive?
A: There are three issues at play: the increasing network connectivity of edge computing; the difficulty of securing devices as more compute moves out to the edge, into remote facilities and IoT devices; and the exponential number of devices connecting to the cloud for updates and maintenance.
As control systems continue to evolve, they will be patched, maintained, and managed via cloud service providers. These cloud service providers rely on shared infrastructure, platforms, and applications in order to deliver scalable services to IoT systems. The underlying components of the infrastructure may not offer strong enough isolation for a multi-tenant architecture or multi-customer applications, which can lead to shared technology vulnerabilities. In the case of industrial IoT, a compromise of back-end servers will inevitably cause widespread service outages and bring vital systems to a screeching halt. Manufacturing, energy production, and other vital sectors could be affected simultaneously. Organisations will need to move from visibility to control where the IT and OT networks converge to protect against these deliberate, targeted attacks on IIoT systems.
Q: You mentioned consumer IoT devices – is there still a risk posed by the number of devices connected and the information that we trust them with?
A: Absolutely – we put a huge amount of trust in our devices, storing everything on them from our banking details to pictures of our kids. Increasingly, we’re storing nearly our whole lives on our devices – a bold move considering that credential theft is the oldest (and most effective) trick in the book.
A number of approaches have been taken over the years to protect credentials. Two-factor authentication (2FA) adds an extra layer of security, but even this method has a vulnerability: the second factor is usually delivered via mobile phones, often by SMS, leaving it exposed to interception and SIM-swapping. Moving past 2FA, biometric authentication uses data unique to each end user. At first, verifying a person’s identity via physiological biometric sensors seemed like a promising alternative to 2FA. Fingerprints, movements, iris recognition—all of these make life difficult for attackers seeking to access resources by stealing someone else’s identity. But in recent years, even biometric authentication has begun to unravel.
Now, facial recognition has gone mainstream thanks to Apple’s release of its iPhone X, which uses a flood illuminator, an infrared camera, and a dot projector to measure faces in 3D, a method they claim cannot be fooled by photos, videos, or any other kind of 2D medium. But the reality is that facial recognition has serious vulnerabilities—and that is why we think hackers will steal the public’s faces in 2019.
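The one-time codes behind 2FA are typically generated with the standard TOTP algorithm (RFC 6238); the weakness described above lies in delivering codes over the phone network, not in the algorithm itself. A minimal Python sketch of the generator:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, digits=6, period=30):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: shared secret, base32-encoded (as in authenticator apps).
    t: Unix timestamp; defaults to the current time.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret (ASCII `12345678901234567890`, base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), `totp(secret, t=59, digits=8)` yields the published vector `94287082`. Because both ends derive the code locally from a shared secret, nothing needs to travel over the interceptable SMS channel.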
Q: There was a lot of focus on introducing regulations to protect data and privacy last year. What do you think will happen next? Are we entering a world where data protection lawsuits will become the norm?
A: Data protection regulations have bolstered an employee’s ability to claim foul when a data breach occurs in the workplace, especially when it results in the exposure of their personally identifiable information (PII). We believe that over the next 12 months we will see a court case where, after a data breach, an employee claims innocence and an employer claims deliberate action.
In the case of a breach, a courtroom win in which the employer proves negligence or bad intent by the employee is merely a Pyrrhic victory, since it publicly highlights the organisation’s deficient cybersecurity measures. Whether a judge rules in favour of the employer or the employee, executives will realise that the burden of proof in demonstrating adequate and appropriate technical and organisational security measures lies with their internal processes and systems. Organisations must identify malicious activity as it occurs and stop it before it harms critical systems and IP. They should also inject workplace-monitoring cybersecurity technologies into their IT environment to understand the full picture around an incident and prove end-user intent.
Q: So overall, what would be your advice for a cybersecurity professional in 2019?
A: Cybersecurity professionals know that specific attacks will change and evolve, but the themes remain the same: sensitive data is an attractive target for attackers. Threat actors, malware authors, the “bad guys”—call them what you will—keep inventing new methods to bypass the protections devised by the cybersecurity industry. Attackers and security analysts expend effort in a continuous cycle of breach, react, and circumvent—a true game of cat and mouse. We need to escape this game; by stepping back each year to examine trends and motivations, we can see the forest for the millions of trees.
The way to gain control is through behavioural modelling of users or, more specifically, of their digital identities. Understanding how a user acts on the network and within applications allows us to identify anomalies, understand intent, and establish trust. Behaviour might be deemed low risk, high risk, or undetermined. A deeper understanding of behaviour lets us be more confident in our determination of trust and risk. Instead of making a black-and-white decision, as traditional security approaches do, the cybersecurity response now and in the future can adapt as risk changes, without introducing business friction, allowing us to stop the bad and free the good.
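The low/high/undetermined buckets described above can be illustrated with a toy baseline model: score how far an observed behaviour (a login hour, say, or bytes transferred) deviates from a user's own history. This is an illustrative sketch of the idea, not Forcepoint's actual analytics; the thresholds are hypothetical.

```python
from statistics import mean, stdev


def risk_score(history, observed):
    """Deviation of an observed value from a user's baseline, as an
    absolute z-score: how many standard deviations from their norm."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma


def classify(score, low=1.0, high=3.0):
    """Map a deviation score onto low/undetermined/high risk buckets
    (threshold values are illustrative assumptions)."""
    if score < low:
        return "low risk"
    if score > high:
        return "high risk"
    return "undetermined"
```

For a user whose recent logins cluster around 09:00 (`history = [9, 9, 10, 8, 9, 10, 9]`), another 09:00 login scores as low risk, while a 03:00 login sits far outside the baseline and lands in the high-risk bucket—so the response can escalate only when risk does.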
Raffael Marty, VP Research and Intelligence, Forcepoint
Image source: Shutterstock/lolloj