
How WannaCry shows why we need to rethink information security


Perhaps the truly surprising aspect of WannaCry is that it did not happen sooner. Exploitation of a security flaw on this scale has long been on the cards, and this headline-grabbing piece of ransomware, which affected organisations around the world, has underlined the vulnerability of private and public sector networks.

Of course, the way WannaCry worked meant it was particularly effective: the malware reproduced itself like a worm, so it was able to scale rapidly, and we will probably continue to see its after-effects for some time. While the initial scare may already be old news, the worry is that organisations are still not protected against similar future threats, which, even if not as innovative as WannaCry, could still cause mass devastation.

Surely, these organisations have made substantial investments in information security solutions and strategies designed to protect users and data?  What about, for example, intrusion detection and prevention systems, multi-AV engines, anti-phishing, application control, deep packet inspection (DPI), URL filtering and APT protection?  Why aren’t they upgrading to the latest operating systems and patches to protect themselves better?

In reality, many organisations have taken their security responsibilities very seriously, but the traditional approach to security means updating and patching all of their systems, applications and devices, which is often simply not feasible. As a result, the efficacy of existing infosec investments is undermined.

Think about it: most professional workers will have access to at least one device, if not several: a desktop system, a laptop, a smartphone, smartwatches and other internet-enabled devices. As the Internet of Things (IoT) continues to grow, the range of endpoints that need protecting will probably keep on expanding. Extrapolate that across an organisation with hundreds or thousands of employees, plus all the operating systems and applications being run, and it is easy to see how updating and patching becomes such a monumental challenge.

There are other complications, in particular the fact that – somewhat ironically – security and compliance requirements often forbid modifications, preventing updates and thereby contributing to possible vulnerabilities. For instance, in a manufacturing firm, software-driven production equipment is mission-critical. For security reasons, its control systems often cannot be modified, which explains why so much legacy equipment runs on older software versions, even beyond their end of life (EOL).

That situation is not about to change any time soon, because those IT investments have a shelf-life of many years, even decades.   Similarly, in highly regulated markets such as automotive and healthcare, compliance requirements hinder any modifications to existing systems.

Beyond those barriers, the sheer cost and effort involved in updates and patching can be prohibitive, so critical security updates may not be implemented immediately, and sometimes not for a long while. Remember the SSL security flaw Heartbleed in 2014? Three years after it was made public, hundreds of thousands of systems connected to the Internet were still unpatched.

It is hardly surprising that many organisations follow the ‘never change a running system’ philosophy, because large-scale updates can lead to errors, performance problems or – in the worst case – bring the organisation to a halt. For instance, in 2014, updates to several versions of Windows led to a spate of ‘blue screen’ failures, with Microsoft asking users to manually remove the patches.

Similarly, in 2015, it was reported that a Windows 7 update was causing some computers to be stuck in a reboot loop. However, we are not singling out Microsoft here; pretty much every vendor – however solid and reliable – has experienced, or can expect to experience, problems. One week after the launch of iOS 8 in 2014, Apple released and then immediately withdrew its first update of the new operating system, iOS 8.0.1. Reports flooded in that the update was breaking cellular reception and other features, such as Touch ID, for some users. Apple removed the faulty update, but by that time many users had probably already gone through the installation process.

A new approach is needed

So, we have this conundrum: organisations want to avoid the risks, costs and time involved in patches and updates (assuming they are even able to apply them), but by not doing so, they leave themselves wide open to future threats. This is why we need to take a different approach to patching for security reasons. Of course, there is no such thing as absolute security, but protection is far more likely to be effective if it is centralised and overarching, applied universally across the entire enterprise, rather than attempting to protect every device separately.

One possible solution is a cloud-based approach, ideally led by internet providers and offered by them as an integral part of their service. That way, all customer traffic can be run through this cloud-based security layer, regardless of user devices, company operating systems or even their own security solutions. Threats are identified before they can reach the end-user organisations, so infection is halted and there is no need for enterprises to make system modifications at their end.

The key to this approach is the combination of several enterprise-grade security technologies that detect potentially unknown threats by monitoring suspicious data streams, using pre-configured security and filter policies. Suspicious items are isolated in sandboxes and analysed using an advanced algorithm engine before they are allowed anywhere near a customer’s device. Harking back to WannaCry, this technique detected the ransomware before it was passed through to users’ devices, preventing the initial infection.
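To make the idea concrete, here is a minimal sketch in Python of how such a cloud-side filtering pipeline might be structured. All names, policies and signatures here are hypothetical illustrations, not the actual product's implementation: traffic is first matched against pre-configured filter policies, and anything still suspicious is handed to a sandbox stage for a verdict before delivery.

```python
# Hypothetical sketch of a cloud-side filter-then-sandbox pipeline.
# Policies and signatures below are illustrative placeholders only.

BLOCKED_DOMAINS = {"known-malware.example"}        # pre-configured filter policy
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js"}    # crude heuristic for sandboxing

def sandbox_verdict(payload: bytes) -> str:
    """Stand-in for detonating a sample in an isolated sandbox.

    A real engine would execute the sample and observe its behaviour;
    here we simply flag a known byte signature for illustration.
    """
    return "malicious" if b"WNcry" in payload else "clean"

def filter_traffic(domain: str, filename: str, payload: bytes) -> str:
    """Return 'block', 'sandbox-block' or 'allow' for one download."""
    if domain in BLOCKED_DOMAINS:
        return "block"                             # policy hit: drop immediately
    if any(filename.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
        if sandbox_verdict(payload) == "malicious":
            return "sandbox-block"                 # caught before delivery
    return "allow"

print(filter_traffic("updates.example", "report.pdf", b"%PDF-1.4 ..."))
print(filter_traffic("updates.example", "invoice.exe", b"MZ...WNcry..."))
```

The design point the sketch captures is that both stages run in the provider's cloud, so the verdict is reached before any bytes ever arrive at the customer's endpoint.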

Being cloud-based, there is no impact on existing IT systems, no need to install additional software, and the approach can scale easily. There is no need to install software on every single user device, because the threat never gets that far. As well as providing real-time insight into the possible threat behind IP addresses, domains, hosts and associated files, this technique can also be applied to detecting malicious bots in IoT environments.

Of course, this approach to security puts the onus on telcos and ISPs, but another way of looking at it is the value such a service adds for their customers. It is one way in which these providers can differentiate themselves in an increasingly price-driven market, and security services could be offered to both business customers and consumers.

WannaCry was one of the worst cases of malware the world has seen so far, but it is unlikely to be the last. Given that traditional approaches to security patching – despite vast R&D and investment – are not stopping these threats in their tracks, it is time for a rethink: preventing them from reaching end-user devices in the first place.


Dennis Monner is CEO of Secucloud, a German-based specialist for cloud-based security. Previously, he founded the IT security manufacturer gateprotect, which was taken over by Rohde & Schwarz in 2014.