You are only as vulnerable as your last backup


Ransomware has made disaster recovery (DR) far more important than it used to be. The threat has become so prevalent – and so damaging to companies – that failing to prepare for it is nothing short of professional malpractice. That is especially true because preparation isn’t even that difficult, at least not with today’s technology. And when you consider that some companies are now suing their own employees over breaches, it should make you take notice. Fix things now, before your company suffers irreversible damage.

Ransomware may be losing headlines to banking trojans and cryptominers but – make no mistake – the threat is alive and well. The malware is so pervasive that would-be hackers can buy sophisticated, capable attack packages on the dark web and deploy them in a matter of hours. There are even ransomware-as-a-service vendors whose offerings can be deployed in a matter of minutes. The threat is so broad and ubiquitous that firewalls can’t fully keep up. By sheer force of numbers, some malware links will get through, and some users – no matter how well educated – will open them.

Modern enterprise cybersecurity always takes a layered approach. For example, only people with the right passes can enter a building. Firewall hardware protects network perimeters. Security software guards on-premises enterprise applications. This layered, “defence-in-depth” approach, when done right, reduces vulnerability to the onslaught of new and innovative ransomware, viruses, bots, and other threats.

But even if your security is truly world-class and stops 99.99 per cent of ransomware, that remaining 0.01 per cent is all it takes to cripple critical areas of company data. To ensure business continuity, leaders must assume hackers will ransom, erase or corrupt corporate data. It is not a matter of if, but when and to what extent. There is only one absolutely foolproof protection against ransomware: encrypted, offsite, time-indexed backups.

Annoyance or existential threat

When it comes to ransomware attacks, you are only as vulnerable as your last backup. No matter how sophisticated a hacker’s phishing scheme is, if your data is fully redundant, separate and secure, that hacker has wasted their time and money. A potentially devastating attack turns into… “Ah, I need to rewrite the last part of that one proposal I’ve been working on.”

In other words: an annoyance, not an existential threat.

If “you’re only as vulnerable as your last backup” seems simple, that’s because it should be – for you. Picture a single pane of glass that lets a user log in even in the midst of an attack, scroll through time-indexed files and clearly see which may have been corrupted by the malware. Then, with a few clicks, they can restore backups to every endpoint device that may have been affected. No ransom. No corrupted data. Limited downtime. In addition, any backups identified as corrupt can easily be erased. But achieving this level of simplicity means having incredibly sophisticated technology and protocols on the back end.
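As a rough illustration of that workflow, here is a minimal Python sketch of picking the last backup taken before a known infection time. The snapshot structure, the corruption flag and the data itself are hypothetical, invented for the example rather than taken from any vendor’s API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Snapshot:
    taken_at: datetime      # when this time-indexed backup was captured
    snapshot_id: str
    flagged_corrupt: bool   # e.g. marked suspect by anomaly detection

def last_clean_snapshot(snapshots, infection_time):
    """Return the newest snapshot taken before the infection that is not flagged."""
    candidates = [
        s for s in snapshots
        if s.taken_at < infection_time and not s.flagged_corrupt
    ]
    return max(candidates, key=lambda s: s.taken_at) if candidates else None

# Usage: restore every affected endpoint from the last known-good point in time.
snapshots = [
    Snapshot(datetime(2020, 3, 1, 2, 0), "snap-001", False),
    Snapshot(datetime(2020, 3, 2, 2, 0), "snap-002", False),
    Snapshot(datetime(2020, 3, 3, 2, 0), "snap-003", True),   # encrypted by ransomware
]
good = last_clean_snapshot(snapshots, datetime(2020, 3, 2, 14, 30))
print(good.snapshot_id if good else "no clean backup found")   # -> snap-002
```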

First, trust in the cloud. When it comes to scalability, security and capability, no other platform comes close – including your own data centre. AWS is the most vetted cloud provider of them all, offering a very high level of security combined with seamless, readily available and near-infinite scalability. Beyond the inherent security of such a trusted cloud provider, using an external backup service creates an important gap between your critical backups and any attack on your own servers.

Second, automate, and ensure regular backups don’t eat bandwidth. With backup and recovery services, it almost goes without saying: automation is key. But as applications become more intricate, their interdependencies become more complicated. It is not enough to simply automate backups; enterprises must also automate runbook execution and streamline core processes for rapid recovery after a large ransomware attack. Testing is also key, to ensure that the backups in place don’t just exist but can be readily transferred and deployed when needed.
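To make the interdependency point concrete, here is a toy Python sketch that executes recovery steps in dependency order, so that, say, a database is restored before the applications that rely on it. The step names and dependency graph are invented for illustration, and a real runbook would invoke actual restore jobs rather than print:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical recovery runbook: each step lists the steps it depends on.
runbook = {
    "restore_database":   set(),
    "restore_app_server": {"restore_database"},
    "restore_web_tier":   {"restore_app_server"},
    "verify_integrity":   {"restore_database", "restore_web_tier"},
}

def execute(step):
    # In a real runbook this would trigger a restore job; here we just log.
    print(f"running: {step}")

# Run every step only after everything it depends on has completed.
for step in TopologicalSorter(runbook).static_order():
    execute(step)
```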

Preparation

But even with automation, typical backup software can still eat bandwidth, significantly limiting how frequently uploads can occur. A well-written source deduplication system – one that deduplicates data before it is ever sent across the network – is critical. Instead of full backups, a source deduplication system sends only the new, unique blocks each time a backup runs. These efficiencies make the difference between automated backups measured in seconds, versus minutes or hours.
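A minimal sketch of the idea looks like the following. It assumes fixed-size blocks and a hypothetical upload_block network call for simplicity; production systems typically use variable-size, content-defined chunking:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks, an illustrative choice

def backup_file(path, known_hashes, upload_block):
    """Send only the blocks the backup service has never seen before."""
    manifest = []  # ordered list of block hashes that reconstructs the file
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in known_hashes:    # new, unique block: send it
                upload_block(digest, block)   # hypothetical network call
                known_hashes.add(digest)
            manifest.append(digest)           # duplicates cost only a hash entry
    return manifest
```

On the second and subsequent runs, unchanged blocks hash to values already in known_hashes, so only the handful of genuinely new blocks cross the wire – which is where the seconds-versus-hours difference comes from.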

The right mix of automation and source deduplication means backups can be scheduled as frequently as necessary. With a cloud-native, SaaS approach, data from all those endpoints, data centres and cloud workloads is transferred securely, stored in a local AWS region as an isolated tenant, and strongly encrypted in transit and at rest. In addition, an anomaly detection function can track unusual file deletions, modifications, encryptions and header changes, alerting IT to security threats and making it quick to find the most recent “safe” backup.
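As a rough sketch of how such a function might flag a ransomware pattern – the threshold and manifest format are illustrative assumptions, not a description of any particular product – compare successive backup manifests and alert on an unusual burst of deletions and modifications:

```python
def detect_anomaly(prev_manifest, curr_manifest, threshold=0.30):
    """Flag a backup if an unusually large share of files changed or vanished.

    Each manifest maps file path -> content hash from one backup run.
    """
    deleted = prev_manifest.keys() - curr_manifest.keys()
    modified = {
        path for path in prev_manifest.keys() & curr_manifest.keys()
        if prev_manifest[path] != curr_manifest[path]
    }
    churn = (len(deleted) + len(modified)) / max(len(prev_manifest), 1)
    return churn > threshold, churn

# Usage: mark the incoming snapshot suspect if over 30% of files churned.
suspect, churn = detect_anomaly(
    {"a.doc": "h1", "b.xls": "h2", "c.ppt": "h3"},
    {"a.doc": "h1", "b.xls": "hX", "c.ppt.locked": "hY"},  # mass encryption pattern
)
print(suspect, round(churn, 2))  # -> True 0.67
```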

It’s time to stop joking about those few servers that are still running an ancient version of Windows. It’s time to stop accepting that your backups are stored on a device running the same operating system that is being attacked – and sitting right next to the server that is being attacked. You need to start using basic data protection techniques like the 3-2-1 rule: three copies of your data, on two different media, one of which is stored off-site. It’s an old rule, but it’s still around for very good reasons. Prepare for ransomware before you become the next company that someone is writing about.
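As a toy illustration of what the 3-2-1 rule means in practice – the copy records are invented for the example – a compliance check reduces to three counts:

```python
def satisfies_3_2_1(copies):
    """copies: list of (media_type, offsite) tuples, one per copy of the data."""
    media_types = {media for media, _ in copies}
    offsite_copies = [1 for _, offsite in copies if offsite]
    return (
        len(copies) >= 3           # three copies of your data...
        and len(media_types) >= 2  # ...on two different media...
        and len(offsite_copies) >= 1  # ...one of which is stored off-site
    )

print(satisfies_3_2_1([("disk", False), ("tape", False), ("cloud", True)]))   # True
print(satisfies_3_2_1([("disk", False), ("disk", False), ("disk", False)]))   # False
```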

W. Curtis Preston, chief technologist, Druva