
Why does my business need to invest in back-up solutions?

According to a report by Veeam, the costs associated with downtime have soared from an average of $10 million two years ago to $16 million this year. The study also finds that downtime increased in 2015, with organisations experiencing an average of 15 unplanned downtime events during that year, up on 2014.

In today’s online world many businesses need to ensure that they are online 24 hours a day, 7 days a week. To make that possible they need to back up more frequently than they currently do. There’s no room for complacency, which often carries a financial and reputational cost as well as lost customers.

Data is, after all, the oil that makes the cogs of a company work efficiently, and without it even the best business intelligence solution won’t deliver results. So rather than procrastinating, executives should treat their data as their most valuable asset, and regular back-ups as the key insurance policy that prevents downtime from damaging their businesses and their market competitiveness. C-Suite executives should therefore invest in business and service continuity.

But why else is it in their interest to invest in back-up solutions? Well, C-Suite executives are normally also board members, and as such they have a fiduciary duty to their stockholders to protect the company. They need to recognise that few things are more disastrous today than data loss. Yet traditionally back-up and disaster recovery have been passed to the IT department, making them a function that’s been kept “out of sight and out of mind” of most senior executives.

That can’t happen today because the risks have changed. In the US, for example, the Department of Homeland Security and the Department of Justice reported that $24m was paid out by victims of ransomware.

Failover sites

Now these are the reported cases, but what about all the unreported ones? The true cost of ransomware is likely to be much higher. It is now a very profitable business, and it has far-reaching implications for Digital Data Recovery (DDR). Many larger organisations, and those with an online digital business, have business continuity sites that act as a failover – switching into action whenever another site goes offline due to either a human-caused or a natural disaster. The disaster recovery site takes over the workload with little or no disruption to the service.

These installations typically employ synchronous replication techniques to maintain data on both sites, and the data transfers are identical within a few hundred microseconds of each other. However, what happens when ransomware starts to corrupt data on one site? Within a few seconds both sites are brought to a standstill – then what? So yes, ransomware should be stopped at the front door, but shockingly the attackers will often find a way in by exploiting a company’s IT vulnerabilities.
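The mechanism is easy to see in miniature. The sketch below is a hypothetical illustration, not any vendor's replication code: because synchronous replication faithfully mirrors every write before acknowledging it, a ransomware overwrite on the primary lands on the secondary almost instantly. The class and data names are invented for the example.

```python
# Hypothetical sketch: synchronous replication copies *every* write,
# including a ransomware overwrite, so corruption on the primary site
# reaches the secondary before anyone can intervene.

class SyncPair:
    """A toy model of a synchronously replicated storage pair."""

    def __init__(self) -> None:
        self.primary: dict[str, bytes] = {}
        self.secondary: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        # Synchronous: the write only completes once both sites hold it.
        self.primary[key] = data
        self.secondary[key] = data


pair = SyncPair()
pair.write("orders.db", b"good data")
pair.write("orders.db", b"\x00ENCRYPTED\x00")  # the ransomware's overwrite

# Both sites now hold the corrupted copy - the failover site cannot help.
print(pair.primary["orders.db"] == pair.secondary["orders.db"])
```

This is exactly why an offline third copy matters: replication protects against site loss, not against bad writes.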

Data locations

How do we protect against this new and rising threat? There is an old adage that states: “If you haven’t got data in three locations, you haven’t got it.” The first two are your synchronous pair; the third is a copy with an “Air Gap” – data that is not online and can’t be touched in normal circumstances. So the C-Suite must be engaged in any DDR plan, because it should concern them at the highest levels of their organisation to ensure that their businesses can resume service with the least amount of disruption whenever their datacentres or IT systems go down.

Together they must assess the Time Gap Vulnerability (TGV) for their departments: the time between one “Air Gap” copy and the next. For example, an online trading company that handles 1000 orders an hour should consider the question: “What is the acceptable trading loss?” That might be 10 minutes or 30 minutes, but who makes this decision? I bet the C-Level person responsible for that business unit would want to have some input into the decision. However, it doesn’t stop there.
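The arithmetic behind that decision is simple enough to sketch. The example below is illustrative only, assuming the 1000-orders-an-hour figure from the text and a few candidate air-gap intervals; the function name is invented for the example.

```python
# Hypothetical sketch: quantifying Time Gap Vulnerability (TGV).
# If disaster strikes just before the next air-gap copy is taken,
# everything written since the last copy is at risk.

def orders_at_risk(orders_per_hour: int, gap_minutes: int) -> float:
    """Orders potentially lost in the worst case for a given air-gap interval."""
    return orders_per_hour * gap_minutes / 60


if __name__ == "__main__":
    rate = 1000  # orders per hour, as in the example above
    for gap in (10, 30, 60):
        print(f"{gap:>3}-minute gap -> up to {orders_at_risk(rate, gap):.0f} orders at risk")
```

Framing the interval as orders (or revenue) at risk is what turns a technical scheduling choice into the business decision the text argues it should be.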

Prioritising data

If your organisation suffers a major disaster, think about which function has the highest priority for recovery once the basic infrastructure is back. Who is managing your PR, and how, while you are out of service? If you’re not talking to social media, they are talking about you – this could be a meltdown before you know it. So no longer can the C-Suite pass the responsibility for data protection and restoration to the IT department – everyone at the highest level must be involved, and they owe it to their shareholders as well as to their customers and partners to get it right.

The problem is that many of the C-Suite can be unaware of the true impact of data loss, and yet data protection remains the Cinderella of IT budgets. The considerable improvements in the quality and reliability of modern systems can lull all levels of management into a false sense of security. With the ever-increasing pressure on IT budgets, I would hazard a guess that while nearly all companies claim to be committed to improving their data resilience, most are pushing it to the back of the agenda.

Company size

It’s also important to note that it is not the size of a company that drives the need for multiple datacentres; it is the needs of the business and its customers. Where a major environmental disaster creates a wide “circle of disruption”, companies providing a service to customers across a wide geographical area need a third datacentre. Why a third? Because Always On datacentres have to be in close proximity to each other to maintain data synchronicity, which places them within the same “circle of disruption”. A third datacentre therefore needs to sit outside the circle to provide business resilience.

Latency and packet loss

Yet driving data to and from that site has its own problems, because of the latency between the datacentres. To mitigate the effects of latency and prevent packet loss it’s wise to deploy data acceleration solutions, which reduce back-up and recovery times and allow access to the data whenever disaster strikes. Backing up will protect your organisation’s data, and with the advent of the cloud this is becoming an easier task.
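A rough calculation shows why distance alone can throttle back-up traffic. A single TCP stream can never move data faster than its window size divided by the round-trip time, regardless of how fat the pipe is. The window and RTT values below are illustrative assumptions, not measurements from any real link.

```python
# Hypothetical sketch: why latency limits long-distance back-up traffic.
# One TCP stream's throughput is bounded by window_size / round_trip_time.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on a single TCP stream's throughput, in megabits per second."""
    bits_per_window = window_bytes * 8
    rtt_seconds = rtt_ms / 1000
    return bits_per_window / rtt_seconds / 1_000_000


if __name__ == "__main__":
    window = 64 * 1024  # a classic 64 KiB TCP window
    for rtt in (1, 20, 80):  # ms: metro, regional, intercontinental distances
        print(f"RTT {rtt:>2} ms -> at most {max_throughput_mbps(window, rtt):.1f} Mb/s")
```

With a 64 KiB window, an 80 ms intercontinental round trip caps a single stream at roughly 6.5 Mb/s even on a 10 Gb/s link, which is the gap that data acceleration and parallelisation techniques aim to close.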

Larger organisations with customer-facing profiles need to think in terms of service continuity through failover capability, and business resilience through the use of remote disaster recovery facilities. Whilst these could use the ever-popular cloud, consideration should be given to the speed of access to and from the cloud, to whether this can meet the needs of the business, and to technologies that can improve the performance.

In all cases investment in back-up solutions is a must, whether your organisation is an SME or a global enterprise with a footprint in all corners of the world.

David Trossell, CEO and CTO of Back-Up Solutions and Data Acceleration company Bridgeworks


David Trossell
David Trossell is CEO and CTO of award-winning data acceleration company Bridgeworks, which has developed products such as PORTRockIT. Bridgeworks won the Data Centre ICT Networking Product of the Year category at both the DCS Awards 2018 and the DCS Awards 2017.