Disaster Recovery: Don’t let it be an oversight

The dictionary defines a disaster as a sudden accident or a natural catastrophe that causes great damage or loss of life. With that in mind, it might seem extreme to apply the word to failures within the IT infrastructure.

After all, no one has died because of one. But that does not mean it cannot wipe out a business. According to the National Archives & Records Administration in Washington, '93 per cent of companies that lost their data centre for 10 days or more due to a disaster filed for bankruptcy within one year of the disaster'. Furthermore, '50 per cent of businesses that found themselves without data management for this same time period filed for bankruptcy immediately'.

No wonder so many organisations and businesses are implementing disaster recovery (DR) strategies to try to ensure they will survive an unforeseen emergency. Unfortunately, successful DR is difficult without the right tools. Too often, data recovery is viewed as simply restoring accidentally deleted files, but it is far more complicated than that. For instance, do you know where the data is, how critical it is and how it should be protected? What policies determine how different types of data are protected, how often they are backed up, how many copies are kept, how long they are retained and how quickly they need to be restored?
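Those policy questions can be made concrete. As a minimal sketch (the tier names, intervals and retention periods below are hypothetical, chosen purely for illustration), a protection policy per data tier might capture exactly the parameters listed above:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class ProtectionPolicy:
    """One policy per data tier: how it is protected and recovered."""
    tier: str                    # e.g. "mission-critical", "archive" (hypothetical tiers)
    backup_interval: timedelta   # how often backups run (drives the recovery point)
    copies: int                  # how many copies are kept
    retention: timedelta         # how long copies are kept
    restore_target: timedelta    # how quickly the data must be restored

# Illustrative values only -- real figures come from the business requirements
POLICIES = [
    ProtectionPolicy("mission-critical", timedelta(minutes=15), 3,
                     timedelta(days=90), timedelta(hours=1)),
    ProtectionPolicy("archive", timedelta(days=1), 2,
                     timedelta(days=2555), timedelta(days=1)),
]

def policy_for(tier: str) -> ProtectionPolicy:
    """Look up the policy governing a given data tier."""
    return next(p for p in POLICIES if p.tier == tier)
```

Writing the answers down in one structure like this is what lets a DR system enforce them automatically rather than relying on per-silo configuration.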

It’s complicated

Data recovery can be a complex process involving multiple storage arrays and data protection solutions. In many cases, data protection and recovery systems are managed based on disks or LUNs rather than applications or virtual machines. But companies need flexibility in data protection and recovery. With servers and applications moving from physical hosts to virtual hosts to the cloud, organisations need their data protected wherever it’s running.

Recovery is not just complex, it can be expensive too. In many cases, organisations are forced to use a number of different tools to recover files, disks, systems, or the entire site. Companies frequently purchase separate data recovery tools for SAN/NAS systems, physical server backup, virtualisation backup, cloud backup and more. Each of those tools has its own management interface and licence. If organisations could reduce the number of tools in their data recovery infrastructure, they could reduce their costs significantly. The optimum data recovery system with the lowest complexity and cost would work across the entire infrastructure.

There are varying degrees of recovery, so businesses need the flexibility to choose from a number of options: recovering back to the exact source if required, or recovering the contents of a physical server to a virtual machine, or vice versa. The process also has to be fully validated so that organisations are reassured that a recovery operation completed without errors.

Businesses should be able to protect physical and virtual environments with a single platform. They should be able to perform a recovery operation anytime, anywhere and from any device, and to access their data whether it sits on a physical host, a virtual host or in the cloud. Unfortunately, most DR systems are too complex and fragmented, relying on a siloed, array-by-array, site-by-site, application-by-application approach.

What can be done?

A more effective DR approach would provide a common methodology, eliminating silos in favour of common capabilities across the entire infrastructure, accessible via a single interface. One way to achieve this is software-defined storage (SDS), which creates an abstraction layer above the hardware and below the OS and application. Virtualising the underlying storage enables it to be managed and recovered through a single common interface.
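The idea of a common interface over heterogeneous storage can be sketched in a few lines. The class and method names below are invented for illustration (the article names no specific API); the point is that every backend, whether an array, a physical host or a cloud store, exposes the same snapshot and restore operations:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical common interface that every backend implements,
    so DR tooling never has to care what sits underneath."""

    @abstractmethod
    def snapshot(self, volume: str) -> str:
        """Take a point-in-time copy of a volume; return a snapshot id."""

    @abstractmethod
    def restore(self, volume: str, snapshot_id: str) -> None:
        """Roll a volume back to a previously taken snapshot."""

class InMemoryBackend(StorageBackend):
    """Toy backend standing in for a real array or cloud store."""

    def __init__(self) -> None:
        self.volumes: dict[str, str] = {}
        self.snaps: dict[str, tuple[str, str]] = {}
        self._counter = 0

    def write(self, volume: str, data: str) -> None:
        self.volumes[volume] = data

    def snapshot(self, volume: str) -> str:
        self._counter += 1
        snap_id = f"snap-{self._counter}"
        self.snaps[snap_id] = (volume, self.volumes[volume])
        return snap_id

    def restore(self, volume: str, snapshot_id: str) -> None:
        _, data = self.snaps[snapshot_id]
        self.volumes[volume] = data
```

With every backend behind the same interface, one orchestrator can drive snapshots and restores across the whole estate instead of one tool per silo.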

A common platform enables automation of the DR process and speeds up the ability to recover files, databases, systems and entire sites. It enables proactive centralised monitoring, analytics and configuration across heterogeneous storage infrastructures. Snapshots can significantly reduce or eliminate backup windows and recover data to any point in time and to dissimilar hardware. This makes it possible to convert from physical to virtual environments on the fly, or to convert between different virtual platforms. It can reduce recovery time to little more than a reboot: administrators mount the device and retrieve a single image or file.

Performing DR in the abstraction layer provides organisations with policy-based, audited, validated recovery that can be performed to and from physical, virtual, or mixed environments. It brings peace of mind that the data is protected and fully recoverable. The flexibility of the environment also provides organisations with bootable and mobile snapshots for instant recovery. They can test a disaster recovery plan without affecting business operations and protect appliances and data without hurting the performance of their servers or the SAN.

A storage abstraction layer can seamlessly take advantage of different storage media with different characteristics and price points. DR for modern scale-out solutions requires a different approach from legacy DR: because of the data volumes involved, many scale-out DR strategies revolve around replication, keeping the replicated data online on the cheapest media possible. An abstracted storage management environment can easily provide the needed functionality at the right price point, enabling a more agile DR strategy for scale-out data centres.
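The "cheapest media that still does the job" decision can be sketched as a simple selection over a media catalogue. The media names, costs and throughput figures below are hypothetical placeholders, not vendor data:

```python
# Hypothetical media catalogue: (name, cost per GB, restore throughput in MB/s)
MEDIA = [
    ("nvme-flash", 0.20, 2000),
    ("sas-disk", 0.05, 400),
    ("object-store", 0.01, 100),
]

def cheapest_media(min_throughput_mb_s: int) -> str:
    """Pick the cheapest media tier that still meets the restore-speed
    requirement implied by the recovery time objective."""
    eligible = [(cost, name) for name, cost, throughput in MEDIA
                if throughput >= min_throughput_mb_s]
    if not eligible:
        raise ValueError("no media tier meets the throughput requirement")
    return min(eligible)[1]
```

A replica that must restore at 300 MB/s would land on mid-priced disk, while one with a relaxed recovery target could sit on the cheapest object tier; the abstraction layer makes that placement a policy decision rather than a hardware one.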

In other words, virtualising the storage layer enables the abstraction of disaster from recovery.

Farid Yavari, Vice President of Technology at FalconStor