Organisations can’t afford to pay the price of data downtime


When people talk about downtime, they’re typically referring to those moments when you’re kicking back, relaxing and trying to tune out from the world. But for organisations that depend on a steady supply of data to serve their customers and maintain revenue streams, downtime is a far less attractive prospect.  

Downtime refers to a period during which an online system, computer or data is unavailable for use. Whether it’s Microsoft Office 365 being hit by outages, such as the one in April that affected all of Europe, or access to data becoming unavailable, downtime can strike in a number of different ways, and businesses need to be ready.

Protecting data in these instances goes beyond cyber security and protection from malicious actors. Of course, attacks like distributed denial of service work on the premise of forcing downtime on an organisation’s operations. But more often, downtime will be the result of a simple gap between user demand and what IT can deliver.  

Beyond the (not insignificant) user frustration that downtime causes, there are serious business consequences.

When we spoke with IT decision makers around the world last year, they told us that these gaps in availability are not only extremely common but that they’re costing considerable amounts of money too. In fact, we found that downtime for mission-critical applications costs organisations an average of $80,000 (about £60,000) per hour globally, and the average annual cost of downtime sits at $21.8m (more than £16m). Needless to say, there aren’t many companies out there that could shoulder these kinds of unplanned costs.
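As a rough, back-of-the-envelope illustration of how quickly those figures compound, the sketch below multiplies the survey’s average hourly cost by an outage duration. The $80,000-per-hour figure comes from the survey cited above; the outage durations themselves are hypothetical examples, not survey data.

```python
# Illustrative only: estimates the cost of unplanned downtime.
# The hourly figure is the survey average cited above; the outage
# durations are hypothetical examples chosen for illustration.

HOURLY_COST_USD = 80_000  # average cost per hour for mission-critical apps

def downtime_cost(hours: float, hourly_cost: float = HOURLY_COST_USD) -> float:
    """Return the estimated cost of an outage lasting `hours` hours."""
    return hours * hourly_cost

for outage_hours in (0.5, 4, 24):  # hypothetical outage durations
    print(f"{outage_hours:>5} h outage ~= ${downtime_cost(outage_hours):,.0f}")
```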

On top of all of this, with the General Data Protection Regulation (GDPR) and the Network and Information Systems (NIS) Directive now in force, the fines associated with cyber-attacks, breaches and network outages are nothing short of eye-watering. If data is lost and systems go down, some organisations could face fines under both pieces of legislation, effectively doubling the penalty.

In June 2018, the Government also introduced the new Minimum Cyber Security Standard, highlighting why data protection should never be an afterthought and why it isn’t worth the risk to a company’s operations. But is this enough to help organisations?

Businesses are playing catch-up

Downtime is not a problem that appears to be going away. Results from the same survey suggest that unexpected downtime incidents are on the up – there was a 36% year-on-year uptick in the twelve months to 2017. And beyond these costs there are broader implications for how unplanned downtime can affect digital transformation projects. The pressure to evolve business processes to meet the demands of an increasingly digital environment has never been greater, but a majority of the organisations we spoke with said that downtime is stifling their digital transformation initiatives.

As more businesses take steps to update their existing systems to adapt to digital-first operations, the risk (and potential cost) of unexpected downtime increases. Changing and updating systems can be a delicate procedure, but it is increasingly an essential one. Gartner predicted in 2014 that, by this year, as much as 82% of server workloads worldwide would be running on virtualised servers – a considerable jump from around 50% just four years ago. The direction of travel is clear, and it is reinforced by the need to mitigate the risks of unexpected downtime.

Yet research published by digital transformation software company Appian as recently as two years ago suggests that businesses are lagging behind: just 14% of companies have migrated their servers in line with digital transformation standards.

An ‘always on’ approach

The clearest conclusion to take from all of this is that data-driven businesses need to take a more concerted ‘always on’ approach to running their operations. Think about the reason you take a holiday, or the purpose of a weekend break – you’re looking to enjoy some downtime – and then play it in reverse.

That’s perhaps not the most stunning reveal, granted – but it’s one that, if ignored, could have considerable consequences. And not just in money terms; there are any number of critical services that depend on reliable data availability.

Landmark Information is a property, land and environmental data specialist, dedicated to helping organisations in the residential and commercial property industries streamline their operations and reduce risk. Data and technology are the lifeblood of Landmark’s business and are central to delivering the intelligence and solutions that enable its customers and clients to make informed commercial decisions.

As a Proptech business, Landmark provides a wealth of data and services to organisations in a wide array of sectors, from environmental specialists who rely on data and mapping for flood risk assessments and site analysis, through to property solicitors who use Landmark’s reports to provide environmental, property and location-based due diligence to their home-buying clients as part of the conveyancing process.

To ensure the availability of its data, Landmark deployed Veeam to protect its data, meet customer SLAs and save approximately 500 hours of resource annually. By working with Veeam, Landmark can ensure that no matter what happens, data is protected and remains available.
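Availability SLAs of this kind are typically expressed as a percentage of uptime, and it can be useful to see how quickly the permitted downtime shrinks as that percentage climbs. The sketch below converts an availability target into a yearly downtime budget; the SLA levels shown are common industry examples for illustration only, not figures from Landmark or Veeam.

```python
# Illustrative only: converts an availability SLA percentage into the
# maximum downtime it permits over a year. The SLA levels below are
# common industry examples, not figures from Landmark or Veeam.

HOURS_PER_YEAR = 24 * 365

def downtime_budget_hours(availability_pct: float) -> float:
    """Maximum hours of downtime per year allowed by an availability SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability allows about "
          f"{downtime_budget_hours(sla):.2f} hours of downtime a year")
```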

A price too heavy to pay

When seen in the context of events as potentially devastating as major flooding, downtime in data availability becomes less a matter of lost dollars and more a question of something far more serious. In these cases, the everyday value of data is put into stark perspective – but it shouldn’t take the threat of natural disaster to make IT managers properly consider the importance of their digital transformation plans.

If businesses want to commit to providing an ‘always on’ service to their customers, then they need to focus on the planning and implementation of their availability solutions and avoid the pitfalls of unplanned downtime. This ‘hyper-availability’ not only helps ensure data and applications are there whenever we need them; it also helps maintain reliability, reduce the cost of manual processes, ensure the continuous delivery of production IT services and satisfy compliance requirements. Only then can organisations remain confident that their business can keep running – even if the worst does occur.

Mark Adams, regional VP for the UK and Ireland at Veeam 

Image Credit: janeb13 / Pixabay