Business costs: the consequences of network inadequacies

The performance of IT systems and our increasing reliance on data mean that inadequacies within a network can lead to expensive business costs.

This article examines Citrix-sponsored research by Tech Research Asia to discuss how these expenses can be minimised. The research highlights that poor network connectivity is costing Australian companies an average of 71 hours of lost productivity per employee, per year, and estimates that a company with 50 employees would face a total cost of $144,563 per year.
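
As a rough sanity check (a sketch only, assuming the reported figure is simply employees multiplied by hours lost multiplied by an hourly cost), the survey numbers imply a cost of roughly $40 for every hour of lost productivity:

    # Back-of-the-envelope check of the survey figures (illustrative assumption:
    # total cost = employees x hours lost per employee x cost per hour).
    hours_lost_per_employee = 71      # hours of lost productivity per employee, per year
    employees = 50                    # company size used in the research example
    total_annual_cost = 144_563       # reported total cost per year (AUD)

    implied_hourly_cost = total_annual_cost / (employees * hours_lost_per_employee)
    print(f"Implied cost per lost hour: ${implied_hourly_cost:.2f}")  # roughly $40.72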

Firms in New Zealand fared only slightly better, with an average productivity loss of NZ$66,399 per year for the same-sized enterprise. The study found that 23 per cent of outages affected Australian companies’ revenue, while in New Zealand, 47 per cent of the companies surveyed said that network issues affected 14 per cent of their revenue streams.

There are other parts of the world where the situation is far worse. In certain areas, network capacity is limited, latency is a serious problem and packet loss is substantial. Bandwidth is often still extremely expensive, and therefore a precious commodity, so companies need to make optimum use of the bandwidth they do have. Elsewhere, connectivity may be unreliable in some regions and perfectly dependable in others, but even reliable connectivity does not account for latency, nor often for the real extent of packet loss.

In short, organisations need to look at different ways of transferring data across the network, because traditional methods simply do not alleviate these problems, particularly when it comes to seriously large volumes of data.

Market changes

Telcos have historically been about selling pipes and capacity, such as MPLS. This has always been their focus, as they have not necessarily needed to concentrate on achieving optimum performance. Ten years ago, organisations were running small pipes, and the challenge was how to get as much data down a limited-bandwidth pipe as possible. The only way to achieve this was to compress and dedupe the data to give the illusion of performance. Yes, companies are now investing in reliable network connectivity, but these solutions did not take into account the challenge that latency presents, especially with large and encrypted volumes of data.

Now, a decade down the line, we have 10 Gb pipes and are starting to talk about 100 Gb bandwidth. The problem now is how to fill them. Instead of running at 20 per cent capacity, telco customers want to run at 98 per cent capacity to ensure that they are getting the value for which they are paying. Yet the inadequacy is not the pipe; it is how you send data over it. The key issue to address here is latency. If you are spending £100,000 ($123,671) a month on your bandwidth, you will want to ensure it is being fully utilised. The trouble is that many organisations are only using around 20 per cent of the bandwidth their telcos provide. It is in the telcos' interest to help their customers address this by advising them on how to optimise their bandwidth usage.
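
To put that in perspective, here is a small illustrative calculation (a sketch with example figures only, not taken from the research) of what 20 per cent utilisation means for a £100,000-a-month link:

    # Illustrative sketch: how much of a fixed monthly bandwidth bill is effectively
    # idle if only a fraction of the pipe is used. Figures mirror the example above.
    monthly_spend = 100_000        # GBP per month for the link
    utilisation = 0.20             # typical utilisation cited in the text
    target_utilisation = 0.98      # utilisation customers would like to reach

    idle_spend = monthly_spend * (1 - utilisation)
    headroom = target_utilisation / utilisation
    print(f"Spend on unused capacity: £{idle_spend:,.0f} per month")   # £80,000
    print(f"Data that could move at {target_utilisation:.0%} utilisation: {headroom:.1f}x more")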

Underperformance risks

According to an article in Computerworld which references Tech Research Asia’s research, the risks of underperforming networks include the following consequences:

  • Staff performance and collaboration – time to undertake activities increases and the ability to collaborate/innovate with colleagues is adversely impacted;
  • Data gathering and access – namely the inability to capture data, find data and access data for insights and analysis (and in some cases it can also bring about the loss of data – a potential minefield all of its own);
  • Customer engagement and interaction – lost sales revenue and the inability to contact or respond to staff in a timely manner.

The latter point is particularly crucial from a telecommunications company’s perspective. Customers that constantly suffer poor network performance are likely to jump ship, struggle on until the network breaks, or spend more than they should in an attempt to optimise their existing network with, perhaps, WAN optimisation tools. In fact, the study reveals that 50 per cent of the surveyed organisations said they would need either to upgrade or to optimise their network environment to deliver their short- to mid-term business imperatives.

The trouble is that customers who upgrade or WAN-optimise their network environment may not find the answer and the outcome they seek. Network inadequacies arise from distance and from the fact that, fundamentally, the world runs on TCP/IP (with a few exceptions); they manifest as latency and packet loss. People still believe that capacity solves latency, which is simply not the case. There is a common misunderstanding that you can solve your problems with a bigger pipe. If you have 60 ms of latency on a 1 Gb pipe, you will have 60 ms on a 10 Gb pipe. It is a law of physics; you cannot change it. With a 10 Gb pipe you have ten times the problem, because the data is still travelling at the same speed as it is on a 1 Gb pipe. The difference is that you have more capacity you are not using: dark space or, to put it crudely, a drain into which you are pouring your cash.
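
A simple way to see why the bigger pipe does not help: a single TCP stream can only have one window of data in flight per round trip, so its throughput is capped at window size divided by round-trip time, whatever the link capacity. The sketch below assumes a conventional 64 KB window and the 60 ms of latency mentioned above:

    # Minimal sketch: a single TCP stream can send at most one window of data per
    # round trip, so its throughput is capped at window / RTT regardless of link size.
    # Assumes a conventional 64 KB window with no window scaling.
    window_bytes = 64 * 1024          # assumed TCP window
    rtt_seconds = 0.060               # 60 ms of latency, as in the example above

    ceiling_bps = (window_bytes * 8) / rtt_seconds
    print(f"Per-stream ceiling: {ceiling_bps / 1e6:.1f} Mbit/s")       # ~8.7 Mbit/s

    for link_gbps in (1, 10):
        used = ceiling_bps / (link_gbps * 1e9)
        print(f"On a {link_gbps} Gb pipe that is {used:.2%} of capacity")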

WAN Optimisation 

What will solve your problem is looking at a different way to accelerate your data across the network, and that does not mean WAN optimisation. WAN optimisation is designed for small amounts of data over small pipes, using caching techniques. Apart from the technical difficulties, it is cost-prohibitive for the heavy lifting of the large amounts of data that telcos and their customers are transporting today. The other alternative we hear about is lift and shift: AWS’s ‘Snowmobile’, for example, or sticking your data on a UPS truck. That may just be viable for a one-off migration, but there are inherent risks in doing so, and it does not solve the problem of backing up, replicating and, more importantly, restoring your data in circumstances where you need quick access.

Data acceleration

You need to be able to restore the data very quickly when a disaster occurs; you cannot wait two days for it. So it is time for a fresh approach, and the good news is that solutions are already here. Telcos need to be a source of advice for infrastructure managers, and CIOs need to stop trying to put a round peg in a square hole. WAN optimisation is great for WAN-edge use cases, but for serious data movement, data acceleration is currently the only effective choice.

If telcos or other ICT vendors can help their customers find a way to use their pipes correctly, helping them achieve real ROI, then they may not be minimising the cost of the pipe, but they will be getting full value from it. Alternatively, customers may find that, once the pipe is used correctly, they have more bandwidth than they actually need. With data volumes growing, this creates an opportunity to future-proof infrastructure, in the sense that an upgrade may not be needed, which equates to a cost saving. Think in terms of the cloud: organisations not only want to get their data there, they also need to get it back quickly. Especially if Backup-as-a-Service or DRaaS is part of a company’s strategy, rapid restore of data is paramount.

Telling us to put it on trucks or deliver it back by courier will not work in these scenarios. Telcos need to offer solutions that enable organisations to get hold of their data fast, as and when they need it. By using data acceleration tools on the market, such as PORTrockIT, I know that if I have a 1 Gb pipe into the cloud, I will be able to get my data back at 100 MB per second, even if that data is in Iceland. Telcos need to keep it simple for their customers and deploy the latest solutions to help with the challenges their customers are facing. This way, everyone will be able to predict very accurately how long it will take to restore their data.

For example, suppose I have 1 TB of data that I need to replicate over my 1 Gb pipe with 40 ms of latency and 0.1 per cent packet loss. It would probably take me several days to move, but my Recovery Point Objective may state that I have only hours to achieve this. Using PORTrockIT, I know that this replication job will be complete in 2.77 hours on my existing bandwidth, well within my RPO. If you can reduce a DR restore for yourself or your customers from 12 hours to 45 minutes, you will be a hero. Being able to do this means having a solution that handles latency and packet loss effectively – perhaps by using machine intelligence.
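
For readers who want to check the orders of magnitude, the sketch below uses the widely cited Mathis approximation for single-stream TCP throughput under packet loss. It is an illustrative estimate of untreated TCP behaviour on that link, not a PORTrockIT benchmark:

    import math

    # Rough sketch of the 1 TB example: 1 Gb pipe, 40 ms RTT, 0.1% packet loss.
    # Uses the Mathis et al. approximation for single-stream TCP throughput
    # (throughput ~ MSS / (RTT * sqrt(loss))); figures are illustrative, not
    # vendor measurements.
    data_bits = 1e12 * 8              # 1 TB in bits
    link_bps = 1e9                    # 1 Gb pipe
    rtt = 0.040                       # 40 ms round-trip time
    loss = 0.001                      # 0.1% packet loss
    mss_bits = 1460 * 8               # typical TCP segment size (assumed)

    line_rate_hours = data_bits / link_bps / 3600
    tcp_bps = mss_bits / (rtt * math.sqrt(loss))
    tcp_days = data_bits / tcp_bps / 86400

    print(f"Best case at full line rate: {line_rate_hours:.1f} hours")                   # ~2.2 hours
    print(f"Single lossy TCP stream: {tcp_bps / 1e6:.1f} Mbit/s, ~{tcp_days:.0f} days")  # ~10 days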

Making sense

With increasing volatility in the world, the optimum method is to encrypt data when you are moving it, so that nobody can gain access to it. However, encrypting data normally has implications for the speed of transfer. It is therefore important that telcos offer their customers a solution that allows for data encryption, compression or de-duplication without penalising network performance or speed. This requires a data acceleration solution that makes it possible to replicate or back up huge volumes of data securely, whether to a datacentre, to the cloud or to a hybrid environment. This is exactly what telcos should be offering their customers to ensure that they gain real value from their networks.

Jamie Eykyn, Chairman of data acceleration company Bridgeworks

ABOUT THE AUTHOR

Jamie Eykyn is the Chairman of the data acceleration company Bridgeworks. Jamie is an investor in Bridgeworks, which uses machine intelligence to accelerate transfer speeds and reduce packet loss when moving large volumes of data across the WAN.