IoT: How to alleviate bandwidth challenges

Life is full of compromises, and the lack of bandwidth forces you to make some of the hardest ones.

According to IT journalist Ray Shaw, “extensible connectivity is the foundation for enabling a robust Internet of Things (IoT) platform.” Companies may need to gather data from, and physically control, things in real time. Those ‘things’ include equipment within a power plant, where sensor readings tell operators when the settings of a device need adjusting. He adds: “Such precise data analysis may require computer power closer to the edge of the convergence between the IT systems and devices to reduce the inherent latency introduced by sending information back to a datacentre or cloud environment.” To be analysed in an IoT environment, data commonly needs to be aggregated, and gateways are used at the edge to compile and filter it in order to reduce the impact of any potential bandwidth challenges.
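To make the gateway idea concrete, here is a minimal Python sketch of edge-side aggregation. The reading structure and the summary fields are illustrative assumptions, not any particular vendor’s API; the point is simply that many raw readings collapse into one small record per sensor before anything crosses the constrained link.

```python
# Illustrative sketch: an edge gateway that aggregates raw sensor readings into
# per-sensor summaries before sending them upstream over a constrained WAN link.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class Reading:
    sensor_id: str
    value: float

def aggregate(readings: List[Reading]) -> Dict[str, dict]:
    """Collapse a window of raw readings into one summary record per sensor."""
    by_sensor: Dict[str, List[float]] = {}
    for r in readings:
        by_sensor.setdefault(r.sensor_id, []).append(r.value)
    return {
        sensor_id: {
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "mean": round(mean(values), 3),
        }
        for sensor_id, values in by_sensor.items()
    }

if __name__ == "__main__":
    window = [Reading("turbine-1/temp", v) for v in (71.2, 71.9, 74.3, 72.0)]
    print(aggregate(window))  # many raw points become a single compact record
```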

But why is there a need to alleviate bandwidth challenges? If your IoT devices are trying to report more data over a low-bandwidth link than it is capable of carrying, then to ensure the most important data gets through you will have to filter out the less important data and compress what remains, in the hope that it then fits within the available bandwidth.
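As a rough illustration of that filter-and-compress step, the sketch below assumes each message carries a priority field and that the gateway has a fixed byte budget per reporting interval; both are assumptions made purely for the example.

```python
# Illustrative sketch: keep the highest-priority messages, drop the rest, and
# compress what remains so the payload fits a fixed per-interval byte budget.
import json
import zlib

def fit_to_budget(messages, budget_bytes):
    """Return a compressed payload of the most important messages that fits the budget."""
    kept = []
    for msg in sorted(messages, key=lambda m: m["priority"]):  # 0 = most important
        candidate = kept + [msg]
        if len(zlib.compress(json.dumps(candidate).encode())) > budget_bytes:
            break
        kept = candidate
    return zlib.compress(json.dumps(kept).encode())

messages = [
    {"priority": 0, "sensor": "boiler/pressure", "value": 182.4},
    {"priority": 2, "sensor": "boiler/casing-temp", "value": 41.0},
    {"priority": 1, "sensor": "boiler/flow", "value": 12.7},
]
payload = fit_to_budget(messages, budget_bytes=256)
print(len(payload), "bytes to send")
```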

IoT device growth 

If the pundits are to be believed, somewhere around 25 billion IoT devices will be connected by 2020 – a massive change for the internet. For one, we will have to move wholeheartedly to IPv6 for addressing, and we will have to add considerable capacity to the network. Current estimates put annual internet traffic at around 1.1 zettabytes; if we add all those IoT devices, assuming an average daily traffic flow of 20 KB per device, we quickly add roughly another 500 terabytes a day – well over 150 petabytes a year – and, in the same manner that we always underestimate the expected traffic on our road systems, this will probably be on the low side.
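As a back-of-the-envelope check on those figures (and nothing more – both inputs are the assumptions stated above), the arithmetic works out as follows:

```python
# Rough arithmetic for the figures above: 25 billion devices, ~20 KB per device per day.
devices = 25e9
bytes_per_device_per_day = 20 * 1024

daily = devices * bytes_per_device_per_day   # ~5.1e14 bytes
yearly = daily * 365

print(f"per day : {daily / 1e12:.0f} TB")    # ~512 TB per day
print(f"per year: {yearly / 1e15:.0f} PB")   # ~187 PB per year
```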

The DBTA writes: “According to Cisco's Internet Business Solutions Group, 50 billion devices will be connected by 2020, up from 2010's 12.5 billion. By 2020, data production will be 44 times greater than it was in 2009, and by 2020, more than one-third of the data produced will live in or pass through the cloud, according to Computer Sciences Corp.” This just shows how enormously the predictions vary – they can be as wrong as election pundits.

The obvious pitfall of this approach is the possibility of filtering out important data when you have many IoT devices. The problem going forward with IoT is that every deployment is different and requires a different approach to solving the network latency and bandwidth problems, so it is difficult to make sweeping statements about solutions. And with the expanding use of AI for data analysis, it is sometimes the seemingly insignificant data that gets filtered out which delivers the real savings, by revealing trends before they become problems.

Network investments 

That’s all well and good, but how should companies looking at IoT from a networking perspective invest to reduce the inherent network latency? Latency is always the killer of performance when moving data over the internet. That said, where IoT data payloads are small and delivery is not time-critical, many implementations can survive in a high-latency world.
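To see why latency rather than raw bandwidth so often sets the ceiling, remember that a single TCP stream can send at most one window of data per round trip. A quick illustrative calculation, assuming a common 64 KB window and a few representative round-trip times:

```python
# Why latency caps throughput: one TCP window per round trip.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024  # a common default window without window scaling
for rtt in (5, 50, 150):
    print(f"RTT {rtt:>3} ms -> ~{max_throughput_mbps(window, rtt):.1f} Mbit/s per stream")
# On a 150 ms intercontinental link the same stream manages only ~3.5 Mbit/s,
# no matter how big the underlying pipe is.
```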

However, this is not always the case, and the road transport industry offers a useful parallel. Rather than accepting deliveries from a series of small trucks arriving at random times, it is far more efficient to receive one large shipment at a known time: no trucks sit around wasting time, and deliveries can be planned for when the roads are quiet, improving delivery times and using less fuel. With ever more data crossing the internet, ensuring data delivery is going to be an increasingly difficult task. We either accept that we are not going to get all the data we want, or we approach the problem from another direction.

Keyword: Compromise 

In an ideal world, we would get all the data we wanted, all of the time, with no filtering or compromise. Many industrial IoT applications involve a mixture of small, critical real-time data and large chunks of non-critical status and auxiliary data. It is sometimes that auxiliary data, coupled with AI analysis, that gives early warning of problems further down the road. So how do we move this auxiliary data across the internet without resorting to filtering?

Just like the road system, the internet has its local quiet periods – some of us just have to get our eight hours. If we could marshal all the auxiliary data together in a data aggregator, compress it down and send it to the datacentre in one timed transfer – just like those big trucks – then getting all the data, auxiliary data included, becomes a possibility. However, there is always a spoiler, and that is latency. Even when the internet is “quiet”, latency has a dramatic effect on performance, extending the time taken to transmit the data and shrinking that “quiet” window.
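A minimal sketch of that “big truck” approach is shown below. The endpoint URL and the 02:00 quiet window are purely illustrative assumptions; the pattern is simply buffer, compress, then ship everything in one bulk transfer when the network is quiet.

```python
# Illustrative sketch: buffer non-critical auxiliary data all day, then compress
# it and ship it in a single transfer during an assumed local quiet window.
import gzip
import json
import time
from datetime import datetime
from urllib.request import Request, urlopen

BUFFER = []                                   # auxiliary records gathered during the day
QUIET_HOUR = 2                                # assumed quiet period (02:00-03:00 local)
ENDPOINT = "https://example.com/iot/bulk"     # hypothetical collection endpoint

def buffer_record(record: dict) -> None:
    BUFFER.append(record)

def quiet_window_transfer() -> None:
    """Compress everything buffered so far and send it as one bulk upload."""
    if not BUFFER:
        return
    payload = gzip.compress(json.dumps(BUFFER).encode())
    req = Request(ENDPOINT, data=payload,
                  headers={"Content-Encoding": "gzip",
                           "Content-Type": "application/json"})
    urlopen(req, timeout=60)
    BUFFER.clear()

while True:
    if datetime.now().hour == QUIET_HOUR:
        quiet_window_transfer()
    time.sleep(300)                           # check every five minutes
```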

Improving data flow

There are tools that have traditionally been employed to improve data flow across the internet, such as WAN optimisation, but with large daily data changes and compressed and/or encrypted data, these solutions become ineffective – particularly when large volumes of data need to be moved from one place to another. Even small amounts of IoT data from an ever-growing plethora of devices eventually become a mass of data that needs to be backed up and protected in case disaster of any kind strikes. This is particularly important because IoT data can be amassed to provide not just a real-time view of a series of events, but also a historical one.
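The following small experiment illustrates the point about compressed and encrypted payloads: repetitive text shrinks dramatically, but data that is already compressed – or encrypted, which looks random – barely shrinks at all, leaving compression-based optimisation techniques with little to work on.

```python
# Why compression-based WAN optimisation struggles with compressed or encrypted data.
import os
import zlib

text = b"sensor=turbine-1 temp=71.2 status=ok\n" * 10_000
already_compressed = zlib.compress(text)
random_like = os.urandom(len(text))           # stands in for an encrypted payload

for label, blob in [("plain text", text),
                    ("already compressed", already_compressed),
                    ("encrypted-looking", random_like)]:
    ratio = len(zlib.compress(blob)) / len(blob)
    print(f"{label:>20}: {ratio:.2f} of original size after compression")
```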

One of the challenges of moving data over distance for remote IoT deployments is that the network is a living, constantly changing entity, and configuring static options to improve performance means data transfer over the network will always be a heavy compromise. In the same way that AI has transformed data analysis by constantly learning and adjusting its decision-making, the same approach can now be applied to transferring data across the network with the new breed of WAN accelerators. With the need to move, back up and restore data fast, new-breed solutions such as PORTrockIT may be required to mitigate the effects of latency.
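As a toy illustration of that learn-and-adjust idea (and emphatically not a description of how PORTrockIT or any other product works), a transfer controller might continuously tune the number of parallel streams based on the throughput it actually measures, rather than relying on a static setting. The measurement function below is a stand-in invented for the example.

```python
# Toy illustration of continuous tuning: grow or shrink the number of parallel
# streams depending on whether measured throughput improved.
import random

def measure_throughput(streams: int) -> float:
    """Stand-in for a real measurement; a noisy curve that peaks at a middling stream count."""
    return streams * max(0.0, 10 - streams) + random.uniform(-2, 2)

streams, best = 1, 0.0
for step in range(20):
    observed = measure_throughput(streams)
    if observed > best:
        best = observed
        streams = min(streams + 1, 32)   # keep pushing while things improve
    else:
        streams = max(streams - 1, 1)    # back off when gains disappear
    print(f"step {step:>2}: {streams} streams, {observed:.1f} MB/s observed")
```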

AI and IoT

IBM’s Susanne Hupfer, Senior Consultant of Thought Leadership, Strategic Editorial and Creative, thinks that “Artificial Intelligence (AI) and IoT are shaping up to be a symbiotic pairing [because] AI doesn’t just depend upon large data inputs; it thrives upon them.” In her view, as new data types and scenarios are created, “cognitive systems evolve and improve over time, inferring new knowledge without being explicitly programmed to do so.” AI systems will be needed to make sense of the ever-larger volumes of IoT data – and the faster that data flows, the more real-time and accurate the analysis can be.

She cites research in which six in ten early adopters report that AI is going to be essential for tackling the data challenges that traditional analytics just can’t handle. She also notes that 53 per cent of respondents believe AI will unlock the hidden value in their organisation’s untapped data – data that has yet to be analysed. This makes IoT and AI a match made in heaven, and 85 per cent of early adopters say that “IoT will play an important role in their AI initiatives within two years.”

Yet for both to work effectively and efficiently together, they will often need to tackle the inherent network challenges posed by latency and packet loss. AI has another role to play here too – particularly as machine intelligence can be used to automatically manage the performance of a wide area network connection without human intervention. By eliminating the human element, costly mistakes can be avoided, allowing for more efficient data acceleration.

Bandwidth tips

At this juncture let’s remember that IoT isn’t just about consumer devices: it can be used on an industrial scale for a variety of purposes, and the data won’t always be sent and received via a wireless network. So the first of my five tips is to consider the network architecture over which the data is to be transmitted and received.

My second tip is that the solution won’t necessarily require you to invest in more bandwidth than is already at your disposal, because extra bandwidth may not improve your data acceleration and network performance. Your existing pipes and interconnected technologies may be sufficient from an investment perspective, albeit boosted by a data acceleration solution rather than a WAN optimisation one.

Thirdly, with the right solution, datacentres won’t need to be located dangerously close to each other, within the same circle of disruption, just to mitigate latency. Fourthly, look beyond the traditional large enterprise suppliers, as much of the innovation is coming from much smaller companies – including start-ups. Lastly, trial a solution before making a purchase, so you gain proof of concept that it will improve your data acceleration and alleviate bandwidth issues.

David Trossell, CEO and CTO, Bridgeworks
Image Credit: Melpomene / Shutterstock