
In-memory computing will unlock 5G benefits


The global rollout of 5G wireless networks has begun. Recent technical developments in cloud, Internet of Things (IoT) and mobile devices have been held back by the inability of those systems to communicate in real time. 5G promises to relieve many of those constraints. And first impressions don’t disappoint.

5G improves on its predecessor 4G in speed, latency and scale. It promises download speeds of over 1Gbps, compared with just 20-30Mbps for 4G. The Global System for Mobile Communications Association (GSMA) has a target latency of just 1 millisecond for 5G - 50 times better than 4G. In addition, 5G can support several times as many connected devices. This translates into an explosion of data - and of opportunity.

5G is the network platform for the data-driven enterprise.

As with other new technologies, the deployment of 5G networks is just getting started. But with major carriers in key markets already having launched, the potential benefits are numerous. 5G gives cloud services access to a vast ecosystem of real-time data sources, including embedded sensors, IoT devices, telemetry data from applications deployed on mobile devices and much more.

IDC expects 1.01 billion endpoints to be on 5G networks by 2023 in areas including construction, manufacturing, production and logistics. Gartner predicts that by 2022 more than half of all enterprise data will be both created and processed outside of the traditional data center.

So how do we deal with all that data?

Edge computing is about placing compute power as close as possible to wherever data is being generated and the service is being rendered – either to filter or aggregate raw data and reduce the amount that must be transmitted or to run analytics on-site and get to important business insights faster.
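Concretely, the filter-and-aggregate pattern can be as simple as collapsing each window of raw sensor samples into one summary record before anything crosses the network. The sketch below is illustrative only - the `aggregate_window` function and the window size are assumptions for this example, not part of any particular edge product:

```python
import statistics
from collections import deque

def aggregate_window(readings, window=10):
    """Summarize each window of raw sensor readings into one record.

    Instead of transmitting every raw sample upstream, an edge node
    can send only a compact summary per window, cutting transmitted
    volume by roughly a factor of `window`.
    """
    buf = deque(maxlen=window)
    summaries = []
    for value in readings:
        buf.append(value)
        if len(buf) == window:
            summaries.append({
                "min": min(buf),
                "max": max(buf),
                "mean": statistics.fmean(buf),
            })
            buf.clear()
    return summaries

# 100 raw samples collapse to 10 summary records
raw = [20.0 + (i % 7) * 0.1 for i in range(100)]
summaries = aggregate_window(raw)
print(len(summaries))
```

The same structure works whether the "sensor" is a vibration probe on a drill head or telemetry from a mobile app; only the summary statistics change.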

The performance of data processing applications is measured in milliseconds. At that scale, latency can be the difference, on an oil rig, between sensors detecting the vibration and temperature signatures that precede a crack in a drill head - giving engineers time to take the drill offline - and that crack failing, with potentially catastrophic consequences.

In other words, defeating latency at the edge is essential. Latency results in lost revenue, increased costs, missed opportunities and sometimes even extreme danger.

Unfortunately, the latency-busting advances of 5G apply only to the latency between the antenna and the cell tower. The remaining challenge is the data processing latency on the edge compute nodes themselves.

This challenge can be substantial and is perhaps one of the reasons why, according to Cisco, 76 percent of IoT projects are rated failures - with the majority turning out to be more complex than originally expected.

Streaming engines and memory layers

Breakthroughs in edge computing are only possible with a data architecture that capitalizes on advances in both the software and memory layers.

Analytics and artificial intelligence (AI) can be challenging at the edge because computing power is often limited by physical space. Most edge sites do not have the space to support a hardware infrastructure made up of data center servers. A streaming engine is therefore essential for the ingestion, transformation, distribution and synchronization of data as it is created.

It’s also important that this engine has a streamlined code base and a footprint small enough to fit into a wide variety of hardware and endpoints.
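A sketch of what that ingest-transform-distribute flow looks like with a deliberately small footprint: Python generators chain the stages without buffering the whole stream. The stage names, event shape and threshold here are illustrative assumptions, not any vendor's API:

```python
def ingest(source):
    """Yield raw events as they arrive (an iterable stands in for a feed)."""
    for event in source:
        yield event

def transform(events, threshold):
    """Drop readings below the threshold; tag the rest for downstream systems."""
    for event in events:
        if event["value"] >= threshold:
            yield {**event, "alert": True}

def distribute(events, sinks):
    """Fan each surviving event out to every registered sink."""
    for event in events:
        for sink in sinks:
            sink(event)

received = []
stream = ingest([{"sensor": "s1", "value": 3}, {"sensor": "s1", "value": 9}])
distribute(transform(stream, threshold=5), [received.append])
print(received)  # only the reading that crossed the threshold
```

Because each stage is lazy, memory use stays proportional to one event at a time - the property that lets an engine like this fit on constrained edge hardware.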

System memory is the key to the processing and analysis of data on the edge. A set of networked, clustered nodes can pool their memory for applications to share data structures with other applications running in the cluster.
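The idea of a pooled, shared in-memory store can be sketched in miniature: below, several threads stand in for applications writing to one shared map, with a lock standing in for the grid's concurrency control. A real in-memory data grid partitions the map across networked nodes; this single-process version is an illustration under those stated simplifications, not a grid implementation:

```python
import threading

class SharedMap:
    """Thread-safe map standing in for one partition of an in-memory data grid."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

grid = SharedMap()

def writer(n):
    # Each "application" writes sensor readings into the shared structure.
    for i in range(n):
        grid.put(f"sensor-{i}", i * 1.5)

threads = [threading.Thread(target=writer, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(grid.get("sensor-99"))
```

Every writer sees the same data structure without copying it over a network - the core reason reads and writes against an in-memory grid stay in the sub-millisecond range.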

For example, British Gas currently operates the UK’s largest IoT network. Its Hive Network serves over 200,000 homes with a system that allows users to remotely control their heating and hot water temperature from their mobile device or on the web. Hive completes 20,000 writes per second in a 20-node Hazelcast IMDG cluster with plenty of spare capacity and an average latency of under a millisecond.

With an in-memory data grid, data no longer needs to cross the network for the remote processing that can delay response times. In-memory can deliver sub-millisecond response times with millions of complex transactions per second.

Add to that the ability to autonomously manage a wide range of distributed compute resources at scale - automating the secure deployment of AI, analytics and IoT workloads and delivering real-time analysis. The IBM Edge Ecosystem, for example, can support up to 10,000 devices simultaneously.

Real-time data processing at the heart of business innovation

Just as digital transformation is the first of many steps in the journey towards enterprise business agility, the next generation of business capabilities will be underpinned by real-time data processing.

Pattern matching, correlation analysis, statistical prediction and other techniques will find their uses in risk management, the elimination of waste, supply chain optimization, and in giving retailers far greater insight into customer needs than was previously possible. They will lie at the heart of innovation in our connected cities, autonomous vehicles, smart grids, healthcare and medical systems, wireless factories, telecommunications networks and so much more.

Think, for example, of how drones might be used in directing firefighting or in providing disaster relief.

Think of pretty much anything – faster.

Free from the constraints of latency

As companies integrate 5G with in-memory computing, they will build a platform for a new generation of technology. It will expand edge use cases such as IoT and mobile devices and enable far greater efficiency in the interpretation of big data. AI (fueled by advances in machine learning) will become indistinguishable from human interaction as systems listen, ‘learn’, process, and respond in real-time.

Freeing companies from the constraints of latency, through the under-realized but essential catalyst of in-memory computing, will enable rapid advances in both their product sets and business models. The challenge, then, will become how best to take advantage of them.

John DesJardins, VP and CTO North America, Hazelcast