Building an edge computing strategy without leaving your network behind

(Image credit: Shutterstock/Toria)

Edge computing is on the minds of more executives these days as tech companies advertise their ability to offer "edge intelligence." What company wouldn't want more intelligence? Deciding whether an edge computing strategy is truly beneficial, however, means understanding which applications benefit from edge computing and how to architect the network to support edge services.

Start with the question "Why would I need to use edge compute?" Applications that are negatively impacted by latency and other network-related issues might benefit from edge computing, because it places high-performance compute, storage and network resources as close as possible to end users and devices. While improving application performance for end users is a key attribute of edge computing, there are other benefits as well:

  • Regulatory compliance – keeping data within certain geographic boundaries.
  • Security – to perform authentication and other security functions closer to end users.
  • Resilience – leveraging distributed cloud resources to reduce reliance on applications residing in only one or two cloud regions.

Don't misunderstand the intent of an edge strategy – it's not about replacing a cloud provider like Amazon or Microsoft; it's about augmenting those capabilities. When and how to augment the cloud will be based on a solid understanding of where best to have users and applications interact.

Which edge are we talking about?

Diving into edge computing can be a bit confusing at first. There has been a lot of discussion around Multi-access Edge Computing (MEC), a set of standards built with mobile networks in mind. Fog computing is a term that comes up frequently in relation to industrial IoT, but an edge computing strategy is not necessarily tied to an IoT implementation.

In both cases, you might conclude that you don't need an edge strategy when in fact you need a different edge, or two. The Linux Foundation offers a glossary of terms for guidance. The relevant terms enterprises should be familiar with are the infrastructure edge, the access edge and the aggregation edge.

Infrastructure edge: "Edge computing capability…which is deployed on the operator side of the last mile network. Compute, data storage and network resources positioned at the infrastructure edge allow for cloud-like capabilities."

The access edge and the aggregation edge are both sublayers of the infrastructure edge. The access edge is closest to the end user or device, while the aggregation edge sits another hop further away from the access edge. The aggregation edge might consist of a medium-scale data centre in a single location or of multiple interconnected micro data centres.

Networking the edge

The network is a commonly overlooked element of edge computing architectures. Whether your edge strategy involves moving existing applications closer to end users or creating all-new applications, an agile network is needed to bring the elements together.

The first step towards an edge-ready network is a flatter architecture that takes traffic from the enterprise's regional branches, partners and suppliers and aggregates it into regional hubs. These hubs are located in carrier-neutral, multi-tenant datacentres that offer interconnection points with cloud service providers. The hubs are connected through a variety of access networks (MPLS, broadband, wireless) but managed as a single logical network via SD-WAN technology. Creating overlay networks in software allows for easier configuration and adjustment of routes, as well as application-level optimisation.

In the terminology laid out by the Linux Foundation, this architecture of interconnected hubs would also be referred to as an aggregation edge.
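As a rough illustration of the overlay concept, the sketch below models the hubs as a weighted graph and picks the lowest-latency route between them. The hub names, links and latency figures are hypothetical, and a real SD-WAN controller would also weigh loss, jitter and per-application policy.

```python
# Minimal sketch: an SD-WAN-style overlay as a weighted graph of regional hubs.
# Hub names, links and latency figures are hypothetical, for illustration only.
import heapq

# Underlay links (MPLS, broadband, wireless) abstracted to one latency metric per hub pair.
overlay_links = {
    "branch-emea":     {"hub-frankfurt": 12},
    "hub-frankfurt":   {"branch-emea": 12, "hub-ashburn": 88, "cloud-onramp-eu": 4},
    "hub-ashburn":     {"hub-frankfurt": 88, "cloud-onramp-us": 3},
    "cloud-onramp-eu": {"hub-frankfurt": 4},
    "cloud-onramp-us": {"hub-ashburn": 3},
}

def best_path(src, dst):
    """Return (total_latency_ms, hop_list) for the lowest-latency overlay route."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, latency in overlay_links.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + latency, neighbour, path + [neighbour]))
    return None

print(best_path("branch-emea", "cloud-onramp-us"))
# (103, ['branch-emea', 'hub-frankfurt', 'hub-ashburn', 'cloud-onramp-us'])
```

The point is that the route lives in software: changing it is a matter of updating the link table and policy, not re-provisioning circuits.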

How to use the edge…now

Having distinguished these different layers and a method for connecting them, which one serves the goal of enhancing existing enterprise applications? In many cases, the aggregation edge will be the location of choice for enterprises. Some examples of ways to leverage the aggregation edge include:

  • Moving security elements closer to the source of an attack – deploying enterprise firewalls and access controls closer to the end user improves the user experience and reduces bandwidth demand on core WAN links.
  • Collecting user requests and using load balancing and application delivery controllers to direct them to the origin servers best able to serve them (a rough sketch follows this list).
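As a rough sketch of that second point, the snippet below shows the kind of decision an aggregation-edge load balancer or ADC makes. The origin names, health flags and latency figures are hypothetical, standing in for whatever health checks and metrics a real deployment would use.

```python
# Minimal sketch of aggregation-edge request steering (not any specific ADC product's API).
# Origin names, health flags and latency figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Origin:
    name: str
    healthy: bool
    latency_ms: float       # measured from this edge POP
    active_connections: int

def pick_origin(origins, max_connections=1000):
    """Prefer healthy, non-saturated origins; break ties on measured latency."""
    candidates = [o for o in origins
                  if o.healthy and o.active_connections < max_connections]
    if not candidates:
        raise RuntimeError("no healthy origin available; fail over or serve from cache")
    return min(candidates, key=lambda o: o.latency_ms)

origins = [
    Origin("origin-eu-1", healthy=True,  latency_ms=18.0, active_connections=420),
    Origin("origin-us-1", healthy=True,  latency_ms=95.0, active_connections=120),
    Origin("origin-eu-2", healthy=False, latency_ms=20.0, active_connections=0),
]
print(pick_origin(origins).name)   # origin-eu-1
```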

These and other functions, such as static object and content caching, are in some ways already provided by services such as content delivery networks (CDNs), which act as an aggregation layer for their customers.

While CDNs are offering more functions than ever in their edge points of presence (POPs), edge computing promises the ability to run one's own custom stateful applications in a protected space. Examples of more advanced edge services would include:

  • Accelerating delivery of dynamic content – using enterprise application logic in edge POPs to retrieve data and perform functions that quickly deliver a personalised user experience.
  • Compliance – segregating data to specific regions and performing functions in place rather than transporting data to non-compliant regions (a minimal sketch of this follows the list).
  • Performing data collection and analytics closer to the source of data for time-sensitive applications.
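To make the compliance point concrete, here is a minimal, hypothetical sketch of region-pinned processing at an edge POP: raw records are stored and processed inside an allowed region, and only derived summaries travel further. The region names and the storage callback are illustrative assumptions, not any particular platform's API.

```python
# Minimal sketch of region-pinned processing at an edge POP.
# Region names and the storage callback are hypothetical.

ALLOWED_REGIONS = {"eu-west", "eu-central"}    # where this class of data may live

def handle_record(record: dict, pop_region: str, store_in_region) -> dict:
    """Process and persist a record without letting it leave its allowed regions."""
    if pop_region not in ALLOWED_REGIONS:
        # Do not process or forward raw data in a non-compliant region.
        raise PermissionError(f"raw data may not be processed in {pop_region}")
    summary = {"user": record["user_id"], "score": len(record.get("events", []))}
    store_in_region(pop_region, record)        # persist in place, inside the boundary
    return summary                             # only the derived aggregate travels further

# Example use with a stand-in storage function:
stored = {}
print(handle_record({"user_id": "u1", "events": [1, 2, 3]}, "eu-west",
                    lambda region, rec: stored.setdefault(region, []).append(rec)))
```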

A use case for the 'next' edge: video surveillance

One enterprise use case is HD video surveillance of corporate offices and manufacturing sites. The aim of leveraging HD video is to identify unauthorised access to buildings or sensitive areas more quickly and accurately. AI and machine learning applied to video feeds are advancing significantly, and enterprises will want to take advantage of them. However, it is expensive to transmit the large volume of video data to a distant, centralised cloud data centre to perform these compute-intensive operations, and the problem multiplies as the number of sites to monitor increases.

One solution is to perform basic processing near the cameras themselves (on a campus, or in some cases on the device). Flagged sections of video can then be delivered to the aggregation edge, which has the compute and storage resources to compare images against a wide variety of sources and produce valuable insights in the form of improved detection of objects or people.
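A minimal sketch of that split is below, assuming greyscale frames arrive as NumPy arrays: a cheap frame-difference test runs near the camera, and only flagged frames are handed to a function standing in for delivery to the aggregation edge. The threshold and the upload callback are hypothetical.

```python
# Minimal sketch of camera-side filtering: flag frames locally, ship only flagged
# frames to the aggregation edge where the compute-heavy analysis would run.
# Frame format, threshold and the upload callback are hypothetical.
import numpy as np

def frame_changed(prev: np.ndarray, curr: np.ndarray, threshold: float = 12.0) -> bool:
    """Cheap on-site test: mean absolute pixel difference between consecutive frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.mean(diff)) > threshold

def filter_clips(frames, send_to_aggregation_edge):
    """Send only frames that show significant change; drop the rest locally."""
    prev = None
    for frame in frames:
        if prev is not None and frame_changed(prev, frame):
            send_to_aggregation_edge(frame)    # heavy object/person recognition runs there
        prev = frame

# Example with synthetic 8-bit greyscale frames:
rng = np.random.default_rng(0)
static = np.zeros((120, 160), dtype=np.uint8)
moving = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
filter_clips([static, static, moving], lambda f: print("flagged frame", f.shape))
```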

Implementing a software-controlled network gives enterprises better performance and security at the edge of the WAN: today's edge. The same strategy allows the overlay network to be extended in future to access and aggregation edge POPs in carrier-neutral datacentres. In other words, IT managers give the business maximum flexibility for both compute and network needs as end-user requirements for high performance, security and personalised experiences continue to grow.

Mark Casey, CEO, Apcela