
Getting ready for the cutting edge of cloud computing


Over the last few years, there has been increasing interest and speculation among industry experts about the potential of Edge Computing. With many expecting the "Edge Cloud" to overtake the "Hyperscale Cloud" in the near future, it's worth clarifying what this frequently misunderstood term actually means and whether the technology can live up to the industry hype.

In some instances, Edge Cloud simply means hyper-converged solutions deployed within large sites. This is the clearest-cut example of Edge Computing, and it is where the technology is predominantly being used today. However, more obscure use cases, such as small embedded appliances acting as IoT gateways or "mini-clouds" residing in 5G base stations, are also examples of true Edge Computing.

Another example that has received little notice is a new model built around a multi-core x86 Ubuntu Linux server and an associated NIC. Aside from being small enough to pass for an ordinary Ethernet access switch, the main advantage of this model is that it offers simplicity, low latency and programmability. With a direct 20G connection from the Ubuntu Linux server to the switch fabric, the platform can easily run multiple VMs or containers.
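As a rough illustration of the VMs-or-containers point, starting a containerised workload on this embedded server looks no different from doing so on any Ubuntu host. The sketch below uses the Docker SDK for Python; the tooling, image and port numbers are assumptions for illustration, not details from the model itself.

```python
# Toy illustration: the embedded server behaves like any Ubuntu host,
# so a containerised workload starts the usual way. Uses the Docker
# SDK for Python (pip install docker), an assumed tool choice.
import docker

def run_edge_container() -> str:
    client = docker.from_env()          # talk to the local Docker daemon
    container = client.containers.run(
        "nginx:alpine",                 # any OCI image would do
        detach=True,
        ports={"80/tcp": 8080},         # served from the switch itself
        name="edge-demo",               # hypothetical container name
    )
    return container.id

if __name__ == "__main__":
    print(f"Started container {run_edge_container()[:12]}")
```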

An additional advantage of this type of edge compute server is that it can execute network remediation applications, such as packet capture, and extend the capabilities of the network operating system. By sharing pre-existing subsystems, these edge solutions can take advantage of the power, cooling and internal connectivity already inside the switch. All of this can be done alongside traditional edge use cases, and deployed rapidly thanks to a simplified packaging and deployment model.
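A minimal sketch of such a remediation task follows, assuming the scapy library is installed on the in-switch Ubuntu server and that mirrored fabric traffic appears to Linux as eth0; both the library choice and the interface name are illustrative assumptions, not details from the article.

```python
# Minimal packet-capture sketch for an in-switch Ubuntu server.
# Assumes scapy is installed (pip install scapy) and that mirrored
# fabric traffic arrives on "eth0" (an illustrative interface name).
from scapy.all import sniff, wrpcap

CAPTURE_IFACE = "eth0"            # hypothetical mirrored interface
PCAP_FILE = "remediation.pcap"

def main() -> None:
    # Capture 1,000 packets, then write them to disk for offline
    # analysis by whatever remediation tooling the operator prefers.
    packets = sniff(iface=CAPTURE_IFACE, count=1000)
    wrpcap(PCAP_FILE, packets)
    print(f"Wrote {len(packets)} packets to {PCAP_FILE}")

if __name__ == "__main__":
    main()
```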

Edge clouds on the horizon

To illustrate the capabilities of this software, it helps to look at a few potential use cases. One network-oriented example is placing a firewall VNF, running as a VM on the internal server, and steering traffic from a predetermined set of ports through this NGFW application. Another is facilitating connections for third-party vendors with equipment located off-site; to achieve this, all the operator has to do is set aside ports N+3 through N+11 on an access switch. A final, simpler example is mirroring a few ports to the server for packet capture.
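The article does not describe how the firewall VNF is actually provisioned, so the following is only a sketch under assumptions: it uses the libvirt Python bindings to define and boot a hypothetical NGFW virtual machine, with the domain XML, image path and bridge name all invented for illustration.

```python
# Sketch: boot a firewall VNF as a KVM virtual machine on the
# in-switch server via the libvirt Python bindings (libvirt-python).
# The domain definition below is a placeholder, not a real product image.
import libvirt

VNF_XML = """
<domain type='kvm'>
  <name>ngfw-vnf</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/images/ngfw.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br-fabric'/>   <!-- hypothetical bridge to the 20G fabric link -->
    </interface>
  </devices>
</domain>
"""

def launch_vnf() -> None:
    conn = libvirt.open("qemu:///system")   # local hypervisor connection
    try:
        dom = conn.defineXML(VNF_XML)       # register the VNF domain
        dom.create()                        # boot it
        print(f"VNF '{dom.name()}' is running")
    finally:
        conn.close()

if __name__ == "__main__":
    launch_vnf()
```

Once the VNF is up, the switch side of the job is simply steering the chosen ports' traffic through the VM's bridge.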

Edge Computing applications have a broad range of potential uses, although they are frequently very time-sensitive. Examples include 5G MEC applications with latency requirements of around 5ms, or applications generating such large data volumes that shipping everything to the cloud is impractical. This is where pre-processing at the edge offers a distinct advantage; we have already seen it performed by IoT gateways such as AWS Greengrass serverless IoT applications or LoRa gateway applications.
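To make the pre-processing idea concrete, here is a minimal sketch of the pattern: reduce a flood of raw readings to a compact summary at the edge and ship only the summary upstream. The endpoint URL and payload format are invented for illustration and are not tied to Greengrass, LoRa or any product named above.

```python
# Sketch: pre-process high-volume sensor data at the edge so that only
# a compact summary crosses the WAN. Endpoint and payload are invented.
import json
import statistics
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"   # hypothetical cloud API

def summarise(readings: list[float]) -> dict:
    # Reduce many raw samples to a handful of statistics.
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "minimum": min(readings),
        "maximum": max(readings),
    }

def push_summary(readings: list[float]) -> None:
    body = json.dumps(summarise(readings)).encode()
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)   # the raw samples never leave the edge

if __name__ == "__main__":
    push_summary([20.1, 20.4, 19.8, 21.0])
```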

To demonstrate how this would work in practice, consider a video analytics application accelerated by an AI video analytics adapter connected to the CPU's PCI bus. In this use case, the video data can be pre-processed locally before the associated metadata is packaged up and sent to a cloud-based application.
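A sketch of that pipeline shape is below. OpenCV stands in for the capture path and a placeholder infer() function stands in for the accelerator-backed analytics, since the article names neither; only the small metadata records would travel to the cloud.

```python
# Sketch of the edge video pipeline: analyse frames locally and ship
# only metadata. OpenCV (cv2) and the infer() stub are assumptions;
# real inference would run on the PCI-attached AI accelerator.
import json
import cv2

def infer(frame) -> dict:
    # Placeholder for accelerator-backed analytics (e.g. detections).
    height, width = frame.shape[:2]
    return {"width": width, "height": height, "objects_detected": 0}

def process_stream(source: int = 0, max_frames: int = 10) -> list[dict]:
    cap = cv2.VideoCapture(source)     # local camera or stream feed
    metadata = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        metadata.append(infer(frame))  # the heavy lifting stays local
    cap.release()
    return metadata

if __name__ == "__main__":
    # Only this small JSON blob is sent onward to the cloud application.
    print(json.dumps(process_stream()))
```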

Virtualising success

A key advantage of this software is the true virtualisation capability of the switch's x86 CPU, with its own vNIC, running Ubuntu. This also benefits the operator managing the server, because it is functionally identical to any Linux server: it can run any application that fits within a predetermined storage budget, stays within the CPU's capabilities and requires no more than the available flash memory (128GB). In practice, this allows the majority of applications that run in VMs in the cloud to run at the edge instead, which has promising ramifications for the near future.

With true Edge Computing applications capable of running inside a switch, considerations such as orchestration and diagnostics also need to be taken into account. The simplest approach is a CLI command sequence that pushes the relevant application out to the Linux OS and runs it. More novel approaches include using a solution akin to an Ansible playbook, or running a small Kubernetes node on the server so that a connected set of such servers can cooperate dynamically.
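As a sketch of the simplest of those paths, pushing an application bundle to the in-switch Linux OS and starting it could be done over SSH. The paramiko library, hostname, paths and launch command below are all assumptions for illustration; an Ansible playbook or a Kubernetes node would be the more dynamic routes mentioned above.

```python
# Sketch: push an application to the in-switch Linux OS over SSH and
# start it. Uses paramiko (pip install paramiko); host, user, paths
# and the launch command are hypothetical.
import paramiko

EDGE_HOST = "edge-switch.example.com"   # hypothetical management address

def deploy(local_path: str, remote_path: str, run_cmd: str) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(EDGE_HOST, username="operator")   # keys from ~/.ssh
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)   # copy the application bundle
        sftp.close()
        client.exec_command(run_cmd)        # launch it on the edge server
    finally:
        client.close()

if __name__ == "__main__":
    deploy(
        "app.tar.gz",
        "/opt/edge/app.tar.gz",
        "tar -xzf /opt/edge/app.tar.gz -C /opt/edge && /opt/edge/run.sh",
    )
```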

Scalability is another consideration for Edge Computing, especially with the Razor's Edge model described above. If an Edge Computing application running on the server needs to expand, more CPU, memory and storage must be added. To show how the Razor's Edge handles this, let's complicate the example by saying the application also needs a video AI accelerator chip such as the Intel Movidius or the Google TPU. An easy solution is to attach a small form factor mini-server to one of the switch's 1G/2.5G/5G PoE Ethernet ports. Importantly, this provides instant capacity expansion via the mini-server without changing the deployment model.

To conclude, as the number of potential use cases grows, it makes increasing operational sense for organisations to move towards Edge Computing. The Razor's Edge model specifically enables computing at the edge in a way that earlier models have been unable to deliver. Its distinct advantages over other solutions are promising not only for Edge Computing but for network remediation and security too.

Eric Broockman, CTO, Extreme Networks