Is networking becoming cool again?

About 20 years ago there was a big wave of networking innovation, spurred by the Internet age. It gave birth to huge networking companies like Cisco, Juniper and Ericsson. Then the pace of innovation stagnated. However, the recent acquisition of Viptela by Cisco for $610M, and the even more recent acquisition of VeloCloud by VMware at an estimated (whopping) $1.4B, have changed the game: they suggest that virtualisation is setting the networking market on fire again.

While data centres were undergoing their revolutionary transformation to cloud, the networking industry largely sat out this evolution.

Which raises the question: why?

From the internet age to the cloud

Around 2011, a second wave of major transformation began to reshape the networking industry, marked by acquisitions such as Cisco's purchase of Tail-f and VMware's acquisition of Nicira, leading to a world where the service and configuration model took centre stage.

Networking, unlike compute or storage which map into a single box, is much more complex: it comprises many services (also known as functions), beginning with the iptables rules, Wi-Fi and network cards in your operating system, and continuing into corporate services such as firewalls, load balancers, routers, DPIs, VPNs and many more.

So virtualising the network essentially requires virtualising all of those functions, making the task significantly more complex, and slower to happen.

On top of these complexities, network services are extremely sensitive to performance. Time was needed for virtualisation technologies and networking hardware to evolve and deliver acceptable performance through specific hardware acceleration such as CPU pinning, SR-IOV and the like. This kind of acceleration also had to be supported by cloud providers and their cloud platforms, which took still more time.
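CPU pinning, for example, simply dedicates specific cores to a performance-sensitive workload so the scheduler never moves it. A minimal sketch of the idea in Python, using the Linux-only `os.sched_setaffinity` (the core choice is illustrative, and the affinity is restored afterwards purely for demonstration):

```python
import os

# A minimal sketch of CPU pinning: restrict a process to specific cores so a
# performance-sensitive workload (think a packet-processing VNF thread) is
# never rescheduled across cores, avoiding cache churn and latency jitter.
# os.sched_setaffinity is Linux-only, hence the guard.
def pin_to_cores(cores):
    if not hasattr(os, "sched_setaffinity"):
        return None  # platform without affinity control (e.g. macOS)
    original = os.sched_getaffinity(0)  # remember the current core set
    os.sched_setaffinity(0, cores)      # pin this process
    pinned = os.sched_getaffinity(0)
    os.sched_setaffinity(0, original)   # restore (demo only)
    return pinned

if hasattr(os, "sched_getaffinity"):
    core = min(os.sched_getaffinity(0))  # pick a core we are allowed to use
    print(pin_to_cores({core}))
```

In a real NFV deployment this is done at the hypervisor or orchestrator level (e.g. dedicating host cores to a VM's vCPUs), not inside the workload itself.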

So... what’s changed?

As with most major transformations, several factors had to converge to move the industry through this evolution. In this case, the following factors accelerated network virtualisation:

• Competition from web-scale companies—Google, Facebook and Amazon Web Services built their own networking infrastructure on commodity hardware and software-driven networking. This was a critical catalyst, proving that network virtualisation can be delivered at massive scale while still delivering the necessary performance through software-driven network services and commodity hardware. Proof can be seen in the projects and case studies presented at the Networking @Scale 2017 event, which showcased many of the open networking projects developed by web-scale companies.

• Cost pressure on carriers—It's no secret that carriers face huge cost pressure. While demand to expand their network bandwidth keeps growing, they have no way to monetise that investment. Therefore, they are forced to reduce their cost of operations as an immediate mitigation plan.

• Business disruption—Netflix, WhatsApp and Skype provide over-the-top network services at a fraction of the cost, disrupting carriers' core revenue sources.

• Technology maturity—Virtualisation technologies, coupled with hardware accelerators, now make it possible to run network services with high performance and predictable latency, just as one would with dedicated hardware.

• Market demand—Manual network management worked in a centralised and relatively static network environment. The move to cloud and multiple data centres leads to a far more dynamic network environment, where manual practices become unmanageable and costly.

• Emerging startups—New entrants have played an important role in this transformation. Unlike incumbents like Cisco and Juniper, startups play the disruption game, bringing new, software-only approaches built for the post-cloud era.

The network virtualisation and orchestration market 

Various reports expect the network virtualisation market to hit its hockey-stick inflection in 2020, with $12.4B in NFV software, while edge computing is expected to drive faster and bigger growth, reaching $19.4B by 2023.

The orchestration market is expected to grow at an even faster rate, according to the Cloud Orchestration Forecast and Multi-Cloud Management Forecast by MarketsandMarkets:

“The cloud orchestration market is estimated to grow from $4,950.5 million in 2016 to $14,172.5 million by 2021, at a CAGR of 23.4 per cent. The key forces driving the cloud orchestration market include growing demand for optimum resources utilisation, increasing the need for self-service provisioning, and flexibility, agility, and cost-efficiency.

The multi-cloud management market size is expected to grow from $1,169.5 million in 2017 to $4,492.7 million by 2022, at a CAGR of 30.9 per cent.”

It’s also interesting to see the strong correlation between the multi-cloud market and the telecommunication business, which is expected to be the biggest vertical in that segment:

“Telecommunications and ITES is one of the most significant verticals in the multi-cloud management market. Multi-cloud services and solutions are used in this vertical for various on-demand services, depending on the Call Detail Records (CDRs).”

The worlds of DevOps and network virtualisation are converging. The same concepts that brought about the DevOps transformation in the enterprise are now changing the networking industry, driven by the same forces: the need for leaner, more agile rollout of services; automation and orchestration at the core; software-defined everything; self-service; open source; and provisioning of on-demand, dynamic services. This is the new networking reality. The stars are aligning to bring DevOps concepts into the world of networking, at scale and in the open.

The network virtualisation future 

Yes, networking is becoming cool again, sitting at the centre of an industry-wide transformation towards agility, self-service, automation and web-scale. On top of this, these transitions rest largely on open source, which has already had the power to change entire industries—from the operating system, through the mobile market, now all the way down to the networking layer.

None of this happened overnight; the transformation has been gradual.

As 2018 dawns, it's a good time to look at how the network virtualisation world is evolving. I break this down into three "generations":

First generation: Network virtualisation

The first stage was led by ETSI, which defined the general architecture for enabling network virtualisation. The architecture was adopted by many carriers and played a key role in shaping the market, creating a common taxonomy for its key layers.

This ETSI definition introduced orchestration as a key component of the network virtualisation architecture: the MANO (Management and Orchestration) stack, which is kept separate from the virtualised infrastructure (VIM) and from the OSS/BSS.

ETSI - MANO Architecture  

The challenge with first-generation virtualisation is that it forced a fairly big change in how carriers operate. This was a big undertaking not just technically but also culturally.

That made the adoption of NFV extremely slow, especially by second- and third-tier carriers who couldn’t afford the investment required to make the transition.

A bigger challenge was that the standardisation efforts behind ETSI NFV were for the most part led by the very vendors who had little incentive to drive such a big transformation, since it would ultimately cannibalise their own core business.

The result was that many of the NFV players ended up using the ETSI model mostly as a way to sell the same thing dressed up in modern technology; the actual product, and more importantly the business model, didn't change much to fit the cloud world.

Second generation: The move from network appliance to a network service

Second-generation network virtualisation is led mostly by new startups such as Meraki, Viptela, VeloCloud and Versa. These startups brought more narrowly scoped solutions for specific, common use cases such as SD-WAN and vCPE, repackaging formerly complex problems into new, exciting products that deliver an excellent user experience and are easier to consume.

The key to their success was that rather than changing the entire networking backbone as a first step, these services were offered as new, over-the-top services.

vCPE and SD-WAN solutions first target the move from manual setup and configuration of the WAN to self-service management of the entire network configuration.

SD-WAN: self-service management of the branch-office network

The key challenge with this approach is that we're trading one closed-source solution for another. Most of these solutions are still fairly proprietary, and come with their own set of network functions, their own flavour of protocols, and their own management systems.

The move of network services to a software-driven model makes them accessible to applications and developers. Control of network infrastructure can be expected to follow a similar shift towards application control, just as happened with compute and storage infrastructure.

Current SD-WAN/vCPE solutions were not designed for DevOps processes; rather, they target mostly network operators. Today we are looking to a broader ecosystem of partners to drive these processes and provide dynamic management layers that enable best-of-breed networking stacks.

Third generation: Cloud native and the move from Software-Defined Networking (SDN) to Application-Defined Networking (ADN)

The move to cloud-native networking will commoditise network devices, including CPE devices, and we're already seeing open CPE alternatives, as covered by Nikos Andrikogiannopoulos, on x86-based CPE devices such as pfSense. In addition, the Open Compute Project (OCP) is driving a proposal for an open uCPE, which a number of network vendors have already agreed to support.

Other network services, such as Quagga, Calico and Metaswitch's Project Clearwater, are further examples of new, mostly open-source, cloud-native network services.

In this cloud-native world, the control plane moves from the device to the cloud, transferring much of the heavy lifting off the device and further reducing its cost.

This shifts the value from the network device into the control plane itself, as pointed out by Andrikogiannopoulos:

“By transferring control plane functionality in the cloud one can lower CPU/RAM requirements and do the heavy lifting on the cloud side. This SDN/NFV approach will allow faster delivery of new functionality/services at lower cost. Services like NAS storage, parental control, VPN/cloud gateways, CDN functionality, etc. Firmware upgrades can become as easy as iPhone upgrades.”

In addition to the commoditisation of network devices and the move of the centre of gravity to the cloud, I expect networking to be driven by the application, and not just by network operators.

This move has already happened with compute and storage infrastructure, and it is fair to say that today most of these resources are software driven.

I see a number of reasons why the network should follow a similar path:

• Networks should be managed as part of the lifecycle of the application. Today, when networks are managed separately from the application, we see plenty of firewall rules, open ports and load-balancer rules left in place even when the application that needed them has changed or no longer exists. This needs to change.

• Likewise, the move to multi-cloud and edge computing forces a more dynamic and ad-hoc network environment, where it isn't always possible to know beforehand which networks need to be created.
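To illustrate the lifecycle argument, here is a minimal, hypothetical sketch: network rules are opened when an application deploys and are guaranteed to be torn down when it goes away, so nothing dangles. The `FIREWALL` set stands in for a real firewall's rule table; all names are illustrative, not a real API:

```python
from contextlib import contextmanager

FIREWALL = set()  # stands in for a real firewall's rule table

# Tie network rules to the application's lifecycle: they exist exactly as
# long as the application does, so no orphaned rules are left behind.
@contextmanager
def app_network(app_name, ports):
    rules = {(app_name, p) for p in ports}
    FIREWALL.update(rules)  # open rules on deploy
    try:
        yield rules
    finally:
        FIREWALL.difference_update(rules)  # guaranteed cleanup on teardown

with app_network("billing", [443, 8443]):
    assert ("billing", 443) in FIREWALL  # open only while the app lives
assert not FIREWALL  # no orphaned rules after the app is gone
```

The point is the coupling, not the mechanism: whether the rules live in iptables, a cloud security group or an SDN controller, they should be created and destroyed with the application, not managed in a separate, slowly rotting inventory.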

The security case for application defined networking  

Network security is applied today mostly as an afterthought. Many network security products were designed to identify malicious behaviour by tracking packet flows. In addition, a central firewall can't be the sole gatekeeper, especially as we move to multi-cloud or even more distributed environments such as edge computing.

Instead, what is needed are micro-firewalls, created per application, that ensure only the right set of services is exposed to the outside world. Such definitions follow the application as it moves from one environment to another, and are deleted when they are no longer needed.
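A hedged sketch of what such a per-application micro-firewall might look like: the policy is portable data that travels with the application, and is rendered into concrete allow/deny rules for whichever environment it lands in. All names here (`POLICY`, `render_rules`) are hypothetical, not any vendor's API:

```python
# A per-application policy expressed as data: only the listed services are
# exposed; everything else is denied by default.
POLICY = {
    "app": "checkout",
    "expose": [{"port": 443, "proto": "tcp"}],  # only these reach outside
    "deny_all_else": True,
}

def render_rules(policy, env_cidr):
    """Render the portable policy into concrete rules for one environment."""
    rules = [
        f"allow {e['proto']} dport {e['port']} from {env_cidr} app={policy['app']}"
        for e in policy["expose"]
    ]
    if policy["deny_all_else"]:
        rules.append(f"deny all app={policy['app']}")
    return rules

print(render_rules(POLICY, "10.0.0.0/16"))
# The same POLICY renders different rules in a different environment,
# e.g. render_rules(POLICY, "192.168.1.0/24") at an edge site.
```

Kubernetes NetworkPolicy objects follow a similar pattern: the policy is declared with the application and enforced by whatever network plugin each cluster happens to run.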

Application Defined Networking

I expect the move to Application Defined Networking to have a similarly disruptive effect on network operations to the one DevOps had on data centre management. It will also move power from the network operator to the application owner.

The move to open networking

These three phases of network virtualisation have created an industry ripe and ready to embrace more open solutions with industry-wide adoption, creating a more standardised method of operation and best practices that will enable carriers to remain relevant and adopt new technologies more quickly.

The open networking promise is more real than ever, and the industry is converging around new and exciting projects to help deliver real world implementations that are applicable at the scale and distribution required today from multi-cloud to the edge.

Nati Shalom, CTO and co-founder, Cloudify
Image Credit: Sergey Nivens / Shutterstock