Over the years, the complexity of IP networks has increased significantly, slowing down the innovation cycle and driving up capital and operational expenses. The tight integration between services software and purpose-built hardware has made it increasingly difficult to manipulate services “on-demand.”
The business driver for this new cycle of evolution is to create a more dynamic, service-agile infrastructure in which existing services can be changed “on-the-fly,” new services can be delivered rapidly in response to changing customer needs, and the total cost of ownership can be reduced through improved automation and orchestration capabilities.
This new virtualised infrastructure is based upon open standards of Software Defined Networks (SDN) and Network Function Virtualisation (NFV). By virtualising network functions, NFV allows network functions to be placed anywhere in the infrastructure and be moved as necessary. By separating control plane and data plane functions, SDN allows the customised orchestration of individual services and how they are transported across the network.
This is a very significant transformation, similar in nature to the TDM-to-IP transformation that started in the late 1990s.
During this transformation, network offices will convert from legacy architectures with purpose-built network elements to virtualised architectures with commercial off-the-shelf platforms hosting virtual network elements. These virtual network elements can be controlled from external software systems where the service creation and orchestration intelligence resides. The new software-centric network becomes more service-aware, self-organising, and self-managing, no longer heavily dependent upon outboard OSS/BSSs for these functions. With this transformation, service providers will be able to better utilise their network resources while creating and delivering differentiated services to their customers.
In this new world of agile, software-centric networks, service assurance becomes a foundational capability. Its role expands beyond the traditional role of troubleshooting network and subscriber problems, to a more dynamic role in the closed-loop orchestration of traffic flows and just-in-time resource management for the delivery of new, on-demand services.
In addition, this move opens up the opportunity to greatly simplify the service assurance infrastructure by using a common set of network element independent packet flow monitoring systems that provide real-time access to highly granular network, application, and subscriber level metadata and performance metrics on an end-to-end basis.
The value of a next-gen probe monitoring solution in a virtualised infrastructure
As service providers move to a virtualised infrastructure, they are looking to maintain full visibility into both “north-south” and “east-west” traffic. They will also be operating a hybrid environment of physical and virtual networks for a long period of time, relying upon integrated service assurance capabilities to manage through this transition in a seamless, disruption-free manner.
Network operations teams are looking for solutions that leverage the richness provided by packet flows while simultaneously reducing the complexities involved in handling massive amounts of data. Therefore, the right packet flow solution will require the following capabilities:
Real-time Scalability – Coping with massive volumes of data and providing real-time metadata and performance metrics to upstream applications can be a daunting task in large operator networks. In a dynamic virtualised infrastructure, traditional packet capture tools that rely on middleware components to generate metadata and performance metrics will not be able to scale to provide timely, actionable information to congestion and resource management systems and allow them to respond effectively to rapidly changing network conditions.
Other solutions that rely on a virtual tap to capture packets from the hypervisor and relay them to an external probe introduce latency that may compromise the real-time nature of the overall solution while adding incremental cost.
Efficient Data Reduction and Backhaul – Extracting useful information from the packets can be done centrally or at the source. Centralised solutions may be problematic, as massive amounts of data have to be transported over the network. Furthermore, extracting useful information centrally from the large volumes of data collected across the network can be a difficult task, akin to extracting the proverbial “needle” from the “haystack.”
On the other hand, a decentralised architecture, where metadata extraction, metric computation, and correlation are performed locally at the point where the probe is installed, reduces backhaul traffic while making the pre-processed data readily accessible to upstream systems for taking appropriate actions centrally. This is very important in a virtualised infrastructure where, in addition to backhaul capacity, hypervisor throughput may also be a constraining factor.
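The data-reduction idea above can be illustrated with a minimal sketch: instead of backhauling every packet, a local probe folds raw packet records into one compact summary per flow. All names here (`flow_key`, `summarise`, the record layout) are illustrative assumptions, not a real probe API.

```python
from collections import defaultdict

# Hypothetical sketch: a probe running near the source aggregates raw
# packet records into per-flow metadata, so only small summaries are
# backhauled to central systems. Record fields are assumptions.

def flow_key(pkt):
    """Identify a flow by its 5-tuple."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def summarise(packets):
    """Reduce raw packets to one metadata record per flow."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        stats = flows[flow_key(pkt)]
        stats["packets"] += 1
        stats["bytes"] += pkt["size"]
    return dict(flows)

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5060, "dport": 5060,
     "proto": "UDP", "size": 400},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5060, "dport": 5060,
     "proto": "UDP", "size": 380},
]

summaries = summarise(packets)
for key, stats in summaries.items():
    print(key, stats)
```

Here two raw packets collapse into a single flow record; at scale, the same reduction is what keeps backhaul and hypervisor throughput within bounds.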
Granularity – In a virtualised network, resource management actions can be driven by subscriber level SLAs, application level performance metrics, media and control plane metrics or some combination thereof. Hence, there is a strong need for highly granular performance metrics at the subscriber, network, and application level as well as at the control and user plane level.
Policy control mechanisms require granular data from the network to drive orchestration decisions in real-time. Operations teams also need correlated visibility across control and user plane traffic so issues can be correctly identified and precisely localised in the network.
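To make the correlation requirement concrete, here is a minimal sketch of joining control-plane session records with user-plane flow metrics so performance can be attributed to individual subscribers and checked against an SLA. The record layouts, subscriber identifiers, and the loss threshold are all illustrative assumptions.

```python
# Hypothetical sketch: correlating control-plane and user-plane data.
# The control plane tells us which subscriber holds which IP address;
# the user plane supplies per-flow performance metrics keyed by IP.

sessions = [  # control plane: subscriber sessions (assumed layout)
    {"subscriber": "IMSI-001", "ip": "10.0.0.1"},
    {"subscriber": "IMSI-002", "ip": "10.0.0.2"},
]

flows = [  # user plane: per-IP application metrics (assumed layout)
    {"ip": "10.0.0.1", "app": "video", "loss_pct": 0.2, "latency_ms": 35},
    {"ip": "10.0.0.2", "app": "voip", "loss_pct": 1.8, "latency_ms": 80},
]

SLA_MAX_LOSS_PCT = 1.0  # assumed policy threshold

# Build the correlation map, then attribute each flow to a subscriber.
ip_to_sub = {s["ip"]: s["subscriber"] for s in sessions}

results = []
for f in flows:
    sub = ip_to_sub.get(f["ip"], "unknown")
    breach = f["loss_pct"] > SLA_MAX_LOSS_PCT
    results.append((sub, f["app"], breach))
    print(sub, f["app"], "SLA breach" if breach else "ok")
```

This is the kind of correlated, subscriber-level view that lets a policy engine drive orchestration decisions, and lets operations teams localise a fault to a specific subscriber and application rather than to raw traffic volumes.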
Ubiquitous Deployment – In physical networks, packet flow analysis has typically been done by hardware probes. In these networks, it has not been feasible to deploy hardware probes on an end-to-end basis across datacenter, core and access networks, and customer premises.
However, in a virtualised infrastructure, software probes can be deployed to run on their own virtual machines. The key challenge is to ensure that these software probes can scale up or down in a cost-effective manner to be deployed all the way from high end datacenters to low end customer premise equipment.
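One way to picture the scale-up/scale-down challenge is a probe that sizes itself to its host: the same software selects a heavier or lighter operating profile depending on whether it lands on a datacenter VM or a constrained customer-premise device. The profile names, thresholds, and parameters below are illustrative assumptions, not a real product configuration.

```python
# Hypothetical sketch: one software probe design spanning high-end
# datacenters down to low-end CPE by adapting its sampling rate and
# flow-table size to the host's resources. The sizing table is assumed.

PROFILES = {
    "datacenter": {"sample_rate": 1.0,  "flows_tracked": 1_000_000},
    "edge":       {"sample_rate": 0.1,  "flows_tracked": 50_000},
    "cpe":        {"sample_rate": 0.01, "flows_tracked": 1_000},
}

def probe_config(cpu_cores, mem_mb):
    """Pick an operating profile from the host platform's resources."""
    if cpu_cores >= 16 and mem_mb >= 32_768:
        return PROFILES["datacenter"]
    if cpu_cores >= 4 and mem_mb >= 4_096:
        return PROFILES["edge"]
    return PROFILES["cpe"]

print(probe_config(32, 65_536))  # high-end datacenter host
print(probe_config(1, 512))      # low-end customer premise device
```

The design point is that visibility degrades gracefully (sampling instead of full capture) rather than disappearing entirely on small footprints, which is what makes end-to-end deployment cost-effective.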
With migration to a software-centric network, operators have the opportunity to greatly simplify and streamline their service assurance architecture from a plethora of loosely strung management systems with varying degrees of fidelity to a single, “high definition” end-to-end platform.
Such a platform provides real-time actionable intelligence at the protocol, application, and subscriber level that can be used for diagnosing service problems, for providing just-in-time resource management, and for enabling future on-demand services in a fully automated manner.
Operators can realise significant capital and operational savings while delivering new, innovative services using a next-generation service assurance platform as a key foundation of their virtualised infrastructure.
Dr. Vikram Saksena, Office of the CTO, NetScout