The importance of a common framework for network visibility

Cloud technologies have been adopted across enterprises of all shapes and sizes as CIOs look to embrace new technologies and bring about digital transformation. The cloud offers multiple benefits such as flexibility, resilience, reduced costs, increased efficiency and, most importantly, increased agility. It is clear why enterprises would want to exploit the potential of cloud technologies and services.

Historically, businesses would simply look to ‘migrate’ their applications into a cloud compute environment to gain access to the advantages offered by the cloud, but it is now widely understood that to really achieve the technical and business benefits of the cloud, a much more substantial re-architecture of applications is needed. As a consequence, multi-tier, microservice ‘cloud-native’ architectures have become increasingly common. This architectural shift breaks up the monolithic applications that many enterprises’ IT operations teams have been used to managing and supporting, and it creates a big challenge for IT in maintaining visibility. East-west traffic flows between application components have grown dramatically, and virtualisation and containerisation technologies have made it more challenging to get a complete picture of what is going on. This makes it more difficult to see where bottlenecks and other such issues may be appearing.

In many environments, we have ended up with access to a plethora of different metrics and datasets, and different tools and services that IT operations can use to assess performance and security. These datasets are delivered at different granularities and in different formats; some are real-time, some are not. Putting these together is very difficult; imagine having to build a clear picture of a landmark when you are given just ten pieces extracted from ten different pictures of that landmark – you’ll end up with something recognisable, but not particularly crisp and clear, and probably with bits missing. This is the challenge faced by many enterprises when they try to get a view across their evolving infrastructures as they go through digital transformation.

Sometimes, less is more

This challenge is not something that IT teams can work quietly in the background to overcome; digital transformation is at the top of the agenda for many organisations as focus is increasingly drawn to the value that the right IT can bring to a business. An accurate understanding of the performance of mission-critical applications and how they operate within and across cloud/hybrid environments is hugely important, not just for business continuity but also to manage and report on the ROI of digital transformation initiatives.

One common way of addressing this problem is through the use of complex analytics that interpret and correlate the different base telemetry sets an organisation has access to. This approach relies heavily on mathematical manipulation to deliver an overall view of what is going on, and whilst it may draw the correct conclusions some of the time, there will be inaccuracies and errors. The overall picture will, to a degree, represent what ‘may’ be happening rather than what ‘is’ happening.

So, how can these difficulties be resolved? The key to ensuring scalable, dependable network visibility is creating a cohesive data framework. To do this, IT teams must focus on using a smaller number of consistent data sets across their infrastructure – as the old adage says, sometimes less is more. Monitoring in this way reduces the need for complex data-processing pipelines and, in general, can provide a more real-time view of actual application and infrastructure health, performance and security.
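To make the idea of a ‘consistent data set’ a little more concrete, it could be as simple as a single flow-record shape that every domain – on-premises, public cloud, containers – produces. The sketch below is purely illustrative; the field names are hypothetical and not drawn from any particular product.

```python
from dataclasses import dataclass

# A hypothetical, minimal flow record used consistently across all domains.
# Field names and choice of KPIs are illustrative assumptions only.
@dataclass
class FlowRecord:
    src_ip: str        # source address of the flow
    dst_ip: str        # destination address of the flow
    dst_port: int      # destination port (service being accessed)
    protocol: str      # e.g. "TCP" or "UDP"
    bytes_sent: int    # volume, useful for capacity and cost reporting
    rtt_ms: float      # round-trip time, a simple user-experience KPI
    retransmits: int   # retransmissions, a simple health KPI
    domain: str        # which environment produced the record, e.g. "aws", "on-prem"
```

Because every environment emits the same record, downstream tooling needs only one processing path rather than a separate pipeline per telemetry source.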

One data-source that can provide huge value for availability, performance and security monitoring is network traffic. Network activity has traditionally been monitored by IT teams within data-centres as a primary data-source. Network traffic provides a clear view of activity in any domain and can give IT teams an understanding of user experience, as well as a forensic view for troubleshooting and security investigations. This has been the case for a number of years, but the adoption of cloud-native architectures has changed the ‘shape’ of communications within our infrastructures.

Simplifying visibility

Traditionally, traffic ran predominantly north-south within our infrastructures – into and out of our applications, to and from the users – and our monitoring solutions were positioned to give us visibility into this traffic. This traffic is still there, and is still important, but with the adoption of cloud-native architectures there has been huge growth in the traffic between our application and service components – east-west traffic. East-west traffic is also important and has to be monitored if we are to see what is really going on, troubleshoot problems, and understand component and tier inter-dependencies.
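One simple way to picture the distinction is to classify each flow by whether both endpoints sit inside the organisation’s own address space. The sketch below is a minimal illustration under that assumption – the internal ranges shown are the standard RFC 1918 blocks, not any specific organisation’s – and is not a description of how any particular monitoring tool works.

```python
import ipaddress

# Hypothetical internal address ranges; in practice these would come from the
# organisation's own IP address management data.
INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def traffic_direction(src_ip: str, dst_ip: str) -> str:
    """Classify a flow as east-west (internal to internal) or north-south."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "east-west"
    return "north-south"

# A call between two microservices vs. a user-facing request:
print(traffic_direction("10.1.2.3", "10.4.5.6"))     # east-west
print(traffic_direction("203.0.113.9", "10.1.2.3"))  # north-south
```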

Network traffic monitoring tools and capabilities are provided by many of the cloud technologies and services available today, but what we need to build our overall picture of health and security is consistency in the data we are using. To extract what we need from network traffic, we want to avoid moving large numbers of ‘packets’ into and out of different environments; instead, we want to convert packets into consistent meta-data and KPIs within the relevant cloud environment. This more portable data can then be exported to our monitoring solutions. If we can ensure that consistent, relevant information is exported from each of our technology domains in a simple and economical way, then we have what we need. This information can then be used to drive multiple monitoring, troubleshooting, security and business reporting use-cases.
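As a rough sketch of that packets-to-meta-data step, a local collector in each environment might aggregate per-packet observations into compact per-flow summaries, tag them with their domain, and export only those summaries. The packet fields and aggregation shown here are assumptions for illustration, not a specific vendor’s workflow.

```python
from collections import defaultdict

def summarise_packets(packets, domain):
    """Collapse hypothetical per-packet observations into per-flow summaries,
    so only compact, consistent meta-data leaves the local environment."""
    flows = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"], pkt["dst_port"], pkt["protocol"])
        flows[key]["bytes"] += pkt["length"]
        flows[key]["packets"] += 1
    # One portable record per flow, tagged with the domain that produced it.
    return [
        {"src_ip": k[0], "dst_ip": k[1], "dst_port": k[2], "protocol": k[3],
         "bytes": v["bytes"], "packets": v["packets"], "domain": domain}
        for k, v in flows.items()
    ]

# Example: two packets from the same flow collapse into one exportable record.
packets = [
    {"src_ip": "10.1.2.3", "dst_ip": "10.4.5.6", "dst_port": 443, "protocol": "TCP", "length": 1500},
    {"src_ip": "10.1.2.3", "dst_ip": "10.4.5.6", "dst_port": 443, "protocol": "TCP", "length": 400},
]
print(summarise_packets(packets, domain="aws"))
```

The point of the design is that raw packets never leave their environment; only small, uniformly shaped records do, which keeps export costs down and gives every downstream tool the same view.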

The above may seem pretty obvious, but many organisations continue to struggle trying to process huge numbers of disparate datasets. Businesses that make the effort to create a common framework for network visibility will gain an understanding of the performance of all applications in each cloud environment. This allows them to ensure that their customers and users are receiving the experience they are expecting and that workloads are managed, migrated, and scaled correctly and efficiently. A consistent view of performance and availability across technology domains will provide insight into which applications require further tuning, and where technology investments produce the greatest returns.

In short, consistent data simplifies visibility, and good visibility across the multiple domains that businesses are typically managing will result in improved business agility and increased return on technology investment.

Darren Anstee, CTO for security, NETSCOUT