
Don’t overlook the backbone

(Image credit: Geralt / Pixabay)

While openness and universal connectivity are the Internet’s greatest strengths, its raison d’être even, they are also its biggest challenge. Considering the sheer scale and complexity of the global Internet ecosystem, it is a wonder that any degree of consistent performance is possible at all. In a perfect world, traffic quality wouldn’t be an issue, but of course the world is far from perfect. Many networks are simply not scaled to handle large traffic spikes, resulting in unavoidable packet loss during peak hours. Others lack sufficient route diversity (or any diversity at all), which means they can shut down completely when outages do occur.
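
To illustrate the point about under-dimensioned networks, here is a minimal sketch (with purely hypothetical figures for link capacity and demand) of how the share of dropped packets grows once offered traffic exceeds what a link can carry:

    # Rough illustration with hypothetical figures: once offered traffic
    # exceeds link capacity, the excess has nowhere to go and is dropped.
    def loss_fraction(offered_gbps: float, capacity_gbps: float) -> float:
        """Fraction of traffic dropped when offered load exceeds capacity."""
        excess = max(0.0, offered_gbps - capacity_gbps)
        return excess / offered_gbps if offered_gbps > 0 else 0.0

    capacity = 100.0  # link dimensioned for 100 Gbps (assumed)
    for period, demand in [("off-peak", 60.0), ("evening peak", 130.0)]:
        print(f"{period}: {loss_fraction(demand, capacity):.0%} of packets dropped")
    # off-peak: 0% of packets dropped
    # evening peak: 23% of packets dropped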

It is also important to remember that, at the end of the day, the Internet is built on physical infrastructure: cables, data centers, routers, switches and more – all things that are susceptible to damage and external factors. A fisherman putting his nets down in the wrong place, or the anchor of a container vessel, can easily cut a subsea cable, leading to a sustained outage until repairs can be carried out. Add to that the almost infinite number of human-network interactions, and the potential for disruption is enormous.

The connection threat

However, the biggest threat to public Internet traffic performance is actually found in the connections between different networks themselves. Unfortunately, these aren’t always set up in the best interests of the traffic they carry. Since there are commercial relationships behind every connection, the routing of traffic between networks is largely based on cost considerations. If it is cheaper to offload traffic onto a particular network, many operators will do so regardless of that network’s ability to carry it.

Sometimes, there are cost-driven capacity limitations between networks en route. Throw a few political and legislative restrictions into the mix and it starts to become clear that ‘the’ Internet isn’t quite the same everywhere. Although your traffic will generally reach the destinations you require, the experience will vary significantly between different network paths.
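
As a simple sketch of that trade-off – the networks, prices and performance figures below are invented – an operator that selects its exit network purely on price can end up preferring a lossy, high-latency path over a better-performing but more expensive one:

    # Invented figures: choosing an exit network on price alone versus
    # factoring in measured performance (packet loss and round-trip time).
    candidates = [
        {"network": "cheap transit A",   "price_per_mbps": 0.05, "loss_pct": 1.5, "rtt_ms": 180},
        {"network": "transit B",         "price_per_mbps": 0.20, "loss_pct": 0.3, "rtt_ms": 120},
        {"network": "private peering C", "price_per_mbps": 0.35, "loss_pct": 0.0, "rtt_ms": 40},
    ]

    cheapest = min(candidates, key=lambda c: c["price_per_mbps"])
    best_quality = min(candidates, key=lambda c: (c["loss_pct"], c["rtt_ms"]))

    print("Lowest-cost choice:  ", cheapest["network"])       # cheap transit A
    print("Quality-aware choice:", best_quality["network"])   # private peering C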

Obviously, sensitivity to network performance will depend largely upon the application used, but poor Internet connectivity can result in anything from bad gameplay to major financial losses and potentially even life-and-death consequences. At the end of the day, direct routing delivers the best performance and the greatest consistency. It is always best to avoid ‘the scenic route’ and connect as close as possible to critical content and applications.
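
As a back-of-the-envelope illustration, light in optical fibre travels at roughly two-thirds of its speed in a vacuum, so every extra kilometre of path adds delay. The figures below for a direct route and an indirect one are purely hypothetical:

    # Signals in optical fibre cover roughly 200 km per millisecond
    # (about two-thirds of the speed of light in a vacuum).
    FIBRE_KM_PER_MS = 200.0

    def round_trip_ms(path_km: float) -> float:
        """Round-trip propagation delay for a path of the given length."""
        return 2 * path_km / FIBRE_KM_PER_MS

    direct_km = 7_000    # hypothetical direct route
    scenic_km = 12_000   # hypothetical indirect, "scenic" route

    print(f"Direct route: ~{round_trip_ms(direct_km):.0f} ms round trip")   # ~70 ms
    print(f"Scenic route: ~{round_trip_ms(scenic_km):.0f} ms round trip")   # ~120 ms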

Making the best of it

For many years, the content industry asked tier one networks for end-to-end quality assurances – all the way to the end-user. Although this has never been formally realized, it should be noted that most ISPs today have sufficient network quality to stream live sport or facilitate gameplay on titles and platforms with little tolerance for packet loss or delay. Customers today take these services for granted, and operators simply cannot afford to save a few dollars at the expense of service quality.

The variation in performance explains why the public Internet is often considered a ‘best effort’ environment. In such a diverse and open ecosystem, with so many stakeholders, it is nearly impossible to provide comprehensive performance guarantees, especially when traffic flows between different networks are constantly changing and hostage to lowest-cost routing. Whilst most companies offer customers a guaranteed SLA (Service Level Agreement) for their own networks, very few dare to promise anything that extends into other networks outside their jurisdiction.
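
To put such guarantees into perspective, the short calculation below (assuming a 30-day month) converts a few common availability levels into the downtime they still permit on a provider’s own network:

    # Downtime still permitted per 30-day month at common availability levels.
    MINUTES_PER_MONTH = 30 * 24 * 60  # assuming a 30-day month

    for availability in (0.999, 0.9995, 0.9999):
        allowed_minutes = MINUTES_PER_MONTH * (1 - availability)
        print(f"{availability:.2%} uptime -> up to {allowed_minutes:.1f} minutes down per month")
    # 99.90% uptime -> up to 43.2 minutes down per month
    # 99.95% uptime -> up to 21.6 minutes down per month
    # 99.99% uptime -> up to 4.3 minutes down per month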

The underlay matters

Although the Internet is increasingly becoming an important delivery platform for enterprise business applications, many network architects remain unfamiliar with the public Internet and its fundamental workings. Despite a wealth of knowledge and expertise in point-to-point networking, or solutions built on layer two and layer three connections (in some cases even older technologies such as SDH, ATM or Frame Relay), intimate knowledge and experience of the Internet backbone and its dynamics remain a fairly rare commodity.

This means that the Internet backbone is often overlooked when Internet-based WAN solutions and cloud platforms are evaluated. Considering the scope, complexity and potential for service variation across the wider Internet ecosystem, network buyers should look deeper into the Internet backbone underlay networks of their prospective suppliers.

This should include the following (a simple supplier-scoring sketch follows the list):

  • The extent of a supplier’s own network footprint – This largely determines how much control a network partner has over ‘on-net’ traffic. A larger footprint generally means greater control, and it is the only way to ensure full visibility of network resources and, ultimately, quality.
  • Scalability – Is the supplier’s network built on leased capacity or its own infrastructure? This dictates how quickly and efficiently the supplier can scale up capacity.
  • The proximity of a supplier’s network to the Internet backbone – In which tier do they reside? The higher they sit in the network hierarchy, the more capacity and direct long-distance links they will have on their own network.
  • How well connected are they? – In other words, do they connect via third-party transit networks and public exchange points, or do they have a well-dimensioned, well-managed ecosystem of private peering connections towards critical Internet backbone networks?
  • How large is their directly connected customer base, and how are they ranked in comparison to others? – This can help you understand whether they are really delivering the best service available in your region.
  • Proximity to major cloud networks – Does a potential supplier have direct onramps to the big clouds, and is there sufficient capacity?
  • How is the network ecosystem managed? – When things go wrong, it is important to have direct access to highly skilled and motivated customer care resources.
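
One way to apply the checklist above is to turn it into a simple weighted scorecard. The sketch below is purely illustrative – the criteria weights, supplier names and scores are all invented – but it shows how candidate underlay providers could be compared side by side:

    # Illustrative weighted scorecard for comparing underlay suppliers.
    # All weights and 1-5 scores are invented for the sake of the example.
    weights = {
        "own_footprint": 0.20, "scalability": 0.15, "backbone_tier": 0.15,
        "peering_ecosystem": 0.20, "customer_base": 0.10,
        "cloud_onramps": 0.10, "support": 0.10,
    }

    suppliers = {
        "Supplier X": {"own_footprint": 5, "scalability": 4, "backbone_tier": 5,
                       "peering_ecosystem": 4, "customer_base": 4,
                       "cloud_onramps": 3, "support": 4},
        "Supplier Y": {"own_footprint": 3, "scalability": 3, "backbone_tier": 2,
                       "peering_ecosystem": 3, "customer_base": 3,
                       "cloud_onramps": 5, "support": 5},
    }

    for name, scores in suppliers.items():
        total = sum(weights[criterion] * scores[criterion] for criterion in weights)
        print(f"{name}: weighted score {total:.2f} out of 5")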

For the foreseeable future at least, a blend of public Internet and private connectivity will deliver the best combination of flexibility and security for enterprise WANs. However, the quality of Internet connectivity varies significantly across different providers and geographies. Network buyers should therefore look carefully into the ‘underlay’ beneath the service platform on offer.

Mattias Fridström, Chief Evangelist, Telia Carrier