Deploying applications and websites is far less expensive and complex than it used to be. Cloud computing providers are distributed worldwide, and the use of deployment automation and Infrastructure as a Service (IaaS) is increasing. Uptime and performance have improved as users take advantage of the ability to deploy servers across the globe in minutes and benefit from decentralised environments by using an abundance of software frameworks, databases, and automation tools. This equates to nothing less than a revolution in modern computing practices: applications are now distributed by default.
Modern challenges in traffic management
Though infrastructure and application tooling for distributing applications has improved dramatically, the tools that website operators have at their disposal to route traffic effectively to their newly distributed applications haven't kept pace. Your app is distributed, but how do you get your users to the right points of presence (POPs)? Currently, traffic managers achieve their goal via prohibitively complex and expensive networking techniques like BGP anycasting, capex-heavy hardware appliances with global load balancing add-ons, or by leveraging a third-party managed DNS platform.
DNS is a great place to enact traffic management policies, being the ingress point to nearly every application and website on the Internet. However, the capabilities of most managed DNS platforms are severely limited because they were not designed with today’s applications in mind. For instance, most managed DNS platforms are built using off-the-shelf software like BIND or PowerDNS, onto which features like monitoring and geo-IP databases are grafted.
Until recently, a top-notch DNS platform was able to do two things regarding traffic management: first, it wouldn’t send users to a server that was down, and second, it would try to return the IP address of the server that’s closest to the end-user making the request.
This could be compared to using an early GPS system to find a cafe on a Friday night: it can give you the location of one that’s close by and may be open according to its Yellow Pages listing, but that’s about it. Maybe there are roadworks or congestion on the one route you can take to get there. Maybe the cafe is open but has a line out the door and stretching down the road. Perhaps a cafe that’s a bit further away would have been a better choice?
High-performing Internet properties face the same kind of questions today, and they go far beyond proximity and a binary notion of 'up/down'. Does the data centre have excess capacity? What's traffic like getting there: is there a fibre cut or congestion to a particular ISP we should route around? Are there any data privacy or protection protocols we need to take into account?
Five best practices
A new method of DNS traffic management is called for in light of today’s data-driven application delivery models. Next-gen DNS platforms have been built from the ground up with traffic management at their core, bringing to market exciting capabilities and innovative new tools that allow businesses to enact traffic management in ways that were previously impossible.
However, not all DNS platforms offer the same features, so be sure to research your options. Below are five best practices to consider when implementing an advanced, intelligent traffic management platform:
To handle planned or unplanned traffic spikes, leverage ready-to-scale infrastructure. If your primary co-location environment is becoming overloaded, make sure you are able to dynamically send new traffic to another environment according to your business rules, whether it's AWS, the next nearest facility, or a DR/failover site.
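The overflow behaviour described above can be sketched as a simple decision function. Everything here is an illustrative assumption, not a real platform's API: the endpoint names, the 90 per cent load threshold, and the idea that the DNS platform receives a current requests-per-second figure for the primary site.

```python
# Hypothetical sketch: spill new traffic to a secondary environment once the
# primary co-location facility nears capacity. Endpoint names, the threshold,
# and the load feed are illustrative assumptions, not a real product's API.

PRIMARY = {"name": "colo-primary", "ip": "203.0.113.10", "max_rps": 10_000}
OVERFLOW = [  # ordered by business preference: cloud burst first, then DR site
    {"name": "aws-us-east-1", "ip": "198.51.100.20"},
    {"name": "dr-site", "ip": "192.0.2.30"},
]

def answer(current_rps: int, overflow_healthy: list) -> str:
    """Return the IP a DNS answer should carry, given the primary's load."""
    if current_rps < 0.9 * PRIMARY["max_rps"]:   # primary still has headroom
        return PRIMARY["ip"]
    for env, healthy in zip(OVERFLOW, overflow_healthy):
        if healthy:                              # first healthy fallback wins
            return env["ip"]
    return PRIMARY["ip"]                         # no fallback: degrade in place
```

The point of the sketch is that the spill-over order encodes a business rule (burst to cloud before failing over to DR), rather than a purely network-level decision.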
Routing for modern realities
Identify platforms that route users based on their ISP, ASN, IP prefix, or geographical location. Geofencing can ensure users in the EU are only serviced by EU data centres, for instance, while ASN fencing can make sure all users on China Telecom are served by ChinaCache. Using IP fencing will make sure 'local-printer.company.com' automatically returns the IP of your local printer, regardless of which office an employee is visiting.
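The three fencing ideas above can be expressed as a small rule table consulted in order of specificity. All of the rule data below (hostnames, ASN, prefixes) is invented for illustration; AS4134 is China Telecom's backbone ASN.

```python
import ipaddress

# Hypothetical sketch of geo/ASN/IP "fencing": each rule pins a class of
# requester to a set of endpoints. The rule data is invented for illustration.

GEO_FENCE = {"EU": ["fra1.example.com", "ams1.example.com"]}
ASN_FENCE = {4134: ["chinacache-edge.example.com"]}   # AS4134: China Telecom
IP_FENCE = {
    ipaddress.ip_network("10.1.0.0/16"): "printer-nyc.example.com",
    ipaddress.ip_network("10.2.0.0/16"): "printer-lon.example.com",
}

def resolve(client_ip: str, region: str, asn: int) -> list:
    """Answer a query by checking the most specific fence first."""
    ip = ipaddress.ip_address(client_ip)
    for prefix, host in IP_FENCE.items():     # IP prefix: most specific rule
        if ip in prefix:
            return [host]
    if asn in ASN_FENCE:                      # then network/ASN
        return ASN_FENCE[asn]
    if region in GEO_FENCE:                   # then geography
        return GEO_FENCE[region]
    return ["anycast.example.com"]            # default answer
```

The ordering matters: the employee visiting another office matches the office's IP prefix before any geographic rule, which is what makes 'local-printer.company.com' resolve locally everywhere.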
Find solutions that automatically adjust the flow of traffic to network endpoints, in real time, based on telemetry coming from endpoints or applications. This can help prevent overloading a data centre without taking it offline entirely and seamlessly route users to the next nearest data centre with excess capacity.
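One way to read "adjust the flow without taking a data centre offline" is to shed an endpoint's share of answers gradually as its reported load rises, rather than cutting it over in a binary way. The telemetry schema and the shedding curve below are illustrative assumptions only.

```python
# Hypothetical sketch: scale each data centre's DNS answer weight down as its
# reported load approaches capacity, instead of a binary up/down cut-over.
# The telemetry schema and the quadratic shedding curve are assumptions.

def adjusted_weights(telemetry: dict) -> dict:
    """telemetry maps endpoint name -> {'base_weight': w, 'load': 0.0-1.0}."""
    weights = {}
    for name, t in telemetry.items():
        load = min(max(t["load"], 0.0), 1.0)       # clamp reported load
        # Shed traffic quadratically as load rises, but keep a small floor
        # so a busy-yet-healthy endpoint never disappears entirely.
        weights[name] = t["base_weight"] * max(0.05, (1.0 - load) ** 2)
    return weights
```

A hot data centre keeps serving a trickle of users while the remainder spills to the next nearest site with excess capacity, which is the seamless behaviour described above.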
By enacting business rules, you can meet your applications' needs with filters that use weights, priorities, and even stickiness. Distribute traffic in accordance with commits and capacity. Combine weighted load balancing with sticky sessions (i.e. session affinity) to adjust the ratio of traffic distributed among a group of servers while ensuring that returning users continue to be directed to the same endpoint.
Look for platforms that enable you to constantly monitor endpoints from the vantage point of the end user and then send those coming from each network to the endpoint that will service them best.
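Monitoring from the end user's vantage point usually means real-user measurements (RUM): latency samples collected per requesting network, per endpoint. The sketch below picks the endpoint with the best median latency for a given network; the sample data and ASN labels are invented for illustration.

```python
from statistics import median

# Hypothetical sketch of RUM-driven steering: per-network latency samples
# choose the best endpoint for each requesting network. Data is illustrative.

RUM_SAMPLES = {  # (network, endpoint) -> recent latency samples in ms
    ("AS7922", "pop-east"): [42, 38, 45],
    ("AS7922", "pop-west"): [88, 91, 85],
    ("AS3320", "pop-east"): [110, 105],
    ("AS3320", "pop-west"): [60, 62, 58],
}

def best_endpoint(network: str, default: str = "pop-east") -> str:
    """Return the endpoint with the lowest median latency for this network."""
    scores = {
        ep: median(samples)
        for (net, ep), samples in RUM_SAMPLES.items()
        if net == network
    }
    return min(scores, key=scores.get) if scores else default
```

The key property is that two networks in the same city can be steered to different endpoints if their measured paths differ, something geographic proximity alone cannot capture.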
A rethink of current DNS and traffic management capabilities is necessary for businesses that need to deliver Internet-scale performance and reliability for high-volume, mission-critical applications. Use the best practices above to help you find the solution that will carry your business forward.
Shannon Weyrick is the director of engineering for NS1