
Why edge computing is the next wave for our increasingly digital lives

(Image credit: TZIDO SUN / Shutterstock)

Our online world has changed. Our growing reliance on the Internet for an ever-larger share of daily activities bears no comparison to the now unthinkable 15-second load times users endured in 1998. With online-only businesses, mobile devices, app stores, bots and the proliferation of the Internet of Things (IoT), the focus has shifted to an intelligent, human-centric internet, which means speed and low latency are everything. To meet today's demands and tomorrow's new challenges, companies must shift their approach to edge computing. Centralized and distributed servers, with their performance issues and high costs, can no longer cut it for online content delivery, even in a cloud environment.

Racking up the costs   

When you open a website in your browser, a complicated but well-orchestrated dance moves data from the website's servers to your device across an interconnected edge network called a Content Distribution Network (CDN). It all happens in a split second, and it has worked this way since the late 1990s, when Akamai, the godfather of all CDNs, pioneered the idea. To accelerate delivery, most companies use multiple CDN providers for different domains and pay for the bandwidth to deliver their content to the reader. That's not much for a typical 3.4 MB page served up to a user in New York, but if your customer is in Sao Paulo, Brazil (bom dia!), it's a little more, at about 0.091 cents (US) for an average web page. If your customer is in Sydney (g'day, mate), the provider would have to cough up roughly 0.33 cents (US).
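To put per-page figures like these in context, here is a rough cost model. The per-GB rates below are illustrative placeholders, not actual provider pricing, which varies by region and contract:

```javascript
// Rough CDN bandwidth cost model. The per-GB rates are made-up
// examples, not real provider pricing.
const RATE_PER_GB_USD = {
  "us-east": 0.02,
  "south-america": 0.11,
  "oceania": 0.14,
};

// Total delivery cost (USD) for serving a page of pageSizeMB
// megabytes, pageViews times, from the given region.
function pageDeliveryCost(pageSizeMB, region, pageViews) {
  const gb = (pageSizeMB * pageViews) / 1024; // total GB transferred
  return gb * RATE_PER_GB_USD[region];
}

// A 3.4 MB page served one million times from each region:
for (const region of Object.keys(RATE_PER_GB_USD)) {
  console.log(region, pageDeliveryCost(3.4, region, 1e6).toFixed(2));
}
```

Fractions of a cent per view turn into real money at scale, which is why the per-GB rate and the cache hit ratio dominate a content budget.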

It doesn’t seem like much money per instance, but it builds up fast - and it’s a big business. The CDN market is roughly a $4 billion US market today, and that’s excluding the raw bandwidth prices everyone is paying on “the cloud.”   

Getting into the cloud is cheap, but getting the data out?

It’s true that doing work on data (compute) in the cloud is cheap, and storing data in the cloud is insanely cheap… but moving data out of the cloud? No. That’s not cheap. If you head over to the AWS pricing page you’ll quickly find its bandwidth pricing to get data OUT of the cloud is extremely well hidden.   

The secret to cloud margins (aside from scale economics on storage and compute) is to charge an absurd amount for bandwidth. In-bound data is almost always free, but outbound? Be ready to pay a hefty toll.    

When CDNs fail, they fail big!    

Looking at how traditional CDNs work in more detail: a CDN is a server (often called an edge server) that sits between your users and your content (called the origin). When a user requests a page, the Domain Name System (DNS) converts the hostname to an IP address and directs the user to the nearest edge server. The edge server then checks whether it has the content you've requested and, if so, serves the page. If it doesn't have the content (a cache miss), it simply requests the asset from the origin and then serves the page.
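That cache-hit / cache-miss flow can be sketched as a minimal in-memory edge cache. This is an illustration only; `fetchFromOrigin` is a stand-in for the real origin request, and real edge servers also handle TTLs, eviction and revalidation:

```javascript
// Minimal sketch of an edge server's cache logic (illustrative only).
class EdgeCache {
  constructor(fetchFromOrigin) {
    this.store = new Map(); // path -> cached content
    this.fetchFromOrigin = fetchFromOrigin;
  }

  async get(path) {
    if (this.store.has(path)) {
      // Cache hit: serve directly from the edge.
      return { content: this.store.get(path), hit: true };
    }
    // Cache miss: pull from the origin, then fill the cache.
    const content = await this.fetchFromOrigin(path);
    this.store.set(path, content);
    return { content, hit: false };
  }
}
```

The first request for a path pays the round trip to the origin; every subsequent request for the same path is served from the edge.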

This all appears seamless to the user, who doesn't know they're hitting an edge server rather than the origin. The DNS lookup (which here is technically called DNS hijacking) abstracts away all the complexity of finding the server nearest to you. If this seems like a lot of power to entrust to a vendor, it is. Luckily, big DNS providers don't often fail, but when they do, they tend to take down everyone. So while most of the time this all works effortlessly, sometimes it does not. At the end of the day, there's no getting around the fact that your online business depends on your chosen edge partner. Unfortunately, those edge partners all use the same basic networks and, because of the economics of peering, tend to sit in the same data centers, which increases the risk of failure.
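"Nearest" here is really a latency decision. Real CDNs make it with anycast routing or geo-aware DNS, but the idea can be sketched as picking the server with the lowest round-trip time (the server names and RTT figures below are made-up examples):

```javascript
// Choose the edge server with the lowest measured round-trip time.
// Server names and RTTs are illustrative, not real measurements.
function nearestEdge(rttByServer) {
  return Object.entries(rttByServer)
    .reduce((best, cur) => (cur[1] < best[1] ? cur : best))[0];
}

const rtts = { "edge-nyc": 12, "edge-lon": 85, "edge-syd": 210 }; // ms
console.log(nearestEdge(rtts)); // "edge-nyc"
```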

The move towards the edge   

As well as price and performance issues, other things have changed since Tim Berners-Lee invented the web 30 years ago, and together they enable a new way to serve website content.

Browser power: Today's browsers are more powerful than most servers were in the 2000s. In fact, most desktop browsers will outperform all but the top end of cloud-based virtual machines in terms of network throughput, I/O latency and single-threaded performance.

Broadband speed: Broadband connection speeds in the dense areas where most content is consumed are now roughly three orders of magnitude higher than they were in 2000. The peak connection speed in the US is now 53 Mbps (22nd in the world).

The usefulness of JavaScript: JavaScript has finally come of age and is a language we can do real work with today. Every time a web page does more than just sit there, JavaScript is what makes it happen, whether that's timely content updates, interactive images and maps, animated 2D/3D graphics or scrolling video jukeboxes. Since all modern web browsers support JavaScript without the need for plug-ins, deployment is as simple as adding an open-source JavaScript file to your web server. When a browser visits your website, that JavaScript begins to execute.

What do these advancements allow us to do?     

While a user reads a page, the client calls out to other nearby browsers over a peer-to-peer protocol called WebRTC and asks for content that may be needed. If an additional asset is required and isn't in the local cache, the client checks whether any nearby user has it before going to the origin (or CDN). If one does, the browser simply downloads the content from that nearest user. The effect is zero bandwidth fees for the website, and a much faster page load time for the user.
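The lookup order described (local cache, then nearby peers, then origin/CDN) can be sketched as a resolver. Here `askPeers` and `fetchOrigin` are injected stand-ins; the actual WebRTC data-channel query and signaling are well beyond this sketch:

```javascript
// Peer-first asset resolution sketch. askPeers and fetchOrigin are
// stand-ins for the WebRTC mesh query and the normal HTTP fetch.
async function resolveAsset(url, localCache, askPeers, fetchOrigin) {
  if (localCache.has(url)) {
    return { body: localCache.get(url), source: "local" };
  }
  const fromPeer = await askPeers(url); // does a nearby browser have it?
  if (fromPeer !== null) {
    localCache.set(url, fromPeer);
    return { body: fromPeer, source: "peer" }; // zero origin bandwidth
  }
  const body = await fetchOrigin(url); // fall back to origin/CDN
  localCache.set(url, body);
  return { body, source: "origin" };
}
```

Only the final branch costs the site bandwidth; the first two are served entirely at the edge of the edge, by the users themselves.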

On mobile devices, images are generally downloaded at full size and then resized down to fit the screen. Why not just fetch the optimal image? Well, that's a lot of work for the website operator: there are dozens of image formats and even more screen sizes, and storing (and paying for cache storage on) every combination is cost-prohibitive. With the peer-to-peer approach, once a device has resized an image it stores that version on the mesh, so, for example, a Google Pixel phone gets the mobile-optimized version of an image rather than the bulky desktop version. The magic of JavaScript also handles converting between image formats.
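Variant selection on such a mesh might look like the sketch below, keyed on URL plus rendered width. The `url@width` key scheme and the widths are assumptions for illustration, not Edgemesh's actual format:

```javascript
// Pick the smallest cached image variant that is at least as wide as
// the device needs. The "url@width" key scheme is an illustrative
// assumption, not a real mesh storage format.
function bestVariant(meshKeys, url, deviceWidth) {
  const widths = meshKeys
    .filter((k) => k.startsWith(url + "@"))
    .map((k) => parseInt(k.split("@")[1], 10))
    .filter((w) => w >= deviceWidth)
    .sort((a, b) => a - b);
  // null means no suitable variant: fall back to the full image.
  return widths.length ? `${url}@${widths[0]}` : null;
}

const keys = ["/hero.jpg@480", "/hero.jpg@1080", "/hero.jpg@2560"];
console.log(bestVariant(keys, "/hero.jpg", 1080)); // "/hero.jpg@1080"
```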

Cache and Scale!  

Today, the web has an inherent bottleneck and single point of failure: the upstream server that delivers the content. As web traffic increases (hello, Cyber Monday), the load on those edge servers increases, and if pushed hard enough they will eventually fail. But when the edge servers are the users, you get reverse scaling: the more users on the site, the faster and more resilient it is!
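The reverse-scaling claim can be made concrete with a toy model. If each peer independently holds a given asset with probability p, the chance that at least one of n nearby peers can serve it is 1 - (1 - p)^n, which grows with n (the independence assumption and the value of p are simplifications for illustration):

```javascript
// Toy model: probability that at least one of n peers holds the asset,
// assuming each peer caches it independently with probability p.
// Both assumptions are simplifications for illustration.
function peerHitProbability(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// With p = 0.2, more concurrent users means more likely peer hits:
for (const n of [1, 10, 50]) {
  console.log(n, peerHitProbability(0.2, n).toFixed(3));
}
```

With p = 0.2, a lone user hits a peer 20 per cent of the time, while at 50 concurrent users a peer hit is nearly certain, which is the opposite of how an origin behaves under load.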

Moving towards a more distributed and resilient design is simply a natural progression of the web. In doing so, companies can reduce bandwidth costs by up to 80 per cent while simultaneously cutting page load times by as much as half. As we move towards a reality enhanced by artificial intelligence, machine learning and natural language processing, we need to ensure our driverless cars get the information they must act on instantly and in real time, and our fridges can order the extra milk when we need it, all while we continue to enjoy high-quality digital content from our video streaming services. The future beckons, and we need to make sure our website delivery can keep up with it!

Jacob Loveless, CEO of Edgemesh 


Jacob Loveless
Jacob is an experienced technology innovator developing large scale telecommunications and cloud computing companies. He is CEO of Edgemesh, a Web acceleration company with the largest CDN in the world.