When cloud services first came along, they got their name because they were represented by a cloud icon in the center of technical diagrams. The grey cloud, with arrows representing traffic flow, ultimately meant “someone else is responsible for this”.
Most consumers remain stumped about the mystery of the cloud, but professionals understand that, in practice, cloud services are hosted in “someone else’s” data center infrastructure and accessed by a user over the internet. With most of the big cloud vendors originating in the US, the lion’s share of the initial cloud infrastructure resided there. In more recent years, however, regional data centers have been established in all key regional hubs for business continuity, performance and data residency purposes.
As the dependence on cloud has grown, with companies using it for increasingly business critical applications, this model has proved to be inadequate. As a result, we now see cloud vendors announcing local “points of presence”, counting off a “presence” in more and more specific regions and countries. But what exactly does this achieve? There’s much talk about latency statistics, but are they always as they appear?
Latency: what does it mean to you?
When we talk about latency, everyone has a different understanding of what is being measured. Latency can be measured as the Round Trip Time (RTT), which is the time it takes for a packet to travel from client to server and back, or the Time to First Byte (TTFB), which is the time it takes for the client to receive the first byte of the server’s response. Network engineers often focus on RTT as a measurement, while application managers expect TTFB to be included. When a vendor quotes latency stats – which will sit at the core of their claims about local data centers – make sure they are clear about exactly what is being measured and what the actual end-to-end user experience will be.
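To see why the two numbers differ, here is a minimal sketch (not any vendor's tooling; hostnames and ports are placeholders you would swap for your own targets). It approximates RTT by timing a TCP handshake, and TTFB by timing the gap between sending an HTTP request and receiving the first response byte:

```python
import socket
import time

def measure_rtt(host, port=443):
    """Approximate network RTT as the time to complete a TCP handshake, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def measure_ttfb(host, port=80):
    """Time from sending an HTTP HEAD request until the first response byte, in ms."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # block until the first byte of the response arrives
        return (time.perf_counter() - start) * 1000
```

TTFB will always be at least one RTT, plus whatever time the server spends generating the response, which is why a vendor quoting only RTT can understate what the end user actually experiences.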
The main cause of any delays in travel time is the public internet, which features in cloud service diagrams more than you might realize. The longer a cloud service provider can guarantee that traffic stays on dedicated enterprise or carrier-grade connections, the better the journey time. So, if your employees are in Johannesburg, South Africa, you want your cloud service to run through the best local connection to a routing data center that is as physically close to your employees as possible. Once there, you also want the cloud service provider to be able to provide all their services without needing to haul the data back to other data centers elsewhere in the world.
And this is where you need to carefully inspect the infrastructure of cloud service providers. Where is their nearest data center to your employees? And what exactly are they able to do at that data center? Is it effectively a glorified re-routing point back to a much more distant data center or is it a properly functioning hub from which they can deliver their services?
What does this look like in practice?
In the current global crisis we have seen organizations around the world focused on enabling remote working for employees. Traditional VPNs have struggled to provide appropriate security for cloud applications and the workarounds that have been used in the past for limited numbers of employees are high-risk or high-latency when rolled out for entire workforces.
A cloud-based Zero Trust Network Access (ZTNA) solution addresses this by connecting remote workers directly to private applications hosted in public cloud environments or private data centers, rather than routing all traffic through a central VPN gateway.
What is a ‘point of presence’?
Many in the industry use “point of presence” (POP) and “data center” interchangeably, but it is a worthwhile exercise to interrogate what a cloud service vendor means by their chosen term. As an example, a vendor may talk about a Stockholm local POP which is actually a local IP address pointing to a data center in Amsterdam (which causes significant data residency and language issues for web and inline services), or a Vienna POP that is in fact located in Frankfurt. It can also often be the case that not all POPs are fully functioning to support all inline services offered by a cloud provider. Always check whether traffic will be backhauled even further to access the full-service offering, and check the speed of that journey over the network they utilize (if they rely on the public internet then Vienna to Frankfurt can take as long as New York to Sydney). It’s also a key question to ask whether your most important links between data centers are truly peer-to-peer or transit through a public exchange.
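A first-pass sanity check is something you can script yourself: resolve the vendor's advertised POP hostname and sample connect times from the location in question. The hostname below is purely illustrative, and a proper verification would also involve traceroute and an IP geolocation database, but implausibly high RTTs from a "local" POP are an immediate red flag (a genuinely in-country hop is typically well under 10 ms; Vienna to Frankfurt over a congested path can be far higher).

```python
import socket
import time

def inspect_pop(hostname, port=443, samples=5):
    """Resolve an advertised POP hostname and sample TCP connect times.

    Returns the resolved IP and the minimum connect time in ms; the minimum
    over several samples is the closest approximation to pure network latency.
    """
    ip = socket.gethostbyname(hostname)
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((ip, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return ip, min(times)

# Illustrative usage with a hypothetical vendor hostname:
# ip, rtt_ms = inspect_pop("pop-vienna.example-vendor.com")
# print(f"{ip}: {rtt_ms:.1f} ms")
```

If the "Vienna" POP resolves to an address block registered in Frankfurt, or the connect time looks like a cross-continent hop, that is exactly the backhauling the vendor's diagram is hiding.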
Does any of this actually matter?
When you are sitting across the table from a vendor and haggling over the terminology of performance and user experience SLAs (…do you want promises of “average” speeds or do you want guarantees of limits?) it can be hard to remember why this all matters.
A 50+ millisecond delay is significant and will impact end user experience. Just because users are accustomed to it, or their expectations are low, does not mean it’s OK. We were all used to dial-up internet connections not too long ago, and it’s our job in technology to keep raising expectations and deliver the best service on offer for the benefit of the workforce.
It’s actually OK to interrogate your cloud service provider, and to make sure that the infrastructure hiding behind that cloud icon is as it should be. Two very similar diagrams can represent two very different infrastructure models, with very different consequences for your workforce.
Neil Thacker, DPO and CISO, Netskope