In a Cloud Services World, Data Center Location is NOT About Latency
Philip Carden, Light Reading, 3/28/2014

There are many factors that determine the best location for a data center, but for most cloud applications, latency is not one of them.

Reliable and low-cost green power, cool climate, physical security, geographic and political stability, and access to skilled labor. Find those things, plus existing (or buildable) fiber infrastructure, so that there's cost-effective, protected dark fiber or wavelength access to key IXPs (Internet exchange points), and you have yourself a data center location.

OK, it has to be on the right continent, but, apart from that, latency should not be a consideration. For clarity, I'm using the term data center to refer to industrial-scale, dedicated, secure facilities (as distinct from server rooms).

Living in the past
Before we talk protocols, let's talk people: We're all living in the past. About 80 milliseconds (ms) in the past to be exact, which is the time it takes for our brains and nervous systems to synchronize stimuli arriving on different neural paths of different latencies.

If you see a hand clap, you perceive the sound and sight at the same time even though the sound takes longer to arrive and to process. Your brain allows itself 80ms or so to reassemble events correctly. That's why a synchronization delay between video and audio suddenly becomes annoying if it's more than 80ms -- your built-in sensory auto-correct flushes its proverbial buffer.

That provides a bit of perspective -- 10ms just doesn't matter. So we can ignore several often-cited contributors to latency: CPE and network packet processing times (tens or hundreds of microseconds); packet latency due to serialization (about 1.2ms for a 1,500-byte packet on a 10Mbit/s link); even the user-plane radio latency in LTE (less than 10ms, assuming no radio congestion).

What really matters are three things: server response time; network queuing (radio or IP); and the speed-of-light in fiber, which is negligible across town, about 60ms round-trip across the Atlantic (London-New York), and 120ms round-trip across the Pacific (Sydney to San Jose).
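To put rough numbers on that last point, here's a minimal back-of-envelope sketch in Python. The ~200,000 km/s figure for light in fiber and the route distances are illustrative assumptions; real fiber paths are longer than great-circle routes, which is why the quoted round-trip times above are a little higher.

    # Back-of-envelope latency arithmetic. Assumed values: light travels at
    # roughly 200,000 km/s in fiber (refractive index ~1.5), and the route
    # distances are rough great-circle figures, not real fiber paths.
    FIBER_KM_PER_SEC = 200_000.0

    def fiber_rtt_ms(route_km):
        """Round-trip propagation delay over a fiber route of the given length."""
        return 2 * route_km / FIBER_KM_PER_SEC * 1000

    def serialization_ms(size_bytes, link_bps):
        """Time to clock a packet or file onto a link of the given speed."""
        return size_bytes * 8 / link_bps * 1000

    print(fiber_rtt_ms(5_600))                  # ~56ms  (roughly London-New York)
    print(fiber_rtt_ms(12_000))                 # ~120ms (roughly Sydney-San Jose)
    print(serialization_ms(1_500, 10_000_000))  # ~1.2ms for a 1,500-byte packet at 10Mbit/s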

Characterizing a cloud application
Behind the fancy jargon, cloud applications are still mostly about browsers or "apps" fetching pages using HTTP or HTTPS over TCP, with each page made up of sub-elements that are described in the main HTML file. There's no such thing as a typical page, but these days there are likely around a hundred sub-elements totaling around 1MB for a transactional app (think software-as-a-service) and more than twice that for media-dense apps (think social networking).

Of the first megabyte, envisage 100k for HTML and CSS (Cascading Style Sheets), 400k for scripts and 500k for images (with vast variation between sites).

For most sites, each of those sub-elements is fetched separately over its own TCP connection from a URI (Uniform Resource Identifier) identified in the main HTML file.

For frequently used pages, many of the elements will already be locally cached, including CSS and scripts, but the HTML page will still need to be retrieved before anything else can start. Once the HTML starts arriving, the browser can start to render the page (using the locally cached CSS), but only then do the other requests start going out to fetch the meat of the page (mostly dynamic media content, since the big scripts are also normally cached).

A small number of large sites have recently started using the SPDY protocol to optimize this process by multiplexing and compressing HTTP requests and proactively fetching anticipated content. However, this doesn't affect TCP and SSL, which, as we'll see, are the main offenders in the latency department (at least among protocols).

A page-load walkthrough
Let's walk through what happens without the complications of redirects, encryption, network caching or CDNs (we'll come back to them).

After DNS resolution (which is fast, since it's cached), we'll need two transpacific round trips before we start receiving the page -- one to establish the TCP connection and another for the first HTTP request.

Since the CSS, layout images, and key scripts will be locally cached, the page will start rendering when the HTML starts arriving, after about 300ms (two round trips, each with 120ms light-delay plus say 30ms of queuing and server time).

We're not close to done -- now that we have the HTML, we need to go back and fetch all the sub-elements that are not locally cached. If we assume a broadband access speed of 10 Mbit/s to be our slowest link, then we can calculate the serialization delay of files arriving -- minimal for the HTML (16ms if it's 20KB) and a few times that for the first content image (say 80ms for a largish image). We'll clock in at 700ms for the first image to start rendering -- 300ms for the HTML fetch, 300ms for the image fetch, and about 100ms of serialization delay for the HTML and first image file.

The sub-elements are not all fetched in parallel, because each browser limits the number of parallel TCP connections to a particular host (typically to six). But once the first wave of data starts arriving, the limiting factor often becomes the serialization delay in the last mile: if half of the 1MB page is not locally cached, then we've got 500KB of data to transfer, so if all goes very well we could get the page fully rendered in about a second (four round trips at 600ms plus serialization of 500KB, which is 400ms on a 10Mbit/s link).
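For anyone who wants to see the arithmetic in one place, here is a minimal sketch of the walkthrough above. The 120ms light delay, 30ms of queuing and server time, 10Mbit/s access link, 20KB HTML, ~100KB first image and 500KB of uncached content are the assumptions from the text, and TCP slow-start and connection parallelism are ignored for simplicity.

    # Rough page-load timeline from the walkthrough above (all inputs assumed,
    # per the text; TCP slow-start and connection parallelism are ignored).
    def page_times_ms(light_rtt_ms=120, queue_server_ms=30, link_bps=10_000_000):
        rt = light_rtt_ms + queue_server_ms        # one round trip, ~150ms in the walkthrough
        def ser_ms(size_bytes):                    # serialization delay on the access link
            return size_bytes * 8 / link_bps * 1000

        start_render   = 2 * rt                    # TCP handshake + HTTP fetch of the HTML
        first_image    = start_render + 2 * rt + ser_ms(20_000) + ser_ms(100_000)
        fully_rendered = 4 * rt + ser_ms(500_000)  # four round trips + 500KB of uncached content
        return round(start_render), round(first_image), round(fully_rendered)

    print(page_times_ms(light_rtt_ms=120))   # Sydney DC:    (300, 696, 1000) ms
    print(page_times_ms(light_rtt_ms=130))   # Melbourne DC: (320, 736, 1040) ms

Bumping the light delay from 120ms to 130ms previews the comparison that follows: every milestone moves by a few tens of milliseconds at most.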

Moving the data center
Now let's move our data center from Sydney to Melbourne (a 90-minute flight apart). We've added 10ms per round-trip of light delay (assuming the fiber path is still via Sydney). So it's 320ms instead of 300ms before the user starts getting a response, and 740ms instead of 700ms before the images start rendering. No perceptible difference. Not even close.

What if we have congestion or a slow server response? Everything is much slower and the relative impact of the extra distance is further reduced -- so an even less perceptible difference.

What if we have more round trips? For example, what if there's a redirect (one round trip) or if the page uses SSL (one additional round trip if credentials are already cached)? Each only adds 10ms, so there's still no perceptible difference, especially compared with the bigger difference that comes from traversing the ocean extra times. What if the user is local (in Sydney say), or there's a high proportion of network cache or CDN-served content? Everything is much faster, but the difference between the two data center locations is still the same. Again, no perceptible difference.

Next page: Moving the data center even further away

Comments (24)

philipcarden (Blogger), 4/3/2014 | 4:41:30 PM
Re: Latency matters
@brookseven, this is true - and bit torrents compound the problem (especially given people's tendency to forget they are active).  Measurement itself is of course a challenge, especially when customers use third party speed tests - hard to know what is actually getting measured.  Also, many people look at ping times which can be misleading because of the inconsistent treatment and prioritization of ICMP through different routers.

philipcarden (Blogger), 4/1/2014 | 10:48:08 PM
Re: Latency matters
@t.bogataj - thanks for being more precise.  The cloud backup scenario is a very good use-case - I'll come back to that.  

For clarity, the 64k issue applies to Windows XP (which will no longer be supported by Microsoft next week).  To be specific, the issue is that that operating system does not support the TCP Window Scaling option (who knows why, since it was defined in RFC 1323 in 1992, but anyway...).  This means that XP only has a 2-byte window field, so it limits the window to 64k rather than the 1GB protocol limit available when scaling is enabled (as it is on every other major OS, including Windows since Vista).  You might be tempted to think that the 64k window on the client would be irrelevant for transactional scenarios since the data flow is asymmetric (the HTTP requests are minuscule, so we care about the buffer on the server rather than the client).  The problem is that if you don't have the Window Scaling option enabled, neither end can use it, so it's a bigger deal than the OS just limiting the client-send window size to 64k.  To be clear, having Window Scaling enabled does not mean the OS doesn't limit TCP window size -- in the case of Microsoft Windows Vista and later, the default is to limit it to 16MB per TCP connection.

This was a bigger deal a couple of years ago when many enterprises still hadn't migrated from XP, but it's still an issue worth considering, since XP is still over a quarter of the desktop market if you believe browser stats.  That will now presumably fall off more quickly, though it will still be propped up by the pirate market in parts of the world.  Anyway, let's see what happens if you have a 64k window.

For interactive cloud services (which I keep coming back to since it is by far the main use case today) there is now a throughput limit of 64k/RTT for EACH TCP CONNECTION.  As explained in my previous reply, this only becomes relevant (in terms of affecting overall performance) where the element being fetched is larger than the window size (64k), which is a small percentage of the elements on a typical page (especially if the main scripts and CSS are already cached locally, which is always the case for frequently used applications).

But what if we are retrieving several large images all bigger than 64k - unusual, but a good illustration.  As explained in the article, these images are NOT fetched sequentially - they are fetched on parallel TCP connections.  There's typically a limit of six parallel TCP connections to a particular server (varies by browser, but converging on six for modern browsers).  In a majority of pages the elements will get fetched from multiple different servers, but let's assume the worst case where they're all coming from one server.  Then the throughput is limited to 64kB (512kbits)/200ms = 2.5Mbps x 6 connections = 15Mbps. 

In other words, on a good broadband connection (say 10Mbps down, 1Mbps up) the limiting factor is the serialization rate of the broadband, even for an old-fashioned operating system.  If you have a faster connection, the impact of using XP trans-ocean or even trans-continent could indeed be material to the experience for pages with many large elements.

On modern operating systems, the TCP stacks will negotiate a window of whatever is appropriate (up to at least 16MB by default on Windows, or up to 1GB if you wanted, though that would probably break other things and isn't necessary).  To put that in perspective -- for six concurrent TCP connections on a 200ms RTT, that's 16MB x 8 (bits per byte) x 6 (concurrent connections) ÷ 200ms = nearly 4Gbps, i.e. not even close to a consideration for interactive apps.
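To make the window-limit arithmetic easy to reproduce, here is a minimal sketch; the 64KB and 16MB windows, the 200ms and 20ms RTTs, and the six parallel connections are the illustrative values from this discussion, not measurements.

    # Window-limited TCP throughput: send window divided by round-trip time,
    # times the number of parallel connections (all inputs assumed, per above).
    def window_limited_bps(window_bytes, rtt_s, connections=1):
        return window_bytes * 8 / rtt_s * connections

    print(window_limited_bps(64 * 1024, 0.200))                    # ~2.6 Mbit/s (XP, one connection)
    print(window_limited_bps(64 * 1024, 0.200, connections=6))     # ~15.7 Mbit/s (XP, six connections)
    print(window_limited_bps(16 * 1024**2, 0.200, connections=6))  # ~4 Gbit/s (modern OS defaults)
    print(window_limited_bps(64 * 1024, 0.020))                    # ~26 Mbit/s (XP at 20ms RTT, per the backup example below)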

I'm not sure if you read the whole article (the second half of your comment suggests that perhaps you missed the second page?) but anyway, my point was that moving a data center within a radius of 10 or 20ms RTT (1.5 or 3 hours' flight time away) made no significant difference to these apps, which I stand by.  I also explained that for certain applications you absolutely do care about latency and a cloud node/edge DC (pick your preferred term) is a necessary solution -- and per my reply to @Ipraf there are a bunch of reasons why CDN nodes should also be reasonably close to users.  There are also often economic reasons that go well beyond latency for applications like video streaming.  But that's another story.

So finally, let's come to the cloud backup use-case, which I like because it's upstream and involves a sustained transfer, potentially from an XP user.  If done over FTP (a single TCP connection) you really would be limited to 2.5Mbps with a 64kB window size on XP with a 200ms RTT.  A move of 20ms RTT is not going to make much difference to that.  A more significant impact would be if the user starts next door to the data center which then moves 20ms away.  In that case the XP user would be limited to 25Mbps throughput by their window size of 64KB.  If we move to a modern OS then that 25Mbps becomes 6.5Gbps which is more throughput than most users are likely to have available on their network. 

Now, for fun, let's imagine you were coding the next Dropbox client: you might choose not to limit yourself to a single TCP connection.  Still not something that would make a difference to data center location in the sense of the article, but no doubt an interesting consideration if you're trying to boost performance for XP users across an ocean.

Anyway, that is probably a longer response than you were expecting.  Hope it was useful.  Thanks again for stimulating the discussion.

 

Philip
brookseven (Light Sabre), 4/1/2014 | 10:51:13 AM
Re: Latency matters
One more thing left out here is the asymmetric nature of most consumer connections.  Low-speed upstream connections act like delay.  Spyware and other monitoring software can add upstream traffic and, essentially, more delay.  It was one of the things we faced when we were doing initial FiOS installs.  We had to clean things up so that users would be able to get a speed test equal to the speed they purchased.  So, when we talk about this topic, just remember that an endpoint may have more traffic than might be thought about.

seven

 
t.bogataj (Light Sabre), 4/1/2014 | 1:48:36 AM
Re: Latency matters
If my thin client (PC) runs Windows, then the default TCP WS will be 64kB. Unless I use another OS and know how to tweak it, I will be seriously limited by RTT alone. With 64kB and 200ms, I cannot possibly go beyond 2.56Mb/s. If I use a cloud storage service, my user experience will be lousy indeed: transfer of a 10MB file will never take less than 31 seconds. Moving the DC closer and reducing latency will affect my QoE. Or the operator's ARPU: I may change my OTT provider.

So it is not an issue of buffering (only), or the cost of RAM.

Another unaddressed point is that the effect of latency depends on the specific use case. See, for example, http://w2020.carina.uberspace.de/wordpress/wp-content/uploads/2013/10/12_Walter_Haeffner_Vodafone.pdf [given for reference only; I am neither a Vodafone employee nor a customer].

T.

 
philipcarden (Blogger), 3/31/2014 | 9:56:28 PM
Re: Latency matters

@t.bogataj - I was waiting for someone to raise the bandwidth*delay question, so thanks for doing so. 

Quick context for anyone who is not familiar: as with any connection-oriented protocol, the maximum throughput of a TCP connection is not just a function of the size of the pipe, but also of the ability of the end systems to buffer and process the amount of unacknowledged data that corresponds to keeping the pipe full.

The amount of buffer required to keep the pipe full is equal to the product of bandwidth and delay.  As an example, if there's a one-second round-trip delay between two systems, then the sending system needs to store one second of data before it starts receiving acknowledgements.  So on a 1Gbps link, that would require a buffer of one gigabit of data (125MB).  The TCP protocol attempts to ensure that the rate at which each system sends data does not exceed the buffer of the receiving system by using the TCP window size mechanism.

So what is the impact of this on the 1MB page example used in the article?  Not much.  There's only 1MB of data to be transferred, with a max request size of 100k or so per connection.  There's no reason for buffer pressure and so data transfer should occur at line rate less any queuing.

Where bandwidth*delay is a consideration is for larger transfers.  For example if we had 10GB to transfer over a 10Gbps link, this is obviously going to take over 8 seconds to transfer.  If the round trip time is 100ms then we would need a one gigabit (125MB) buffer to maintain throughput at the line rate.  If we moved the data center 10ms RTT away we'd need another 12.5MB of buffer allocation.  
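As a minimal sketch of that bandwidth-delay arithmetic (the link speeds and RTTs are the illustrative values used above, not measurements):

    # Bandwidth-delay product: the amount of unacknowledged data needed in
    # flight to keep a link full for one round trip.
    def bdp_megabytes(link_bps, rtt_s):
        return link_bps * rtt_s / 8 / 1e6

    print(bdp_megabytes(1e9, 1.0))      # 125 MB  (1Gbit/s link, 1s RTT)
    print(bdp_megabytes(10e9, 0.100))   # 125 MB  (10Gbit/s link, 100ms RTT)
    print(bdp_megabytes(10e9, 0.010))   # 12.5 MB (the extra buffer for another 10ms of RTT)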

This used to be a much bigger deal because memory was much more expensive (and also because older operating systems did a poor job of auto-scaling window size).  Today it is much more likely that memory is available to do the job required.  But the point is taken that it can still be a consideration, especially for east-west traffic.

Thanks for raising the topic.

philipcarden (Blogger), 3/31/2014 | 9:49:21 PM
Re: Not all consumers of data are human.
@t.bogataj - serialization delay is a function of the slowest link.  It is also true (but a separate problem) that the total throughput of a particular TCP connection is limited by the round-trip time and available buffer, so if you want to sustain a particular throughput you need to have the memory (and TCP Window size) to achieve that on the two ends of the connection.  And granted, that may not always be the case.  

However, as I'll explain in response to your other post, this need not affect the experience of cloud services users (i.e. thin client).

Thanks for the post.

 
philipcarden (Blogger), 3/31/2014 | 5:39:33 PM
Re: Not all consumers of data are human.
@brookseven, yes this is true.  Apart from reliability and availability considerations there are a raft of other factors that drive the choice of data center locations - I listed a few of them at the start.  Couldn't fit all that in one piece though ;)

t.bogataj (Light Sabre), 3/31/2014 | 11:30:40 AM
Re: Not all consumers of data are human.
Philip,

Your statement "whether you're 1km apart or 1000km apart makes no difference - it's a function of the slowest link" is wrong. With TCP, it's a function of RTT.

Among other reasons, moving DCs closer to end users (mini DCs, distributed DC... whatever you call it) makes sense because it decreases latency. And thus improves throughput. Which improves QoE. Which, eventually, impacts ARPU.

T.

brookseven (Light Sabre), 3/31/2014 | 9:24:17 AM
Re: Not all consumers of data are human.
One thing about apps and locality, even for well-architected SaaS services, is redundancy.  If you are operating a mission-critical app then you need to plan for the failure of an entire data center from a natural disaster.  In the case of the service that I ran, we had customers active on both coasts simultaneously, so that a failure looked like a capacity and DNS change.

seven

 
t.bogataj (Light Sabre), 3/31/2014 | 9:13:55 AM
Latency matters

Latency does NOT mean only latency.

The column focuses (mostly) on the web-browsing experience and the (in)efficient use of HTTP. But the issue is not our perception of responsiveness; it is throughput.

The column completely overlooks the fact that TCP throughput is determined by round-trip time (RTT). For a lossless connection, the TCP throughput will be

throughput = WS / RTT

where WS is TCP window size. For example, with WS=1MB and RTT = 200ms, you get the throughput of 40Mb/s -- even on a 10Gb/s link.

So as long as we use TCP, latency (or RTT) is critical for data-intensive applications. We're far from any wide adoption of other reliable L4 protocols (QUIC is still only being talked about) and will have to live with TCP.
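As a minimal sketch of that bound, using the illustrative window sizes, 200ms RTT and 10MB file mentioned in this thread:

    # throughput = WS / RTT, and the floor that puts under a sustained transfer.
    def throughput_mbps(window_bytes, rtt_s):
        return window_bytes * 8 / rtt_s / 1e6

    print(throughput_mbps(1_000_000, 0.200))   # 40 Mb/s with a 1MB window, even on a 10Gb/s link
    print(throughput_mbps(64_000, 0.200))      # 2.56 Mb/s with a 64kB window
    print(10_000_000 * 8 / (throughput_mbps(64_000, 0.200) * 1e6))   # ~31s floor for a 10MB transfer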

So -- keep considering latency when planning your networks.

T.
