Light Reading

In a Cloud Services World, Data Center Location is NOT About Latency

Philip Carden
3/28/2014

Reliable, low-cost green power; a cool climate; physical security; geological and political stability; and access to skilled labor. Find those things, plus existing (or buildable) fiber infrastructure offering cost-effective, protected dark-fiber or wavelength access to key IXPs (Internet exchange points), and you have yourself a data center location.

OK, it has to be on the right continent, but, apart from that, latency should not be a consideration. For clarity, I'm using the term data center to refer to industrial-scale, dedicated, secure facilities (as distinct from server rooms).

Living in the past
Before we talk protocols, let's talk people: We're all living in the past. About 80 milliseconds (ms) in the past to be exact, which is the time it takes for our brains and nervous systems to synchronize stimuli arriving on different neural paths of different latencies.

If you see a hand clap, you perceive the sound and sight at the same time even though the sound takes longer to arrive and to process. Your brain allows itself 80ms or so to reassemble events correctly. That's why a synchronization delay between video and audio suddenly becomes annoying if it's more than 80ms -- your built-in sensory auto-correct flushes its proverbial buffer.

That provides a bit of perspective -- 10ms just doesn't matter. So we can ignore several often-cited contributors to latency: CPE and network packet-processing times (tens or hundreds of microseconds); packet latency due to serialization (about 1ms for a 1500-byte packet on a 10Mbit/s link); even the user-plane radio latency in LTE (less than 10ms, assuming no radio congestion).

What really matters are three things: server response time; network queuing (radio or IP); and speed-of-light delay in fiber, which is negligible across town, about 60ms round-trip across the Atlantic (London to New York), and about 120ms round-trip across the Pacific (Sydney to San Jose).
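
To put rough numbers on that light delay, here's a minimal back-of-the-envelope sketch in Python. The route lengths are illustrative round figures (real fiber paths are longer than great-circle routes), and the two-thirds-of-c propagation speed is an approximation.

```python
# Rough fiber round-trip-time estimate. Light in fiber travels at roughly
# two-thirds of c, i.e. about 200,000 km/s (200 km per millisecond).
# Route lengths below are illustrative round figures, not measured paths.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def fiber_rtt_ms(route_km: float) -> float:
    """Round-trip light delay in milliseconds over a fiber route of route_km."""
    return 2 * route_km / SPEED_IN_FIBER_KM_PER_MS

for route, km in [("London-New York", 6000), ("Sydney-San Jose", 12000)]:
    print(f"{route}: ~{fiber_rtt_ms(km):.0f}ms round trip")
# London-New York: ~60ms round trip
# Sydney-San Jose: ~120ms round trip
```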

Characterizing a cloud application
Behind the fancy jargon, cloud applications are still mostly about browsers or "apps" fetching pages using HTTP or HTTPS over TCP, with each page made up of sub-elements that are referenced in the main HTML file. There's no such thing as a typical page, but these days there are likely to be around a hundred sub-elements totaling around 1MB for a transactional app (think software-as-a-service) and more than twice that for media-dense apps (think social networking).

Of the first megabyte, envisage 100KB for HTML and CSS (Cascading Style Sheets), 400KB for scripts, and 500KB for images (with vast variation between sites).

For most sites, each of those sub-elements is fetched separately over its own TCP connection from URIs (Uniform Resource Identifiers) identified in the main HTML file.
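
As a minimal sketch of that first step, this Python fragment scans an HTML document for the sub-element URIs a browser would go on to fetch. The tag-to-attribute mapping is deliberately simplified; real browsers handle many more cases (and apply caching rules on top).

```python
# Scan an HTML document for sub-resources that would need separate fetches.
# Simplified: real browsers consider many more tags, attributes, and rules.
from html.parser import HTMLParser

RESOURCE_ATTRS = {"img": "src", "script": "src", "link": "href"}

class SubResourceFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attr_name = RESOURCE_ATTRS.get(tag)
        if attr_name:
            for name, value in attrs:
                if name == attr_name and value:
                    self.resources.append(value)

finder = SubResourceFinder()
finder.feed('<html><head><link href="site.css"><script src="app.js">'
            '</script></head><body><img src="hero.jpg"></body></html>')
print(finder.resources)  # ['site.css', 'app.js', 'hero.jpg']
```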

For frequently used pages, many of the elements will already be locally cached, including the CSS and scripts, but the HTML page itself still needs to be retrieved before anything else can start. Once the HTML starts arriving, the browser can start to render the page (using the locally cached CSS), but only then do the other requests go out to fetch the meat of the page (mostly dynamic media content, since the big scripts are also normally cached).

A small number of large sites have recently started using the SPDY protocol to optimize this process by multiplexing and compressing HTTP requests and proactively fetching anticipated content. However, this doesn't affect TCP and SSL, which, as we'll see, are the main offenders in the latency department (at least among protocols).

A page-load walkthrough
Let's walk through what happens without the complications of redirects, encryption, network caching or CDNs (we'll come back to them).

After DNS resolution (which is fast, since cached), we'll need two transpacific round trips before we start receiving the page -- one to establish the TCP connection and another for the first HTTP request.

Since the CSS, layout images, and key scripts will be locally cached, the page will start rendering when the HTML starts arriving, after about 300ms (two round trips, each with 120ms light-delay plus say 30ms of queuing and server time).

We're not close to done -- now that we have the HTML, we need to go back and fetch all the sub-elements that are not locally cached. If we assume a broadband access speed of 10Mbit/s as our slowest link, we can calculate the serialization delay of the arriving files: minimal for the HTML (16ms if it's 20KB) and a few times that for the first content image (say 80ms for a largish one). We'll clock in at about 700ms for the first image to start rendering -- 300ms for the HTML fetch, 300ms for the image fetch (a fresh TCP connection, so another two round trips), and about 100ms of serialization delay for the HTML and first image file.

The sub-elements are not all fetched in parallel, because each browser limits the number of parallel TCP connections to a particular host (typically to six). But once the first wave of data starts arriving, the limiting factor often becomes the serialization delay in the last mile: if half of the 1MB page is not locally cached, then we've got 500KB of data to transfer. So if all goes very well, we could get the page fully rendered in about a second (four round trips at 600ms, plus serialization of 500KB, which is 400ms on a 10Mbit/s link).

Moving the data center
Now let's move our data center from Sydney to Melbourne (a 90-minute flight apart). We've added 10ms per round-trip of light delay (assuming the fiber path is still via Sydney). So it's 320ms instead of 300ms before the user starts getting a response, and 740ms instead of 700ms before the images start rendering. No perceptible difference. Not even close.
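
To make the walkthrough's arithmetic easy to check and to vary, here's a minimal Python sketch of the timeline model. All inputs are the assumptions used above (120ms or 130ms of light RTT, 30ms of queuing and server time per round trip, a 10Mbit/s last mile, a 20KB HTML file, a 100KB first image, and 500KB of uncached content), not measurements.

```python
# Back-of-the-envelope page-load model from the walkthrough above.
# All inputs are the article's assumptions, not measurements.

def serialization_ms(size_kb: float, link_mbps: float = 10.0) -> float:
    """Time to clock size_kb kilobytes through the slowest (last-mile) link."""
    return size_kb * 8 / link_mbps  # KB -> kbit, then kbit / (Mbit/s) = ms

def timeline_ms(light_rtt_ms: float, overhead_ms: float = 30.0):
    rtt = light_rtt_ms + overhead_ms            # one full round trip
    html_arrives = 2 * rtt                      # TCP handshake, then HTTP GET
    first_image = (html_arrives + 2 * rtt       # new connection for the image
                   + serialization_ms(20) + serialization_ms(100))
    fully_rendered = 4 * rtt + serialization_ms(500)
    return html_arrives, first_image, fully_rendered

for dc, light_rtt in [("Sydney", 120), ("Melbourne", 130)]:
    html, image, done = timeline_ms(light_rtt)
    print(f"{dc}: render starts ~{html:.0f}ms, "
          f"first image ~{image:.0f}ms, done ~{done:.0f}ms")
# Sydney: render starts ~300ms, first image ~696ms, done ~1000ms
# Melbourne: render starts ~320ms, first image ~736ms, done ~1040ms
```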

What if we have congestion or a slow server response? Everything is much slower and the relative impact of the extra distance is further reduced -- so an even less perceptible difference.

What if we have more round trips? For example, what if there's a redirect (one round trip) or if the page uses SSL (one additional round trip if credentials are already cached)? Each only adds 10ms, so there's still no perceptible difference, especially compared with the bigger difference that comes from traversing the ocean extra times. What if the user is local (in Sydney say), or there's a high proportion of network cache or CDN-served content? Everything is much faster, but the difference between the two data center locations is still the same. Again, no perceptible difference.

Next page: Moving the data center even further away

Comments
philipcarden, User Rank: Blogger
4/3/2014 | 4:41:30 PM
Re: Latency matters
@brookseven, this is true -- and BitTorrent clients compound the problem (especially given people's tendency to forget they are active). Measurement itself is of course a challenge, especially when customers use third-party speed tests -- it's hard to know what is actually getting measured. Also, many people look at ping times, which can be misleading because of the inconsistent treatment and prioritization of ICMP through different routers.
philipcarden, User Rank: Blogger
4/1/2014 | 10:48:08 PM
Re: Latency matters
@t.bogataj - thanks for being more precise.  The cloud backup scenario is a very good use-case - I'll come back to that.  

For clarity, the 64KB issue applies to Windows XP (which will no longer be supported by Microsoft as of next week). To be specific, the issue is that that operating system does not support the TCP Window Scaling option (who knows why, since it was defined in RFC 1323 back in 1992, but anyway...). This means XP only has the basic 16-bit window field, which limits the window to 64KB rather than the 1GB protocol maximum available when scaling is enabled (as it is in every other major OS, including Windows since Vista). You might be tempted to think that the 64KB window on the client would be irrelevant for transactional scenarios, since the data flow is asymmetric (the HTTP requests are minuscule, so we care about the buffer on the server rather than the client). The problem is that if one end doesn't have the Window Scaling option enabled, neither end can use it, so it's a bigger deal than the OS just limiting the client-send window size to 64KB. To be clear, having Window Scaling enabled does not mean the OS won't limit the TCP window size -- in the case of Microsoft Windows Vista and later, the default is to limit it to 16MB per TCP connection.

This was a bigger deal a couple of years ago, when many enterprises still hadn't migrated off XP, but it's still worth considering, since XP is still over a quarter of the desktop market if you believe the browser stats. That share will now presumably fall off more quickly, though it will still be propped up by the pirate market in parts of the world. Anyway, let's see what happens if you have a 64KB window.

For interactive cloud services (which I keep coming back to, since they are by far the main use case today) there is now a throughput limit of 64KB/RTT for EACH TCP CONNECTION. As explained in my previous reply, this only becomes relevant (in terms of affecting overall performance) where the element being fetched is larger than the window size (64KB), which is true of only a small percentage of the elements on a typical page (especially if the main scripts and CSS are already cached locally, which is always the case for frequently used applications).

But what if we are retrieving several large images, all bigger than 64KB? Unusual, but a good illustration. As explained in the article, these images are NOT fetched sequentially -- they are fetched on parallel TCP connections. There's typically a limit of six parallel TCP connections to a particular server (it varies by browser, but modern browsers are converging on six). In a majority of pages the elements will get fetched from multiple different servers, but let's assume the worst case, where they're all coming from one server. Then the throughput is limited to 64KB (512kbit) / 200ms = 2.5Mbit/s per connection × 6 connections = 15Mbit/s.

In other words, on a good broadband connection (say 10Mbps down, 1Mbps up) the limiting factor is the serialization rate of the broadband, even for an old-fashioned operating system.  If you have a faster connection, the impact of using XP trans-ocean or even trans-continent could indeed be material to the experience for pages with many large elements.

On modern operating systems, the TCP stacks will negotiate a window of whatever size is appropriate (up to at least 16MB by default on Windows, or up to 1GB if you wanted, though that would probably break other things and isn't necessary). To put that in perspective: for six concurrent TCP connections on a 200ms RTT, that's 16MB × 8 bits/byte × 6 connections ÷ 200ms = nearly 4Gbit/s, i.e. not even close to a consideration for interactive apps.
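
If you want to play with these numbers, here's the window-limit arithmetic as a minimal Python sketch (the rounding differs slightly from the figures above because of KB/kbit conventions):

```python
# Per-connection TCP throughput ceiling: at most one receive window of
# unacknowledged data can be in flight per round trip.
def window_limited_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / 1000 / rtt_ms  # bytes -> kbit; kbit/ms = Mbit/s

XP_WINDOW = 64 * 1024           # no Window Scaling: 16-bit window field
MODERN_WINDOW = 16 * 1024**2    # e.g. the Windows Vista+ default cap

one_conn = window_limited_mbps(XP_WINDOW, rtt_ms=200)
print(f"XP, one connection:  ~{one_conn:.1f} Mbit/s")       # ~2.6
print(f"XP, six connections: ~{one_conn * 6:.1f} Mbit/s")   # ~15.7
modern_six = window_limited_mbps(MODERN_WINDOW, rtt_ms=200) * 6 / 1000
print(f"Modern OS, six connections: ~{modern_six:.1f} Gbit/s")  # ~4.0
```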

I'm not sure if you read the whole article (the second half of your comment suggests that perhaps you missed the second page?), but anyway, my point was that moving a data center within a radius of 10 or 20ms RTT (1.5 or 3 hours' flight time away) makes no significant difference to these apps, which I stand by. I also explained that for certain applications you absolutely do care about latency, and a cloud node/edge DC (pick your preferred term) is a necessary solution -- and per my reply to @Ipraf there are a bunch of reasons why CDN nodes should also be reasonably close to users. There are also often economic reasons that go well beyond latency for applications like video streaming. But that's another story.

So finally, let's come to the cloud backup use-case, which I like because it's upstream and involves a sustained transfer, potentially from an XP user. If done over FTP (a single TCP connection) you really would be limited to 2.5Mbit/s with a 64KB window size on XP and a 200ms RTT. A move of 20ms RTT is not going to make much difference to that. A more significant impact would be if the user starts next door to the data center, which then moves 20ms away. In that case the XP user would be limited by their 64KB window to 25Mbit/s of throughput. If we move to a modern OS, that 25Mbit/s becomes 6.5Gbit/s, which is more throughput than most users are likely to have available on their network.

Now, for fun, let's imagine you were coding the next Dropbox client: you might choose not to limit yourself to a single TCP connection. Still not something that would make a difference to data center location in the sense of the article, but no doubt an interesting consideration if you're trying to boost performance for XP users across an ocean.

Anyway, that is probably a longer response than you were expecting.  Hope it was useful.  Thanks again for stimulating the discussion.

 

Philip
brookseven, User Rank: Light Sabre
4/1/2014 | 10:51:13 AM
Re: Latency matters
One more thing left out here is the asymmetric nature of most consumer connections. Low-speed upstream connections act like delay. Spyware and other monitoring software can consume upstream bandwidth and so, in effect, add more delay. It was one of the things we faced when we were doing initial FiOS installs: we had to clean things up so that users would be able to get a speed test equal to the speed they purchased. So when we talk about this topic, just remember that an endpoint may be carrying more traffic than you might think.

seven

 
t.bogataj, User Rank: Light Sabre
4/1/2014 | 1:48:36 AM
Re: Latency matters
If my thin client (PC) runs Windows, then the default TCP WS will be 64kB. Unless I use another OS and know how to tweak it, I will be seriously limited by RTT alone. With 64kB and 200ms, I cannot possibly go beyond 2.56Mb/s. If I use a cloud storage service, my user experience will be lousy indeed: transfer of a 10MB file will never take less than 31 seconds. Moving the DC closer and reducing latency will affect my QoE. Or the operator's ARPU: I may change my OTT provider.

So it is not an issue of buffering (only), or the cost of RAM.

Another unaddressed point is that the effect of latency depends on the specific use case. See, for example, http://w2020.carina.uberspace.de/wordpress/wp-content/uploads/2013/10/12_Walter_Haeffner_Vodafone.pdf [given for reference only; I am neither a Vodafone employee nor customer].

T.

 
philipcarden, User Rank: Blogger
3/31/2014 | 9:56:28 PM
Re: Latency matters

@t.bogataj - I was waiting for someone to raise the bandwidth*delay question, so thanks for doing so. 

Quick context for anyone who is not familiar: as with any connection-oriented protocol, the maximum throughput of a TCP connection is not just a function of the size of the pipe, but also of the ability of the end systems to buffer and process the amount of unacknowledged data that corresponds to keeping the pipe full.

The amount of buffer required to keep the pipe full is equal to the product bandwidth × delay. As an example, if there's a one-second round-trip delay between two systems, then the sending system needs to store one second of data before it starts receiving acknowledgements. So on a 1Gbit/s link, that would require a buffer of one gigabit of data (125MB). TCP attempts to ensure that the rate at which each system sends data does not exceed the buffer of the receiving system by using the TCP window size mechanism.

So what is the impact of this on the 1MB page example used in the article? Not much. There's only 1MB of data to be transferred, with a maximum element size of 100KB or so per connection. There's no buffer pressure, so data transfer should occur at line rate, less any queuing.

Where bandwidth × delay is a consideration is for larger transfers. For example, if we had 10GB to transfer over a 10Gbit/s link, it is obviously going to take over 8 seconds. If the round-trip time is 100ms, then we would need a one-gigabit (125MB) buffer to maintain throughput at line rate. If we moved the data center 10ms of RTT away, we'd need another 12.5MB of buffer allocation.
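
For anyone who wants to check that arithmetic, a minimal sketch:

```python
# Bandwidth-delay product: the buffer needed to keep a pipe full is the
# amount of data in flight awaiting acknowledgement, i.e. rate * RTT.
def bdp_megabytes(rate_gbps: float, rtt_ms: float) -> float:
    bits_in_flight = rate_gbps * 1e9 * (rtt_ms / 1000)
    return bits_in_flight / 8 / 1e6  # bits -> bytes -> MB

print(bdp_megabytes(1, 1000))                           # 1Gbit/s, 1s RTT -> 125.0MB
print(bdp_megabytes(10, 100))                           # 10Gbit/s, 100ms -> 125.0MB
print(bdp_megabytes(10, 110) - bdp_megabytes(10, 100))  # extra 10ms -> 12.5MB more
```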

This used to be a much bigger deal because memory was much more expensive (and also because older operating systems did a poor job of auto-scaling window size).  Today it is much more likely that memory is available to do the job required.  But the point is taken that it can still be a consideration, especially for east-west traffic.

Thanks for raising the topic.

philipcarden, User Rank: Blogger
3/31/2014 | 9:49:21 PM
Re: Not all consumers of data are human.
@t.bogataj - serialization delay is a function of the slowest link.  It is also true (but a separate problem) that the total throughput of a particular TCP connection is limited by the round-trip time and available buffer, so if you want to sustain a particular throughput you need to have the memory (and TCP Window size) to achieve that on the two ends of the connection.  And granted, that may not always be the case.  

However, as I'll explain in response to your other post, this need not affect the experience of cloud services users (i.e. thin client).

Thanks for the post.

 
philipcarden, User Rank: Blogger
3/31/2014 | 5:39:33 PM
Re: Not all consumers of data are human.
@brookseven, yes this is true.  Apart from reliability and availability considerations there are a raft of other factors that drive the choice of data center locations - I listed a few of them at the start.  Couldn't fit all that in one piece though ;)
t.bogataj, User Rank: Light Sabre
3/31/2014 | 11:30:40 AM
Re: Not all consumers of data are human.
Philip,

Your statement "whether you're 1km apart or 1000km apart makes no difference - it's a function of the slowest link" is wrong. With TCP, it's a function of RTT.

Among other reasons, moving DCs closer to end users (mini DCs, distributed DC... whatever you call it) makes sense because it decreases latency. And thus improves throughput. Which improves QoE. Which, eventually, impacts ARPU.

T.
brookseven, User Rank: Light Sabre
3/31/2014 | 9:24:17 AM
Re: Not all consumers of data are human.
One thing about apps and locality, even for well-architected SaaS services, is redundancy. If you are operating a mission-critical app then you need to plan for the failure of an entire data center in a natural disaster. In the case of the service that I ran, we had customers active on both coasts simultaneously, so that a failure looked like a capacity and DNS change.

seven

 
t.bogataj, User Rank: Light Sabre
3/31/2014 | 9:13:55 AM
Latency matters

Latency does NOT mean only latency.

The column focuses (mostly) on the web-browsing experience and the (in)efficient use of HTTP. But the issue is not just our perception of responsiveness; it is throughput.

The column completely overlooks the fact that TCP throughput is determined by round-trip time (RTT). For a lossless connection, the TCP throughput will be

throughput = WS / RTT

where WS is the TCP window size. For example, with WS = 1MB and RTT = 200ms, you get a throughput of 40Mb/s -- even on a 10Gb/s link.

So as long as we use TCP, latency (or RTT) is critical for data-intensive applications. We're far from any wide adoption of other reliable L4 protocols (QUIC merits only a mention) and will have to live with TCP.

So -- keep considering latency when planning your networks.

T.
