
In a Cloud Services World, Data Center Location is NOT About Latency

Philip Carden
3/28/2014

Reliable and low-cost green power, cool climate, physical security, geo and political stability, and access to skilled labor. Find those things, plus existing (or buildable) fiber infrastructure, so that there's cost-effective and protected dark fiber, or wavelength access to key IXPs (Internet exchange points), and you have yourself a data center location.

OK, it has to be on the right continent, but, apart from that, latency should not be a consideration. For clarity, I'm using the term data center to refer to industrial-scale, dedicated, secure facilities (as distinct from server rooms).

Living in the past
Before we talk protocols, let's talk people: We're all living in the past. About 80 milliseconds (ms) in the past to be exact, which is the time it takes for our brains and nervous systems to synchronize stimuli arriving on different neural paths of different latencies.

If you see a hand clap, you perceive the sound and sight at the same time even though the sound takes longer to arrive and to process. Your brain allows itself 80ms or so to reassemble events correctly. That's why a synchronization delay between video and audio suddenly becomes annoying if it's more than 80ms -- your built-in sensory auto-correct flushes its proverbial buffer.

That provides a bit of perspective -- 10ms just doesn't matter. So we can ignore several often-cited contributors to latency: CPE and network packet processing times (tens or hundreds of microseconds); packet latency due to serialization (about 1ms for a 1500-byte packet on a 10Mbit/s link); even the user-plane radio latency in LTE (less than 10ms, assuming no radio congestion).

What really matters are three things: server response time; network queuing (radio or IP); and speed-of-light delay in fiber, which is negligible across town, about 60ms round-trip across the Atlantic (London to New York), and about 120ms round-trip across the Pacific (Sydney to San Jose).
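To put rough numbers on those contributors, here's a back-of-the-envelope sketch (the path lengths and the roughly 200km-per-millisecond figure for light in fiber are illustrative assumptions, not measurements):

# Back-of-the-envelope latency contributors (all figures approximate).
C_FIBER_KM_PER_MS = 200  # light travels roughly 200 km/ms in fiber (about two-thirds of c)

def serialization_ms(packet_bytes, link_mbps):
    # Time to clock a packet onto the wire on the slowest link.
    return packet_bytes * 8 / (link_mbps * 1000)

def fiber_rtt_ms(path_km):
    # Round-trip propagation delay over a fiber path of the given length.
    return 2 * path_km / C_FIBER_KM_PER_MS

print(serialization_ms(1500, 10))   # ~1.2 ms: 1500-byte packet on a 10 Mbit/s link
print(fiber_rtt_ms(6000))           # ~60 ms: assumed ~6,000 km fiber path, London to New York
print(fiber_rtt_ms(12000))          # ~120 ms: assumed ~12,000 km fiber path, Sydney to San Jose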

Characterizing a cloud application
Behind the fancy jargon, cloud applications are still mostly about browsers or "apps" fetching pages using HTTP or HTTPS over TCP, with each page made up of sub-elements that are described in the main HTML file. There's no such thing as a typical page, but these days there are likely around a hundred sub-elements totaling around 1MB for a transactional app (think software-as-a-service) and more than twice that for media-dense apps (think social networking).

Of the first megabyte, envisage 100k for HTML and CSS (Cascading Style Sheets), 400k for scripts and 500k for images (with vast variation between sites).

For most sites, each of those sub-elements is fetched separately over its own TCP connection, from a URI (Uniform Resource Identifier) identified in the main HTML file.

For frequently used pages, many of the elements will already be locally cached, including CSS and scripts, but the HTML page will still need to be retrieved before anything else can start. Once the HTML starts arriving, the browser can start to render the page (using a CSS that is locally cached) but only then do the other requests start going out to fetch the meat of the page (mostly dynamic media content, since the big scripts are also normally cached).

A small number of large sites have recently started using the SPDY protocol to optimize this process by multiplexing and compressing HTTP requests and proactively fetching anticipated content. However, this doesn't affect TCP and SSL, which, as we'll see, are the main offenders in the latency department (at least among protocols).
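To make those mechanics concrete, here's a minimal sketch (Python standard library only; the URL is just a placeholder) that fetches a page's main HTML and counts the tags that would typically trigger separate sub-element requests:

# Count the sub-elements a browser would have to fetch separately.
from html.parser import HTMLParser
from urllib.request import urlopen

class SubResourceCounter(HTMLParser):
    # Tags that usually trigger additional HTTP requests (images, scripts, stylesheets).
    def __init__(self):
        super().__init__()
        self.counts = {"img": 0, "script": 0, "link": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
counter = SubResourceCounter()
counter.feed(html)
print(counter.counts)  # a media-dense page shows dozens of entries; a bare page like this shows almost none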

A page-load walkthrough
Let's walk through what happens without the complications of redirects, encryption, network caching or CDNs (we'll come back to them).

After DNS resolution (which is fast, since cached), we'll need two transpacific round trips before we start receiving the page -- one to establish the TCP connection and another for the first HTTP request.

Since the CSS, layout images, and key scripts will be locally cached, the page will start rendering when the HTML starts arriving, after about 300ms (two round trips, each with 120ms light-delay plus say 30ms of queuing and server time).

We're not close to done -- now that we have the HTML, we need to go back and fetch all the sub-elements that are not locally cached. If we assume a broadband access speed of 10 Mbit/s to be our slowest link, then we can calculate the serialization delay of files arriving -- minimal for the HTML (16ms if it's 20KB) and a few times that for the first content image (say 80ms for a largish image). We'll clock in at 700ms for the first image to start rendering -- 300ms for the HTML fetch, 300ms for the image fetch, and about 100ms of serialization delay for the HTML and first image file.

The sub-elements are not all fetched in parallel, because each browser limits the number of parallel TCP connections to a particular host (typically to six). Once the first wave of data starts arriving, though, the limiting factor often becomes the serialization delay in the last mile: if half of the 1MB page is not locally cached, then we've got 500KB of data to transfer. So if all goes well we could get the page fully rendered in about a second (four round trips at 600ms plus serialization of 500KB, which is 400ms on a 10Mbit/s link).
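The arithmetic in this walkthrough fits in a few lines. This is only a sketch of the back-of-the-envelope model above; the 30ms of queuing/server time, 10Mbit/s access link, 20KB HTML, roughly 100KB first image, and 500KB of uncached content are the assumptions from the text:

# Simplified timing model for the page-load walkthrough (not a browser simulation).
LIGHT_MS = 120            # trans-Pacific round-trip light delay (Sydney to San Jose)
QUEUE_AND_SERVER_MS = 30  # queuing plus server time per round trip

def serialization_ms(kilobytes, link_mbps=10):
    # Time to clock the bytes over the slowest (last-mile) link.
    return kilobytes * 8 / link_mbps

rt = LIGHT_MS + QUEUE_AND_SERVER_MS                  # one round trip, ~150 ms
first_html = 2 * rt                                  # TCP handshake + HTTP request: ~300 ms
first_image = first_html + 2 * rt + serialization_ms(20) + serialization_ms(100)  # ~700 ms
full_render = 4 * rt + serialization_ms(500)         # four round trips + 500 KB of content: ~1,000 ms
print(first_html, first_image, full_render)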

Moving the data center
Now let's move our data center from Sydney to Melbourne (a 90-minute flight apart). We've added 10ms per round-trip of light delay (assuming the fiber path is still via Sydney). So it's 320ms instead of 300ms before the user starts getting a response, and 740ms instead of 700ms before the images start rendering. No perceptible difference. Not even close.
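Plugging the extra 10ms of round-trip light delay into the same back-of-the-envelope figures (again, the article's assumed numbers rather than measurements):

# Sydney vs Melbourne hosting, same assumptions as the sketch above.
rt_sydney, rt_melbourne = 120 + 30, 130 + 30         # light delay plus ~30 ms queuing/server time
print(2 * rt_sydney, 2 * rt_melbourne)               # first response: 300 ms vs 320 ms
print(4 * rt_sydney + 96, 4 * rt_melbourne + 96)     # first image (incl. ~96 ms serialization): ~700 ms vs ~740 ms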

What if we have congestion or a slow server response? Everything is much slower and the relative impact of the extra distance is further reduced -- so an even less perceptible difference.

What if we have more round trips? For example, what if there's a redirect (one round trip) or if the page uses SSL (one additional round trip if credentials are already cached)? Each only adds 10ms, so there's still no perceptible difference, especially compared with the bigger difference that comes from traversing the ocean extra times. What if the user is local (in Sydney say), or there's a high proportion of network cache or CDN-served content? Everything is much faster, but the difference between the two data center locations is still the same. Again, no perceptible difference.

Next page: Moving the data center even further away

Comments (24)
philipcarden
User Rank: Blogger
4/3/2014 | 4:41:30 PM
Re: Latency matters
@brookseven, this is true - and bit torrents compound the problem (especially given people's tendency to forget they are active).  Measurement itself is of course a challenge, especially when customers use third party speed tests - hard to know what is actually getting measured.  Also, many people look at ping times which can be misleading because of the inconsistent treatment and prioritization of ICMP through different routers.
philipcarden
User Rank: Blogger
4/1/2014 | 10:48:08 PM
Re: Latency matters
@t.bogataj - thanks for being more precise.  The cloud backup scenario is a very good use-case - I'll come back to that.  

For clarity, the 64k issue applies to Windows XP (which will no longer be supported by Microsoft next week).  To be specific, the issue is that that operating system does not support the TCP Window Scaling option (who knows why, since it was defined in RFC 1323 in 1992, but anyway...).  This means that XP only has a 2-byte field for the window, so it limits the window to 64k rather than the 1GB protocol limit available when scaling is enabled (as it is on all other major OSes, including Windows since Vista).  You might be tempted to think that the 64k window on the client would be irrelevant for transactional scenarios since the data flow is asymmetric (the HTTP requests are minuscule, so we care about the buffer on the server rather than the client).  The problem is that if you don't have the Window Scaling option enabled, neither end can use it, so it's a bigger deal than the OS just limiting the client-send window size to 64k.  To be clear, having Window Scaling enabled does not mean the OS doesn't limit TCP window size – in the case of Microsoft Windows Vista and later, the default is to limit it to 16MB per TCP connection.

This was a bigger deal a couple of years ago when many enterprises still hadn't migrated from XP, but it's still an issue worth considering, since XP is still over a quarter of the desktop market if you believe browser stats.  That will now presumably fall off more quickly, though it will still be propped up by the pirate market in parts of the world.  Anyway, let's see what happens if you have a 64k window.

For interactive cloud services (which I keep coming back to since it is by far the main use case today) there is now a throughput limit of 64k/RTT for EACH TCP CONNECTION.  As explained in my previous reply, this only becomes relevant (in terms of affecting overall performance) where the element being fetched is larger than the window size (64k), which is a small percentage of the elements on a typical page (especially if the main scripts and CSS are already cached locally, which is always the case for frequently used applications).

But what if we are retrieving several large images all bigger than 64k - unusual, but a good illustration.  As explained in the article, these images are NOT fetched sequentially - they are fetched on parallel TCP connections.  There's typically a limit of six parallel TCP connections to a particular server (varies by browser, but converging on six for modern browsers).  In a majority of pages the elements will get fetched from multiple different servers, but let's assume the worst case where they're all coming from one server.  Then the throughput is limited to 64kB (512kbits)/200ms = 2.5Mbps x 6 connections = 15Mbps. 

In other words, on a good broadband connection (say 10Mbps down, 1Mbps up) the limiting factor is the serialization rate of the broadband, even for an old-fashioned operating system.  If you have a faster connection, the impact of using XP trans-ocean or even trans-continent could indeed be material to the experience for pages with many large elements.

On modern operating systems, the TCP stacks will negotiate a window of whatever is appropriate (up to at least 16MB by default on Windows, or up to 1GB if you wanted, though that would probably break other things and isn't necessary).  To put that in perspective – for six concurrent TCP connections on a 200ms RTT that's 16MB x 8 (bits per byte) x 6 (concurrent connections) ÷ 200ms = nearly 4Gbps, i.e. not even close to a consideration for interactive apps.

I'm not sure if you read the whole article (the second half of your comment suggests that perhaps you missed the second page?) but anyway, my point was that moving a data center within a radius of 10 or 20ms RTT (1.5 or 3 hours flight time away) made no significant difference to these apps, which I stand by.  I also explained that for certain applications you absolutely do care about latency and cloud node/edge DC (pick your preferred term) is a necessary solution – and per my reply to @Ipraf there are a bunch of reasons why CDN nodes should also be reasonably close to users.  There's also often economic reasons that go well beyond latency for applications like video streaming.  But that's another story.

So finally, let's come to the cloud backup use-case, which I like because it's upstream and involves a sustained transfer, potentially from an XP user.  If done over FTP (a single TCP connection) you really would be limited to 2.5Mbps with a 64kB window size on XP with a 200ms RTT.  A move of 20ms RTT is not going to make much difference to that.  A more significant impact would be if the user starts next door to the data center which then moves 20ms away.  In that case the XP user would be limited to 25Mbps throughput by their window size of 64KB.  If we move to a modern OS then that 25Mbps becomes 6.5Gbps which is more throughput than most users are likely to have available on their network. 
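The window-limited throughput figures above are easy to reproduce; this is just the window/RTT formula with the same numbers, nothing more:

# Window-limited TCP throughput: at most one window of unacknowledged data per round trip.
def window_limited_mbps(window_bytes, rtt_ms, connections=1):
    return window_bytes * 8 / 1e6 / (rtt_ms / 1000) * connections

print(window_limited_mbps(64 * 1024, 200))         # ~2.6 Mbit/s: 64kB window, 200ms RTT, one connection
print(window_limited_mbps(64 * 1024, 200, 6))      # ~15.7 Mbit/s: six parallel connections
print(window_limited_mbps(64 * 1024, 20))          # ~26 Mbit/s: 64kB window, 20ms RTT
print(window_limited_mbps(16 * 1024 * 1024, 20))   # ~6.7 Gbit/s: 16MB window, 20ms RTT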

Now, for fun, let's imagine you were coding the next dropbox client: you might choose to not limit yourself to a single TCP connection.  Still not something that would make a difference to data center location in the sense of the article, but no doubt an interesting consideration if you're trying to boost performance for XP users across an ocean. 

Anyway, that is probably a longer response than you were expecting.  Hope it was useful.  Thanks again for stimulating the discussion.

 

Philip
brookseven
User Rank: Light Sabre
4/1/2014 | 10:51:13 AM
Re: Latency matters
One more thing left out here is the asymmetric nature of most consumer connections.  Low-speed upstream connections act like delay.  Spyware and other monitoring software can add upstream traffic, which is effectively more delay.  It was one of the things we faced when we were doing initial FiOS installs.  We had to clean things up so that users would be able to get a speedtest equal to the speed they purchased.  So, when we talk about this topic, just remember that an endpoint may be carrying more traffic than you might expect.

seven

 
t.bogataj
User Rank: Light Sabre
4/1/2014 | 1:48:36 AM
Re: Latency matters
If my thin client (PC) runs Windows, then the default TCP WS will be 64kB. Unless I use another OS and know how to tweak it, I will be seriously limited by RTT alone. With 64kB and 200ms, I cannot possibly go beyond 2.56Mb/s. If I use a cloud storage service, my user experience will be lousy indeed: transferring a 10MB file will never take less than 31 seconds. Moving the DC closer and reducing latency will affect my QoE. Or the operator's ARPU: I may change my OTT provider.

So it is not an issue of buffering (only), or the cost of RAM.

Another unaddressed point is that the effect of latency depends on the specific use case. See, for example, http://w2020.carina.uberspace.de/wordpress/wp-content/uploads/2013/10/12_Walter_Haeffner_Vodafone.pdf [given for reference only; I am neither a Vodafone employee nor customer].

T.

 
philipcarden
User Rank: Blogger
3/31/2014 | 9:56:28 PM
Re: Latency matters

@t.bogataj - I was waiting for someone to raise the bandwidth*delay question, so thanks for doing so. 

Quick context for anyone who is not familiar: as with any windowed, connection-oriented protocol, the maximum throughput of a TCP connection is not just a function of the size of the pipe, but also of the ability of the end systems to buffer and process the amount of unacknowledged data needed to keep the pipe full.

The amount of buffer required to keep the pipe full is equal to the product of bandwidth*delay.  As an example, if there's one second of round-trip delay between two systems, then the sending system needs to store one second of data before it starts receiving acknowledgements.  So on a 1Gbps link, that would require a buffer of one gigabit of data (125MB).  The TCP protocol attempts to ensure that the rate at which each system sends data does not exceed the buffer of the receiving system, by using the TCP Window Size mechanism.

So what is the impact of this on the 1MB page example used in the article?  Not much.  There's only 1MB of data to be transferred, with a max request size of 100k or so per connection.  There's no reason for buffer pressure and so data transfer should occur at line rate less any queuing.

Where bandwidth*delay is a consideration is for larger transfers.  For example if we had 10GB to transfer over a 10Gbps link, this is obviously going to take over 8 seconds to transfer.  If the round trip time is 100ms then we would need a one gigabit (125MB) buffer to maintain throughput at the line rate.  If we moved the data center 10ms RTT away we'd need another 12.5MB of buffer allocation.  
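As a quick sketch of the bandwidth-delay arithmetic in these examples (same figures as above):

# Bandwidth-delay product: the buffer needed to keep a pipe of a given rate full.
def bdp_megabytes(rate_gbps, rtt_ms):
    return rate_gbps * 1e9 * (rtt_ms / 1000) / 8 / 1e6

print(bdp_megabytes(1, 1000))   # 125 MB: 1 Gbit/s link, 1 s RTT
print(bdp_megabytes(10, 100))   # 125 MB: 10 Gbit/s link, 100 ms RTT
print(bdp_megabytes(10, 10))    # 12.5 MB: the extra buffer for another 10 ms of RTT at 10 Gbit/s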

This used to be a much bigger deal because memory was much more expensive (and also because older operating systems did a poor job of auto-scaling window size).  Today it is much more likely that memory is available to do the job required.  But the point is taken that it can still be a consideration, especially for east-west traffic.

Thanks for raising the topic.

philipcarden
User Rank: Blogger
3/31/2014 | 9:49:21 PM
Re: Not all consumers of data are human.
@t.bogataj - serialization delay is a function of the slowest link.  It is also true (but a separate problem) that the total throughput of a particular TCP connection is limited by the round-trip time and available buffer, so if you want to sustain a particular throughput you need to have the memory (and TCP Window size) to achieve that on the two ends of the connection.  And granted, that may not always be the case.  

However, as I'll explain in response to your other post, this need not affect the experience of cloud services users (i.e. thin client).

Thanks for the post.

 
philipcarden
User Rank: Blogger
3/31/2014 | 5:39:33 PM
Re: Not all consumers of data are human.
@brookseven, yes this is true.  Apart from reliability and availability considerations there are a raft of other factors that drive the choice of data center locations - I listed a few of them at the start.  Couldn't fit all that in one piece though ;)
t.bogataj
User Rank: Light Sabre
3/31/2014 | 11:30:40 AM
Re: Not all consumers of data are human.
Philip,

Your statement "whether you're 1km apart or 1000km apart makes no difference - it's a function of the slowest link" is wrong. With TCP, it's a function of RTT.

Among other reasons, moving DCs closer to end users (mini DCs, distributed DC... whatever you call it) makes sense because it decreases latency. And thus improves throughput. Which improves QoE. Which, eventually, impacts ARPU.

T.
brookseven
User Rank: Light Sabre
3/31/2014 | 9:24:17 AM
Re: Not all consumers of data are human.
One thing about apps and locality, even for well-architected SaaS services, is redundancy.  If you are operating a mission-critical app then you need to plan for the failure of an entire data center from a natural disaster.  In the case of the service that I ran, we had customers active on both coasts simultaneously, so that a failure looked like a capacity and DNS change.

seven

 
t.bogataj
User Rank: Light Sabre
3/31/2014 | 9:13:55 AM
Latency matters

Latency does NOT mean only latency.

The column focuses (mostly) on the web-browsing experience and the (in)efficient use of HTTP. But the issue is not our perception of responsiveness; it is throughput.

The column completely overlooks the fact that TCP throughput is determined by round-trip time (RTT). For a lossless connection, the TCP throughput will be

throughput = WS / RTT

where WS is TCP window size. For example, with WS=1MB and RTT = 200ms, you get the throughput of 40Mb/s -- even on a 10Gb/s link.

So as long as we use TCP, latency (or RTT) is critical for data-intensive applications. We're far from any wide adoption of other reliable L4 protocols (QUIC is only mentioned) and will have to live with TCP.

So -- keep considering latency when planning your networks.

T.
