
In a Cloud Services World, Data Center Location is NOT About Latency

Philip Carden
3/28/2014

Reliable and low-cost green power, cool climate, physical security, geographic and political stability, and access to skilled labor. Find those things, plus existing (or buildable) fiber infrastructure giving cost-effective, protected dark fiber or wavelength access to key IXPs (Internet exchange points), and you have yourself a data center location.

OK, it has to be on the right continent, but, apart from that, latency should not be a consideration. For clarity, I'm using the term data center to refer to industrial-scale, dedicated, secure facilities (as distinct from server rooms).

Living in the past
Before we talk protocols, let's talk people: We're all living in the past. About 80 milliseconds (ms) in the past to be exact, which is the time it takes for our brains and nervous systems to synchronize stimuli arriving on different neural paths of different latencies.

If you see a hand clap, you perceive the sound and sight at the same time even though the sound takes longer to arrive and to process. Your brain allows itself 80ms or so to reassemble events correctly. That's why a synchronization delay between video and audio suddenly becomes annoying if it's more than 80ms -- your built-in sensory auto-correct flushes its proverbial buffer.

That provides a bit of perspective -- 10ms just doesn't matter. So we can ignore several often-cited contributors to latency: CPE and network packet processing times (tens or hundreds of microseconds); packet latency due to serialization (about 1ms for a 1,500-byte packet on a 10Mbit/s link); even the user-plane radio latency in LTE (less than 10ms, assuming no radio congestion).

What really matters are three things: server response time; network queuing (radio or IP); and speed-of-light propagation in fiber, which is negligible across town, about 60ms round-trip across the Atlantic (London-New York), and about 120ms round-trip across the Pacific (Sydney to San Jose).
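As a rough sketch (in Python, taking light in fiber as roughly 200,000km/s; the fiber-path lengths below are assumptions for illustration, not measured routes), the propagation and serialization numbers above fall straight out of the arithmetic:

```python
# Back-of-the-envelope latency contributors (illustrative numbers only).

C_FIBER_KM_PER_MS = 200.0  # light in fiber covers roughly 200km per millisecond


def fiber_rtt_ms(path_km: float) -> float:
    """Round-trip propagation delay over a fiber path of the given length."""
    return 2 * path_km / C_FIBER_KM_PER_MS


def serialization_ms(size_bytes: int, link_mbps: float) -> float:
    """Time to clock a packet or file onto a link of the given speed."""
    return size_bytes * 8 / (link_mbps * 1000)


print(f"1,500-byte packet on 10Mbit/s: {serialization_ms(1500, 10):.1f}ms")      # ~1.2ms
print(f"London-New York (~5,600km of fiber): {fiber_rtt_ms(5600):.0f}ms RTT")    # ~56ms
print(f"Sydney-San Jose (~12,000km of fiber): {fiber_rtt_ms(12000):.0f}ms RTT")  # ~120ms
```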

Characterizing a cloud application
Behind the fancy jargon, cloud applications are still mostly about browsers or "apps" fetching pages using HTTP or HTTPS over TCP, with each page made up of sub-elements that are described in the main HTML file. There's no such thing as a typical page, but these days there are likely around a hundred sub-elements totaling around 1MB for a transactional app (think software-as-a-service) and more than twice that for media-dense apps (think social networking).

Of the first megabyte, envisage 100k for HTML and CSS (Cascading Style Sheets), 400k for scripts and 500k for images (with vast variation between sites).

For most sites, each of those sub-elements is fetched separately over its own TCP connection from a URI (Uniform Resource Identifier) identified in the main HTML file.

For frequently used pages, many of the elements will already be locally cached, including CSS and scripts, but the HTML page will still need to be retrieved before anything else can start. Once the HTML starts arriving, the browser can start to render the page (using the locally cached CSS), but only then do the other requests start going out to fetch the meat of the page (mostly dynamic media content, since the big scripts are also normally cached).

A small number of large sites have recently started using the SPDY protocol to optimize this process by multiplexing and compressing HTTP requests and proactively fetching anticipated content. However, this doesn't affect TCP and SSL, which, as we'll see, are the main offenders in the latency department (at least among protocols).

A page-load walkthrough
Let's walk through what happens without the complications of redirects, encryption, network caching or CDNs (we'll come back to them).

After DNS resolution (which is fast, since cached), we'll need two transpacific round trips before we start receiving the page -- one to establish the TCP connection and another for the first HTTP request.

Since the CSS, layout images, and key scripts will be locally cached, the page will start rendering when the HTML starts arriving, after about 300ms (two round trips, each with 120ms of light delay plus, say, 30ms of queuing and server time).

We're not close to done -- now that we have the HTML, we need to go back and fetch all the sub-elements that are not locally cached. If we assume a broadband access speed of 10 Mbit/s to be our slowest link, then we can calculate the serialization delay of files arriving -- minimal for the HTML (16ms if it's 20KB) and a few times that for the first content image (say 80ms for a largish image). We'll clock in at 700ms for the first image to start rendering -- 300ms for the HTML fetch, 300ms for the image fetch, and about 100ms of serialization delay for the HTML and first image file.

The sub-elements are not all fetched in parallel, because each browser limits the number of parallel TCP connections to a particular host (typically to six). But once the first wave of data starts arriving, the limiting factor often becomes the serialization delay in the last mile: if half of the 1MB page is not locally cached, then we've got 500KB of data to transfer. So if all goes very well we could get the page fully rendered in about a second (four round trips at 600ms plus serialization of 500KB, which is 400ms on a 10Mbit/s link).
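To make the walkthrough concrete, here's a minimal model of it in Python. The inputs are the illustrative estimates above, not measurements, and the 130ms case anticipates the data center move discussed next:

```python
# Rough page-load model for the walkthrough above (illustrative estimates only).


def rtt_ms(light_ms: float, overhead_ms: float = 30.0) -> float:
    """One round trip: propagation delay plus queuing and server time."""
    return light_ms + overhead_ms


def full_render_ms(light_ms: float, uncached_kb: float = 500.0,
                   access_mbps: float = 10.0, round_trips: int = 4) -> float:
    """Time to fully render: round trips plus last-mile serialization."""
    serialization_ms = uncached_kb * 8 / access_mbps  # KB*8 = kbits; kbits / (Mbit/s) = ms
    return round_trips * rtt_ms(light_ms) + serialization_ms


# 120ms of round-trip light delay is the trans-Pacific case (Sydney data center);
# 130ms corresponds to moving the data center 10ms RTT further away (Melbourne).
for label, light in [("Sydney", 120.0), ("Melbourne via Sydney", 130.0)]:
    first_response = 2 * rtt_ms(light)  # TCP handshake plus first HTTP request
    print(f"{label}: HTML starts arriving ~{first_response:.0f}ms, "
          f"fully rendered ~{full_render_ms(light):.0f}ms")
```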

Moving the data center
Now let's move our data center from Sydney to Melbourne (a 90-minute flight apart). We've added 10ms per round-trip of light delay (assuming the fiber path is still via Sydney). So it's 320ms instead of 300ms before the user starts getting a response, and 740ms instead of 700ms before the images start rendering. No perceptible difference. Not even close.

What if we have congestion or a slow server response? Everything is much slower and the relative impact of the extra distance is further reduced -- so an even less perceptible difference.

What if we have more round trips? For example, what if there's a redirect (one round trip) or if the page uses SSL (one additional round trip if credentials are already cached)? Each only adds 10ms, so there's still no perceptible difference, especially compared with the bigger difference that comes from traversing the ocean extra times. What if the user is local (in Sydney say), or there's a high proportion of network cache or CDN-served content? Everything is much faster, but the difference between the two data center locations is still the same. Again, no perceptible difference.

Next page: Moving the data center even further away

philipcarden
User Rank: Blogger
4/3/2014 | 4:41:30 PM
Re: Latency matters
@brookseven, this is true - and BitTorrent transfers compound the problem (especially given people's tendency to forget they are active).  Measurement itself is of course a challenge, especially when customers use third-party speed tests - it's hard to know what is actually getting measured.  Also, many people look at ping times, which can be misleading because of the inconsistent treatment and prioritization of ICMP through different routers.
philipcarden
User Rank: Blogger
4/1/2014 | 10:48:08 PM
Re: Latency matters
@t.bogataj - thanks for being more precise.  The cloud backup scenario is a very good use-case - I'll come back to that.  

For clarity, the 64k issue applies to Windows XP (which will no longer be supported by Microsoft next week).  To be specific, the issue is that that operating system does not enable the TCP Window Scaling option (who knows why, since it was defined in RFC 1323 in 1992, but anyway...).  This means XP is left with only a 2-byte field for the window, limiting it to 64k rather than the 1GB protocol maximum available when scaling is enabled (as it is on all other major operating systems, including Windows since Vista).  You might be tempted to think that the 64k window on the client would be irrelevant for transactional scenarios since the data flow is asymmetric (the HTTP requests are minuscule, so we care about the buffer on the server rather than the client).  The problem is that if the Window Scaling option isn't enabled, neither end can use it, so it's a bigger deal than the OS just limiting the client-send window size to 64k.  To be clear, having Window Scaling enabled does not mean the OS doesn't limit the TCP window size – in the case of Microsoft Windows Vista and later, the default is to cap it at 16MB per TCP connection.

This was a bigger deal a couple of years ago, when many enterprises still hadn't migrated from XP, but it's still an issue worth considering, since XP still accounts for over a quarter of the desktop market if you believe browser stats.  That will now presumably fall off more quickly, though it will still be propped up by the pirate market in parts of the world.  Anyway, let's see what happens if you have a 64k window.

For interactive cloud services (which I keep coming back to since it is by far the main use case today) there is now a throughput limit of 64k/RTT for EACH TCP CONNECTION.  As explained in my previous reply, this only becomes relevant (in terms of affecting overall performance) where the element being fetched is larger than the window size (64k), which is a small percentage of the elements on a typical page (especially if the main scripts and CSS are already cached locally, which is always the case for frequently used applications).

But what if we are retrieving several large images, all bigger than 64k? Unusual, but a good illustration.  As explained in the article, these images are NOT fetched sequentially - they are fetched on parallel TCP connections.  There's typically a limit of six parallel TCP connections to a particular server (it varies by browser, but modern browsers are converging on six).  In a majority of pages the elements will get fetched from multiple different servers, but let's assume the worst case, where they're all coming from one server.  Then the throughput is limited to 64kB (512kbits)/200ms = 2.5Mbps x 6 connections = 15Mbps.

In other words, on a good broadband connection (say 10Mbps down, 1Mbps up) the limiting factor is the serialization rate of the broadband, even for an old-fashioned operating system.  If you have a faster connection, the impact of using XP trans-ocean or even trans-continent could indeed be material to the experience for pages with many large elements.

On modern operating systems, the TCP stacks will negotiate a window of whatever is appropriate (up to at least 16MB by default on Windows, or up to 1GB if you wanted, though that would probably break other things and isn't necessary).  To put that in perspective – for six concurrent TCP connections on a 200ms RTT that's 16MB x 8 (bits per byte) x 6 (concurrent connections) ÷ 200ms = nearly 4Gbps, i.e. not even close to a consideration for interactive apps.

I'm not sure if you read the whole article (the second half of your comment suggests that perhaps you missed the second page?), but anyway, my point was that moving a data center within a radius of 10 or 20ms RTT (1.5 or 3 hours' flight time away) made no significant difference to these apps, which I stand by.  I also explained that for certain applications you absolutely do care about latency, and a cloud node/edge DC (pick your preferred term) is a necessary solution – and per my reply to @Ipraf there are a bunch of reasons why CDN nodes should also be reasonably close to users.  There are also often economic reasons that go well beyond latency for applications like video streaming.  But that's another story.

So finally, let's come to the cloud backup use-case, which I like because it's upstream and involves a sustained transfer, potentially from an XP user.  If done over FTP (a single TCP connection) you really would be limited to 2.5Mbps with a 64kB window size on XP with a 200ms RTT.  A move of 20ms RTT is not going to make much difference to that.  A more significant impact would be if the user starts next door to the data center which then moves 20ms away.  In that case the XP user would be limited to 25Mbps throughput by their window size of 64KB.  If we move to a modern OS then that 25Mbps becomes 6.5Gbps which is more throughput than most users are likely to have available on their network. 
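To make that window/RTT arithmetic easy to reproduce, here is a rough Python sketch (idealized: lossless path, the window is the only constraint, slow start ignored; the figures quoted above are simply rounded versions of these):

```python
# Window-limited TCP throughput (idealized: lossless, window is the only cap).


def window_limited_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum throughput of a single TCP connection capped by its window."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6


XP_WINDOW = 64 * 1024           # no window scaling in effect
MODERN_WINDOW = 16 * 1024 ** 2  # e.g. the 16MB default cap on a modern Windows stack

print(window_limited_mbps(XP_WINDOW, 200))           # ~2.6 Mbit/s per connection
print(window_limited_mbps(XP_WINDOW, 200) * 6)       # ~15.7 Mbit/s across 6 connections
print(window_limited_mbps(XP_WINDOW, 20))            # ~26 Mbit/s next door (20ms RTT)
print(window_limited_mbps(MODERN_WINDOW, 20) / 1e3)  # ~6.7 Gbit/s with a 16MB window
```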

Now, for fun, let's imagine you were coding the next Dropbox client: you might choose not to limit yourself to a single TCP connection.  Still not something that would make a difference to data center location in the sense of the article, but no doubt an interesting consideration if you're trying to boost performance for XP users across an ocean.

Anyway, that is probably a longer response than you were expecting.  Hope it was useful.  Thanks again for stimulating the discussion.

 

Philip
brookseven
User Rank: Light Sabre
4/1/2014 | 10:51:13 AM
Re: Latency matters
One more thing left out here is the asymmetric nature of most consumer connections.  Low-speed upstream connections act like delay.  Spyware and other monitoring software can add upstream traffic and, in effect, more delay.  It was one of the things we faced when we were doing initial FiOS installs.  We had to clean things up so that users would be able to get a speedtest equal to the speed they purchased.  So, when we talk about this topic, just remember that an endpoint may be carrying more traffic than you might think.

seven

 
t.bogataj
User Rank: Light Sabre
4/1/2014 | 1:48:36 AM
Re: Latency matters
If my thin client (PC) runs Windows, then the default TCP WS will be 64kB. Unless I use another OS and know how to tweak it, I will be seriously limited by RTT alone. With 64kB and 200ms, I cannot possibly go beyond 2.56Mb/s. If I use a cloud storage service, my user experience will be lousy indeed: transferring a 10MB file will never take less than 31 seconds. Moving the DC closer and reducing latency will affect my QoE. Or the operator's ARPU: I may change my OTT provider.

So it is not an issue of buffering (only), or the cost of RAM.

Another unaddressed point is that the effect of latency depends on the specific use case. See, for example, http://w2020.carina.uberspace.de/wordpress/wp-content/uploads/2013/10/12_Walter_Haeffner_Vodafone.pdf [given for reference only; I am neither a Vodafone employee nor customer].

T.

 
philipcarden
User Rank: Blogger
3/31/2014 | 9:56:28 PM
Re: Latency matters

@t.bogataj - I was waiting for someone to raise the bandwidth*delay question, so thanks for doing so. 

Quick context for anyone who is not familiar: as with any connection-oriented protocol, the maximum throughput of a TCP connection is not just a function of the size of the pipe, but also of the end systems' ability to buffer and process the amount of unacknowledged data that corresponds to keeping the pipe full.

The amount of buffer required to keep the pipe full is equal to the product of bandwidth and delay.  As an example, if there's a one-second round-trip delay between two systems, then the sending system needs to store one second of data before it starts receiving acknowledgements.  So on a 1Gbps link, that would require a buffer of one gigabit of data (125MB).  The TCP protocol attempts to ensure that the rate at which each system sends data does not exceed the buffer of the receiving system by using the TCP window size mechanism.

So what is the impact of this on the 1MB page example used in the article?  Not much.  There's only 1MB of data to be transferred, with a max request size of 100k or so per connection.  There's no reason for buffer pressure and so data transfer should occur at line rate less any queuing.

Where bandwidth*delay is a consideration is for larger transfers.  For example if we had 10GB to transfer over a 10Gbps link, this is obviously going to take over 8 seconds to transfer.  If the round trip time is 100ms then we would need a one gigabit (125MB) buffer to maintain throughput at the line rate.  If we moved the data center 10ms RTT away we'd need another 12.5MB of buffer allocation.  
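In code form, a rough sketch of those same sums (nothing more than the arithmetic above):

```python
# Bandwidth-delay product: buffer needed to keep a pipe full at line rate.


def bdp_megabytes(link_gbps: float, rtt_ms: float) -> float:
    """Unacknowledged data in flight when a link runs at full rate."""
    return link_gbps * 1e9 * (rtt_ms / 1000) / 8 / 1e6


print(bdp_megabytes(1, 1000))  # 1Gbps at a 1s RTT      -> 125.0 MB
print(bdp_megabytes(10, 100))  # 10Gbps at 100ms RTT    -> 125.0 MB
print(bdp_megabytes(10, 10))   # each extra 10ms of RTT -> another 12.5 MB
```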

This used to be a much bigger deal because memory was much more expensive (and also because older operating systems did a poor job of auto-scaling window size).  Today it is much more likely that memory is available to do the job required.  But the point is taken that it can still be a consideration, especially for east-west traffic.

Thanks for raising the topic.

philipcarden
User Rank: Blogger
3/31/2014 | 9:49:21 PM
Re: Not all consumers of data are human.
@t.bogataj - serialization delay is a function of the slowest link.  It is also true (but a separate problem) that the total throughput of a particular TCP connection is limited by the round-trip time and available buffer, so if you want to sustain a particular throughput you need to have the memory (and TCP Window size) to achieve that on the two ends of the connection.  And granted, that may not always be the case.  

However, as I'll explain in response to your other post, this need not affect the experience of cloud services users (i.e. thin client).

Thanks for the post.

 
philipcarden
User Rank: Blogger
3/31/2014 | 5:39:33 PM
Re: Not all consumers of data are human.
@brookseven, yes this is true.  Apart from reliability and availability considerations there are a raft of other factors that drive the choice of data center locations - I listed a few of them at the start.  Couldn't fit all that in one piece though ;)
t.bogataj
User Rank: Light Sabre
3/31/2014 | 11:30:40 AM
Re: Not all consumers of data are human.
Philip,

Your statement "whether you're 1km apart or 1000km apart makes no difference - it's a function of the slowest link" is wrong. With TCP, it's a function of RTT.

Among other reasons, moving DCs closer to end users (mini DCs, distributed DC... whatever you call it) makes sense because it decreases latency. And thus improves throughput. Which improves QoE. Which, eventually, impacts ARPU.

T.
brookseven
User Rank: Light Sabre
3/31/2014 | 9:24:17 AM
Re: Not all consumers of data are human.
One thing about apps and locality, even for well-architected SaaS services, is redundancy.  If you are operating a mission-critical app then you need to plan for the failure of an entire data center from a natural disaster.  In the case of the service that I ran, we had customers active on both coasts simultaneously, so that a failure looked like a capacity and DNS change.

seven

 
t.bogataj
User Rank: Light Sabre
3/31/2014 | 9:13:55 AM
Latency matters

Latency is NOT only about responsiveness.

The column focuses (mostly) on the web-browsing experience and the (in)efficient use of HTTP. But the issue is not our perception of responsiveness; it is throughput.

The column completely overlooks the fact that TCP throughput is determined by round-trip time (RTT). For a lossless connection, the TCP throughput will be

throughput = WS / RTT

where WS is TCP window size. For example, with WS=1MB and RTT = 200ms, you get the throughput of 40Mb/s -- even on a 10Gb/s link.

So as long as we use TCP, latency (or RTT) is critical for data-intensive applications. We're far from any wide adoption of other reliable L4 protocols (QUIC barely gets a mention) and will have to live with TCP.

So -- keep considering latency when planning your networks.

T.
