
Brian Santo, Senior editor, Test & Measurement / Components, Light Reading

July 25, 2016

By the Numbers: Google's Cloud Advantage

Customers of any cloud should cheer any new installation, but for companies that prize consistent cloud latency performance throughout the day, Google (Nasdaq: GOOG)'s intent to establish almost a dozen new "cloud regions" should be especially welcome. (See Google Launches West Coast Cloud Region, Natural Language Tools.)

The term "cloud" suggests a certain nebulousness, but the cloud is anything but. Where any given cloud provider locates its physical facilities and how many facilities that provider chooses to run are vitally important. Every signal still needs to traverse a wire, and distance still has a direct relationship with time. Physical proximity will always be a virtue, yet the biggest cloud operators are coming up short on physical presence.

"The Internet is not in enough places," opined Phill Lawson-Shanks, chief architect at EdgeConneX Inc. , in Friday's Upskill U webinar, "The Future of Metro Data Center Interconnect." Amazon, Facebook, Google and other large cloud companies operate only a relative few hyperscale data centers each, and with the way Internet traffic patterns are developing, their risk of running into congestion at peering points (including at the network edge) keeps rising, Lawson-Shanks explained. (See The Future of the Metro Data Center Interconnect.)

One strategy is to peer with companies that specialize in edge connectivity; another is to increase the number of physical facilities you operate. Of the biggest cloud operators, Microsoft is the most intent on running smaller regional data centers. Google is now heading in that direction too.

Three of the key metrics in cloud performance are latency, throughput and availability. Light Reading has been focusing on latency for a few reasons: it is exceedingly important for video, it matters greatly for other web applications, and concentrating on one important metric keeps the comparisons relatively simple.

We looked at the latency of Google Compute Engine in the US Central region. Google's cloud has a latency performance characteristic that's unique among the largest cloud providers. Latency on any cloud fluctuates, sometimes wildly, whether you break it down by month, day, hour or even minute. Google's, by contrast, tends to show far less pronounced peaks and valleys.

Clouds are subject to spikes in latency at certain times in the day, and often those spikes are experienced by multiple clouds simultaneously. Why?

The cloud is heavily dependent on peering, as Lawson-Shanks pointed out. There are few connectivity nodes in any given region, and multiple clouds are likely to peer through those same nodes. So when congestion occurs at one of those nodes, every cloud connecting to it is going to experience it.
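One way to see shared peering congestion in data: if two clouds route through the same congested node, their latency spikes line up in time, and the correlation between their latency series runs high. The sketch below illustrates the idea with made-up provider names and simulated numbers, not actual measurements.

```python
# Illustration only: if two clouds peer through the same congested node,
# their latency spikes coincide, so their latency series correlate highly.
# Provider names and all numbers below are hypothetical.
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Simulated round-trip latencies (ms), one sample per minute.
# cloud_a and cloud_b both spike at minutes 3-4 -- the signature of
# congestion at a peering node they share.
cloud_a = [42, 41, 43, 95, 90, 44, 42]
cloud_b = [55, 54, 56, 120, 110, 57, 55]
# cloud_c reaches the region over its own fiber and stays flat.
cloud_c = [48, 50, 47, 49, 48, 50, 47]

print(round(correlation(cloud_a, cloud_b), 2))  # near 1.0: shared spike
print(round(correlation(cloud_a, cloud_c), 2))  # near zero: independent
```

A provider that owns its fiber, as Google largely does, simply isn't exposed to the congested node, which is why its series stays flat while the others spike together.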


But among the largest clouds, Google tends to be immune to that one particular cause of congestion because it owns most of its own fiber.

Google does not always have the best latency performance compared with the AWS and Microsoft Azure clouds in different geographical regions, although, depending on which region you look at and the time of day, it sometimes does. Still, it is usually among the best three or four on latency wherever it does business. And when latency spikes hit several providers at once, Google almost always has the best latency performance during the spike.

The biggest, most experienced cloud customers all buy access from multiple cloud providers, so that they can average out performance variations from the different cloud providers by distributing traffic among them.
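The multi-cloud strategy described above can be sketched in a few lines: keep a sliding window of recent latency samples per provider and route each request to whichever provider currently looks fastest. This is a minimal illustration, not any particular customer's system; the provider names and numbers are hypothetical.

```python
# A minimal sketch of latency-aware multi-cloud routing: track recent
# latency per provider and send each request to the current best one.
# Provider names and sample values are hypothetical.
from collections import deque

class MultiCloudRouter:
    def __init__(self, providers, window=5):
        # Sliding window of recent latency samples (ms) per provider.
        self.samples = {p: deque(maxlen=window) for p in providers}

    def record(self, provider, latency_ms):
        self.samples[provider].append(latency_ms)

    def pick(self):
        # Lowest average recent latency wins; unsampled providers
        # score 0.0 so they get tried first.
        def score(p):
            s = self.samples[p]
            return sum(s) / len(s) if s else 0.0
        return min(self.samples, key=score)

router = MultiCloudRouter(["aws-us-east", "azure-central", "gce-us-central"])
router.record("aws-us-east", 80)
router.record("azure-central", 120)
router.record("gce-us-central", 55)
print(router.pick())  # gce-us-central: lowest recent latency
```

In practice the same effect is usually achieved with DNS-based or CDN-level traffic steering rather than application code, but the principle — distribute load according to observed performance — is the same.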

Google -- like Microsoft Azure, Facebook, IBM Softlayer and Oracle -- is playing catch-up with Amazon Web Services (AWS). But Google, with its expansion and its unique latency performance profile, stands a good shot at making itself one of the leading second options as customers evaluate their cloud access blends.

This analysis was based on latency performance measurements provided by Cedexis of the AWS, Microsoft Azure and Google Compute Engine clouds in several regions in the US. The measurements were of a 24-hour period on July 21.

Cedexis measures cloud instances worldwide from ISPs and cable networks, taking about 7 billion measurements a day of latency, throughput and availability to each of these clouds using Real User Measurements (RUM).
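The core of a Real User Measurement is simple: time an actual fetch from the user's own vantage point rather than from a synthetic probe in a data center. The sketch below shows just that timing idea; it is not Cedexis's implementation, and a sleep stands in for the network fetch so the sketch runs anywhere.

```python
# Bare-bones sketch of the RUM idea: wrap a real fetch in a timer and
# report the elapsed wall-clock time as one sample. Not Cedexis's code.
import time

def measure(fetch):
    """Return (elapsed_ms, result) for one fetch -- one RUM-style sample."""
    start = time.perf_counter()
    result = fetch()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms, result

# In a browser, `fetch` would download a small test object from each
# cloud region; here a 50 ms sleep stands in for the download.
elapsed, _ = measure(lambda: time.sleep(0.05))
print(f"{elapsed:.1f} ms")
```

Aggregating millions of such samples across real ISPs and cable networks is what distinguishes RUM from synthetic monitoring: the numbers reflect the paths real users actually traverse, congested peering nodes included.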

— Brian Santo, Senior Editor, Components, T&M, Light Reading

About the Author(s)

Brian Santo

Senior editor, Test & Measurement / Components, Light Reading

Santo joined Light Reading on September 14, 2015, with a mission to turn the test & measurement and components sectors upside down and then see what falls out, photograph the debris and then write about it in a manner befitting his vast experience. That experience includes more than nine years at video and broadband industry publication CED, where he was editor-in-chief until May 2015. He previously worked as an analyst at SNL Kagan, as Technology Editor of Cable World and held various editorial roles at Electronic Engineering Times, IEEE Spectrum and Electronic News. Santo has also made and sold bedroom furniture, which is not directly relevant to his role at Light Reading but which has already earned him the nickname 'Cribmaster.'

