
By the Numbers: Google's Cloud Advantage

Brian Santo
7/25/2016

Customers of any cloud should cheer any new installation, but for companies that prize consistent cloud latency performance throughout the day, Google's (Nasdaq: GOOG) intent to establish almost a dozen new "cloud regions" should be especially welcome. (See Google Launches West Coast Cloud Region, Natural Language Tools.)

The term "cloud" suggests a certain nebulousness, but the cloud is anything but. Where any given cloud provider locates its physical facilities and how many facilities that provider chooses to run are vitally important. Every signal still needs to traverse a wire, and distance still has a direct relationship with time. Physical proximity will always be a virtue, yet the biggest cloud operators are coming up short on physical presence.

"The Internet is not in enough places," opined Phill Lawson-Shanks, chief architect at EdgeConneX Inc., in Friday's Upskill U webinar, "The Future of Metro Data Center Interconnect." Amazon, Facebook, Google and other large cloud companies operate only a relative few hyperscale data centers each, and with the way Internet traffic patterns are developing, their risk of running into congestion at peering points (including at the network edge) keeps rising, Lawson-Shanks explained. (See The Future of the Metro Data Center Interconnect.)

One strategy is to peer with companies that specialize in edge connectivity; another is to increase the number of physical facilities you operate. Of the biggest cloud operators, Microsoft is most intent on running smaller regional data centers. Google is now heading in that direction too.

Three of the key metrics in cloud performance are latency, throughput and availability. Light Reading has focused on latency for a few reasons: it is exceedingly important for video, it matters greatly for other web applications, and tracking a single metric keeps the comparisons relatively simple.

We looked at the latency of Google Compute Engine in the US Central region. Google's cloud has a latency profile that is unique among the largest providers. Latency on any cloud fluctuates, sometimes wildly, whether you break it down by month, day, hour or even minute. Google's is the exception: its latency tends to show far less pronounced peaks and valleys.

Clouds are subject to spikes in latency at certain times in the day, and often those spikes are experienced by multiple clouds simultaneously. Why?

The cloud is heavily dependent on peering, as Lawson-Shanks pointed out. There are few connectivity nodes in any given region, and multiple clouds are likely to peer through those same nodes. So when congestion occurs at one of those nodes, every cloud connecting to it is going to experience it.


Want to know more about the latest developments in T&M, service assurance, monitoring and other test issues? Check out our dedicated test channel here on Light Reading.


But among the largest clouds, Google tends to be immune to that one particular cause of congestion because it owns most of its own fiber.

Google does not always have the best latency performance when compared with AWS and Microsoft Azure clouds across different geographical regions, although -- depending on which region you look at, and the time of day -- it sometimes does. Still, it is usually among the best three or four on latency wherever it does business. And when latency spikes hit several providers at once, Google almost always has the best latency performance during the spike.

The biggest, most experienced cloud customers all buy access from multiple cloud providers, so that they can average out performance variations from the different cloud providers by distributing traffic among them.
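One common way to "average out" performance variations, sketched below under assumed conditions, is to weight traffic toward whichever provider has been showing the lowest recent latency. The provider names and latency figures here are purely illustrative, not actual measurements, and real multi-cloud routing is done by DNS-based or load-balancer-based traffic steering rather than a function call.

```python
import random

def latency_weights(latencies_ms):
    """Convert per-provider latency readings into routing weights.

    Lower latency -> higher weight (simple inverse-latency weighting).
    `latencies_ms` maps a provider name to its recent average latency.
    """
    inverse = {p: 1.0 / ms for p, ms in latencies_ms.items()}
    total = sum(inverse.values())
    return {p: w / total for p, w in inverse.items()}

def pick_provider(latencies_ms):
    """Choose a provider at random, biased toward lower-latency clouds."""
    weights = latency_weights(latencies_ms)
    providers = list(weights)
    return random.choices(providers, weights=[weights[p] for p in providers])[0]

# Hypothetical figures for illustration only.
observed = {"aws": 42.0, "azure": 55.0, "gce": 38.0}
print(latency_weights(observed))
```

Because the choice is probabilistic rather than winner-take-all, a momentary spike at one provider shifts traffic away gradually instead of stampeding every request onto a single cloud.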

Google -- like Microsoft Azure, Facebook, IBM Softlayer and Oracle -- is playing catch-up with Amazon Web Services (AWS). But Google, with its expansion and its unique latency profile, stands a good shot at making itself a top second option as customers evaluate their cloud access blends.

This analysis was based on latency performance measurements provided by Cedexis of the AWS, Microsoft Azure and Google Compute Engine clouds in several regions in the US. The measurements were of a 24-hour period on July 21.

Cedexis measures every cloud instance in the world billions of times a day from every ISP and cable network. About 7 billion measurements a day are taken of latency, throughput and availability to each of these clouds using Real User Measurements (RUM).
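Real User Measurements come from timing code running in actual end users' browsers, so they reflect real ISPs and networks rather than a single test vantage point. As a rough illustration of what one latency sample looks like, here is a minimal active probe that times a TCP connect to an endpoint; this is a simplified sketch of the general idea, not Cedexis's methodology, and the defaults are assumptions.

```python
import socket
import time

def tcp_latency_ms(host, port=443, timeout=2.0):
    """Measure one TCP connect round trip to an endpoint, in milliseconds.

    A crude single-vantage-point probe; RUM systems instead gather
    many such timings from JavaScript in real users' browsers.
    Returns None if the endpoint is unreachable within the timeout.
    """
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; we only wanted the timing
    except OSError:
        return None
    return (time.perf_counter() - start) * 1000.0
```

Aggregating millions of samples like this per region and per hour is what makes it possible to see the daily latency spikes, and Google's relative flatness, described above.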

— Brian Santo, Senior Editor, Components, T&M, Light Reading

Pete Mastin
7/26/2016 | 12:33:28 PM
As the old saying goes....
There is no Cloud; it's just someone else's data center. The addendum is "Located somewhere specific and connected to specific ISPs with very specific peering and latency characteristics."

The closer to the edge (eyeballs) that clouds go, the more they start to look like Content Delivery Networks (CDNs) with compute power. Very interesting convergence. Nice article, Brian.

 
Faisal Khan
7/26/2016 | 10:24:38 AM
owning fiber is good ?
So shall we conclude that owning fiber is better than leasing transport?