
Network Positioning System

Light Reading
Series Column
2/5/2012

EXECUTIVE SUMMARY: When given access to multiple data centers offering the identical service, Cisco's NPS correctly chose the best-performing option for user traffic.


If cloud services are so important, then so too is their availability, which will require cloud providers to use multiple data centers to address scale, reduce application latency through geographical proximity, and distribute resources. If multiple data centers are a given, resource distribution and customer experience optimization become critical business concerns: Will the data center operators distribute load across the data centers? Will they direct each customer to the data center that gives the best experience based on network proximity or performance?

Cisco says it can arm the network with the intelligence to make these decisions. This is the idea behind Cisco's Network Positioning System (NPS). To see NPS in action we needed two data centers and, as luck would have it, our test setup came equipped with exactly that.

Our intention was to verify that when the same customer requests a service, NPS decides where that request goes -- Data Center 1 or Data Center 2. When we discussed this idea with Cisco, we expected NPS to work based on proximity, but Cisco explained that NPS was built as a customizable tool. We felt it would be more relevant to see NPS choose data centers based on performance -- delay, for example. Cisco agreed and got to work.

The NPS system database was incorporated into our customer-facing CRS-1. Cisco then installed special NPS client software on our client laptop and configured NPS on the ASR 1002 -- the router serving as the Customer Edge (CE). The CRS-1's central role in the NPS setup was to know, at all times, which data center best matched the defined metrics. The client laptop and the ASR 1002 poll the CRS-1 with their preferred metrics and use the CRS-1's response to direct the customer traffic. In our test case, Cisco set up IP SLA measurement probes between an ASR 9010 in each data center and the CE (ASR 1002) to constantly measure delay and report it to the CRS-1.
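The decision logic just described -- probes report per-data-center delay into a database, and clients poll for the current best match -- can be sketched in a few lines. This is a hypothetical illustration of the concept, not Cisco's implementation; the class and method names are our own, and in the real setup the database lives on the CRS-1 and is fed by IP SLA results.

```python
class NpsDatabase:
    """Toy model of the NPS database: maps each data center
    to its latest reported delay measurement, in milliseconds."""

    def __init__(self):
        self.delay_ms = {}

    def report(self, data_center, delay_ms):
        # Invoked when a new probe result arrives for a data center.
        self.delay_ms[data_center] = delay_ms

    def best(self):
        # Answer a client poll: the data center with the lowest
        # currently reported delay wins.
        return min(self.delay_ms, key=self.delay_ms.get)

db = NpsDatabase()
db.report("DC-1", 12.4)   # illustrative numbers, not our measurements
db.report("DC-2", 9.8)
print(db.best())          # prints "DC-2", the lower-delay option
```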

Cisco installed a simple video server in each data center. We connected a laptop client to our ASR 1002 and began requesting video through a Web portal Cisco had set up. At first, the portal chose a different data center almost at random each time we refreshed. We found this was because the latency measurements were extremely close and mildly fluctuating. No problem -- it meant both video servers were working, and we came prepared. We inserted Ixia's shiny new ImpairNet impairment generator between the customer edge ASR 1002 and its upstream CRS-1. This link carried the customer's traffic to both data centers, but a filter on the impairment generator let us add delay only to packets for a given destination. We toggled back and forth: first adding 50 milliseconds to all the IP SLA measurement packets toward Data Center 1, then disabling that and adding the delay toward Data Center 2. Each time we refreshed the video client, it showed video from the data center on the path with the lowest latency.
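The impairment exercise above reduces to a simple property: with two nearly identical paths, injecting 50 ms of delay toward one data center should flip a lowest-delay selection to the other. The sketch below reproduces that logic with made-up numbers (the delay values are illustrative, not our measured results).

```python
def choose(delays_ms):
    """Pick the data center with the lowest measured delay (ms)."""
    return min(delays_ms, key=delays_ms.get)

# Baseline: latencies so close the choice can flip between refreshes.
baseline = {"DC-1": 10.1, "DC-2": 10.3}
print(choose(baseline))        # prints "DC-1", by a hair

# Impair the probes toward DC-1 by 50 ms, as we did with ImpairNet.
impaired = dict(baseline)
impaired["DC-1"] += 50.0
print(choose(impaired))        # prints "DC-2": the selection flips
```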

In addition, we verified that NPS would exclude data centers that were not running the service at all. We used a "CPU hog" on Data Center 1 to disrupt its video server. NPS detected that this virtual server had stopped responding, and the CRS-1 updated its database to drop DC-1 as a viable option for the video service. We refreshed our browser and were consistently directed to Data Center 2.
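This liveness behavior is a second filter on top of the delay comparison: an unresponsive data center leaves the candidate set entirely, regardless of how good its last delay figure was. A minimal sketch of that idea, with hypothetical names and numbers:

```python
def viable_choice(delays_ms, responsive):
    """Choose the lowest-delay data center among those still responding.
    Returns None when no data center offers the service."""
    candidates = {dc: d for dc, d in delays_ms.items() if responsive.get(dc)}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

delays = {"DC-1": 9.0, "DC-2": 11.0}

# Both servers up: the faster path wins.
print(viable_choice(delays, {"DC-1": True, "DC-2": True}))   # prints "DC-1"

# DC-1's video server stops responding (our "CPU hog" scenario):
# it is excluded even though its delay figure is still better.
print(viable_choice(delays, {"DC-1": False, "DC-2": True}))  # prints "DC-2"
```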

For service providers offering cloud services, the ability to optimize the customer experience across geo-redundant or distributed data centers could well be a competitive edge, especially as cloud services become commoditized. It is impressive to see functions that once required complicated traffic-engineering knowledge simplified and repackaged for general consumption.


Next Page: DHCPv6 in the Cloud
Previous Page: IPv6 Dual Stack Performance


