IPv6 Rapid Deployment (RD) Performance
Cisco's CRS-1 forwarded 79.6Gbit/s across 1 million IPv6 RD tunnels to aid in quickly migrating customer networks.
There are several technologies designed to ease the migration from IPv4 to IPv6. The "right" one depends on the use case, but it often boils down to a simpler question: which parts of the network will run IPv4 and which will run IPv6? We reported on Cisco's Stateful NAT64 Performance to document the technology required when IPv4 endpoints communicate with IPv6 endpoints.
What about when a new customer plans to access IPv6-based services, with an IPv6 address, but is connected to an IPv4 access network? Expecting these scenarios to occur frequently, the IETF defined IPv6 Rapid Deployment (IPv6 RD). The technology is straightforward: IPv6 packets are encapsulated with IPv4 headers, and no control plane is required. The idea is for residential gateway routers to implement IPv6 RD on the customer side, and for these IPv6 RD tunnels to converge on a more powerful platform in the provider's network, which decapsulates the IPv4 packets and routes them based on their IPv6 destinations.
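What makes the scheme stateless is that each customer's IPv6 prefix is derived algorithmically by embedding the gateway's IPv4 address after the operator's 6rd prefix, so the tunnel endpoint can be computed from the packet itself. As a rough illustration (not from the test itself), here is a minimal sketch of that mapping, assuming a hypothetical operator 6rd prefix of 2001:db8::/32 with the full 32-bit IPv4 address embedded:

```python
import ipaddress

def derive_6rd_prefix(sp_prefix: str, ipv4_addr: str,
                      ipv4_mask_len: int = 0) -> ipaddress.IPv6Network:
    """Derive a customer's delegated IPv6 prefix by appending the
    (suffix of the) gateway's IPv4 address to the operator's 6rd prefix."""
    prefix = ipaddress.IPv6Network(sp_prefix)
    v4 = int(ipaddress.IPv4Address(ipv4_addr))
    # Operators may factor out common high-order IPv4 bits; 0 embeds all 32.
    v4_bits = 32 - ipv4_mask_len
    v4_suffix = v4 & ((1 << v4_bits) - 1)
    new_len = prefix.prefixlen + v4_bits
    base = int(prefix.network_address) | (v4_suffix << (128 - new_len))
    return ipaddress.IPv6Network((base, new_len))

# Hypothetical values: 6rd prefix 2001:db8::/32, gateway IPv4 192.0.2.1
print(derive_6rd_prefix("2001:db8::/32", "192.0.2.1"))
# → 2001:db8:c000:201::/64
```

Because the IPv4 address is recoverable from the IPv6 prefix, the border relay needs no per-tunnel state, which is what let the test scale the tunnel count so high.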
The implications are that there would be additional overhead in the IPv4 network, and the gateways used would still have IPv4 addresses, but this allows operators to ramp up IPv6 deployments even where the access or aggregation network is IPv4 based -- hence "rapid deployment."
Cisco says it would like to be ready to support its customers in any migration scenario, so IPv6 RD had to be included. We reused the setup we had already created with the CRS-1, loaded with four CGSE modules and connected with 4 x 10 Gigabit Ethernet interfaces, to again reach line rate; the more interesting question was how many residential gateways the system could scale to. Cisco claimed that 1 million tunnels would not be a problem. To test such a high number of residential users, we emulated them with our Ixia equipment running IxNetwork. Behind the 1 million residential gateways, evenly spread across the four physical ports, we emulated 1 million users -- one user behind each gateway.
We sent bidirectional traffic in pairs between these 1 million users and 20,000 emulated IPv6-based servers, using an IMIX of 122-, 512- and 1,500-byte frames in a 7:4:1 ratio. Because the IPv6 RD encapsulation adds overhead on the IPv4 side, we focused on the data rates on the native IPv6 interfaces. These interfaces, connected via Cisco's Nexus 5548, each transmitted a full 10Gbit/s and received 9.9Gbit/s. In total we generated 79.6Gbit/s of traffic. The good news for Cisco was that no frames were lost. Given the encapsulation and decapsulation, we were also interested in recording latency, expecting it to be lower than the NAT64 latency since IPv6 RD is stateless. The results are shown below.
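The overhead accounting can be sketched with simple arithmetic, assuming the encapsulation adds a 20-byte IPv4 header (no options) to every frame:

```python
# IMIX mix used in the test: (frame size in bytes, weight)
imix = [(122, 7), (512, 4), (1500, 1)]
total_weight = sum(w for _, w in imix)

# Weighted average frame size on the native IPv6 side...
avg_v6 = sum(size * w for size, w in imix) / total_weight
# ...and on the tunneled IPv4 side, assuming +20 bytes of IPv4 header.
avg_v4 = sum((size + 20) * w for size, w in imix) / total_weight

overhead_pct = (avg_v4 / avg_v6 - 1) * 100
print(f"avg IPv6 frame: {avg_v6:.1f} B, tunneled: {avg_v4:.1f} B, "
      f"overhead: {overhead_pct:.1f}%")
```

For this mix the average native frame is roughly 367 bytes, so the fixed 20-byte header costs around 5% extra capacity on the IPv4 side -- which is why the line-rate measurement belongs on the native IPv6 interfaces.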
Finding no issues, Cisco proved it is ready to help operators deploy IPv6 rapidly, scaling to a million customers, each potentially serving multiple users, all communicating simultaneously.
Next Page: IPv6 Dual Stack Performance
Previous Page: Stateful NAT64 Performance
Back to the Cisco Test Main Page