Determine the maximum forwarding performance for mixed IPv4 and IPv6 traffic.
For detailed information on the test setup, please see the complete Methodology, page 20.
In 2004, IPv6 traffic on the Japanese Internet backbone reached 1 percent of all traffic. A worst-case estimate is that IPv6 might account for no more than 15 percent of all Internet traffic by 2010, which is the figure we used in this test. IPv6 is slowly shedding its research-network-only image: Carriers now require it for all new backbone equipment.
In our test, the CRS-1 had to prove that it processes IPv6 forwarding purely in hardware, and that the higher overhead of analyzing IPv6 headers does not lead to any performance issues, specifically in a realistic mixed IPv4 and IPv6 environment. We prepared the Agilent emulators as in the first test case; the only difference was to substitute 15 percent of all traffic with IPv6. (The Internet-like mix for IPv6 also used different packet sizes.)
The CRS-1 clearly proved that it processes IPv6 completely in hardware. In our mixed scenario, the single-chassis system sustained a packet rate of 820 million packets per second (pps) at line rate. The packet rate is slightly lower than in the pure IPv4 scenario because IPv6 packets are longer.
Surprisingly, the system showed 1.83 percent loss in the first test run. Cisco explained that packet flows arriving at the Modular Services Card are processed by a chipset with 188 parallel channels, each of which handles one packet at a time. Under normal conditions, this is far more than enough for wire-speed throughput, as we saw in all the other tests. The forwarding decision for an IPv6 packet takes 20 percent more time than for an IPv4 packet; our latency tests confirmed this value (see below). The scheduler assigning packets to the 188 channels had been optimized to perform at wire speed for most IPv4 and IPv6 combinations, but our 85/15 percent mix happened to hit a corner case.
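The interaction between the traffic mix and the parallel channels can be illustrated with a back-of-the-envelope occupancy model. This is our own toy sketch, not Cisco's actual scheduler design; the only inputs taken from the test are the 188 channels, the 85/15 mix, and the roughly 20 percent longer IPv6 forwarding decision.

```python
# Toy occupancy model (an illustration, not Cisco's scheduler):
# 188 parallel channels, IPv6 lookups take ~1.2x an IPv4 lookup.
CHANNELS = 188
T_V4 = 1.0          # normalized IPv4 forwarding-decision time
T_V6 = 1.2 * T_V4   # IPv6 takes ~20 percent longer (per the latency tests)

def busy_channels(pps_total, v6_share, t_v4=T_V4, t_v6=T_V6):
    """Average number of busy channels needed to sustain pps_total
    (packets per unit time, with the IPv4 lookup time as the unit).
    By Little's law, mean occupancy = arrival rate x mean service time."""
    work_v4 = pps_total * (1 - v6_share) * t_v4
    work_v6 = pps_total * v6_share * t_v6
    return work_v4 + work_v6

# An 85/15 mix raises the average work per packet by 3 percent
# compared with pure IPv4:
mix_cost = 0.85 * T_V4 + 0.15 * T_V6   # ~1.03 units per packet
```

The point of the sketch is only that a mixed workload raises average channel occupancy; whether a particular mix overruns the scheduler depends on implementation details the test report does not disclose.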
Cisco optimized the microcode and installed a patch on the system under test. We reran the test case and verified that the CRS-1 now performed at 100 percent throughput without packet loss. The patch remained installed for all further test cases, to verify that it did not degrade performance in other areas.
Table 1: Mixed IPv4/IPv6 packet rates

| Test run | Transmit Rate (packet/s, total) | Receive Rate (packet/s, total) | IPv6 Transmit Rate (packet/s) | IPv6 Receive Rate (packet/s) | Packet Loss per Second | Receive Bandwidth (L2) |
|---|---|---|---|---|---|---|
| First test run | 819,904,533 | 802,170,750 | 130,840,624 | 115,837,061 | 15,003,563 (1.83 %) | 98.17 % |
| Second test run, after micro-code change | 819,927,902 | 819,927,902 | 130,840,608 | 130,840,608 | 0 | 100 % |
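The loss figures in Table 1 can be cross-checked from the per-protocol columns: in the first run, the lost packets correspond exactly to the gap between IPv6 transmit and receive rates. A quick sanity check using only the numbers from the table:

```python
# Sanity check of the first-run figures from Table 1.
tx_total = 819_904_533   # total transmit rate, packet/s
ipv6_tx  = 130_840_624   # IPv6 transmit rate, packet/s
ipv6_rx  = 115_837_061   # IPv6 receive rate, packet/s

ipv6_loss = ipv6_tx - ipv6_rx            # packets lost per second
loss_pct  = 100 * ipv6_loss / tx_total   # relative to total transmit rate

print(ipv6_loss)           # 15003563, matching the table's loss column
print(round(loss_pct, 2))  # 1.83
```

In other words, all of the lost packets in the first run were IPv6 packets, consistent with the corner case in the IPv6 scheduling path described above.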
The latency tests were run at 98 percent load and showed that the average latency is around 20 percent higher than in a pure IPv4 environment, and that the maximum latency in a multi-chassis environment is 25 percent higher than in a single-chassis system. However, it is important to note that all IPv4 and IPv6 packets were forwarded within at most 0.34 milliseconds; this was the absolute maximum latency, which is a great result for IPv6.
Because of parallel packet processing on the Modular Services Card, we were aware of the theoretical possibility of out-of-sequence packets. EANTC ran a separate test to verify that all packets within each flow were delivered in the correct sequence. In this test case (and in all of the others, too) the CRS-1 maintained the sequence for every flow under all conditions. There were also no stray (misrouted) frames or any other unexpected issues. The implementation deserves credit for its outstandingly precise scheduling in an ultra-high-speed parallel processing environment.