As was the case in our most recent test, we took an up-to-date mix of carrier network requirements and created a test plan that we intended to apply to all vendors participating in the program. Vendors also had the opportunity to define two additional tests of their own, subject to our approval.
With the help of testing vendor Ixia, we then performed the agreed-upon tests at each vendor's lab. The results will be presented in a series of articles right here on Light Reading -- one for each vendor that goes through the tests.
This report will recap the results for Alcatel-Lucent (NYSE: ALU), our latest test participant. Alcatel-Lucent's 7750 SR-12 has been around for some time, and the company showed that adding 100GbE interfaces doesn't change much about the features operators are already familiar with. Alcatel-Lucent sometimes has its own way of doing things, but as you'll read, the company ably took on the challenge of this test.
The contents of this report are as follows:
Test Contents
- Page 2: Alcatel-Lucent's Solution
- Page 3: IP Unicast Forwarding Performance
- Page 4: IP Multicast Forwarding Performance
- Page 5: MPLS Traffic Differentiation
- Page 6: Services Scalability: VPLS
- Page 7: Services Scalability: BGP/MPLS L3VPNs
- Page 8: Transport Scalability: RSVP-TE
- Page 9: Energy Efficiency
- Page 10: Scaled Services
- Page 11: 400Gbit/s Link Aggregation
- Page 12: Conclusion
Testing Info
We worked closely with Ixia Communications to ensure that our scalability goals were reached while meeting our No. 1 goal of testing services realistically. It's also worth reminding those who don't use these tools day to day that the days of so-called "packet blasting" are long past. Thankfully, IxNetwork -- the software we used for all of the tests -- allowed us to emulate more realistic scenarios. MPLS services testing requires intelligence in the tester to represent a virtual network with hundreds of nodes. IxNetwork calculates the resulting traffic to be sent to the device under test across the directly attached interfaces -- including signaling and routing protocol data as well as emulated customer data frames. This way, a reasonably complex environment with hundreds of VPNs and tens of thousands of subscribers can be emulated in a representative and reproducible way. For the 100-Gigabit Ethernet test interface, we used Ixia's K2 HSE100GETSP1. The software used was IxNetwork version 5.70.120.14 and IxOS version 6.00.700.3.
About EANTC
The European Advanced Networking Test Center (EANTC) is an independent test lab founded in 1991 and based in Berlin, conducting vendor-neutral proof of concept and acceptance tests for service providers, governments and large enterprises. EANTC has been testing MPLS routers, measuring performance and interoperability, for nearly a decade at the request of industry publications and service providers.
EANTC's role in this program was to define the test topics in detail, communicate with the vendors, coordinate with the test equipment vendor (Ixia) and conduct the tests at the vendors' locations. EANTC engineered the tests and then extensively documented the results. Vendors participating in the campaign had to submit their products to a rigorous test in a contractually defined, controlled environment. For this independent test, EANTC reported exclusively to Light Reading. The vendors participating in the test did not review the individual reports before their release. Each vendor had the right to veto publication of the test results as a whole, but not to veto individual test cases.
— Carsten Rossenhövel is Managing Director of the European Advanced Networking Test Center AG (EANTC), an independent test lab in Berlin. EANTC offers vendor-neutral network test services for manufacturers, service providers, governments and large enterprises. Carsten heads EANTC's manufacturer testing and certification group and interoperability test events. He has over 20 years of experience in data networks and testing.
Jonathan Morin, EANTC, managed the project, worked with vendors and co-authored the article.
Hello,
The article stated that the 100GigE interface was an SR-10. To my knowledge, this interface is a parallel interface with multiple 10G lanes aggregated to 100G.
Question: is this true? That the "100G" side of the tests was actually multiple 10G physical lanes?
If so, how is this different from the other side of the test, where it was acknowledged that the links were indeed multiple 10G lanes?
I realize SR-10 was used due to the availability of interfaces from ALU suppliers, and that the box is also said to support a CFP LR4 interface, but that interface was not used in the tests. Even though the SR-10 is a "single" interface, it is actually multiple 10G lanes, each over its own multimode fiber.
So, some clarification would be appreciated. It seems to me that this entire test was basically multiple 10G lanes talking to multiple 10G lanes, with one side aggregating the 10G lanes to make a "100G" port and the other side treating the 10G lanes as independent oversubscribed ports.
Still an impressive achievement from the box and the tests. However, it seems clear the electrical IO was all still running at 10G.
When will we see a 4x25G test?
sailboat