Light Reading

10G Ethernet Switches Pass the Test

Craig Matsumoto
News Analysis
1/31/2011

Some recent tests show that 10Gbit/s switch chips can handle high-end data center requirements. So, how long might it be before systems vendors stop doing their own ASICs?

Nick Lippis, principal analyst for Lippis Enterprises, thinks that's a valid question. His company is releasing data from a test of seven vendors' 10Gbit/s switches, and the results seem to confirm that merchant semiconductors do just fine in terms of throughput, latency, power consumption and action under duress (that is, working at 150 percent of capacity).

Lippis will present his findings via a WebEx event on Tuesday, Feb. 1, at noon ET.

Cisco Systems Inc. (Nasdaq: CSCO) is the vendor that's most famously stuck with its own ASICs, but Alcatel-Lucent (NYSE: ALU) and Juniper Networks Inc. (NYSE: JNPR) tend to use their own chips as well. "When those who spin their own ASICs start to see what's being done with merchant chips, we'll have to see whether they'll start to go with the Broadcom Corp. (Nasdaq: BRCM), Marvell Technology Group Ltd. (Nasdaq: MRVL) or Fulcrum Microsystems Inc. kinds of fabrics," Lippis tells Light Reading.

Lippis had invited Light Reading for a peek at the tests, which were conducted in December at an Ixia (Nasdaq: XXIA) facility called iSimCity. Lippis had limited resources available, so some big names such as Brocade Communications Systems Inc. (Nasdaq: BRCD) and Cisco got left out, but the tests still gave an indication of how well this new generation of switches performs. (See Friday Show & Tell: Testing the New Ethernet.)

One unexpected twist in the results is that the U.S.-based companies' switches performed better than the others. For instance, the highest latency, in most test cases, went to the Voltaire Inc. (Nasdaq: VOLT) Vantage 6048.

And while every switch scored 100 percent on Layer 3 throughput tests, the Hitachi Cable Ltd. Apresia 15000-64XL-PSR was the only one to score less than 100 percent on Layer 2 throughput. It dipped as low as 97.3 percent throughput when dealing with 128-byte frames. Apresia was also the only box to show performance degradation during congestion tests.

Overall, though, Lippis says he was impressed by the switches' performance, especially when it came to power consumption.

Lippis's tests included switches from AlcaLu, Arista Networks Inc. and Juniper, and top-of-rack switches from Force10 Networks Inc., Hitachi, IBM Corp. (NYSE: IBM) (with switches from Blade Network Technologies) and Voltaire.

Lippis has a second round of testing planned for the week of April 4. Brocade, which Lippis says was interested in the December test but didn't respond in time, is a likely candidate -- and Lippis isn't shy about saying who else he'd like to include. "I'd love to get Cisco top-of-rack switches in there," he says.

— Craig Matsumoto, West Coast Editor, Light Reading

Comments (9)
BigBro, User Rank: Moderator
12/5/2012 | 5:14:10 PM
re: 10G Ethernet Switches Pass the Test


Sure, the shape tells you a lot about the underlying switch ASIC.


One would expect the latency of a store-and-forward switch to increase with packet size, because the ASIC has to receive the entire packet before it makes its forwarding decision.


On a cut-through switch, you'd expect the latency to be basically flat across packet size, because as soon as the ASIC has received enough of the header, it can make its forwarding decision, and start forwarding the packet (as long as the output port is not busy).


I'm not sure I understand why larger packets would have *less* latency than smaller ones. That's got to be an artifact of the ASIC, or perhaps even the test equipment: once you start getting down into the sub-microsecond range, the test equipment itself becomes a variable in your test that you shouldn't ignore. What MAC and PHY-layer hardware is at that end, for example?
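
A quick illustration of the two shapes described above: the minimal Python sketch below models first-bit-in to first-bit-out latency for the two architectures. The forwarding-decision time and the number of header bytes a cut-through switch needs before deciding are purely assumed figures, not anything measured in the Lippis tests.

```python
# Minimal model of switch latency vs. frame size (assumed, illustrative figures).
# Store-and-forward: the whole frame must arrive before forwarding starts,
# so latency grows with frame size.
# Cut-through: forwarding starts once the header is in, so latency is flat.

LINK_BPS = 10e9          # 10GbE port speed
PROC_NS = 300.0          # assumed forwarding-decision time
HEADER_BYTES = 64        # assumed bytes needed before a cut-through decision

def serialization_ns(nbytes: int) -> float:
    """Time to clock nbytes onto/off a 10Gbit/s link, in nanoseconds."""
    return nbytes * 8 / LINK_BPS * 1e9

def store_and_forward_ns(frame_bytes: int) -> float:
    # First bit in -> first bit out: receive the whole frame, then decide.
    return serialization_ns(frame_bytes) + PROC_NS

def cut_through_ns(frame_bytes: int) -> float:
    # First bit in -> first bit out: frame size does not matter,
    # only the header needs to arrive before the decision is made.
    return serialization_ns(HEADER_BYTES) + PROC_NS

for size in (64, 128, 256, 1518, 9216):
    print(f"{size:>5} B  store-and-forward {store_and_forward_ns(size):8.1f} ns"
          f"  cut-through {cut_through_ns(size):8.1f} ns")
```
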

spc_vancem, User Rank: Light Beer
12/5/2012 | 5:14:10 PM
re: 10G Ethernet Switches Pass the Test


In the article, 97.3 percent throughput is said to be low. However, in practical use, is this really a low number? Almost any traffic type other than constant-bitrate traffic will result in long queues and full buffers when loads rise this high. My question is: when is such high performance actually needed? Any comments, anyone?

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 5:14:10 PM
re: 10G Ethernet Switches Pass the Test


One thing that surprised me -- and I'd commented to Nick Lippis about this -- was the variety of profiles in the latency testing.  I'm not talking about the actual latency figures, but about the *shapes* of the graphs.


Lippis tested each switch on a variety of packet sizes -- 128-byte packets, 256-byte packets, etc. Some switches had good, low latency for small packet sizes and bad latency for bigger packets. Others were the other way around. Some, IIRC, were consistently flat.


It was interesting to me. It seems to imply that latency effects are rather unpredictable from switch to switch.


As for what causes these different profiles, Lippis was saying a lot of it might be the fingerprint of the chipset being used. Each vendor's different software plays a role, too.

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 5:14:09 PM
re: 10G Ethernet Switches Pass the Test


> I'm not sure I understand why larger packets would have *less* latency than smaller ones.


Same here. I guess latency is just a tricky beast to wrestle.


I should specify:  Most of the switches either show a mostly flat profile (very good latency for small packets, flatly "less good" for all other sizes) or a sharp, sharp spike for ridiculously large packets (9,216 bytes).  So, some of the visual differences in the graphs come from corner cases. 


But it's true that a couple of boxes showed worse latency with small packets. I found that interesting.

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 5:14:09 PM
re: 10G Ethernet Switches Pass the Test


You know, that question did occur to me.  But when everybody else is scoring perfect 100s... 97.3 is certainly low by comparison!


You've got a valid question, though, considering not all these datacenter operators will be looking for five 9s kind of performance. Anybody have any real-world experience to apply here?

cross, User Rank: Light Beer
12/5/2012 | 5:14:07 PM
re: 10G Ethernet Switches Pass the Test


Hi Steinarb,


The average packet size on the Internet is indeed not that small, and it keeps growing due to increased video traffic and other bulk applications (see http://www.caida.org/research/...). It is certainly larger than 128 bytes in all cases; the averages measured are somewhere between 150 and 300 bytes.


"The" worldwide average packet size does not exist, though - it all depends where one measures and which applications dominate in each part of the world - and the situation in data centers is certainly even less uniform. That said, data centers often see an increased fraction of video and storage applications so the average packet size is going up as well according to our findings (but we have no representative proof). Some remote desktop applications (Citrix, for example) and Voice over IP generate small packet sizes around 128 bytes or even less; however, I am not aware of networks where remote desktop or VoIP traffic would drive 10GE ports to full utilization. If a remote desktop application sends bulk screen updates, it uses larger packet sizes as well.


We typically measure 128-byte single-packet-size line-rate throughput only if a customer explicitly requests it, since the old RFC 2544 mentions this measurement as a reference. The result is of limited value. Sometimes service providers say they would like to check the chipset's design limitations. In fact, however, today's chipset challenges relate more to bursty traffic of variable packet sizes sent simultaneously, mixed with multicast traffic, coming from many sources and going to many destinations in an imbalanced, meshed traffic pattern.


My alarm bell goes off instead if I see a system reaching 100 percent line rate at even the smallest single packet size (64 bytes for IPv4), since it is likely the chipset has been optimized for RFC 2544, which guarantees nothing about its behavior otherwise. We have seen and published such test results in the past. A good and competitively priced chipset needs to balance the throughput requirements of artificial RFC 2544 tests with those seen in real-life, complex networks. The art of lab testing is to replicate such real-life scenarios.


Best regards, Carsten (EANTC)
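
For context on what an RFC 2544-style line-rate test demands of a single 10GbE port, here is a rough back-of-the-envelope sketch in Python. It assumes only the standard Ethernet wire overhead (8 bytes of preamble/SFD plus a 12-byte inter-frame gap per frame); the figures are theoretical maxima, not measured results.

```python
# Theoretical maximum frame rate on a 10GbE port, as exercised by
# RFC 2544-style throughput tests. Each frame occupies frame_size plus
# 8 bytes of preamble/SFD and a 12-byte inter-frame gap on the wire.

LINK_BPS = 10e9
WIRE_OVERHEAD = 8 + 12   # preamble/SFD + inter-frame gap, in bytes

def max_fps(frame_bytes: int) -> float:
    """Maximum frames per second at 100 percent line rate."""
    return LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

for size in (64, 128, 256, 512, 1518, 9216):
    print(f"{size:>5}-byte frames: {max_fps(size):12,.0f} frames/s")

# 64-byte frames work out to roughly 14.88 million frames/s and 128-byte
# frames to roughly 8.45 million frames/s, so the article's 2.7 percent
# shortfall at 128 bytes corresponds to on the order of 230,000 frames per
# second per port not getting through at full offered load.
```
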


 

tmmarvel, User Rank: Light Beer
12/5/2012 | 5:13:57 PM
re: 10G Ethernet Switches Pass the Test


The major throughput and latency problems arise at multi-node network scale, rather than at the scale of an individual switch.

An individual packet switch can be engineered to perform fine under most conditions, but is there a good way to do network-scale QoS control and throughput optimization for meshed packet streams crossing multiple switches? Operators tend to resort to low average network utilization in order to provide QoS guarantees for services over packet-layer shared network 'clouds' serving multiple application/customer contracts.

The major latency, jitter and packet-loss problems arise when the combined volume of multiple uncoordinated packet streams exceeds the capacity of shared physical links, and individual switch performance cannot solve these network-scale QoS problems, which are inherent to services over multi-client, packet-layer shared networks.

Which brings up the question: why use packet-switched networks as the infrastructure across different service contracts where QoS guarantees are a requirement? WDM/TDM, as a mechanism for dividing the physical infrastructure between different packet-switched contracts, certainly eliminates the major network-scale congestion-control issues. WDM/TDM also transparently supports any mix of packet lengths at sustained 100 percent throughput. Within such isolated Layer 1/0 clouds, which can internally provide packet-switched connectivity, packet-layer QoS control becomes much more manageable, since it can be handled purely from each individual client's edge devices.

It would appear that such customer/application-level isolation would deal with the bulk of the QoS and throughput problems of packet-switched network services. The remaining economic issue is network-scale throughput optimization for meshed packet streams, but again, 'better' packet switches do not appear to be the solution there either, since the thing to focus on is network-scale performance.



stochasticprocess, User Rank: Light Beer
12/5/2012 | 5:13:55 PM
re: 10G Ethernet Switches Pass the Test


In the report, they mention that "For store and forward DUT switches latency is defined in RFC 1242 as the time interval starting when the last bit of the input frame reaches the input port and ending when the first bit of the output frame is seen on the output port." The latency does not increase with frame size on store-and-forward switches because of how it is measured (the frame length is essentially subtracted out). Lippis couldn't have used this measurement method with the cut-through switches, because it would give negative latency for jumbo frames (the first bit of the output frame would be seen before the last bit of the input frame arrives). Instead you need to measure first bit in to first bit out. Was it the case that Lippis and Ixia measured latency differently for cut-through and store-and-forward switches?
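
To make the measurement issue concrete, here is a small worked sketch in Python of why the RFC 1242-style last-bit-in to first-bit-out definition breaks down on a cut-through device. The forwarding-decision time and header size are assumed values chosen purely for illustration.

```python
# Why RFC 1242 "last bit in -> first bit out" latency can go negative on a
# cut-through switch. Figures are illustrative assumptions, not test data.

LINK_BPS = 10e9
PROC_NS = 300.0       # assumed time to make a forwarding decision
HEADER_BYTES = 64     # assumed header bytes needed before deciding

def ns(nbytes: int) -> float:
    """Serialization time for nbytes on a 10Gbit/s link, in nanoseconds."""
    return nbytes * 8 / LINK_BPS * 1e9

def lifo_latency_cut_through(frame_bytes: int) -> float:
    last_bit_in = ns(frame_bytes)                  # whole frame serialized in
    first_bit_out = ns(HEADER_BYTES) + PROC_NS     # decision made from header
    return first_bit_out - last_bit_in             # negative for large frames

for size in (128, 1518, 9216):
    print(f"{size:>5} B: last-in/first-out latency "
          f"{lifo_latency_cut_through(size):9.1f} ns")

# A 9,216-byte frame takes about 7,373 ns to arrive, but the first bit can
# leave after only about 351 ns under these assumptions -- hence the need for
# a first-bit-in to first-bit-out measurement on cut-through switches.
```
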

BigBro, User Rank: Moderator
12/5/2012 | 5:11:57 PM
re: 10G Ethernet Switches Pass the Test


This post explains why latency goes down for larger packets:


http://www.fulcrummicro.com/bl...


"Frame processing time is masked for larger packets. As can be seen in some of the results, the latency gets lower as the size of the packet gets larger. Since the latency clock starts after the last bit arrives, large packets have plenty of time to process the frame header before the first bit is seen on the output port.  With small packets, even after the last bit arrives, the output must wait until frame header processing is complete before the first bit is seen on the output port."
