Optical/IP Networks

10-Gigabit Ethernet

Now that second-generation 10-Gigabit Ethernet products are starting to arrive, the time has come to think about serious deployment of the technology.

In simple terms, it means that kinks in first-generation products will have been ironed out. It also means that vendors are now focused on ramping up production by driving down costs, just as they have been with each previous tenfold increase in Ethernet speeds.

Now service providers need to get up to speed themselves – with what 10-Gig Ethernet technology actually is, what products currently exist, and what applications it addresses.

That’s what this report is all about. Here’s a hyperlinked summary:
  • Applications

    Where 10-Gig Ethernet is likely to be deployed first

  • Market Overview

    The 10-Gig Ethernet switch market could grow at more than 40% a year

  • Technology

    Details of standards, differences between Gig and 10-Gig Ethernet

  • Components

    Devices defined, 10-Gig Ethernet transponder MSAs compared

  • Selected Systems

    Examples of leading 10-Gig products, together with key parameters

A preview of this report was given in a recent Light Reading Webinar.

Previous Light Reading reports related to this subject may be of interest as well: Metro Ethernet and 10-Gig Ethernet Transponders. — Simon Stanley is founder and principal consultant of Earlswood Marketing Ltd. He is also the author of several other Light Reading reports on communications chips, including Security Processors, Packet Switch Chips, Traffic Manager Chips, 10-Gig Ethernet Transponders, Network Processors, and Next-Gen Sonet Silicon.

metroshark 12/5/2012 | 12:24:30 AM
re: 10-Gigabit Ethernet The table of 10GigE vendors needs a little clean-up.

1. The switch bandwidth numbers (in the last column) quoted by Force10 do not make any sense. What does 40Gb/s for a 10Gb/s port mean? Also, several tests have shown that Cisco's Catalyst product cannot sustain wire speed for 64B packets on 10GigE interfaces. Maybe they have fixed this problem since then, but it is difficult to tell from this table. This column should simply be a measure of the data rate that can be achieved using 64B Ethernet packets on the 10GigE port. Labeling it as switch bandwidth invites the usual marketing tricks: quoting numbers that include the fabric speed-up needed to eliminate blocking, counting the fabric memory bandwidth (which is actually twice the port throughput), counting bits twice, etc. A good way to eliminate all this is to ask vendors what maximum packet forwarding rate they can achieve on a 10GigE port in one direction while forwarding traffic in both directions. The number should be 14.88 million packets per second for wire-speed 10GigE performance.

2. Listing I/O bandwidth alongside switching bandwidth is also confusing. Again, Force10 quotes a switching bandwidth of 640Gb/s. Can they support 64 10G ports at wire speed on their product? I don't think so. This is a meaningless number. The bandwidth or bit rates used inside the box to achieve performance are not very interesting to the end customer. The really interesting number is the maximum data switching rate while running a standard test such as the full-mesh test, where each port sends traffic to every other port in the system. What the customer cares about is that the fabric is fully non-blocking and supports, at wire speed, the maximum number of 10GigE ports that can be populated on the system.
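[Editor's note] The 14.88 million packets per second figure quoted above can be verified with a quick calculation: a 64-byte Ethernet frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted. A minimal sketch:

```python
# Wire-speed packet rate for 10-Gigabit Ethernet.
LINE_RATE_BPS = 10_000_000_000  # 10 Gbit/s

def wire_speed_pps(frame_bytes: int) -> float:
    """Packets per second at line rate, counting preamble and inter-frame gap."""
    PREAMBLE = 8   # bytes: preamble + start-of-frame delimiter
    IFG = 12       # bytes: minimum inter-frame gap
    bits_on_wire = (frame_bytes + PREAMBLE + IFG) * 8
    return LINE_RATE_BPS / bits_on_wire

print(round(wire_speed_pps(64)))  # ~14.88 million packets per second
```

For minimum-size 64-byte frames this gives 10e9 / 672 bits, i.e. the 14.88 Mpps wire-speed benchmark the poster asks vendors to meet.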
pavelz 12/5/2012 | 12:23:59 AM
re: 10-Gigabit Ethernet The $500 optics for 10GbE are a world apart from the price of a port on central equipment. Are we effectively at the end of the 10GbE 'bubble', in which we saw hopes rise on the sharp drop in phy price, only to be squashed by the sustained multi-$10k price of router and switch ports? Or is a drop in the per-port price of the big iron coming as well?
(Any sightings of Linksisco 10G switches at Fry's ...?-)
metroshark 12/5/2012 | 12:23:57 AM
re: 10-Gigabit Ethernet I haven't seen any 10G product from LinkSys (or Netgear or D-Link) so far, but I am willing to bet that we will see sub-$10K 10GigE port pricing before the end of this year.
pavelz 12/5/2012 | 12:23:49 AM
re: 10-Gigabit Ethernet The list of component vendors on page 5 disappoints somewhat:
(a) in its timeliness (hinted at by the sentence "During 2001 semiconductor companies started rolling ...");

(b) in the fact that the author failed to separate the market into the two (IMO) appropriate broader categories, that is, XENPAK, X2, and XPAK on one side and XFP on the other.
E.g. the fact that Infineon no longer makes XENPAKs (mentioned) is not that interesting given that they _do_ make an XPAK (which is not mentioned).

metroshark 12/5/2012 | 12:22:53 AM
re: 10-Gigabit Ethernet Paragraph from part 5 of the article:
Xenpak transponders are significantly smaller and cheaper, enabling more cost-effective solutions with up to four 10-Gbit/s ports per line card. They are hot-pluggable, allowing a “pay as you go” approach, with the expensive transponder modules being added as additional ports are required.

I don't know how the author came to the conclusion that Xenpak transponders are significantly smaller and cheaper. The latest batch of 300-pin MSA transponders are actually smaller than the Xenpak form factor in terms of total footprint and height. The only advantage of Xenpak was hot-pluggability. However, most vendors who looked into the challenges of placing and cooling 4 (or even 2) Xenpak transceivers on a line card pretty much gave up and started looking for other solutions. This is why the XPAK and X2 form factors were introduced even before systems using Xenpak went into production.

Another sentence from part 5 of the article:
XPAK and X2 transponders use the same XAUI interface as Xenpak, but will also support 10-Gigabit Fiber Channel.

This is also not true. Fibre Channel decided to go with the XFP form factor for 10G applications.

At this time, nearly all component and system vendors have decided that XFP will be the strategic product for 10G optics. X2 and XPAK were introduced as a stop-gap measure to deal with some of the problems Xenpak created for system designers. However, in the long term, there will be convergence on XFP. Last week's announcement from Intel made this very clear.
Simon_Stanley 12/5/2012 | 12:22:38 AM
re: 10-Gigabit Ethernet Xenpak transponders are half the size of the older 300-pin transponders. As you point out, however, the latest 300-pin transponders are actually slightly smaller than Xenpak, so this conclusion is no longer true.

It is possible to build line cards with 2-4 Xenpak transponders. Whether this is the best approach is a different question. Vendors, such as Foundry, are already shipping line cards with 2 Xenpak transponders.

It is certainly clear that XFP is the format of choice for 10GFC and 10GE in the long term. XENPAK, X2 and XPAK are all likely to be interim solutions. As far as I am aware there is nothing to say that 10GFC solutions must use an XFP transceiver. Both XPAK and X2 were developed to support 10GFC as well as 10GE.
Simon_Stanley 12/5/2012 | 12:22:38 AM
re: 10-Gigabit Ethernet The purpose of this table is to present a selection of the systems available and give a rough guide to performance.

With 640Gbit/s of switching bandwidth (1.2Tbit/s if you count traffic both in and out), the Force10 system should be able to handle line-rate performance on 28 10Gbit/s ports. On the other hand, the Foundry system, with 120Gbit/s of switching bandwidth, clearly cannot.

The real system performance will depend on many factors and can only be determined by running realistic tests with fully loaded systems, as you suggest at the bottom.
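[Editor's note] The back-of-envelope arithmetic in this reply can be sketched as follows. The halving assumes the quoted fabric figure counts traffic both into and out of the fabric, as discussed elsewhere in this thread; the 28-port figure in the post reflects the chassis slot count rather than the fabric limit, which is an inference from the numbers given, not a vendor statement.

```python
# Rough check: how many wire-speed 10G ports a quoted switching-bandwidth
# figure could cover, if that figure double-counts in + out traffic.
def line_rate_ports(switching_gbps: float, port_gbps: float = 10.0,
                    double_counted: bool = True) -> int:
    """Number of ports supportable at line rate from a quoted fabric figure."""
    effective = switching_gbps / 2 if double_counted else switching_gbps
    return int(effective // port_gbps)

print(line_rate_ports(640))  # 32 -> covers the 28 ports the chassis can hold
print(line_rate_ports(120))  # 6  -> nowhere near 28 ports at line rate
```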
Simon_Stanley 12/5/2012 | 12:22:37 AM
re: 10-Gigabit Ethernet Future reports will go into significantly more detail covering both 10GE physical layer devices and 10GE transponders
arak 12/5/2012 | 12:22:23 AM
re: 10-Gigabit Ethernet The Force10 product supports 40Gb full-duplex throughput per linecard slot. With 14 linecard slots, this works out to 1.120 Tb/s.

Their current line-card supports 2 x 10G ports per card, while they probably have a 4 x 10G product in the works sometime later.

The switching is fully distributed and supports 500Mpps, which works out to ~35Mpps per slot. While this is enough to support full line-rate forwarding with 64B packets for 2 x 10G ports, it certainly is not for 4 x 10G line-card scenarios. My suspicion is that they might tinker with the switch fabric cards at that point to support line rate. Looking at the switch fabric cards themselves, I think they cost under $500 to make, and they will probably sell them at $1K per card as an upgrade option.

All in all, it sounds like Force10 is better bang for the buck than any of the competing products out there.
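[Editor's note] The per-slot arithmetic in this post can be checked directly against the 14.88 Mpps wire-speed figure for 64-byte packets quoted earlier in the thread:

```python
# Does per-slot forwarding capacity cover N wire-speed 10GigE ports?
TOTAL_MPPS = 500.0        # quoted system-wide forwarding rate
SLOTS = 14                # linecard slots in the chassis
WIRE_SPEED_MPPS = 14.88   # 10GigE line rate with 64-byte packets

per_slot = TOTAL_MPPS / SLOTS            # ~35.7 Mpps per slot
print(per_slot >= 2 * WIRE_SPEED_MPPS)   # True  -> a 2x10G card runs at line rate
print(per_slot >= 4 * WIRE_SPEED_MPPS)   # False -> a 4x10G card cannot
```

A 2-port card needs 29.76 Mpps per slot and a 4-port card 59.52 Mpps, which is exactly the distinction the poster draws.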

metroshark 12/5/2012 | 12:22:11 AM
re: 10-Gigabit Ethernet The Force10 product supports 40Gb full-duplex throughput per linecard slot. With 14 linecard slots, this works out to 1.120 Tb/s.

Here is my problem. Can I actually send 40Gb/s of user data through this fabric - especially for any traffic distribution that does not oversubscribe outputs?

A lot of switch vendors quote the aggregate raw bandwidth that flies over their backplane as the switching capacity. In reality, this can be quite meaningless. There are several overheads that need to be subtracted from the raw bandwidth to come up with a number that represents the useful bandwidth:

1. Many vendors double-count bits going in and out of the backplane, effectively doubling their stated capacity. This dates back to the days when the backplane was a bus and the vendor quoted the bus bandwidth. Most of the time, vendors are forced to continue this double-counting because of competitive pressure: if a vendor states its actual backplane throughput, it looks bad in competitive tables (like the one in this article) when all the other vendors quote double-counted numbers.

2. Encoding overhead: These days, many vendors use serial links with 8B/10B encoding on their backplanes. This encoding scheme uses 10 bits to represent every 8 bits of data, so 20% of the bandwidth is consumed by coding overhead. The useful bandwidth that can actually carry data is 80% of the raw number.

3. Cell overhead: Many vendors use a cell-based switch fabric in their larger boxes. In this case, packets are broken into uniform cells when they are carried across the backplane. Each cell typically has a header and perhaps a CRC or checksum, which accounts for 5% to 10% of overhead.

4. Internal overhead: In addition to the cell headers, many vendors also carry some additional data with each cell across the fabric. This adds up to more internal overhead and can be quite significant for small packet sizes.

5. Congestion overhead: Many high-capacity boxes use input-buffered switch fabrics. To deal with switching efficiency issues (like HoL blocking), these products typically use a speed-up of around 2X on their backplane links. This may make the theoretical bandwidth add up to a large number, but the actual throughput under a mesh test configuration would typically be 50% to 70% of the raw bandwidth.

Since so many different factors can make raw backplane capacity numbers misleading, the only way to achieve a fair comparison among vendors is to state the actual amount of traffic that can be forwarded in a fully loaded configuration under a specified traffic pattern for a specified packet size. If the emphasis is on measuring backplane capacity as opposed to forwarding rate, it would be fair to use longer packet sizes, as long as this is stated.

In Force10's case, it looks like, as of today, the product can forward 280Gb/s of traffic.
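[Editor's note] The overheads enumerated above compound, which is the poster's point: a large raw figure shrinks quickly. The sketch below is purely illustrative; the specific percentages are assumptions drawn from the ranges in the post, not measurements of any vendor's product.

```python
# Illustrative: useful bandwidth left after the overheads described above.
def useful_bandwidth(raw_gbps: float, double_counted: bool = True,
                     encoding: float = 0.80,  # 8B/10B leaves 80% for data
                     cell_eff: float = 0.92,  # ~8% cell header/CRC overhead
                     mesh_eff: float = 0.60)  -> float:
    """Estimate data-carrying capacity from a quoted raw backplane number.

    mesh_eff models throughput under a full-mesh test for an
    input-buffered fabric (50-70% of raw, per the post above).
    """
    bw = raw_gbps / 2 if double_counted else raw_gbps  # undo in+out counting
    return bw * encoding * cell_eff * mesh_eff

# A quoted 1.12 Tb/s backplane shrinks to roughly a quarter of that figure
# under these assumed overheads.
print(round(useful_bandwidth(1120)))
```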