Comms chips

Switch-Fabric Chipsets

As network equipment vendors begin to ramp up development of their next-generation systems in response to signs of a recovery in carrier spending, a raft of vendors of high-performance switch chipsets scent a significant commercial opportunity.

Many equipment vendors are in a bind. The switch fabric has traditionally been a core technology developed with ASICs designed in-house, but new requirements for very-high-speed serial interfaces and multiservice support have increased design costs just when most companies have been forced to cut their R&D investments. Enter the switch-chipset vendors – both established and startup semiconductor players – with a wide variety of chipsets that are simple to integrate and support advanced features.

And there is greater technology choice for designers, too. Highly efficient packet-based solutions are now coming to market from startups such as Sandburst and industry heavyweight Broadcom, potentially challenging the well-established cell-based multiservice chipsets from Agere Systems and AMCC. With Ethernet again starting to look attractive for metro and access networks as well as the enterprise, could this be a golden opportunity for the upstarts to dislodge AMCC from its current dominance of the switch-chipset market? Price, performance, and availability will be key factors determining the winners and losers.

And looming over everything is the pending Advanced Telecom Computing Architecture (AdvancedTCA) specifications. AdvancedTCA is a standard chassis system for carrier-grade telecom and computing applications that is being defined by the PCI Industrial Computer Manufacturers Group (PICMG). This is intended to allow vendors to reduce time to market and reduce total cost of ownership while still maintaining the five-nines availability required for telecom applications. Subsystems that meet these specs will take a big part of the market, and AdvancedTCA will be a key requirement for switch chipsets.

For some insight into this rapidly developing market, take a look at this report, which covers high-performance switch chipsets from leading vendors including:



This report was previewed in a Webinar moderated by the author and sponsored by TeraChip Inc. and ZettaCom Inc. It may be viewed free of charge in our Webinar archives.

Background Reading — Simon Stanley is founder and principal consultant of Earlswood Marketing Ltd. He is also the author of several other Light Reading reports on communications chips, including: PHY Chips, Packet Switch Chips, Traffic Manager Chips, 10-Gig Ethernet Transponders, Network Processors, and Next-Gen Sonet Silicon.

Peter Heywood 12/5/2012 | 2:19:29 AM
re: Switch-Fabric Chipsets

eucerin 12/5/2012 | 2:19:11 AM
re: Switch-Fabric Chipsets First, the report says that it supports 4K VCs per 10 Gbps, 16 subports, and consumes only 6 Watts/10 Gbps. On the other hand, the web site (http://www.tera-chip.com/pdf/T... says that the TCI1x2 supports 2K unicast and 1K multicast. That's a 1K or 2K difference.

Second, the report says that it consumes 6 Watts/10 Gbps of traffic. But the web site says that only the TCI1x2 consumes 6 Watts. The web site also states that TCF16x10 consumes 15 Watts (http://www.tera-chip.com/pdf/p.... This is approximately 1 Watt per 10 Gbps port and 2 Watts per port when 2 TCF16x10 devices are used for speedup.

Third, while looking through the articles I came across one in EETimes (http://www.eetimes.com/story/O... where it is mentioned that the TCI1x2 is an FPGA that can be implemented in an Altera or Xilinx FPGA. Furthermore, the web site (http://www.tera-chip.com/pdf/T... says that the TCI1x2 has 16 SERDESes. Interesting to see that Terachip implements CSIX, SPI4.4(?) and NPSI in an FPGA (http://www.tera-chip.com/produ.... This must be a very large and state-of-the-art FPGA.

Finally, the TCF document (http://www.tera-chip.com/produ... says that "TCF supports 8 QoS queues, thereby eliminating head-of-line blocking problems." It looks to me like a very challenging problem to eliminate head-of-line blocking with 2K unicast flows generated per TCI1x2 FPGA.
rs50terra 12/5/2012 | 2:19:04 AM
re: Switch-Fabric Chipsets The 160G solution presented in Figure 6 does not do justice to the different solutions. It is obvious that some of the solutions are for 8 line cards with 20G each, or 4 cards with 40G each, neither of which is likely to be implemented by any system vendor. A more reasonable analysis would compare the solutions for a system with 16 line cards at 10G each. The result would be an apples-to-apples comparison.
Sisyphus 12/5/2012 | 2:19:04 AM
re: Switch-Fabric Chipsets
> head-of-line blocking

Hint: shared memory architecture.
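For readers new to the term: head-of-line blocking occurs when a single FIFO per input port holds a cell whose destination output is busy, stalling the cells behind it even when their outputs are idle. A minimal sketch (a toy model only, not any vendor's implementation):

```python
from collections import deque

# Toy input-queued switch: one FIFO per input port.
# A cell stuck at the head of the FIFO (its output is busy)
# blocks cells behind it that target idle outputs -- that is
# head-of-line (HOL) blocking.

def serve_one_cycle(input_fifos, busy_outputs):
    """Forward at most one head cell per input; return the cells sent."""
    sent = []
    for fifo in input_fifos:
        if fifo and fifo[0] not in busy_outputs:
            sent.append(fifo.popleft())
    return sent

# Input 0 holds cells for outputs [1, 2]; output 1 is busy.
fifo0 = deque([1, 2])
sent = serve_one_cycle([fifo0], busy_outputs={1})
# The cell for idle output 2 cannot be sent: it is blocked
# behind the head cell destined for busy output 1.
print(sent)         # []
print(list(fifo0))  # [1, 2]
```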
taro 12/5/2012 | 2:19:01 AM
re: Switch-Fabric Chipsets A spin-out of Cisco and Marvell, these guys are now ready to rock and roll. Who else out there does 32GE + 4 incl. MPLS, ACL, IPv4/6 at full line-speed?

eucerin 12/5/2012 | 2:18:57 AM
re: Switch-Fabric Chipsets Sisyphus, can you explain how 16x2K flows (=32K flows) can be mapped to a device that is advertised with "TCF supports 8 QoS queues, thereby eliminating head-of-line blocking problems"?

Are you saying that the whole switch does not really support 32K distinct flows? Avoiding head-of-line blocking, from what I know, requires one distinct queue per flow.

Also, could you explain to me why in the whole description of the TCF16x10 there is no mention of how many distinct flows it really supports?

You may know something, but it appears that the numbers still do not make any sense. If the numbers reported for TCF16x10 are accurate, the numbers reported for TCI1x2 are in all likelihood wrong.

Also, if the company supports distinct queues per virtual-output queue, does this mean that this multi-purpose FPGA TCI1x2 is also a queue manager that can manage 3 to 4 thousand queues and support CSIX, NPSI or SPI4.4(?)? And on top of that, the customer can choose between Altera and Xilinx?

Does this story make sense to you? To me, it does not.

eucerin 12/5/2012 | 2:18:55 AM
re: Switch-Fabric Chipsets rs50terra, you are raising a very good question. Along the lines of your comments, I would also like to see each vendor provide chip-count numbers for multiple configurations. For example, if a company says that it supports 2.56 Tbps, let it describe how it implements it. Usually, questionable architectures are on shaky ground when they try to implement the extreme configurations.

Also, some of these companies do not count the traffic manager as part of their chip count. Ideally, they should state how many chips they need for multiple configurations, taking the traffic management into consideration as well.

Sisyphus 12/5/2012 | 2:18:54 AM
re: Switch-Fabric Chipsets Eucerin - it's really simple: TeraChip advertises a shared memory architecture. No virtual output queuing on the ingress device is required. Virtual output queues on the ingress device are crossbar architecture stuff. You're trying to apply crossbar logic to a shared memory architecture, which will never make any sense.
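The distinction can be sketched with a toy model (illustrative only; the class and port counts below are assumptions, not TeraChip's design). In a crossbar fabric, each ingress keeps one virtual output queue (VOQ) per egress port, so a busy output never blocks traffic bound for other outputs; a shared-memory fabric instead pools all cells in one central buffer and needs no ingress VOQs at all:

```python
from collections import deque

# Toy ingress with virtual output queues (VOQs): one queue per
# egress port, as used in crossbar fabrics. A shared-memory fabric
# pools all cells in one central buffer instead, so ingress VOQs
# are unnecessary.

class VoqIngress:
    def __init__(self, num_outputs):
        self.voqs = [deque() for _ in range(num_outputs)]

    def enqueue(self, output_port, cell):
        self.voqs[output_port].append(cell)

    def dequeue_for(self, output_port):
        """Serve the queue for one output; other queues are unaffected."""
        q = self.voqs[output_port]
        return q.popleft() if q else None

ingress = VoqIngress(num_outputs=4)
ingress.enqueue(1, "cell-A")   # destined for busy output 1
ingress.enqueue(2, "cell-B")   # destined for idle output 2
# Output 1 is busy, but output 2 can still be served -- no
# head-of-line blocking across outputs:
print(ingress.dequeue_for(2))  # cell-B
```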
eucerin 12/5/2012 | 2:18:52 AM
re: Switch-Fabric Chipsets Thanks for the comment Sisyphus.
But now I have one more question. Does this mean that the shared memory is able to support 32K queues? If that is not the case, what is the effect of shared memory with "8 QoS classes"? To me this description looks like this: "I have a family and we all agree to SHARE a loaf of bread, which can be sliced into 8 slices. My family has 16 members. Even though we SHARE the loaf, it is clear to everybody that only 8 will get a slice of bread; the rest will go hungry." Except if we have a miracle and, out of some shared memory of an unknown size, 32K unicast and another few thousand multicast flows can get their own dedicated queues. I would not bet the farm on that. Would you?

Again, 2K flows per TCI1x2 FPGA require their own space in the shared memory. I do not see how this can be done. But probably my knowledge is rusty. I assume this is what is hiding behind the statement "Through an unconventional approach and patented technology, the company has developed groundbreaking switching capacity in a single chip. TeraChip components enable networking vendors to boost the performance of next-generation switches at a fraction of the cost, size and power consumption of current offerings." (http://www.tera-chip.com/).

Sisyphus, in appreciation of your kind help, I will share with you and the gentle people of this discussion group the definition of the word Sisyphus:

Definitions of Sisyphus on the Web:

(Greek legend) a king in ancient Greece who offended Zeus and whose punishment was to roll a huge boulder to the top of a steep hill; each time the boulder neared the top it rolled back down and Sisyphus was forced to start again

Cheers Sisyphus. You seem to be carrying a heavy burden.

Simon_Stanley 12/5/2012 | 2:18:51 AM
re: Switch-Fabric Chipsets Hi Eucerin,

Thanks for raising these points.

The Terachip solution is a shared-memory switch and therefore has the classical advantages and disadvantages of such an architecture. With a shared architecture, a single port may use most of the shared resources (e.g., queues). On the other hand, the aggregate resources used must not exceed the total available.

Making apples-to-apples comparisons is very difficult when comparing a shared-memory switch with a solution that has fixed resources per port, such as a crossbar. Stating either the maximum number of queues per port or the average queues per port for a shared-memory switch is not entirely fair to both solutions. When reading the tables you should always consider the architecture.
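A shared pool behaves roughly like this toy model (the queue total below is a made-up number, purely for illustration): any single port may claim most of the pool, but an allocation is refused once the aggregate would exceed the total.

```python
# Toy model of a shared queue pool. The total of 4096 queues is a
# hypothetical figure for illustration, not a vendor specification.

class SharedQueuePool:
    def __init__(self, total_queues):
        self.total = total_queues
        self.used = 0

    def allocate(self, n):
        """Grant n queues if the pool has room, else refuse."""
        if self.used + n > self.total:
            return False
        self.used += n
        return True

pool = SharedQueuePool(total_queues=4096)
print(pool.allocate(4000))  # True  -- one port may grab nearly all
print(pool.allocate(200))   # False -- aggregate would exceed the total
```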

The TCI1x2 is indeed an FPGA although they are also planning an ASIC equivalent. According to Terachip they have worked with Xilinx but should be able to support Altera as well. My understanding is that a single device can be configured to support one of CSIX, SPI-4.2 and NPSI. Terachip have not released pricing information for either the FPGA or ASIC equivalent.

The power number I was given by Terachip for the TCI1x2 was 5W. With 15W for the TCF16x10 shared across its 16 ports, this makes roughly 6W per 10G port.
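The arithmetic behind that figure, assuming one TCI1x2 per 10G line card and one TCF16x10 shared across 16 ports:

```python
# Power per 10 Gbit/s port, using the figures quoted above:
# 5 W per TCI1x2 (one per line card) plus a 15 W TCF16x10
# amortized across its 16 ports.
tci_watts = 5.0
tcf_watts = 15.0
tcf_ports = 16

watts_per_port = tci_watts + tcf_watts / tcf_ports
print(watts_per_port)  # 5.9375 -- roughly 6 W per 10G port
```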