AMCC Raises Eyebrows
Crosspoint switch fabrics, which simply interconnect ports in a non-blocking manner, have already reached this goal (see Mindspeed Unveils Terabit Switch Chip). But intelligent switch fabrics, which are optimized for handling packets, are more difficult to design because they require traffic management -- the process of balancing traffic flows into the switch fabric to prevent bottlenecks.
According to Jon Siann, director of marketing for AMCC's switching and networking division, the nPX8005 can scale to 1.28 Tbit/s of real, terminated, full-duplex traffic -- a figure that has competitors raising their eyebrows.
"If they can do that I'd be really impressed," says Hugh Thomas, VP of marketing for the traffic manager and switch fabric groups at Vitesse Semiconductor Corp. (Nasdaq: VTSS). For comparison, a few months ago, Vitesse introduced an intelligent switch fabric called TeraStream, which can scale up to 160 Gbit/s using 24 chips in a fully redundant configuration (see Vitesse Offers PaceMaker, TeraStream).
However, working out exactly how many chips AMCC would need to scale to its maximum capacity isn't easy, since the company isn't very clear on this point. Siann does say that it requires approximately 2.6 chips to switch an OC192's worth (10 Gbit/s) of traffic.
The fractional figure arises because, in common with other switch fabrics of this type, the nPX8005 chipset comprises several different chips. Sitting on the switch card is a single chip for arbitration and crosspoint functions, which has 80 Gbit/s of raw bandwidth (counting both ingress and egress ports this time). The line card is home to a dedicated memory chip, as well as a choice of two chips for traffic management (they implement different queuing algorithms).
The upshot is that AMCC would need somewhere in the region of 34 chips to reach its maximum configuration. But Vitesse's Thomas remains skeptical. "They must be using a multistage fabric to play those kind of games," he says.
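The chip-count arithmetic can be sanity-checked from the figures quoted in this article. This is our own back-of-envelope reading, not AMCC's published math; in particular, it assumes the 80 Gbit/s of raw switch-chip bandwidth splits evenly between ingress and egress, which would put roughly 32 arbitration/crosspoint chips on the switch cards, with the 2.6-chips-per-port ratio also counting the devices on the line cards:

```python
# Back-of-envelope check on AMCC's scaling claim, using only the
# figures quoted in the article. The even ingress/egress split of the
# switch chip's raw bandwidth is an assumption on our part.

CAPACITY_GBPS = 1280          # 1.28 Tbit/s of terminated, full-duplex traffic
PORT_GBPS = 10                # one OC192's worth
CHIPS_PER_PORT = 2.6          # Siann's figure, including line-card devices
RAW_PER_SWITCH_CHIP = 80      # per arbiter/crosspoint chip, ingress + egress
USABLE_PER_SWITCH_CHIP = RAW_PER_SWITCH_CHIP / 2   # one direction only

ports = CAPACITY_GBPS / PORT_GBPS                      # 128 OC192 ports
switch_chips = CAPACITY_GBPS / USABLE_PER_SWITCH_CHIP  # 32 switch-card chips
linecard_chips = ports * CHIPS_PER_PORT                # ~333 line-card devices

print(ports, switch_chips, linecard_chips)
```

On this reading, the tens-of-chips figure would describe the switch cards alone, while a fully loaded system would carry several hundred devices once the line cards are counted.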
"There are all sorts of complicated blocking issues with multistage fabrics," he adds. "The likes of Abrizio [acquired by PMC-Sierra Inc.] and Hyperchip Inc. do multistage on the grounds that it will get them up to 4 or 5 Tbit/s, but performance-wise you're better off with single stage."
But while the size of AMCC's switch fabric leaves competitors impressed, some of its other claims do not. For starters, Vitesse's Thomas says that the nPX8005 is not unique in offering so-called "active redundancy," as AMCC appears to claim.
Active redundancy offers an alternative to having separate primary and backup switch cards (so called 1+1 redundancy) or spare switch chips (N+1 redundancy). Instead, all the switch chips carry traffic, and, if one of them should fail, the remainder take up the strain. According to Siann, the big advantage is that it allows for rapid failover -- the control system does not have to get involved because the fabric automatically rebalances itself.
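As an illustrative sketch only (the function name and even-striping scheme are our assumptions, not AMCC's or Vitesse's design), active redundancy amounts to spreading traffic over every healthy switch chip and rebalancing over the survivors when one fails, with no control-plane intervention:

```python
# Toy model of active redundancy: all switch chips carry traffic, and
# load is redistributed across the survivors when one fails. Purely
# illustrative; not a description of the nPX8005's internals.

def distribute(total_gbps: float, healthy_chips: list[str]) -> dict[str, float]:
    """Spread the offered load evenly over the currently healthy chips."""
    if not healthy_chips:
        raise RuntimeError("fabric down: no healthy switch chips remain")
    share = total_gbps / len(healthy_chips)
    return {chip: share for chip in healthy_chips}

chips = [f"chip{i}" for i in range(4)]
print(distribute(320.0, chips))    # 80 Gbit/s per chip

chips.remove("chip2")              # one chip fails...
print(distribute(320.0, chips))    # ...the remainder take up the strain
```

The contrast with 1+1 or N+1 schemes is that no standby hardware sits idle and no failover decision is needed: the rebalancing falls out of the load-spreading rule itself.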
Vitesse contends that it, not AMCC, was first with this feature, which it introduced in its GigaStream fabric nearly a year ago (see Vitesse's Balancing Act). "It generated an internal email storm [at Vitesse] when we read about it," Thomas says. Perhaps the Yuni guys knew some of the folks developing the GigaStream, he speculates.
AMCC's Siann ducked the question, saying he couldn't comment on how the GigaStream works.
Thomas also calls into question whether AMCC is using standard or proprietary interfaces on the nPX8005. In its press release, AMCC notes that the chipset integrates seamlessly with the network processors and traffic managers from MMC Networks, another startup it bought in 2001. MMC's products use a proprietary interface called ViX.
"Being a promoter of open standards, we think that's a problem," says Thomas. AMCC did not respond to this question before press time.
Portions of the nPX8005 chipset are already sampling to development partners. The full chipset is set to sample this quarter.
— Pauline Rigby, Senior Editor, Light Reading