The core of next-generation systems • ASIC killers? • Packets vs cells • Product tables

March 3, 2004

23 Min Read
Switch-Fabric Chipsets

As network equipment vendors begin to ramp up development of their next-generation systems in response to signs of a recovery in carrier spending, a raft of vendors of high-performance switch chipsets scent a significant commercial opportunity.

Many equipment vendors are in a bind. The switch fabric has traditionally been a core technology developed with ASICs designed in-house, but new requirements for very-high-speed serial interfaces and multiservice support have increased design costs just when most companies have been forced to cut their R&D investments. Enter the switch-chipset vendors – both established and startup semiconductor players – with a wide variety of chipsets that are simple to integrate and support advanced features.

And there is greater technology choice for designers, too. Highly efficient packet-based solutions are coming to market from startups such as Sandburst and from industry heavyweight Broadcom, potentially challenging the well-established cell-based multiservice chipsets from Agere Systems and AMCC. With Ethernet again starting to look attractive for metro and access networks as well as the enterprise, could this be a golden opportunity for the upstarts to unseat AMCC from its dominance of the switch-chipset market? Price, performance, and availability will be key factors determining the winners and losers.

And looming over everything are the pending Advanced Telecom Computing Architecture (AdvancedTCA) specifications. AdvancedTCA is a standard chassis system for carrier-grade telecom and computing applications that is being defined by the PCI Industrial Computer Manufacturers Group (PICMG). The standard is intended to let vendors reduce time to market and total cost of ownership while still maintaining the five-nines availability required for telecom applications. Subsystems that meet these specs will take a big part of the market, and AdvancedTCA will be a key requirement for switch chipsets.

For some insight into this rapidly developing market, take a look at this report, which covers high-performance switch chipsets from 12 leading vendors.

Here’s a hyperlinked summary of the report:

  • Market & Applications
    Growth, consolidation, and lots of apps

  • Switch Architectures
    What’s in a switch chipset and how do they work?

  • Vendors & Device Counts
    How many devices do you need for a 160-Gbit/s system? That all depends on the vendor...

  • Cell-Based Switch Chipsets
    Compare products for multiservice applications

  • Packet-Based Switch Chipsets
    Compare products for Ethernet and MPLS systems

  • AdvancedTCA
    The road to off-the-shelf telecom linecards?

Webinar

This report was previewed in a Webinar moderated by the author and sponsored by TeraChip Inc. and ZettaCom Inc. It may be viewed free of charge in our Webinar archives by clicking here.

Background Reading

  • Survey Rates Chip Suppliers

  • Marvell Plays Catchup in GigE

  • Ethernet Chips

  • Switch Chips Debut at Conference

  • Packet Switch Chips

— Simon Stanley is founder and principal consultant of Earlswood Marketing Ltd. He is also the author of several other Light Reading reports on communications chips, including: PHY Chips, Packet Switch Chips, Traffic Manager Chips, 10-Gig Ethernet Transponders, Network Processors, and Next-Gen Sonet Silicon.

Over the last two or three years, telecom equipment manufacturers have scaled down their in-house design capabilities in the face of difficult markets. With signs of a revival in various telecom market sectors, however, equipment manufacturers are now starting to develop new systems again. Limited in-house design resources, coupled with tight financial constraints, mean that many companies must look outside for off-the-shelf solutions.

Third-party suppliers now offer a range of standard components for switching (see Figure 1). This stretches from ASICs with standardized I/O and third-party IP blocks, through standard interface devices and switching chipsets, to off-the-shelf linecards. The end-system development cost and time-to-market fall significantly as one moves from left to right across Figure 1. At the same time, the value of the standard components used increases, making this opportunity very attractive to semiconductor and subsystem component vendors.

In the last three to five years, standard components have expanded from framers and high-speed serial interface devices to the standard switch chipsets covered in this report. The development of AdvancedTCA will continue this trend with the introduction of off-the-shelf telecom linecards based on merchant silicon.

The latest switch-chipset report from In-Stat/MDR (see Figure 2) illustrates this trend. The market forecast predicts significant growth in switch-chipset shipments, from approximately 500,000 10-Gbit/s-equivalent ports last year to more than 2 million in 2007.

A difficult market has encouraged consolidation over the past 12 months: PetaSwitch, TeraCross, and Zagros have closed (see PetaSwitch Kicks the Bucket, TeraCross Shuts Down, and Oath (#@%#!!) of Allegiance). AMCC has also bought the PRS product family from IBM Corp. (NYSE: IBM), and Marvell has introduced the Prestera-FX, based on the Dune SAND chipset. But surviving vendors are looking forward to significant growth in the next few years.

“We continue to see quite a strong demand in Asia,” says Mark Hung, director of product marketing at ZettaCom. “America has been much tougher. However, in the last three months there have been a lot of new projects coming on line in North America and Europe.”

These high-performance chipsets cover three main application areas:

  • Enterprise systems: Here the overriding requirement is for a low-cost solution that meets the bandwidth demands.

  • Multiservice provisioning platforms: MSPPs require a switch solution that is protocol agnostic, handling Ethernet, IP, ATM, and ideally TDM traffic as well.

  • Carrier-class metro and core routers: Carrier-class availability of 99.999 percent is achieved through a combination of redundancy and in-service maintenance. To support growth in user numbers and traffic, these systems must support scaling through higher line rates and additional ports.
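To put that availability figure in perspective, a quick back-of-envelope check (the arithmetic is mine, not from the report) shows how little downtime five-nines actually allows, and why redundancy and in-service maintenance are non-negotiable:

```python
# Allowed downtime at 99.999 percent ("five nines") availability.
minutes_per_year = 365.25 * 24 * 60            # ~525,960 minutes
allowed_downtime = (1 - 0.99999) * minutes_per_year
print(f"{allowed_downtime:.1f} minutes/year")  # ~5.3 minutes/year
```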

Many of the latest switch fabrics are also suitable for storage networks. Key requirements here are low latency (ideally below 2 microseconds), support for large packets of up to 4 kbytes, and a protocol-agnostic solution supporting Ethernet and IP as well as Fibre Channel, iSCSI, and InfiniBand.

Multiservice systems are either transport- or data-based:

  • Transport-based systems have a TDM switch core supporting telecom native interfaces such as T1/E1, T3/E3, and Sonet/SDH. Ethernet interfaces are supported through Ethernet-over-Sonet or GFP mappers on the linecards.

  • Data-based systems have a cell-based switch core and native Ethernet interfaces. TDM support requires mappers on the line card and quality-of-service (QOS) guarantees through the switch fabric.

For pure Ethernet and Multiprotocol Label Switching (MPLS) systems, a packet-based switch core can be used. This report covers both cell-based and packet-based switch chipsets.

The switch fabric has three key functions:

  • Provide a connection between the linecards across the backplane: The connections on the backplane are typically four or eight 2.5- or 3.125-Gbit/s serial lines. In many cases 8B/10B coding is used, reducing the effective rate to 2 or 2.5 Gbit/s (the arithmetic is sketched after this list). These connections support hot-swappable modules, allowing an in-service upgrade to either the linecards or the switch-fabric cards.

  • Support the switching of packets or cells between the ports on the different linecards: For IP applications the switching solution must support both unicast and multicast traffic.

  • Meet the QOS requirements for the end application: Typical parameters include bandwidth guarantees, latency, jitter, and availability.
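The effective rates quoted in the first bullet follow directly from the coding overhead. 8B/10B carries 8 data bits in every 10 bits on the wire, so usable bandwidth is 80 percent of the raw serial rate; a minimal check of the arithmetic:

```python
# 8B/10B line coding: 8 data bits per 10 line bits = 80% efficiency.
for raw_gbps in (2.5, 3.125):
    print(f"{raw_gbps} Gbit/s raw -> {raw_gbps * 0.8:.2f} Gbit/s of data")
# 2.5 Gbit/s raw -> 2.00 Gbit/s of data
# 3.125 Gbit/s raw -> 2.50 Gbit/s of data
```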

The switch chipsets covered by this report have one of three architectures:

  • Shared memory

  • Arbitrated crossbar

  • Buffered crossbar

The shared-memory architecture is shown in Figure 3. Ingress packets are queued in small buffers on the linecard and passed to the central switch chip as quickly as possible. The packets are then queued in a shared memory block before being forwarded to the correct output port and linecard. This architecture is simple to develop with a single switch device but is difficult to scale, especially across multiple switch cards. The market-leading AMCC PRS product family uses this architecture.
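As a rough illustration of why a single device can implement this architecture, here is a toy model of the data path (the structure and names are mine, not AMCC's; a real device manages the shared pool in hardware):

```python
from collections import deque

class SharedMemorySwitch:
    """Toy model of a shared-memory switch: every ingress port writes
    into one central buffer pool, organized here as a FIFO per output."""
    def __init__(self, num_ports: int, memory_cells: int):
        self.queues = [deque() for _ in range(num_ports)]
        self.free_cells = memory_cells      # shared buffer pool

    def ingress(self, packet, out_port: int) -> bool:
        if self.free_cells == 0:            # shared memory exhausted
            return False                    # drop (or backpressure)
        self.free_cells -= 1
        self.queues[out_port].append(packet)
        return True

    def egress(self, out_port: int):
        if not self.queues[out_port]:
            return None
        self.free_cells += 1
        return self.queues[out_port].popleft()
```

Even the toy model hints at the scaling problem: every port reads and writes the same memory pool, so memory bandwidth must match the aggregate capacity of the entire switch.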

The crossbar switch (see Figure 4) has virtual output queues on the ingress linecard and is available in two forms: arbitrated crossbar and buffered crossbar. Packets are stored on the ingress side until required at the output, at which point a path is scheduled through the central crossbar. This approach forms the arbitrated crossbar.

To simplify the complex arbitration required to set up a path from ingress to egress, buffering can be added to the crossbar. In the buffered-crossbar architecture, arbitration is handled separately for ingress and egress. Both crossbar architectures scale well across multiple devices and switch cards. Crossbar architectures tend to support more flexible redundancy schemes.
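The arbitration problem is easiest to see in code. The sketch below is a deliberately simplified model (my own, not any vendor's) of virtual output queues with a single-iteration round-robin arbiter; real chipsets use far more sophisticated schedulers:

```python
from collections import deque

class ArbitratedCrossbar:
    """Toy arbitrated crossbar: each ingress holds one virtual output
    queue (VOQ) per egress, and a central scheduler picks a
    conflict-free ingress-to-egress matching every cell time."""
    def __init__(self, num_ports: int):
        self.n = num_ports
        self.voq = [[deque() for _ in range(num_ports)]
                    for _ in range(num_ports)]

    def enqueue(self, ingress: int, egress: int, cell):
        self.voq[ingress][egress].append(cell)

    def schedule(self) -> dict:
        """Single-iteration round-robin match (real schedulers iterate)."""
        match, busy = {}, set()
        for i in range(self.n):               # visit each ingress in turn
            for j in range(self.n):           # first free non-empty VOQ
                if self.voq[i][j] and j not in busy:
                    match[i] = j
                    busy.add(j)
                    break
        return match

    def cell_time(self) -> dict:
        """Move one cell per matched pair across the crossbar."""
        return {(i, j): self.voq[i][j].popleft()
                for i, j in self.schedule().items()}
```

A buffered crossbar adds small buffers at each crosspoint, so the single central matching above decomposes into independent ingress and egress decisions, which is exactly the simplification the text describes.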

For more information on different switch architectures and redundancy schemes, take a look at the earlier Light Reading Report: Packet Switch Chips.

All three architectures work with similar line cards. Figure 5 shows a typical 10-Gbit/s linecard with an optical connection on the left and serial electrical connections to the switch fabric on the right.

Packets pass through the MAC, PHY, and transponder to a packet switching or routing subsystem, such as a network processor and traffic manager, which forwards packets to the switch-fabric interface. The network processor and traffic management subsystem are also connected to a control-plane processor and a large packet store. The interface between the network processor and switch fabric may be SPI-3 (2.5 Gbit/s), SPI-4 (10 Gbit/s), or a Network Processing Forum (NPF) interface such as CSIX-L1 or the Streaming Interface (NPSI).

Data is sent through the switch fabric in frames. Frames can be either a fixed size (cells) or variable sizes (packets). For some switch fabrics, the traffic manager or the switch-fabric interface must split the packets to pass through the switch fabric.
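For a cell-based fabric, the segmentation step looks roughly like the sketch below (the two-byte header and 64-byte payload are illustrative assumptions, not any chipset's real cell format):

```python
def segment(packet: bytes, payload_size: int = 64) -> list[bytes]:
    """Split a variable-size packet into fixed-size cells. The 2-byte
    header (start flag, end flag) is purely illustrative. Padding on
    the final cell is one source of cell-fabric overhead."""
    cells = []
    for off in range(0, len(packet), payload_size):
        chunk = packet[off:off + payload_size]
        header = bytes([off == 0, off + payload_size >= len(packet)])
        cells.append(header + chunk.ljust(payload_size, b"\x00"))
    return cells
```

The egress side reverses the process, using the start and end flags to reassemble the original packet.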

With many switch chipsets and traffic managers, the linecard can also be used as a standalone pizza-box system. In a chassis-based system the linecards can be connected directly back-to-back, through a mesh, or through a standard switch fabric. This gives system designers significant flexibility and scalability when using a single switch-chipset solution.

Switch chipsets can be divided into three groups: TDM, cell-based, and packet-based. TDM switch chipsets are available from Agere, Vitesse, PMC-Sierra Inc. (Nasdaq: PMCS), TranSwitch Corp. (Nasdaq: TXCC), Velio Communications Inc., and Zarlink Semiconductor Inc. (NYSE/Toronto: ZL), but are not covered further in this report. Another group may emerge as companies such as Intel Corp. (Nasdaq: INTC), IMC Semiconductor Inc. (formerly Internet Machines), and Vitesse shift their focus to PCI Express and Advanced Switching.

“We are talking about a whole economy of scale and availability of product that has been leveraged around PCI Express and Advanced Switching,” notes John Chiang, product manager at Vitesse.

Table 1 lists the 12 vendors with cell- and/or packet-based switch chipsets capable of at least 160-Gbit/s aggregate switching, supporting sixteen 10-Gbit/s interfaces or equivalent. Broadcom is the only company to have both cell- and packet-based switch chipsets.

Table 1: Switch-Chipset Vendors

Company        Cell-based    Packet-based
Agere          2.5 Tbit/s    No
AMCC           1.2 Tbit/s    No
Broadcom       640 Gbit/s    320 Gbit/s
Dune           No            40 Tbit/s
Erlang         No            640 Gbit/s
Marvell        No            20 Tbit/s
Mindspeed      320 Gbit/s    No
Sandburst      No            640 Gbit/s
Tau Networks   640 Gbit/s    No
TeraChip       1.2 Tbit/s    No
Vitesse        160 Gbit/s    No
ZettaCom       640 Gbit/s    No



Cell-based and packet-based switches have different characteristics and are discussed separately on pages 5 and 6 of this report.

Device Count for Cell-Based Chipsets

Figure 6 shows the device count for 160-Gbit/s systems that use the different cell-based switch chipsets. With the exception of the Agere PI40 and the Mindspeed iScale, the device count includes the linecard SerDes devices. The Agere and Mindspeed devices are intended for use with dedicated network processors or traffic managers that already include the SerDes functionality.

Figure 6 shows the number of devices for a 16x10-Gbit/s configuration with and without 1:1 redundancy, which provides a duplicate path through the switch. Redundancy is likely to be a key requirement for many users looking for a cell-based solution; with redundancy included, the TeraChip and Tau solutions come out with the lowest device counts for this application. The AMCC Cyclone (nPX8005) is clearly inefficient at 160 Gbit/s, and this is one of the reasons AMCC has added the PRS chipsets from IBM to its product range.

Device Count for Packet-Based Chipsets

Figure 7 shows the device count for 160-Gbit/s systems that use the different packet-based switch chipsets. With the exception of the Broadcom BCM5670/1, the device count includes the linecard SerDes devices. The Broadcom chipset is designed for use with dedicated BCM5673 and BCM5690 devices that include Ethernet MACs, packet processing, and SerDes.

The Dune SAND chipset scores significantly better than the similar Marvell FX, owing to its use of a 20-Gbit/s line interface. Unlike the other chipsets, the Broadcom BCM5670/1 and the Erlang Xe use a multistage approach to build larger switches. This leads to significant increases in device count once the aggregate capacity exceeds that of one or two switch devices.

Cell-based switch fabrics use a fixed frame size, giving excellent QOS capabilities, and are therefore ideal for multiservice applications. Cell-based switch chipsets are available from eight companies (see the following table).



Dynamic Table: Cell-Based Switch Chipsets

Fields: Company; Chipset; Switching Capacity; NPU/TM Interfaces; Host Interface; Guaranteed Latency (microseconds); TDM Support; Subports per 10-Gbit/s Line Interface; Traffic Flows per 10-Gbit/s Port; Switch Architecture; Frame Distribution Across Fabric; Frame Payload (bytes); Link Overspeed; Backplane Link Speed; Backplane Links per 10-Gbit/s Port; Power (per 10 Gbit/s); Price (per 10 Gbit/s); Sample Availability

Switching Capacity

The switching capacities presented in the table are all for a full-duplex switch. These range from 160 Gbit/s for the Vitesse TeraStream up to 2.5 Tbit/s for the Agere PI40.

For smaller systems, chipsets from Agere, AMCC, Broadcom, Tau Networks, and ZettaCom can be used in a 40-Gbit/s switch. For a chassis-based solution, with a single switch device and multiple line interfaces, this is still cost effective. However, the latest Traffic Manager Switch devices from Teradiant Networks Inc. (TN9250/9450) promise significant cost reductions for 20-Gbit/s and 40-Gbit/s pizza-box systems.

NPU/TM Interface

First-generation switch chipsets were designed to connect directly to a network processor or traffic manager from the same vendor. These chipsets have either a high-speed serial interface, such as the Agere PI40, or a proprietary interface, such as the AMCC ViX. To allow the use of switch chipsets with network processors and traffic managers from various vendors, the Network Processing Forum (NPF) developed the 2.5-Gbit/s CSIX-L1 interface. CSIX-L1 specifies a frame protocol that allows the traffic manager or network processor to segment the traffic according to the cell size used by the switch fabric.

Most cell-based switch chipsets now support CSIX-L1, with the AMCC PRS chipsets also supporting the POS-PHY Level 3 interface used on older network processors such as the IBM NP4GS3 (recently sold to Hifn Inc.). For 10-Gbit/s ports, Tau Networks' T64 supports the NPF Streaming Interface (NPSI). Chipsets from Mindspeed, TeraChip, and ZettaCom support an SPI-4.2-based interface, allowing them to communicate directly with the Intel IXP2800 10-Gbit/s network processor. TeraChip is unique in using an FPGA solution for the line interface, which allows customers to tailor the line interface to their own requirements.

Most of these chipsets support a generic 16-bit or 32-bit host interface. This is in line with the multiservice application, where the host processor usually has a generic interface.

Quality of Service

Quality of service is a key issue for cell-based switches. All the chipsets listed, except the AMCC Cyclone and the Vitesse TeraStream, support a guaranteed latency of below 4 microseconds for some traffic. With these low latencies, TDM support is claimed by most of the vendors. In practice, however, as none of the vendors supply switch-specific TDM interfaces for the linecard, this is a theoretical capability. To support TDM traffic in a multiservice application, the data must be packed into cells and transported with suitable latency and jitter guarantees through the switch. Where TDM traffic is the dominant application, a dedicated TDM switch chipset should be used.
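A worked example makes the point concrete (my arithmetic; the AAL1-style figure of 47 payload bytes per cell is an illustrative assumption, not something these vendors specify):

```python
# How long does it take to fill one cell from a single T1 stream?
t1_rate_bps  = 1.544e6          # T1 line rate
payload_bits = 47 * 8           # AAL1-style packing (assumption)
fill_time_us = payload_bits / t1_rate_bps * 1e6
print(f"{fill_time_us:.0f} us to fill one cell")   # ~244 us
# The packing delay alone dwarfs a sub-4-us fabric latency; what the
# fabric must really deliver for TDM is tightly bounded jitter.
```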

All the chipsets support at least four subports per 10-Gbit/s line interface. For 160-Gbit/s multiservice applications the most common line card configuration is 4xOC48. Chipsets from ZettaCom and TeraChip support 16 subports, and the Tau T64 will support 64. With most of these chipsets being used with a network processor or traffic manager on the linecard, this additional capability will not be required in most applications.

One area where chipset features differ significantly is the number of traffic flows per 10-Gbit/s port. The eight or 16 flows supported by the AMCC PRS and ZettaCom devices are adequate for most applications. For more advanced multiservice applications, support for 32 or 64 flows will improve the QOS granularity that the switch can deliver. The chipsets from Tau and TeraChip provide over 1,000 flows per port; however, it is not clear whether system designers will really use this level of QOS granularity in a standard switch fabric.

None of these chipsets includes a full traffic management function, and therefore for most applications a separate traffic manager or network processor with integrated traffic management will be required.

Switch Architecture and Interconnect

The switch architecture used for these chipsets is a mixture of shared memory, buffered crossbar, and arbitrated crossbar.

Where multiple fabrics are used, the frame distribution across fabric can either be striped – with a single frame (cell) being split across several links – or sent inline down a single link. Cell-based fabrics use a frame with a fixed payload. The frame payload can be selected from a number of options or, for Tau and TeraChip, from a range, based on the expected traffic characteristics.
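The two distribution modes are easy to sketch (a toy illustration, not any vendor's implementation):

```python
def distribute(frame: bytes, num_links: int, striped: bool) -> list[bytes]:
    """Toy illustration of frame distribution across a fabric.
    Striped: the frame is sliced across all fabric links in parallel.
    Inline: the whole frame travels down a single link."""
    if striped:
        return [frame[i::num_links] for i in range(num_links)]
    return [frame]      # other links carry other frames concurrently
```

Broadly, striping keeps the links evenly loaded and cuts serialization delay, at the cost of keeping the slices aligned for reassembly; inline distribution avoids that alignment problem but makes per-link load balancing harder.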

A cell-based fabric simplifies the switching and improves the QOS features; however, there is a major downside: the additional bandwidth required by the header of each frame must be provided on each link. This link overspeed should, in theory, be 1.6x or more. In practice, owing to inefficiencies in the switch fabrics and the use of fixed frame sizes, 2x or more is required. Although the Broadcom BCM83xx and TeraChip TeraFlex chipsets allow a link overspeed of 1x, this configuration could be used only in pure packet applications. The latest PRS Q-80G chipset from AMCC increases the overspeed to 2x by increasing the backplane link speed to 3.125 Gbit/s (3.4 Gbit/s maximum).

All chipsets use 8B/10B encoding on the serial links, reducing the data bandwidth to 2 Gbit/s for a 2.5-Gbit/s link and 2.5 Gbit/s for a 3.125-Gbit/s link. Combining this per-link bandwidth with an overspeed of 2x means that most of the chipsets require at least eight backplane links per 10-Gbit/s port.
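Putting the coding overhead and the overspeed together gives the link budget directly; the short calculation below simply restates the figures in the text:

```python
import math

def backplane_links(port_gbps=10, overspeed=2.0, raw_link_gbps=2.5):
    """Minimum fabric links per port, given 8B/10B-coded serial links."""
    data_per_link = raw_link_gbps * 0.8        # 8B/10B coding
    return math.ceil(port_gbps * overspeed / data_per_link)

print(backplane_links(raw_link_gbps=2.5))      # 10 links per 10-Gbit/s port
print(backplane_links(raw_link_gbps=3.125))    # 8 links per 10-Gbit/s port
```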

Power, Price, and Availability

The total power available for a typical chassis is 100 to 150W per line card for a 16-slot system. Most of the chipsets shown have a power (per 10 Gbit/s) of about 15W. These are suitable for use in 160-Gbit/s systems with 10-Gbit/s linecards.

As OEMs roll out 320-Gbit/s and 640-Gbit/s systems, power becomes an issue. These systems use 20-Gbit/s and 40-Gbit/s linecards, so a power consumption of 30 to 60W just for the switching chipset gives the system designer a headache. The Agere PI40 and the Mindspeed iScale have quoted power consumptions of 7W and 8W respectively, but these figures do not include the linecard SerDes. Lowering power consumption must be a key focus for the next generation of switch chipsets. The only companies with quoted power consumptions of 10W and below are Tau and TeraChip. Together with their low chip counts and aggressive pricing, this gives these startups an opportunity to achieve market success.

Price (per 10 Gbit/s) is approximately $300 to $400; Tau leads with an announced price of $270 per 10-Gbit/s port. The last column gives sample availability. Almost half the chipsets are now in production, with only the latest PRS chipset still to sample. Since the last Light Reading report in December 2002, Packet Switch Chips, Tau and TeraChip have sampled their first chipsets. In the last 12 months, chipsets that were available as samples from Broadcom and Vitesse have reached production status.

Several chipsets introduced during 2003 transport variable payload frames across the backplane. These packet-based switches can be significantly more efficient than cell-based switches (see the following table). Latency and jitter, however, are much larger, limiting most of these chipsets to pure packet applications. The exception is the Dune SAND architecture, which promises more advanced QOS for multiservice applications. The Marvell Prestera-FX uses the same architecture but must be used with the Prestera packet processors, limiting the application to Ethernet and MPLS.



Dynamic Table: Packet-Based Switch Chipsets

Fields: Company; Chipset; Switching Capacity; NPU/TM Interfaces; Host Interface; Integrated Traffic Management; Subports per 10-Gbit/s Line Interface; Traffic Flows per 10-Gbit/s Port; Frame Payload (bytes); Switch Architecture; Frame Distribution Across Fabric; Link Overspeed; Backplane Link Speed; Backplane Links per 10-Gbit/s Port; Power (per 10 Gbit/s); Price (per 10 Gbit/s); Sample Availability

Switching Capacity

Most of these chipsets have a switching capacity of up to 320 Gbit/s or 640 Gbit/s, and they are now starting to appear in systems. Both Riverstone Networks Inc. (OTC: RSTN.PK) and Accton Technology Corp. have announced systems using the Sandburst HiBeam chipset.

The Dune SAND architecture is designed to go way beyond 640 Gbit/s – up to 20 Tbit/s with a 10-Gbit/s line interface, as used in the Marvell Prestera-FX, and up to 40 Tbit/s using the 20-Gbit/s line interface that Dune plans to sample in the next few months. With 40-Gbit/s linecards, the switch chipset can support 80-Tbit/s switching distributed across several chassis. This is the first standard chipset to offer this level of scalability, previously only promised by core router vendors.

“Some customers are planning to use the Marvell FAP10M in most of the linecards and then use the FAP20V with a network processor in a couple of linecards,” says Michael Kahan, VP of marketing at Dune.

NPU/TM Interface

All these chipsets are designed to work with external network processors or packet processors as part of a complete system solution.

The Broadcom BCM5670/1 is designed to work with either the BCM5673 for 10-Gigabit Ethernet linecards or the BCM5690 for Gigabit Ethernet linecards. The interface between these devices and the switch card is a single XAUI. The Marvell Prestera-FX connects directly to the Marvell Prestera-MX packet processors, and the Sandburst HiBeam chipset includes the FE-1000 forwarding engine.

The Erlang ENET-Xe can be used with third-party network processors via the POS-PHY Level 3 (PL3) or CSIX-L1 interfaces. A second-generation line interface planned for later in 2004 will add SPI-4.2 and NPSI. The Sandburst chipsets already support SPI-4.2.

Quality of Service

The chipsets from Dune, Marvell, and Sandburst include integrated traffic management functionality. The Broadcom and Erlang chipsets require a separate traffic manager or packet processor with integrated traffic management.

As with the cell-based switch fabrics, there is significant variation in the number of subports per 10-Gbit/s line interface and traffic flows per 10-Gbit/s port. For Ethernet and MPLS systems, four subports and eight traffic flows per port are likely to be adequate.

Switch Architecture and Interconnect

The switch architecture used for these chipsets is a mixture of shared memory, buffered crossbar, and arbitrated crossbar. Where multiple fabrics are used, the frame distribution across fabric can either be striped across several links, or all the bytes in a single frame (packet) can be sent inline down a single link. For packet-based fabrics, using the inline approach will significantly increase the latency. The Broadcom BCM5670/1 uses a standard XAUI interface (4x3.125-Gbit/s) for each switch fabric connected.

By definition, the frame payload for a packet-based switch must be variable. All the chipsets support the 9 kbytes required for jumbo packets.

There are significant differences among chipsets in the link overspeed required. At the top end, the Sandburst chipsets require an overspeed of 2x. The Dune and Marvell chipsets will work with an overspeed of 1.25x. By carefully defining the tags used on the frames, Broadcom has limited the bandwidth required over its XAUI link to 10 Gbit/s and can therefore run with no overspeed (1x).

All the packet-based chipsets support the 3.125-Gbit/s backplane link speed, although the Sandburst chipsets will also work at 2.5 Gbit/s. The link overspeed has a direct impact on the number of backplane links per 10-Gbit/s port. The Sandburst chipsets require ten at 3.125 Gbit/s, with the rest, except Broadcom, requiring five to eight. The difference between four and five-to-eight links may not be significant for a proprietary chassis, but it may become very significant as companies move to AdvancedTCA, which limits the number of interfaces per linecard.
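Applying the same link-budget arithmetic as for the cell-based chipsets (my calculation, at the 3.125-Gbit/s link speed) reproduces most of these counts:

```python
import math

def links(overspeed, port_gbps=10, raw_link_gbps=3.125):
    """Minimum 8B/10B-coded fabric links per 10-Gbit/s port."""
    return math.ceil(port_gbps * overspeed / (raw_link_gbps * 0.8))

print(links(1.0))    # 4: Broadcom's single XAUI, no overspeed
print(links(1.25))   # 5: Dune and Marvell
print(links(2.0))    # 8: raw minimum at 2x; Sandburst's quoted ten is
                     #    above this, presumably implementation headroom
```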

Power, Price, and Availability

As with the cell-based chipsets, power is below 10W for the leaders and around 15W for the others. The 8.3W quoted for the Broadcom BCM5670/1 does not include the linecard SerDes, leaving Dune with the lowest power in this group.

Price (per 10 Gbit/s) has not been disclosed by most of the vendors, but pricing should be significantly below the roughly $350 per port of a cell-based chipset. Production devices are available from Broadcom, Marvell, and Sandburst. Price will be a key factor in this market over the coming year.

“I think you can expect people to be delivering gear at price points of $1,000 to $3,000 per 10-Gigabit Ethernet wire-speed port, available in the second half of 2004,” states Eric Hayes, senior product line manager with Broadcom.

A recent report from RHK Inc. forecast that the market for third-party commercial building blocks in telecom systems would grow from a negligible level today to $3.7 billion in 2007. A significant share of this may be taken by subsystems that meet the AdvancedTCA specifications. AdvancedTCA is a standard chassis system (see Figure 8) for carrier-grade telecom and computing applications, which is being defined by the PCI Industrial Computer Manufacturers Group (PICMG).

Using this standard solution, manufacturers can reduce time to market and total cost of ownership while still maintaining the 99.999 percent (five-nines) availability required for telecom applications. In addition to the backplane, the specification defines the mechanicals, system management, and power distribution. The backplane uses 2.5-, 3.125-, or, in the future, 10-Gbit/s serial interfaces. The specification covers both dual-star topologies and a full mesh.

An AdvancedTCA chassis can contain up to 16 cards. The cards are connected across the backplane using four buses: Sync Clock, Update Channel, Base Interface, and Fabric Interface. The fabric interface consists of 15 full-duplex channels. In the first Ethernet implementation each channel is 4x3.125-Gbit/s full duplex, but in the future this XAUI implementation will be extended to 4x10-Gbit/s, increasing the chassis capacity from 160 Gbit/s to 640 Gbit/s. The Base Interface provides a management channel across the switching system.

In a dual-star configuration, the chassis can have one or two switch cards and up to 14 linecards. Each linecard has a full-duplex channel (for example, 4x3.125-Gbit/s) to each switch card. The final system capacity will depend on the switch chipset used and the overspeed required.
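The chassis-capacity figures follow from simple channel arithmetic. A quick check (my worked example; it treats the planned 4x10-Gbit/s channels as carrying 10 Gbit/s of data per lane, and the headline figures count all 16 slots):

```python
# Per-channel data bandwidth in the first Ethernet implementation:
# XAUI = 4 lanes x 3.125 Gbit/s raw, 8B/10B coded.
lanes, raw_lane_gbps = 4, 3.125
channel_now = lanes * raw_lane_gbps * 0.8   # 10 Gbit/s of data
channel_future = 4 * 10                     # planned 4x10-Gbit/s channel
print(16 * channel_now)      # 160.0 Gbit/s chassis capacity today
print(16 * channel_future)   # 640 Gbit/s with the extended fabric
# In a dual-star configuration only 14 of the 16 slots carry linecards,
# so usable linecard capacity sits somewhat below the headline figure.
```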

AdvancedTCA is likely to be a key specification for switch chipsets. It enables off-the-shelf linecards, leading to greater demand for specific chipsets, and it significantly reduces time to market for system manufacturers. Related documents cover Ethernet, Fibre Channel, InfiniBand, StarFabric, and PCI Express connectivity.

“We have quite a few slots going AdvancedTCA,” says Broadcom's Hayes. “I see people building boxes using cards from other manufacturers, similar to what we see with PCI today.”
