A survey of switching silicon for tomorrow's data switches and routers
February 2, 2003
For many network equipment manufacturers, the switch chipset has been a core technology that has traditionally been handled using custom ASICs designed in-house. The switch chipset is often the first component to be selected and is critical to the long-term capabilities of a product, giving companies a key differentiator from their competitors.
But now the need for innovative architectures and very-high-speed serial interfaces has increased the cost of the design and manufacture of switch chipsets, at a time when most companies have been forced to reduce development costs and increase margins. Third-party switch chipsets, now available from both established companies and startups, provide market-leading performance at unit costs that are competitive with in-house solutions.
With many different packet switch architectures and a variety of technologies, there also remains plenty of choice in the marketplace.
In this report, we take a detailed look at a typical switch and various switch architectures. We discuss the delivery of quality of service (QOS) as well as carrier-class reliability. We then review the high-performance packet switch market, concluding with a vendor summary and a table showing all the 160-Gbit/s and 640-Gbit/s packet switch chipsets that are available or will be available in the next 6 months.
Read on to learn about:
A Typical Switch
Switch Architectures
Scheduling and Arbitration
Redundancy
The Packet Switch Chipset
Vendors
Product Matrix
— Simon Stanley is founder and principal consultant of Earlswood Marketing Ltd. He is also the author of several other Light Reading reports on communications chips, including Traffic Manager Chips, 10-Gig Ethernet Transponders, Network Processors, and Next-Gen Sonet Silicon.

As usual, this report was previewed in a Light Reading Webinar sponsored by four leading providers of packet switch chipsets: Dune Networks, IBM Corp. (NYSE: IBM), Zagros Networks Inc., and ZettaCom Inc. The Webinar is archived here.

A chassis-based switch (Figure 1) contains several multi-channel line cards. Each line card is attached to a number of lines (1-16) through connectors on the front. The line cards are connected to fabric cards through the backplane. Packets are forwarded to the fabric card by routing devices on the line card. The fabric devices on the fabric card switch packets between the line cards.

A typical switch contains 14 to 32 line cards in a single chassis, with between two and four fabric cards supporting a total switching capacity of 40 Gbit/s to 640 Gbit/s. Some switches are designed to support multi-chassis implementations, expanding the system capacity to several Tbit/s.

A switch chipset has three key functions.
Firstly, it must provide a connection between line cards across the backplane. These connections support hot-swappable modules, allowing an in-service upgrade to either the line cards or the switch fabric cards.
The switch chipset must also support the switching of packets and cells between the ports on the different line cards. Most switch fabrics implement the switching of frames that contain part or all of a cell or packet. Incoming packets are segmented and packed into frames by either the switch fabric line interface or by the traffic manager or network processor. For IP applications, the switching solution must support both uni-cast and multi-cast traffic. Some packet switch chipsets also support TDM traffic.
Finally, the switching solution must meet the quality of service (QOS) requirements for the end application. Typical parameters include bandwidth guarantees, latency, and jitter.
A typical switch chipset consists of a line interface on each line card and one or more fabric devices on each fabric card (Figure 2). The line interfaces are connected to the fabric devices across the backplane using high-speed serial links (usually 2.5 Gbit/s or 3.125 Gbit/s of raw bandwidth). Up to 144 serial links are integrated into the fabric devices. In most chipsets, between eight and 20 serial links are also integrated into the line interfaces. Most chipsets use 8B/10B coding on the links, giving an effective bandwidth on each link of 2 Gbit/s or 2.5 Gbit/s.

Each line card in the system also contains one or more framer devices, a network processor, and a traffic manager. Packets are usually split into a number of fixed-size frames by the network processor. Each frame is then sent over a single serial link (in-line) or split across several serial links (striped).

The line interface device contains some functionality that is similar to a traffic management function (see Traffic Manager Chips). This is leading a number of companies to develop chipsets that integrate the traffic manager with the switch fabric solution.
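To make the link arithmetic concrete, here is a small illustrative calculation (Python; the figures are the typical values quoted above, not any vendor's specification) of the effective bandwidth left after 8B/10B coding and the number of serial links needed to carry a 10-Gbit/s port at a given overspeed.

    # Illustrative link arithmetic only; typical figures from the text, not vendor specs.
    import math

    def effective_link_rate(raw_gbps, coding_efficiency=8 / 10):
        # Usable bandwidth of a serial link after line coding (8B/10B by default).
        return raw_gbps * coding_efficiency

    def links_per_port(port_gbps, raw_link_gbps, overspeed=2.0):
        # Serial links needed to carry one port at the chosen fabric overspeed.
        usable = effective_link_rate(raw_link_gbps)
        return math.ceil(port_gbps * overspeed / usable)

    print(effective_link_rate(2.5))      # 2.0 Gbit/s effective
    print(effective_link_rate(3.125))    # 2.5 Gbit/s effective
    print(links_per_port(10.0, 2.5))     # 10 links for a 10-Gbit/s port at 2x overspeed

With 2.5-Gbit/s raw links and 2x overspeed, this works out to the ten links per 10-Gbit/s port cited later in the product-matrix notes.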
Figure 3 shows an ideal switch in a functional diagram, with the line interface and traffic manager/network processor split: The ingress is shown on the left, and the egress is shown on the right.

The packets come in through ingress traffic managers/network processors and multiplexers. Packets are sent through the fabric devices and queued at the output of the switch. In this ideal switch, there is infinite bandwidth between the ingress traffic managers/network processors and the output queues connected to the egress traffic managers/network processors. Packets are stored in the output queues to avoid head-of-line blocking as they wait for a path through the fixed-bandwidth egress port.
Figure 4 shows a practical switch, with finite bandwidth. There are now virtual output queues (VOQs) on the ingress side. These VOQs are used to store frames on the ingress side before sending them through the fabric devices to arrive on the egress side, just in time to be sent through to the traffic manager and network processor.

The key to making this solution work efficiently is to ensure that you make maximum use of the switching capacity, while at the same time neither starving the egress ports nor delaying premium-rate traffic. Most existing switch chipsets still require queues on both ingress and egress to achieve maximum throughput.

Capacity can be added to a switch by adding fabric devices in parallel, with each fabric device handling 32-64 ports or 40-80 Gbit/s. Further expansion is achieved by replacing the fabric devices with ones that support either higher link rates or additional ports.
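As a rough sketch of the virtual-output-queue idea (illustrative Python only, not any chipset's interface), an ingress stage can keep one queue per egress port and traffic class, so that a congested egress port does not hold up frames destined for other ports:

    from collections import deque

    class IngressVOQ:
        # Hypothetical model: one FIFO per (egress port, traffic class).
        def __init__(self, num_ports, num_classes):
            self.queues = {(p, c): deque()
                           for p in range(num_ports)
                           for c in range(num_classes)}

        def enqueue(self, frame, egress_port, cls):
            # Sorting frames by destination on ingress prevents one congested
            # egress port from blocking traffic bound for the other ports.
            self.queues[(egress_port, cls)].append(frame)

        def backlog(self):
            # The (port, class) pairs with frames waiting; roughly what the
            # line interface would report to the fabric or arbiter.
            return [key for key, q in self.queues.items() if q]

        def dequeue(self, egress_port, cls):
            q = self.queues[(egress_port, cls)]
            return q.popleft() if q else None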
In the shared-memory switch, the center of the switch fabric is a shared memory that contains queues for different output ports and classes of service. Incoming frames are stored in virtual output queues on the line card and moved into the shared memory fabric device as quickly as possible. Frames are then scheduled out of the shared memory to meet QOS requirements.

The shared-memory switch is likely to be more expensive than an arbitrated crossbar (see Figure 7), due to the cost of the integrated memory in the fabric device. This architecture has the best pedigree, however, with both IBM Corp. (NYSE: IBM) and Applied Micro Circuits Corp. (AMCC) (Nasdaq: AMCC) shipping since 1992.
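By contrast with the ingress VOQ model above, a shared-memory fabric can be pictured, very loosely, as one central buffer holding per-output, per-class queues. The toy model below is an assumption-laden sketch for illustration only and does not describe the IBM or AMCC designs.

    from collections import deque

    class SharedMemoryFabric:
        # Toy model: all outputs share one buffer pool inside the fabric device.
        def __init__(self, num_ports, num_classes, capacity_frames):
            self.capacity = capacity_frames
            self.stored = 0
            self.num_classes = num_classes
            self.queues = {(p, c): deque()
                           for p in range(num_ports)
                           for c in range(num_classes)}

        def accept(self, frame, out_port, cls):
            # Admission fails (flow control back to the line card) once the
            # shared pool is full, regardless of which output is congested.
            if self.stored >= self.capacity:
                return False
            self.queues[(out_port, cls)].append(frame)
            self.stored += 1
            return True

        def schedule(self, out_port):
            # Strict priority across classes for one egress port per cycle;
            # real fabrics use richer schedulers to meet QOS guarantees.
            for cls in range(self.num_classes):
                q = self.queues[(out_port, cls)]
                if q:
                    self.stored -= 1
                    return q.popleft()
            return None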
The buffered crossbar switch has queues at the input, in the crossbar, and at the output. There is scheduling at the output of each queuing point. By having queues at each stage, this architecture avoids the complication of a centralized arbitration mechanism.

The buffered crossbar is the simplest switch architecture, but because the stages are independent, it is difficult to provide advanced QOS through the switch. Broadcom Corp. (Nasdaq: BRCM) and Vitesse Semiconductor Corp. (Nasdaq: VTSS) are sampling buffered crossbar switch chipsets, with Internet Machines Corp. expected to sample devices by the end of 2002.
The third switch architecture is the arbitrated crossbar switch. With this architecture, there are virtual output queues connected to the traffic manager or network processor, a crossbar including an arbiter, and in most cases queues on the output. A request is made by the line interface, and data is sent once a grant is received back from the arbiter within the crossbar. Some chipsets have a separate arbiter device.

The arbitration schemes used with this architecture are the key to success, and standard algorithms such as iSLIP and FIRM are being replaced by proprietary algorithms that are optimized for performance, QOS, or cost.

The arbitrated crossbar is expected to be the dominant architecture in the future, but implementation challenges are delaying the introduction of chipsets. Only Mindspeed Technologies and ZettaCom Inc. have arbitrated crossbar switch chipsets in production. Chipsets are sampling from Agere Systems (NYSE: AGR/A), AMCC, and Sandburst Corp. In addition, Erlang Technology Inc., PetaSwitch Solutions Inc., Tau Networks Inc., and TeraCross Ltd. expect to sample chipsets by mid-2003.

Scheduling and arbitration are used to route data through the switch to meet QOS requirements. Scheduling controls data leaving a stage within the switch chipset and may be used on ingress, egress, and in the fabric device, depending on the switch architecture. Arbitration is used to control access to the data path through the fabric device.

Quality of service (QOS)

QOS guarantees bandwidth, latency, and jitter for some or all traffic (see IP Quality of Service). There are various mechanisms used to support QOS in switch fabrics. Most support multiple queues per port through part or all of the switch. These multiple queues are linked to priority, classes, or flows. Some switch chipsets provide specific QOS mechanisms to support TDM traffic.

To ensure efficient use of the switch bandwidth and maximum throughput, switch fabrics implement flow control through the switch itself and between the traffic manager and the switch fabric. Flow control is usually either proprietary or based on CSIX-L1 CFrames.
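The request/grant handshake used by arbitrated crossbars, described above, can be illustrated with a deliberately simplified model (not iSLIP, FIRM, or any vendor's algorithm): each input reports the outputs it has frames queued for, and the arbiter grants at most one input per output in each cycle.

    def arbitrate(requests):
        # requests: dict of input port -> set of requested output ports.
        # Returns input -> granted output, one grant per output per cycle.
        # A real arbiter iterates and rotates priority for fairness; this
        # single greedy pass is purely illustrative.
        granted_outputs = set()
        grants = {}
        for inp in sorted(requests):
            for out in sorted(requests[inp]):
                if out not in granted_outputs:
                    grants[inp] = out
                    granted_outputs.add(out)
                    break
        return grants

    # Inputs 0 and 1 both want output 2; only one of them is granted this cycle.
    print(arbitrate({0: {2}, 1: {2, 3}, 2: {0}}))   # {0: 2, 1: 3, 2: 0}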
The Network Processing Forum (NPF) specification CSIX-L1 defines a standard interface between traffic managers or network processors and switch fabrics. CSIX-L1 also defines the CFrame, a widely adopted format for segmenting packets into manageable frames to pass through a switch fabric. The CFrame is protocol-agnostic, and although the payload can be 1 to 256 bytes, the most common implementation uses a 64-byte payload, making 72 bytes in total including the eight bytes of header and trailer overhead. The header carries information about class and per-destination behavior. CFrames are supported by most chipsets over the interface to the traffic manager, and by some switch chipsets within the switch itself.

Intel Corp. (Nasdaq: INTC) has defined a proprietary method of handling CFrames over SPI-4.2 (an interface specified by the Optical Internetworking Forum (OIF) -- see OIF Gives 40 Gig a Boost), which a number of switch manufacturers are planning to support. The NPF has also defined a streaming interface that runs over SPI-4.2 electrical interfaces. This interface has a separate, out-of-band flow-control protocol.

Scheduling

Scheduling determines when, and in which order, queues are serviced. It is a “many-to-one” function. The main algorithmic forms used are:
Round Robin -- a frame is scheduled from each queue in turn.
Priority Round Robin and Strict Priority -- higher-priority queues are served ahead of lower-priority queues.
Weighted Round Robin -- queues are serviced in order, but high-priority queues are visited more often. This can be used to support TDM and other constant-bit-rate traffic, where rate-based scheduling is needed (see the sketch after this list).
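The sketch below illustrates the weighted-round-robin idea from the list above; the queue layout and weights are invented for illustration and do not correspond to any particular chipset.

    from collections import deque

    def weighted_round_robin(queues, weights):
        # Serve queues in order; queue i may send up to weights[i] frames per round.
        while any(queues):
            for q, w in zip(queues, weights):
                for _ in range(w):
                    if not q:
                        break
                    yield q.popleft()

    premium = deque(["p0", "p1", "p2", "p3"])       # weight 3
    best_effort = deque(["b0", "b1", "b2", "b3"])   # weight 1
    print(list(weighted_round_robin([premium, best_effort], [3, 1])))
    # ['p0', 'p1', 'p2', 'b0', 'p3', 'b1', 'b2', 'b3']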
Arbitration

Arbitration is used to control access to the data path through the fabric device. This is a “many-to-many” function. Most arbitration algorithms include a request/grant handshake. Legacy architectures such as iSLIP and FIRM specify both the arbitration and the scheduling. This is an area of continual innovation.

All switch chipsets support redundancy, with some chipsets supporting more than one type. There are three main types of redundancy: passive, load-sharing, and active.

Passive Redundancy (1:1, N:1)

This is the simplest to implement. One or more backup fabrics are installed in addition to the active fabrics. When a failure occurs, an automatic switch is made to one or more backup fabrics. One disadvantage of this approach is that when the switch is made to backup fabrics, any data in the active fabrics is lost. 1:1 redundancy is the minimum requirement for carrier-class reliability (99.999%).

Load-Sharing Redundancy (N+1, N-1, N+N)

Load-sharing switches are the cheapest to implement. All the switch fabrics are active, carrying user traffic until a failure occurs. If a failure occurs, one fabric device or card is disabled and system performance degrades gracefully, with traffic continuing to flow but at a lower throughput.

The number of switch fabrics used depends on the application requirements. For a minimum-cost system, the number of switch fabrics would be set to meet the network requirements under normal conditions. For more demanding applications, the number of active fabrics is increased, and the switch will still meet normal traffic demands even after a failure. By providing twice as many switch fabrics as are required for normal traffic, this solution can also support carrier-class reliability.

Active Redundancy (1+1)

Active redundancy supports loss-less switchover to backup fabric devices and cards. This is the most expensive to implement. This configuration features two sets of active fabrics carrying the same traffic, with only one set connected to the outputs. If a fault occurs, there is automatic, loss-less switchover to the redundant fabrics.

Figure 9 shows a chart from a report by In-Stat/MDR from July 2002. Despite the current market conditions, In-Stat is forecasting a compound annual growth rate of 141 percent from 2001 through 2006. This growth to over 40 million lines by 2006 is largely driven by the expectation that a significant number of companies will move away from in-house ASIC designs to standardized, off-the-shelf switch fabric solutions over this period.

“We believe that many companies that use their own silicon will next year consider using commercial silicon. A lot of the new switch chipsets that are coming out have the features that were lacking in previous designs,” says Fred Kamp, VP of marketing at PetaSwitch.

Switch chipsets are used in a number of different markets:
Enterprise and Ethernet Access Systems – The overriding requirement here is for a low-cost solution that meets the bandwidth demands.
Multiservice Access – The switch fabric must be protocol-agnostic, handling Ethernet, IP, ATM, and ideally TDM traffic as well. In-service upgrade is a key requirement for this market.
Carrier-Class Metro and Core Router – Carrier-class reliability (99.999%) is achieved through a combination of redundancy and in-service maintenance. The expected lifetime of a chassis can be between five and 20 years, rather than the three to eight years that are typical of enterprise systems, so scalability is a key requirement.
Storage Networks – Only the latest switch fabrics are suitable for this segment. Key requirements here are low latency (ideally below 2 microseconds) and support for large packets (up to 4 kbytes). This market also requires a protocol-agnostic fabric that can support Ethernet and IP as well as Fibre Channel, iSCSI, and InfiniBand.
A key trend for the near future is likely to be significant consolidation in the current 160-640 Gbit/s market, where there are four vendors in production (AMCC, IBM, Mindspeed, and ZettaCom Inc.), four vendors sampling devices (Agere, Broadcom, Sandburst, and Vitesse), and six vendors planning to sample before mid-2003 (Erlang, Internet Machines, PetaSwitch, Tau Networks, TeraCross, and Zagros). There are also five or more vendors that are yet to announce their plans.

Many of the established companies have new products and are planning further extensions, offering increased scalability, additional QOS with emphasis on support for TDM, and lower prices. These standard chipsets exceed the capabilities of most in-house designs. The battle is now on to win designs from incumbent in-house ASICs and achieve the 40 million links by 2006 forecast by In-Stat.

Agere Systems (NYSE: AGR/A)

The Agere PI40 chipset supports 40 Gbit/s to 2.5 Tbit/s. The chipset consists of an aggregation device (PI40X) and a crossbar (PI40C). The PI40X aggregation device can be positioned on the fabric card or on a 40-Gbit/s line card. The chipset is scaled by connecting the PI40X devices together using one or more PI40C devices.

The PI40X connects directly to the Agere PayloadPlus 10-Gbit/s network processor but will require external SerDes devices to connect to the new APP550 2.5-Gbit/s network processor. A single-chip, 40-Gbit/s switch solution, the PI40SAX, is also available.

The PI40 chipset is sampling now. Future plans include a lower-power PI40X.
Intel Backs Another Switch Chip
Applied Micro Circuits Corp. (AMCC) (Nasdaq: AMCC)

AMCC has two 160-Gbit/s switch solutions.

The nPX5800 Interconnect Fabric is a 20-Gbit/s shared-memory switch, designed primarily to be used with the AMCC nP57xx traffic management chipset. The fabric is connected to the traffic manager using the AMCC ViX-V3 parallel or serial interfaces. The solution can be scaled to 160 Gbit/s with eight fabric devices, or to 160 Gbit/s with redundancy using 16 fabric devices.

The nPX8005 (Cyclone) chipset consists of four devices, including an arbitrated crossbar. Like the nPX5800, the nPX8005 supports proprietary AMCC interfaces. The chipset scales to 1.2 Tbit/s and handles any combination of ATM, MPLS, and IP, as well as TDM traffic.
AMCC Announces Edge Platform
AMCC Adds Terabit Switch Fabric
AMCC Raises Eyebrows
Broadcom Corp. (Nasdaq: BRCM)

The Broadcom BCM83xx chipset is designed for the enterprise and service provider metro markets. The chipset consists of a dual 10-Gbit/s line interface and an 80-Gbit/s buffered crossbar. A 160-Gbit/s switch can be built using only two crossbar devices, though four are needed to support 2x overspeed on the backplane links. The line interface supports two CSIX-L1 network processor interfaces. The chipset is sampling.

To meet the requirements of service provider applications, Broadcom has focused on 99.999% reliability and QOS. "We are the only people that can have this level of quality of service in a fabric this size," says Eric Hayes, Broadcom's product line manager.
Broadcom Launches New Switch Fabric
Dune Networks

Dune Networks was founded in October 2000 by a team drawn largely from system companies. With headquarters in California and development in Israel, the company is working on a switch chipset called SAND (Scalable Architecture for Network Devices).

The fabric will have scheduling algorithms and 'fine-grain' traffic management to support various traffic types, including ATM, Ethernet, and TDM. The company has not released schedule information; therefore the SAND chipset is not included in the product table at the end of this report.
Switch Fabric Chips Rewrite the Rules
Switching Silicon Goes Scaleable
Dune Digs Up $24M
Erlang Technology Inc.

Erlang is a privately held company that provides both ASIC and ASSP (standard product) switching solutions. The company’s standard product roadmap includes three chipsets: the ENET-Xs (which is shipping), the ENET-Xe (covered in this report), and the forthcoming ENET-Xt.

The ENET-Xe consists of two chips, a 2.5-Gbit/s line interface (Xel) and an arbitrated crossbar (XeC). Scalable from 80 Gbit/s to 640 Gbit/s, the chipset is expected to sample soon.
Erlang Preps Switch Fabric Push
IBM Corp. (NYSE: IBM)

IBM is the switch chipset market leader, and the PowerPRS Q-64G is the latest switch fabric in a long line. “We have been designing switch fabrics since 1992,” explains Gilles Garcia, strategic marketing manager at IBM.

All the IBM switch fabrics use a shared-memory approach. The Q-64G scales from 80 Gbit/s to 320 Gbit/s and is IBM’s first fabric to support 2.5-Gbit/s backplane links, rather than the DASL links used in previous generations. IBM’s approach is more conservative than many competitors', but it will still meet the technical requirements of most customers. The keys to IBM’s success have been predictable performance and a migration path over many generations of product. This is expected to continue.

The PowerPRS Q-64G fabric is in production, with the 2.5-Gbit/s (C48) and 10-Gbit/s (C192) line interface devices sampling. Future products include the PowerPRS Q-128G, which doubles the capacity to 640 Gbit/s, and a further enhanced architecture designed to take capacity over 16 Tbit/s.
Bay Micro Interoperates With IBM
EZchip, IBM Offer Reference Platform
Internet Machines Corp.

Internet Machines is developing a complete 10-Gbit/s chipset, including network processor, traffic manager, and switch fabric, which is expected to sample by the end of 2002. At the core of the chipset is a high-capacity switch fabric connected through 3.125-Gbit/s links to the traffic manager on the line card. The chipset will scale to 640 Gbit/s of throughput.

“We believe it to be the highest-capacity single-chip switch element. I'm not aware of anyone coming close to the 200-Gigabit capacity we have in our chip,” says Aloke Gupta, vice president of marketing.

The SE200 switch fabric is expected to sample in the fourth quarter of 2002 and is designed to work with the TMC10 traffic manager (again, see the report on Traffic Manager Chips for details). They are included as a single chipset in the product table on the next page.
Internet Machines Takes Aim at Zettacom
Internet Machines in the Chips
Mindspeed Technologies

The iScale chipset is based on technology developed by Hotrail before its acquisition by Mindspeed. The iScale chipset is unusual in having both the buffered crossbar and the queue manager on the switch fabric card. There are high-speed serial interfaces between the crossbar and the queue manager, and between the queue manager and the line card.

External SerDes devices are required on the line card to connect to a standard network processor or traffic manager. A 2.5-Gbit/s SerDes device is available from Mindspeed, and customers can implement a 10-Gbit/s solution using four SerDes devices and an FPGA.

The iScale chipset scales from 20 Gbit/s to 320 Gbit/s and is in production. “Currently we are shipping to customers in 20, 30, 40, 80 and 160 Gigabit configurations. Silicon is now being deployed in carrier systems,” says Elie Massabki, executive director of marketing.
Mindspeed Switch Fabric Lives On
Mindspeed Releases Chipset
PetaSwitch Solutions Inc.

The Pisces chipset consists of a Virtual Queue Manager (VQM) on the line card and an arbitrated crossbar fabric (CSW). There are two versions of the VQM, supporting either CSIX-L1 or SPI-4.2. At 8.75W per 10-Gbit/s port, the Pisces is predicted to have the lowest power consumption (including line card SerDes) of any chipset in this report.

The Pisces chipset will scale to 2.5 Tbit/s; however, samples are not expected until the second quarter of 2003.

Sandburst Corp.

The 10-Gbit/s HiBeam chipset, including packet processor, traffic manager, and switch fabric functions, takes a fresh approach. The HiBeam Packet Fabric is an arbitrated crossbar that connects to the QE-1000 queuing engine on the line card.

“We are focusing on Enterprise, Ethernet LANs, VLANs, and IP networks, but we also have hooks in there to support legacy SONET rings and Packet over SONET type applications,” says Vince Garziani, Sandburst's CEO.

The QE-1000 has an SPI-4.2 interface to the packet processor and 2.5/3.125-Gbit/s serial interfaces to the switch fabric. The 160-Gbit/s chipset is sampling now, and Sandburst has announced an agreement with Analog Devices Inc. (NYSE: ADI) to jointly market ADI's X-Stream crosspoint switches.
Intel Backs Another Switch Chip
Sandburst Switches Packets
Sandburst Bags $27.5M
Tau Networks Inc.

Tau Networks is not looking to develop the fastest, lowest-power, or best QOS-enabled switch chipset; it wants the cheapest possible solution that achieves good all-round performance. At $270 per 10-Gbit/s port, it achieves this.

The T64 chipset consists of a line interface and an arbitrated crossbar switch. Each switch device includes both arbitration and switching logic and, depending on the switch configuration, can be used for arbitration, switching, or both. The chipset is expected to sample in the fourth quarter of 2002.
Tau Touts Cheap Switch Fabric
TeraChip Inc.
TeraChip announced its TCF16X10 Switch Fabric device on Feb. 3, 2003. Both the switch fabric and a 10-Gbit/s line interface device are sampling during the first quarter. TeraChip is the first startup to use the shared-memory approach championed by IBM, but, unlike the IBM architecture, it does not require communication among switch fabrics.
The $800 (in volume) switch fabric has sixteen 10-Gbit/s links, giving a maximum switching capacity of 160 Gbit/s. Like the Broadcom BCM83xx chipset, the TeraChip fabric achieves maximum switching capacity with 1x overspeed on the serial links. Most implementations are likely to need additional overspeed to achieve full 10-Gbit/s throughput on each linecard, which will increase the number of switch fabrics required. TeraChip is planning to introduce higher-bandwidth line interfaces at a later date.
TeraChip has not released pricing details on the 10-Gbit/s line interface, but assuming this is not too expensive (and the switch fabric has adequate buffering), this solution should be very competitive.
TeraCross Ltd.

The GLIMPS switch chipset consists of a 10-Gbit/s queue manager (TXQ) and a scheduler (TXS) implemented on a Xilinx Virtex-II FPGA. The chipset is designed to work with a third-party 144x144 crosspoint switch and external SerDes devices on the line card. Supporting system scaling to 1.28 Tbit/s, the queue manager is expected to sample in the fourth quarter of 2002.
TeraCross, Mindspeed Switch Packets
TeraCross Links to New Intel Chip
Vitesse Semiconductor Corp. (Nasdaq: VTSS)

The TeraStream chipset builds on the older CrossStream and GigaStream switching solutions. TeraStream consists of a 10-Gbit/s queuing engine on the line card and a buffered crossbar on the switch card. The queuing engines support both CSIX-L1 and SPI-4.2 interfaces to the network processor or traffic manager. The TeraStream architecture will scale to 640 Gbit/s, and a 160-Gbit/s chipset is sampling.
Vitesse Offers PaceMaker, TeraStream
Zagros Networks Inc.

Zagros's focus is on multiservice applications, where QOS is key. “Now we can deliver controlled latency, guaranteed bandwidth, and interflow isolation, which are all attributes of the circuit switch network in a packet-oriented environment,” explains Wade Appelman, vice president of marketing.

The Z1 chipset consists of a 10-Gbit/s queue manager (QD10) on the line card and a 32-port arbitrated crossbar (Zn320) on the switch card. The queue manager supports a CSIX interface to the network processor or traffic manager. The chipset scales to 320 Gbit/s and is expected to sample in the second quarter of 2003.
Switch Fabric Chips Rewrite the Rules
Zagros Wins $9.7M
ZettaCom Inc.

ZettaCom has two switch chipsets: the ZSF200, with external SerDes, and the enhanced ZSF500, with integrated SerDes. The ZSF200 is in production, and the ZSF500 is sampling.

ZettaCom claims to be currently shipping into the SAN, multiservice access, carrier-class metro, and metro Ethernet markets. It also says its chips are being designed into high-end enterprise systems.

The ZSF switch fabrics are designed to work with the 10-Gbit/s ZTM traffic management chipsets. The ZSF200 scales to 320 Gbit/s, and the ZSF500 scales to 640 Gbit/s.
ZettaCom Has Designs on Fujitsu
Zettacom, Xilinx Interoperate
Zettacom Set to Score $47.5M
ZettaCom Advances With ZEST (& ZEN)
Dynamic Table: Packet Switch Chips
The table includes the following fields: Company; Chipset; Switching Capacity; Sample Availability; NPU/TM Interfaces; Integrated Traffic Management; Power (per 10 Gbit/s); Price (per 10 Gbit/s); Integrated Linecard SerDes; 160-Gbit/s Device Count; 160-Gbit/s (with 1:1 Redundancy) Device Count; 640-Gbit/s Device Count; 640-Gbit/s (with 1:1 Redundancy) Device Count; Switch Architecture; Guaranteed Latency; TDM Support; Sub-ports per 10-Gbit/s Line Interface; Traffic Flows per 10-Gbit/s Port; Frame Payload (Bytes); Frame Distribution Across Fabric; Fabric Overspeed; Backplane Link Speed; Backplane Links per 10-Gbit/s Port; Redundancy Modes; Host Interface.
Where an entry is not relevant, this is shown by '–'. Here’s an explanation of the column headings:

Company
The semiconductor vendor.

Device
The name of the device.

Switching Capacity
The maximum switching capacity the chipset can achieve. This is the aggregate full-duplex throughput. Most chipsets will scale from 40 Gbit/s to 320 Gbit/s or 640 Gbit/s, with a few scaling to over 1 Tbit/s.

Sample Availability
The quarter or month in which samples are expected to be available. When the device is already available, we state whether the device is sampling or in production. In some cases, only the devices supporting less than the maximum capacity are available. In these cases, the highest capacity available is indicated in parentheses.

NPU/TM Interfaces
Most of the chipsets will interface directly to a traffic manager or NPU. Many support the NPF CSIX-L1 interface for 10 Gbit/s and POSPHY-3 (PL3) for 2.5 Gbit/s. Some devices support SPI-4 interfaces and are compatible with either the Intel IXP2800 or the NPF Streaming Interface (NP-SI), or both. Some devices have a proprietary interface, such as the AMCC ViX or a high-speed serial interface, and can only be used with traffic managers or network processors supporting that interface.

Integrated Traffic Management
Does the switch chipset include integrated traffic management functions such as shaping, policing, and scheduling? This may increase the cost and power consumption of the chipset but removes the need for a separate traffic management solution.

Power (per 10 Gbit/s)
This is the power (maximum or typical) per 10-Gbit/s port for a 160-Gbit/s chipset, excluding any external SerDes devices.

Price (per 10 Gbit/s)
This is the price in production quantities per 10-Gbit/s port for a 160-Gbit/s chipset, excluding any external SerDes devices.

Integrated Linecard SerDes
Does the device include integrated SerDes on the line interface? If not, then either the chipset must interface with traffic managers or network processors supporting that interface, or additional SerDes devices are required on the line card.

160-Gbit/s Device Count
The total number of devices, including line interface devices, switch devices, and scheduling devices (excluding any external SerDes devices), required for a 160-Gbit/s switch with no redundancy support.

160-Gbit/s (with 1:1 Redundancy) Device Count
The total number of devices, including line interface devices, switch devices, and scheduling devices (excluding any external SerDes devices), required for a 160-Gbit/s switch with 1:1 redundancy.

640-Gbit/s Device Count
The total number of devices, including line interface devices, switch devices, and scheduling devices (excluding any external SerDes devices), required for a 640-Gbit/s switch with no redundancy support.

640-Gbit/s (with 1:1 Redundancy) Device Count
The total number of devices, including line interface devices, switch devices, and scheduling devices (excluding any external SerDes devices), required for a 640-Gbit/s switch with 1:1 redundancy.

Switch Architecture
The basic architecture of the switch chipset: shared memory, buffered crossbar, or arbitrated crossbar. All of the chipsets covered are fundamentally one of these architectures.

Guaranteed Latency
All of the chipsets support guaranteed latency for premium-rate traffic. Guaranteed latency is particularly important for TDM and storage applications.

TDM Support
Does the chipset support TDM traffic? These chipsets provide QOS mechanisms to guarantee latency and bandwidth, with a few providing TDM-specific scheduling and arbitration.

Sub-ports per 10-Gbit/s Line Interface
With a 10-Gbit/s line interface, how many sub-ports (4 x 2.5 Gbit/s, 16 x 622 Mbit/s, etc.) are supported?

Traffic Flows per 10-Gbit/s Port
The number of flows that can be separately queued per 10-Gbit/s input/output pair. The flows will be differentiated by priority or other QOS parameters.

Frame Payload (Bytes)
The number of bytes in the payload of the frame switched by the chipset. This is typically 64-80 bytes, but can be as low as 1 byte and as high as 256 bytes. Some chipsets switch full packets (variable frames) rather than fixed frames.

Frame Distribution Across Fabric
How are the bytes of a single frame spread across the backplane? Some chipsets send all the bytes sequentially along a single serial link (in-line); some chipsets spread (stripe) the bytes across a number of serial links that may be switched by different fabric devices.

Fabric Overspeed
The overspeed into the fabric, excluding any coding overhead. So with ten 2.5-Gbit/s serial links for a 10-Gbit/s line card with 8B/10B coding (2 Gbit/s effective rate), the overspeed is 2x (20 Gbit/s). Some devices use more efficient coding (e.g. 16B/17B). Overspeed is a guide to performance, but a chipset that supports variable frame sizes will achieve similar performance with 1x overspeed as a fixed-frame chipset with 2x overspeed when the incoming packets are just bigger than the frame size (e.g. 65-byte packets and 64-byte frames).

Backplane Link Speed
The speed of the links across the backplane. These are typically 2.5-Gbit/s or 3.125-Gbit/s serial links.

Backplane Links per 10-Gbit/s Port
How many links are required per 10-Gbit/s port (with no redundancy)?

Redundancy Modes
Which redundancy modes are supported?

Host Interface
Most chipsets support 16-bit generic or 32-bit PCI interfaces. A few support MDIO or in-band access to internal registers.