
Dueling Interconnects Unmasked

News Analysis
Light Reading
11/15/2001

Recent news of the HyperTransport Technology Consortium draws attention to an increasingly public duel between two technologies aimed at speeding up throughput on today's high-speed networks (see 'HyperTransport' Consortium Grows).

Those two technologies are HyperTransport and RapidIO (which has its own banner group, the RapidIO Trade Association). Both are chip-to-chip interconnection techniques aimed at ensuring that the inner workings of network and storage gear won't wind up being a bottleneck as carriers move to ever faster services.

A quick backgrounder: Today, networking devices that deploy multiple processors typically rely on the PCI bus or similar techniques to move information, such as packet lookups, around inside a box. These technologies have maximum data rates of about 1 Gbit/s today (with exceptions), and they don't support varying amounts of bandwidth within one device.
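To see where that 1 Gbit/s figure comes from, consider the most common flavor of PCI: a 32-bit bus clocked at 33 MHz. A back-of-the-envelope sketch in Python (the wider 64-bit/66-MHz variant is one of the exceptions noted above):

    # Theoretical peak throughput of a shared parallel bus.
    # Real-world throughput is lower: all devices share the bus,
    # and arbitration/protocol overhead eats into the raw number.

    def bus_peak_gbps(width_bits, clock_mhz):
        return width_bits * clock_mhz * 1e6 / 1e9

    print(bus_peak_gbps(32, 33))  # classic PCI: ~1.06 Gbit/s
    print(bus_peak_gbps(64, 66))  # 64-bit/66 MHz PCI: ~4.2 Gbit/s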

But speed and bandwidth flexibility will be required of mainstream storage and networking gear as networks move to 10 Gbit/s data rates and beyond. For example, OC192 network processors supporting functions for next-generation Sonet switches will need to get lots of data on and off their chips fast.

Enter HyperTransport and RapidIO, which were devised by different vendors to speed up devices based on their chips. Simply put, both are ways of ensuring that complex, multiprocessor boxes will perform as well as the high-speed networks they're meant to run on.

Neither HyperTransport nor RapidIO is an alternative to InfiniBand, which is designed to improve I/O in servers and between subsystems rather than between chips (see InfiniBand Trade Association). Both HyperTransport and RapidIO proponents say their designs work with InfiniBand.

Intel Corp. (Nasdaq: INTC) is often cited as having its own interconnect program, called 3GIO or Arapahoe. But by all accounts, this is not up to speed with HyperTransport or RapidIO. "Intel is a latecomer," says Bert McComas, founder and principal analyst at InQuest Market Research, a consultancy and market research firm. With less bandwidth and scalability than either HyperTransport or RapidIO, 3GIO is unlikely to pose any competitive threat to the others by the time it's ready for market, around 2004, in McComas's view.

Meanwhile, HyperTransport and RapidIO have a lot in common: They are designed to link chips, network processors, and other components inside multifunction devices. They are packet-based. They support varying bus widths and bandwidth rates. They are geared toward power and design efficiency. They both deploy so-called source-synchronous technology, in which data and clock signals travel over separate connections at the same voltage. They use similar signaling, based on the Low Voltage Differential Signaling (LVDS) specs approved by standards groups worldwide. And each claims to run at speeds above 60 Gbit/s chip to chip.

Table 1: Interconnect Techniques Compared

                     HyperTransport              RapidIO
Main backers         AMD, API Networks           Motorola, IBM
Consortium URL       www.hypertransport.org      www.rapidio.org
Clock speeds         200 MHz to 800 MHz          100 MHz to 1 GHz
Bus widths           2, 4, 8, 16, and 32 bits    8 or 16 bits
Number of pins       24 (2-bit) to 197 (32-bit)  40 (8-bit); 76 (16-bit)
Aggregate bandwidth  128 Gbit/s                  3 to 60 Gbit/s
Signaling voltage    1.2 volts                   2.5 volts
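The aggregate figures in the table follow from simple arithmetic. A rough sketch of that math, assuming double-data-rate transfers (data clocked on both clock edges) and an independent link in each direction, a common configuration for both technologies:

    # Peak aggregate bandwidth of a dual-simplex, double-data-rate link:
    # clock (Hz) x 2 transfers per clock x bus width x 2 directions.

    def aggregate_gbps(clock_mhz, width_bits):
        return clock_mhz * 1e6 * 2 * width_bits * 2 / 1e9

    print(aggregate_gbps(800, 32))   # HyperTransport, top listed clock/width: ~102 Gbit/s
    print(aggregate_gbps(1000, 16))  # RapidIO, top listed clock/width: ~64 Gbit/s

(By this arithmetic, HyperTransport's quoted 128 Gbit/s figure implies a clock nearer 1 GHz than the 800 MHz listed; each consortium's published numbers depend on the clock and width it assumes.)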


But HyperTransport and RapidIO differ in several key ways, starting with pedigree. HyperTransport's creator and main backer is Advanced Micro Devices (NYSE: AMD), and originally it was geared to MIPS processors used in high-end servers and similar gear. More recently, API Networks Inc. has assumed a leading role as well. Broadcom Corp. (Nasdaq: BRCM) and PMC-Sierra Inc. (Nasdaq: PMCS) also license the technology.

RapidIO was originated by Motorola Inc. (NYSE: MOT) and IBM Corp. (NYSE: IBM) and is geared to Power PC processors, including those used in chassis-based equipment. This has given it early inroads with makers of networking gear. Alcatel SA (NYSE: ALA; Paris: CGEP:PA), Lucent Technologies Inc. (NYSE: LU), and Nortel Networks Corp. (NYSE/Toronto: NT) are all part of its steering group.

These early distinctions are getting increasingly blurry. Cisco Systems Inc. (Nasdaq: CSCO), for instance, belongs to both groups, as does Xilinx Inc. (Nasdaq: XLNX).

"There are many dependencies as far as selecting one or the other," says InQuest's McComas. He feels the key differentiators are more fine-grained and have to do with the practical considerations faced by product designers.

HyperTransport, for example, can often be implemented at lower cost because it uses fewer pins and lower voltage. Also, its supporters have put considerable effort into making sure HyperTransport works with PCI bus devices.

"HyperTransport might be considered a technology closer to the mainstream, high-volume implementations," McComas says.

Until recently, there also were differences in how each technique worked with multiprocessor arrays. In most HyperTransport setups, when one processor in a row wants to talk to, say, the tenth processor, it must "daisy chain" its way through all of the intervening processors. RapidIO, in contrast, supports a peer-to-peer approach that is closely akin to switching. Since storage and networking gear usually deploys multiple processors, this could be a key factor in choosing one tack over the other.
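The difference is easy to quantify: in a daisy chain, a message to the tenth processor crosses every device in between, while through a switch any two processors are two hops apart. A toy model in Python (hypothetical, for illustration only; this is neither spec's actual routing protocol):

    # Toy hop-count comparison: daisy chain vs. switched fabric.

    def chain_hops(src, dst):
        # A daisy-chained message passes through every intervening device.
        return abs(dst - src)

    def switch_hops(src, dst):
        # Through a central switch: source -> switch -> destination.
        return 2 if src != dst else 0

    print(chain_hops(1, 10))   # 9 hops down the chain
    print(switch_hops(1, 10))  # 2 hops via the switch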

But this differentiator appears to be dwindling. Earlier this month, API Networks released a product called the Starfish, a switch designed for use in HyperTransport networks. While details are comprehensible only to chip experts, API spokespeople say the Starfish works by replacing HyperTransport's bus approach with a point-to-point architecture.

Still, those who wish to put a switch setup into a HyperTransport group of processors must pay API to do so, unless they want to create their own.

Ultimately, it seems the distinctions between HyperTransport and RapidIO are getting fewer and harder to discern, at least to outsiders. "There's nothing I can point my finger at to say, 'This is a feature that's better in HyperTransport or RapidIO,'" says an engineer (who requested anonymity) from a chip vendor that supports both groups. "You can say that HyperTransport is cheaper, but that's not always the case. There's little technical difference, and performance is basically the same."

In the end, it may come down to relationships. And that's not an easy field to navigate under any circumstances.

"There are alliances and pools of alliances that can be drawn along processor lines and by loyalty," says McComas, who points out that such alliances can shift. "It's not black and white."

— Mary Jander, Senior Editor, Light Reading
http://www.lightreading.com
edgecore
12/4/2012 | 7:34:08 PM
re: Dueling Interconnects Unmasked
============
Until recently, there also were differences in how each technique worked with multiprocessor arrays.
============

With RapidIO and HT silicon coming next year, will multiprocessor systems be the way of the future on the control plane for high-end network equipment vendors?

BCM has the 1250, PMC has the RM9000X2, Motorola has a few SMP G4 (7450) boards out there.

EC
radnor
12/4/2012 | 7:34:07 PM
re: Dueling Interconnects Unmasked
Can someone tell me how the various SPI standards fit in with these proposals? Thanks.
pablo
12/4/2012 | 7:34:06 PM
re: Dueling Interconnects Unmasked

I was under the impression that HyperTransport's effort centered around 12.8Gbps for the 32bit version...
skeptic
12/4/2012 | 7:34:04 PM
re: Dueling Interconnects Unmasked
With RapidIO and HT silicon coming next year, will multiprocessor systems be the way of the future on the control plane for high-end network equipment vendors?
----------------------------

Possibly. But most people who go multiprocessor will overestimate the performance gains and underestimate the software effort needed to really make it a performance win. It's also the case that the real-time, event-driven sort of applications that run on control planes are not the best applications to get multiprocessing performance gains from.
switchrus
12/4/2012 | 7:33:57 PM
re: Dueling Interconnects Unmasked
The proliferation of high-speed data buses leaves me a bit confused.

At COMDEX the InfiniBand crowd was there showing its wares and touting the grandness of their product, fair enough. One of the "selling points" was that the CPU no longer has to spend cycles on the TCP stack; the stack is in the chips, so they say. Can someone who is a bit more software-centric than I, please explain, using small words because I'm a hardware guy, why this is needed.

On this newest set of wrinkles in the high-speed, high-bandwidth race, where's the advantage in adopting one technology over the other? What is the differentiator?
opticalwatcher
12/4/2012 | 7:33:57 PM
re: Dueling Interconnects Unmasked
At the physical level, Rapid I/O, HyperTransport, XGMII, and SPI-4 are all parallel, source-synchronous, point-to-point buses, using either HSTL or LVDS signaling. To connect more than two devices together, you go through a switch.

"Serial Rapid I/O", Arapahoe, SPI-5, XAUI, and InfiniBand are all 'SerDes' technologies: one or more serial pairs running at 1 to 3 or more gigabits per second, with self-clocking and 8B10B encoding. Again, you need a switch to connect multiple devices together.

Above the physical level, it gets more complex. I think that Hypertransport is like PCI.

SerDes technology is probably ultimately faster, lower power, and uses fewer pins for the same throughput, so Intel may be trying to jump a generation ahead of HyperTransport technically with Arapahoe.

Can anyone else fill in more details?

p.s. System Packet Interface Level 4 (SPI-4) is not to be confused with SCSI Parallel Interface 4 (also SPI-4).
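To make the 8B10B overhead concrete: every 8 data bits travel as 10 line bits, so usable payload is 80 percent of the raw line rate. A minimal sketch in Python, assuming an illustrative 2.5 Gbit/s lane rate:

    # 8B10B encodes each 8-bit byte as a 10-bit symbol (for DC balance
    # and clock recovery), so payload = 8/10 of the raw line rate.

    def payload_gbps(line_rate_gbps, lanes=1):
        return line_rate_gbps * lanes * 8 / 10

    print(payload_gbps(2.5))     # one 2.5 Gbit/s lane -> 2.0 Gbit/s payload
    print(payload_gbps(2.5, 4))  # a four-lane link -> 8.0 Gbit/s payload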
rjmcmahon
12/4/2012 | 7:33:55 PM
re: Dueling Interconnects Unmasked
With Rapid IO and HT silicon coming next year, will multi processor systems be the way of the future on the control plane for high end network equipmrent vendors?
__________________________

SMP in networking products has not been gated by new interconnect technologies; rather, there hasn't been a customer need justifying a vendor's investing the NRE. Also, since networking products by their nature tend to be part of a loosely coupled distributed system, and since they don't really support third-party programmers, the evolution seems to have been more asymmetric than symmetric, in my opinion.

-Bob



edgecore
12/4/2012 | 7:33:53 PM
re: Dueling Interconnects Unmasked
SMP in networking products has not been gated by new interconnect technologies; rather, there hasn't been a customer need justifying a vendor's investing the NRE
====================

What NRE? Some commercial boards support SMP (e.g., SBS Technologies). If you want to make your own hardware (like most high-end networking OEMs), then plenty of SMP bridging silicon exists (e.g., the Galileo GT64260). Also, commercial RTOSes like QNX have been supporting true tightly coupled SMP on x86 and PowerPC for a couple of years!

EC

PS- To the poster who talked about TCP eating up all the processing cycles on the CPU:

-COMDEX is the last place to get info on embedded systems... it's a WinTel lovefest

-Vendors such as Alacritech do specific silicon for partial TCP offload; other vendors do full TCP offload chips... since RapidIO is a chip-to-chip technology, it will only enhance the speed at which data is moved on the board between the CPU and the offload chip, for example
edgecore
12/4/2012 | 7:33:52 PM
re: Dueling Interconnects Unmasked
Look at it as an investment: you will find bugs you never dreamed you had when you unleash your app on an SMP system!

But your point is well taken; some NRE on the dev side may be required, but maybe not as much as you think:

http://www.qnx.com/literature/...

EC
rjmcmahon
12/4/2012 | 7:33:52 PM
re: Dueling Interconnects Unmasked
What NRE?
_________

The NRE required to rewrite the control plane software to run on SMP hardware.

-Bob