Internet Machines Ships (At Last)

It's taken a while, but network processor vendor Internet Machines Corp. is ready to ship its three-chip set for 10-Gbit/s switching.

Yesterday, the company announced sampling of its chips: the NPE10 network processor, the TMC10 traffic manager, and the SE200 switch fabric, collectively called iMpower (see Internet Machines Launches Chipset). Production volumes are expected next quarter, says CEO Chris Hoogenboom.

It's been a long road for Internet Machines. The company had hoped to sample at least one chip by the end of 2001 (see Internet Machines Takes Aim at Zettacom).

"They're definitely behind the schedule they had talked about," says Linley Gwennap, principal analyst of The Linley Group. "The good news is, even though they're late, the market at 10-Gbit/s has been developing more slowly than people thought, so they haven't missed anything."

The company started in 2000, aiming to provide a blast of speed for routers or switches. But the need for speed has faded, replaced by channelization and integration, as equipment vendors strive to build compact edge boxes.

If the company's star hasn't fallen, it's certainly faded a bit. In July 2001, Exar Corp. (Nasdaq: EXAR) invested $40.3 million for a 16 percent stake in Internet Machines (see Exar Teams Up on 10-Gig Chips), and that investment was worth just $5 million by September 2002, according to Exar's filings with the Securities and Exchange Commission (SEC). Exar officials insist, however, they still have faith in Internet Machines.

"The good news is, [Internet Machines has] a lot of money in the bank and they can hold out quite a while," Gwennap says. "This is a startup with legs."

For 2003, "we're pretty comfortable with the cash position we have throughout this year and into next year. We raised $81 million, and we acknowledged we're in a downturn long before our competitors did and did our layoffs earlier," Hoogenboom says (see Internet Machines Raises $41 Million). Internet Machines' venture investors include Banc of America Securities LLC, Meritech Capital Partners, Morgan Stanley, and Redpoint Ventures.

The chipmaker claims it can outgun the 10-Gbit/s competition, which includes Applied Micro Circuits Corp. (AMCC) (Nasdaq: AMCC), Bay Microsystems Inc., EZchip Technologies, and Intel Corp. (Nasdaq: INTC). But in these frugal times, a more important selling point may be the way it merges the traffic manager and switch fabric.

Shrinking down
Most switch fabrics actually consist of two chips. One is a switching element -- the SE200, in this case -- that resides on its own card, providing the connection point for all ingress and egress ports. The second chip is the line interface that queues up data before sending it to the switching element; one such chip is needed on every line card in the system.

Internet Machines' TMC10 includes that line-interface function. This saves one chip on every line card, since the traffic manager and line interface are glommed together.

"Two chips -- the packet processor [NPE10] and the traffic manager. That's the core of your whole line card," Gwennap says. "They've got a pretty good story on that side."

By contrast, other architectures use separate chips for the traffic manager and the switch fabric's line interface. And the traffic manager isn't always a single chip; Applied Micro Circuits, for example, uses two chips in its nPX5700 traffic manager. Of course, AMCC plans to cut its chip count; the company's already announced it will merge the nPX5700 into a network processor later this year (see Net Processors Aim for Access).

Why the obsession with fewer chips? For starters, the economy of chips translates into economy of dollars. "If you add up the numbers, [Internet Machines will] produce the basic guts of a 640-Gbit/s box for about $110,000," says Eric Mantion, analyst with In-Stat/MDR. Even with optics added, the end price tag is "well under $1 million -- that's a significant drop."
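
As a sanity check on that math, here's a back-of-envelope sketch of the "basic guts" of a 640-Gbit/s box. The chip counts follow from the architecture described in this article (one NPE10 and one TMC10 per line card, plus SE200s on the fabric cards); the unit prices and the SE200 count are invented placeholders, not vendor figures, chosen only so the total lands near Mantion's estimate.

```c
/* Back-of-envelope silicon cost for a 640-Gbit/s box built from the
 * iMpower set. Chip counts follow the article's architecture; the unit
 * prices are hypothetical placeholders, NOT figures from Internet Machines. */
#include <stdio.h>

int main(void) {
    const int line_cards = 16;      /* 16 cards x 40 Gbit/s = 640 Gbit/s */
    const int se200s     = 4;       /* assumed fabric-card count; each
                                       SE200's 64 serdes x 3.125 Gbit/s
                                       = 200 Gbit/s raw capacity */

    const double npe10_price = 3500.0;  /* hypothetical unit prices */
    const double tmc10_price = 2500.0;
    const double se200_price = 3500.0;

    /* One NPE10 + one TMC10 per line card -- the two-chip line card. */
    double total = line_cards * (npe10_price + tmc10_price)
                 + se200s * se200_price;

    printf("silicon for the basic guts: $%.0f\n", total);   /* $110,000 */
    return 0;
}
```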

The price could bring a new wave of systems players into the game. "Especially the Asian manufacturers in general, and the Chinese in particular -- the Chinese are especially interested in ASSPs," Mantion says, referring to Application-Specific Standard Products; i.e., off-the-shelf chips.

The TMC10's compactness also has a technical benefit. A traditional setup can endanger the quality of service (QoS) the traffic manager sets up, because standalone line-interface chips might have their own ideas about which packets get to hit the switch fabric first. "You're deliberately placing the eggs in the carton and then throwing the carton into the truck," Mantion says. "The switch fabric always should have been integrated with the traffic manager."

Some of the benefit is lost, however, if a customer buys only the TMC10 without the SE200, because the TMC10's line-interface circuitry has features designed specifically for the SE200 (more on that later). So far, no one has asked to buy the TMC10 without the SE200, Hoogenboom notes.

Packing them in
Like most competitors in this space, Internet Machines is proud of its chip-integration feats. Each SE200 includes 64 serializer-deserializers running at 3.125 Gbit/s each, for 200 Gbit/s of raw capacity per chip; the NPE10 packs 64 RISC microprocessors designed by ARC International plc (ARC Cores) (London: ARK). The latter feat has Internet Machines calling the NPE10 the industry's first "massively parallel" network processor. "There's no industry definition that says, 'Beyond this number you can call yourself massive,' but with 64 processors, we feel pretty confident," Hoogenboom says.

For software purposes, the company treats the 64 processors as one -- that is, the programmer writes code for one big processor, and Internet Machines' software doles out the work among the 64 small ones. The chip can be programmed in C or assembly, and all the programming can be done using open-source GNU tools.
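
Internet Machines hasn't published its API, so the sketch below only illustrates what that single-image model implies: datapath code written as an ordinary C function for one processor, with the vendor's software fanning packets out across the 64 cores behind the scenes. Every name and structure here is hypothetical.

```c
/* Illustration of a single-image programming model: the code is written
 * as if one processor runs it; the vendor's tools would distribute
 * packets across the 64 ARC cores. All names here are hypothetical --
 * Internet Machines has not published its actual API. */
#include <stdint.h>
#include <stdio.h>

struct packet {
    const uint8_t *data;
    uint32_t       len;
    uint32_t       egress_port;
};

/* Hypothetical per-packet entry point. */
void handle_packet(struct packet *pkt) {
    /* Read the IPv4 destination address: bytes 30-33 of an untagged
       Ethernet frame (14-byte MAC header + offset 16 into the IP header). */
    uint32_t dst = (uint32_t)pkt->data[30] << 24 |
                   (uint32_t)pkt->data[31] << 16 |
                   (uint32_t)pkt->data[32] << 8  |
                   (uint32_t)pkt->data[33];

    pkt->egress_port = dst & 0x3;   /* toy forwarding rule: 4 ports */
}

int main(void) {
    uint8_t frame[64] = {0};
    frame[30] = 10; frame[31] = 0;
    frame[32] = 0;  frame[33] = 7;            /* destination 10.0.0.7 */

    struct packet pkt = { frame, sizeof frame, 0 };
    handle_packet(&pkt);
    printf("egress port: %u\n", pkt.egress_port);   /* 7 & 3 = 3 */
    return 0;
}
```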

On paper, the chips can handle four 10-Gbit/s ports per line card, but they're also suitable for slower-speed applications. The NPE10 and TMC10 can accept up to 16 sub-channels per port; in other words, 16 Gigabit Ethernet ports could be fed into one NPE10 port. For markets such as Gigabit Ethernet aggregation, "it's definitely not overkill," Hoogenboom notes.
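
The arithmetic behind that remark is easy to check; this minimal sketch computes the oversubscription when all 16 sub-channels of one 10-Gbit/s port carry Gigabit Ethernet.

```c
/* Oversubscription when 16 GigE sub-channels feed one 10-Gbit/s port. */
#include <stdio.h>

int main(void) {
    const double subchannels     = 16.0;  /* max sub-channels per port */
    const double subchannel_rate = 1.0;   /* Gbit/s per GigE feed */
    const double port_rate       = 10.0;  /* Gbit/s per NPE10/TMC10 port */

    double offered = subchannels * subchannel_rate;
    printf("offered load %.0f Gbit/s on a %.0f Gbit/s port: "
           "%.1f:1 oversubscription\n",
           offered, port_rate, offered / port_rate);   /* 1.6:1 */
    return 0;
}
```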

Some of Internet Machines' fanciest tricks are packed into the switch fabric, consisting of the TMC10 and SE200. Switch fabrics can run into trouble if packets hit congestion at an egress port -- as with a jammed freeway off-ramp, a clogged port can cause systemwide backups. It's a complex problem that's being tackled by several specialty startups (see Switch Fabric Chips Rewrite the Rules and our report on Packet Switch Chips).

Internet Machines' approach is to steer packets away from congested ports, which it accomplishes through proprietary messaging among the TMC10s. If a particular egress port gets overloaded, it signals the line cards to throttle back their bandwidth, slowing any traffic destined for that port.
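
The messaging itself is proprietary, so the sketch below shows only the generic technique the article describes: watermark-based backpressure with hysteresis, broadcasting a throttle message when an egress queue backs up. Thresholds, message format, and names are all invented for illustration.

```c
/* Generic egress backpressure. The real TMC10 messaging is proprietary;
 * the thresholds, message, and names here are all hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS      64
#define HIGH_WATERMARK 8000   /* queue depth (cells) that triggers throttle */
#define LOW_WATERMARK  4000   /* depth at which full rate is restored */

struct egress_state {
    uint32_t queue_depth[NUM_PORTS];
    bool     throttled[NUM_PORTS];
};

/* Stub for the fabric message "adjust your rate toward this egress port".
 * In a real system this would travel across the fabric to every line card. */
static void broadcast_throttle(int port, bool on) {
    printf("port %d: %s\n", port, on ? "throttle" : "full rate");
}

/* Hysteresis between the two watermarks keeps ports from flapping. */
void check_congestion(struct egress_state *st) {
    for (int p = 0; p < NUM_PORTS; p++) {
        if (!st->throttled[p] && st->queue_depth[p] > HIGH_WATERMARK) {
            st->throttled[p] = true;
            broadcast_throttle(p, true);
        } else if (st->throttled[p] && st->queue_depth[p] < LOW_WATERMARK) {
            st->throttled[p] = false;
            broadcast_throttle(p, false);
        }
    }
}

int main(void) {
    struct egress_state st = {0};
    st.queue_depth[5] = 9000;     /* port 5 backs up...      */
    check_congestion(&st);        /* ...and gets throttled   */
    st.queue_depth[5] = 3000;     /* the queue drains...     */
    check_congestion(&st);        /* ...and full rate resumes */
    return 0;
}
```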

The switch fabric also saves some bandwidth by avoiding the "cell tax." Most switch fabrics pack data into fixed-size cells, which makes it easier to tell how long data will take to traverse the fabric -- but this usually leaves dead space, a kind of round-off error, in the final cell for a particular packet. Internet Machines overcomes this by allowing the next packet to use that dead space, ensuring that each cell is filled to the brim with data.
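
One way to picture the trick: instead of padding out a packet's last cell, the segmenter keeps a write cursor and lets the next packet begin in whatever space remains. The sketch below shows that packing logic in generic form; the 64-byte cell size is an assumption, and real hardware would also need framing bits to mark packet boundaries within a cell, which this sketch omits.

```c
/* Generic "no cell tax" segmentation: packets are packed back to back
 * into fixed-size cells, so the dead space at the tail of one packet's
 * last cell is reused by the next packet. Cell size is hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CELL_SIZE 64

struct cell_stream {
    uint8_t cell[CELL_SIZE];
    int     fill;              /* bytes used in the current cell */
};

/* Stub: a real implementation would transmit the cell into the fabric. */
static int cells_sent;
static void emit_cell(const uint8_t *cell) {
    (void)cell;
    cells_sent++;
}

/* Pack a packet into cells, emitting each cell as it fills. No padding
 * is added afterward: the partially filled last cell simply waits for
 * the next packet, which is how the dead space gets reclaimed. */
void enqueue_packet(struct cell_stream *s, const uint8_t *pkt, int len) {
    while (len > 0) {
        int space = CELL_SIZE - s->fill;
        int chunk = len < space ? len : space;
        memcpy(s->cell + s->fill, pkt, chunk);
        s->fill += chunk;
        pkt     += chunk;
        len     -= chunk;
        if (s->fill == CELL_SIZE) {
            emit_cell(s->cell);
            s->fill = 0;
        }
    }
}

int main(void) {
    struct cell_stream s = {{0}, 0};
    uint8_t a[100] = {0}, b[100] = {0};
    enqueue_packet(&s, a, sizeof a);  /* one full cell + 36 bytes pending */
    enqueue_packet(&s, b, sizeof b);  /* starts in the 28 bytes left over */
    printf("cells sent: %d (vs. 4 with per-packet padding)\n", cells_sent);
    return 0;
}
```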

One concern is that Internet Machines didn't embrace Asynchronous Transfer Mode (ATM) wholeheartedly -- not a big deal in 2000, but a possible setback today, now that equipment makers are pressing to fit into legacy ATM networks. "They don't do ATM internetworking very well," Gwennap says. "But there are plenty of applications that don't need ATM anyway."

For more information about packet-processing silicon, see our recent reports on Packet Switch Chips and Traffic Manager Chips.

— Craig Matsumoto, Senior Editor, Light Reading
