Intel: The Prince of Processors?
According to analysts, Intel is already one of the two top vendors of network processors, along with Applied Micro Circuits Corp. (AMCC) (Nasdaq: AMCC). It's managed to achieve this with a single product platform, the IXP1200 -- which offers only OC3 (155 Mbit/s) or OC12 (622 Mbit/s) capabilities, to boot.
Later today, Intel plans to unveil three new network processors that will allow it to compete head-on with AMCC and others in the high-speed end of the market. It gave the public a sneak peek at the chips at its Developer Forum in 2001. Now it's going so far as to reveal details of the internal architecture and to outline when it intends to start shipping.
The three network processors are the IXP425, which targets small and medium-sized enterprises and office applications; the IXP2400, which will be a half-duplex OC48 (2.5 Gbit/s) solution; and the IXP2800, which is half-duplex OC192 (10 Gbit/s). Doug Davis, VP and general manager for Intel's network processor division, says the chips should start sampling by the end of Q2, Q3, and Q4 2002, respectively.
Although Intel bills the new chips as a straightforward evolution from its existing network processor, the IXP1200, engineers with knowledge of Intel's architecture say that quite a lot has changed under the covers.
Davis summarizes the changes. The new chips are based around an "XScale" core -- a microprocessor architecture that can scale from speeds of 300 MHz (the optimum speed for power-sensitive devices like handheld computers) up to 700 MHz in the IXP2800. XScale is a derivative of the StrongARM processor that powers the old IXP1200, he says.
On top of that, Intel has created version two of its micro-engine technology. The micro-engines are smaller processors that surround the processor core, offloading tasks from it. There were six in the IXP1200; there are 16 in the IXP2800.
"We've also developed a capability called 'hypertask chaining', which defines how the micro-engines are connected together and share resources," says Davis. "The benefit of this is that it allows us to support applications at line rate and have enough overhead left over to do things on top of that."
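The division of labor Davis describes can be pictured as a general-purpose core dispatching packets to a pool of micro-engines. The following is a minimal illustrative sketch only, with hypothetical function names; it models the offload idea in software and is not Intel's microcode API or an implementation of hypertask chaining.

```python
# Toy model: a core distributes packets round-robin across micro-engines,
# and each engine does the per-packet work independently.
# All names here are hypothetical stand-ins for illustration.

def core_dispatch(packets, num_engines):
    """Round-robin packets across micro-engine queues (a simple offload model)."""
    queues = [[] for _ in range(num_engines)]
    for i, pkt in enumerate(packets):
        queues[i % num_engines].append(pkt)
    return queues

def micro_engine(queue):
    """Each engine performs per-packet work -- here, a stand-in checksum."""
    return [sum(pkt) % 256 for pkt in queue]

packets = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
queues = core_dispatch(packets, num_engines=2)
results = [micro_engine(q) for q in queues]
```

In the real chips the engines run in parallel and share hardware resources, which is where the complexity (and the headroom Davis mentions) comes from; this sketch only captures the partitioning of work.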
The upshot of all this, says Felix McNulty, VP of marketing at Teja Technologies Inc., is that Intel's faster network processors are significantly more complex to program than its previous (slower) generation. And Teja should know: It has been given early access to details of Intel's architecture so that it can create a software development environment for the new network processors.
The basic idea behind Teja's platform is that Teja writes the code, while its customers build applications using a user interface based on state-machine definitions, never needing to look at the code (see Why Intel Loves Teja).
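To make the state-machine idea concrete, here is a hypothetical sketch in the spirit of that approach, not Teja's actual product or API: the developer declares states and transitions, and a generic engine runs them, so no hand-written packet-processing code is needed.

```python
# Hypothetical example: an application defined purely as states and
# transitions. The state and event names are invented for illustration.

TRANSITIONS = {
    ("idle", "packet_in"): "classify",
    ("classify", "known_flow"): "forward",
    ("classify", "unknown_flow"): "drop",
    ("forward", "done"): "idle",
    ("drop", "done"): "idle",
}

def run(events, state="idle"):
    """Drive the state machine with a sequence of events, recording each state."""
    trace = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # unknown events are ignored
        trace.append(state)
    return trace

trace = run(["packet_in", "known_flow", "done"])
```

The appeal of this style is that retargeting the application to a new chip means regenerating the engine underneath the same state-machine definition, which is the portability property Teja claims below.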
Teja is announcing support for the new Intel chips today and has some significant news of its own (see Teja Supports New Intel Processors). In short, it has been able to prove that end-user applications created with its development environment are completely portable from the old to the new generation of Intel processors. This is the first time anyone has been able to show portability, McNulty claims.
As network processors get faster and gain more features, they are becoming harder to program, McNulty contends, and that gives people more reason to turn to a product like Teja's, which hides the programming complexity from the user. "Some Intel customers have done it without Teja. They seem to be our best prospects going forward, because they know how fiendish it is to program these things."
— Pauline Rigby, Senior Editor, Light Reading