Why Intel Loves Teja

One of the problems with network processors is that they can be a pain to program. They help system vendors eliminate ASICs in packet processing gear, but writing code for them can end up consuming a lot of time and money.

Teja Technologies Inc. says it's got a solution: a way of radically simplifying the programming exercise using a graphical user interface.

However, there's a catch. Right now Teja's software can only be used to program a single network processor, the IXP1200 from Intel Corp. (Nasdaq: INTC).

This probably explains why Intel has taken such an interest in Teja. It was an early investor and also contributed to Teja's $12 million second round, announced yesterday (see Teja Takes in $12M).

From Teja's point of view, siding with Intel looks like a smart move. Intel is one of the two top network processor vendors in terms of revenue, or perhaps the top vendor, depending on which analyst you ask, according to Teja's VP of marketing, Felix McNulty. The other big vendor is Applied Micro Circuits Corp. (AMCC) (Nasdaq: AMCC), through its acquisition of MMC Networks.

Other network processor vendors would like to get in on the act, according to McNulty. But they're going to have to wait. "It's a delicate subject, because right now we basically have our hands full following Intel's roadmap," he says. "It's a question of how Teja will balance its activities."

The interest from network processor vendors is not surprising, because Teja's software could make their off-the-shelf chips look a lot more attractive to systems vendors, many of which still prefer to develop their own ASICs for packet processing.

Programming difficulty has turned out to be the biggest obstacle to network processors becoming more widely accepted, according to McNulty. Instead of spending lots of time and money developing ASICs, systems vendors end up spending lots of time and money writing incredibly complicated programs in machine code, so they don't end up much better off overall.

The really tricky part of the programming procedure is deciding how different tasks are shared out among the different processor cores on the chip, says McNulty. "The problem is not so much programming any individual microprocessor, but making it work as a system."

Finding the best way to divide up the workload is an iterative process, he explains. With ordinary programming environments, the software has to be optimized after each iteration, and that means debugging thousands of lines of code.

In Teja's software, all that is replaced with a graphical user interface (GUI). The user defines different application functions using a kind of high-tech flow chart called a "state machine." To optimize the system, the graphical representations of the different functions can be dragged and dropped onto different cores in the silicon. The system then churns out a batch of code, which is ready for testing in a fraction of the time it had previously taken.
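To make the idea concrete, here is a minimal sketch of the concept in Python (all names here are invented for illustration; Teja's actual tool generates microengine code, not Python). The application is described as a state machine, and remapping a function to a different core means editing a table rather than rewriting hand-tuned code:

```python
# Hypothetical sketch of the state-machine-to-cores idea (names invented).
# Each application function is a state; the "drag and drop" step in the
# GUI corresponds to changing CORE_MAP, after which code is regenerated.

# Transition table for a toy packet-processing state machine:
# current state -> next state (None marks the end of the pipeline).
TRANSITIONS = {
    "receive": "classify",
    "classify": "modify",
    "modify": "transmit",
    "transmit": None,
}

# Assignment of each state to a processor core (the tunable part).
CORE_MAP = {
    "receive": 0,
    "classify": 1,
    "modify": 1,
    "transmit": 2,
}

def schedule(start="receive"):
    """Walk the state machine and report which core runs each step."""
    plan, state = [], start
    while state is not None:
        plan.append((state, CORE_MAP[state]))
        state = TRANSITIONS[state]
    return plan

print(schedule())
# [('receive', 0), ('classify', 1), ('modify', 1), ('transmit', 2)]
```

Rebalancing the workload, say moving "modify" from core 1 to core 2, touches only the mapping table, which is the kind of quick iteration the GUI is meant to enable.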

Underlying all this is Teja's NPOS (network processor operating system), which is equivalent to Wind River Systems Inc.'s (Nasdaq: WIND) RTOS (real-time operating system). This is the clever part that deals with threading, which is the business of how tasks are shared out among processor cores.

One network processor vendor, Xelerated AB, didn't seem that keen on Teja's approach, but that could be because its own chip doesn't have multiple processor cores -- it has a pipeline architecture -- and thus isn't as tricky to program in the first place, it claims (see Xelerated Touts 40-Gig Toolbox).

If Teja can grow and support other products, it will give systems vendors another boost by enabling them to port their applications to different network processors, whether that means an upgrade or a completely new part.

Ultimately -- and this starts to get a bit boggling -- Teja sees the whole thing going the same way as the PC industry, in which one company (Intel) supplies hardware and another (Microsoft) supplies software. "We want to be the Microsoft of the network world," says McNulty, a phrase that may well come back to haunt him.

Teja claims it's got six customers right now, including Digital Fountain, which used Teja's product to build a video-streaming server line card, and Network Associates Inc., which used it to develop a distributed denial-of-service filter.

Its investors in yesterday's round were Blueprint Ventures, Mayfield Fund, RRE Ventures, Tallwood Ventures, and Viventures Partners, as well as Intel.

— Pauline Rigby, Senior Editor, Light Reading
skeptic 12/4/2012 | 11:00:48 PM
re: Why Intel Loves Teja
The reason that programming network processors is so difficult is that to get optimum performance, the code has to be written very carefully with the network processor in mind at a very "deep" architecture level.

It doesn't matter if it's pipelined or multi-core; to get good results from the network processor requires understanding the network processor, how it works and how it is coded at the instruction level and below.

I thought everyone went through all these same (bad) ideas when the network processor vendors were delivering "software kits" which were supposed to be firmware to do all the packet processing delivered with the chip. That didn't work because every product is different.

If you get into network processors, you have to hire people who can program them. If that's not the right thing to do, stick with ASICs.

edgecore 12/4/2012 | 11:00:46 PM
re: Why Intel Loves Teja
Skeptic, good point: with any sort of one-size-fits-all model, you give up performance optimization for the purpose of convenience.

But the whole NPU message is based around time to market, and not performance! Not yet at least...

The IXP1200 has a 32-bit StrongARM core on the actual NPU, so why would you want to use NPOS on the NPU core while you are running Unix, Vx, OSE, QNX, Lynx, or Linux on the control plane CPU (which, guess what, is also a 32-bit CPU)?

My other assumption is that the development, debug, and performance monitoring tools for your "real" 32-bit OS probably offer more introspection of a running target, better graphical debug, code profiling, performance monitoring... understood, tried and tested... more mature than what NPOS would offer.

Anyways, with Intel buying Trillium and Ziatech, and now the Teja love affair heating up... it won't be long before we are referring to it as "Teja, an Intel company"!

Intel = CPUs and NPUs
Trillium = Stacks
Ziatech = HA hardware
Teja will = Dev Tools

Not a bad story!


Pauline Rigby 12/4/2012 | 11:00:43 PM
re: Why Intel Loves Teja
Teja had some comments on the difference between NPOS and WindRiver RTOS. I thought I'd put it here instead of trying to weave it into the story and risk complicating things, or getting it wrong:

"Teja doesn't displace WindRiver's RTOS, but works in conjunction with it. Essentially, the RTOS runs only on the Control Plane (or control processor) of an NPU, while leaving the system OEM to hand-code assembly for the Forwarding Plane (the microengines). Teja's NPOS runs as a task under the RTOS and runs on the "bare metal" of the microengines - basically integrating the functions of both, which is an industry first. The NPOS still performs RTOS scheduling functions, but some basic RTOS functionality IS needed. This doesn't have to be WindRiver - in fact, a lot of system OEMs are choosing Linux, as
it's free and can be pared down to just the essentials."

[email protected]
edgecore 12/4/2012 | 11:00:39 PM
re: Why Intel Loves Teja
Thanks,
