From Academics to Entrepreneurs

Pankaj Gupta, co-founder of Sahasra Networks, hopes he can turn his academic vision into a commercial windfall.

Gupta, who received his PhD in computer science from Stanford University last year, founded the company with Srinivasan Venkatachary of Washington University in St. Louis in late 2000. The two had conducted research together on fast routing lookup mechanisms, packet classification, IP routing and switching architectures, and scheduling algorithms before deciding to turn that research into a commercial product.

Their startup is developing a chipset that fits on routing and switching line cards to perform high-speed lookups of IP addresses. It has already received angel investments from Nick McKeown, founder of semiconductor startup Abrizio, which was sold to PMC-Sierra Inc. (Nasdaq: PMCS) in 1999 for over $400 million, and Mike Farmwald, a partner with Benchmark Capital and founder of Rambus Inc., a developer of scalable chip technologies that enable semiconductor memory devices and ASICs to keep pace with faster generations of processors and controllers.

The link between McKeown and Gupta is more than financial: Gupta was a student of McKeown’s at Stanford, where the two worked together to develop Sahasra’s underlying technology.

“I think the fact that my advisor has enough faith in what we’re doing to invest his own money in the company is important,” says Gupta.

The company, which has fewer than 10 people working for it, is still looking for its first round of venture investment. Aside from Benchmark, at least one other venture firm, Sequoia Capital, has also noticed the company.

Chris Rust, a partner with Sequoia, participated in the Abrizio deal before its acquisition and was a director of chip startup SwitchOn before it, too, was acquired by PMC-Sierra. He says he is familiar with Gupta’s research: “He is a bright young talent. We leveraged some of his research at SwitchOn in the area of classification. I know Nick [McKeown] thinks highly of him.”

But Rust says his firm has not invested in Sahasra. Whether it will in the future is still unknown.

So what’s Gupta’s big idea anyway? Basically, Sahasra has developed a new algorithm that compacts the data structure of Internet routing tables, ultimately allowing switching and routing companies to pack more interfaces onto a single line card. As routing tables grow larger, the chips that look up IP addresses in those tables need more and more memory. And while memory itself is rather cheap, it takes up a lot of room on the line card, limiting the number of high-speed lookup processors that can fit there.
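Sahasra has not disclosed its algorithm, but the underlying problem it targets is well known: longest-prefix matching of an IP address against a routing table. As a rough illustration only (all names and table entries here are hypothetical), a minimal binary-trie lookup in Python looks like this:

```python
# Minimal longest-prefix-match trie -- an illustration of the general
# problem, NOT Sahasra's (undisclosed) compaction algorithm.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = {}    # bit ('0' or '1') -> TrieNode
        self.next_hop = None  # set when a prefix terminates here

def ip_bits(addr):
    """Dotted-quad IPv4 address -> 32-character bit string."""
    n = 0
    for octet in addr.split("."):
        n = (n << 8) | int(octet)
    return format(n, "032b")

def insert(root, prefix, length, next_hop):
    node = root
    for bit in ip_bits(prefix)[:length]:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, addr):
    """Walk the trie, remembering the deepest (longest) match seen."""
    node, best = root, root.next_hop
    for bit in ip_bits(addr):
        node = node.children.get(bit)
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10.0.0.0", 8, "if0")   # 10.0.0.0/8
insert(root, "10.1.0.0", 16, "if1")  # 10.1.0.0/16
print(lookup(root, "10.1.2.3"))  # "if1": /16 is the longest match
print(lookup(root, "10.9.9.9"))  # "if0": falls back to the /8
```

Each lookup here may touch up to 32 trie nodes; compaction schemes of the kind Sahasra describes trade table size against the number and speed of those memory accesses.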

“This would cut their system cost by an order of magnitude by allowing them to put more on each card,” Gupta says. “The only reason right now that a system vendor can’t get more than one interface on a 10-Gbit/s line card is because they don’t have any more room on the board and they can’t cool all the components. Our one chip takes the place of eight to 16 different components.”

At present, system companies like Cisco Systems Inc. (Nasdaq: CSCO), Juniper Networks Inc. (Nasdaq: JNPR), and Riverstone Networks Inc. (Nasdaq: RSTN) use standalone ASICs and specialized memory devices called CAMs (content-addressable memories) to look up and classify packets. The problem system vendors face today is that these implementations use several memory chips, which take up a lot of room. While CAM vendors are working to make their memory chips denser, Sahasra is working to compact the routing information into a smaller space so that it actually uses less memory.
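A ternary CAM matches a key against every stored (value, mask) entry in parallel and returns the first hit in a single hardware cycle; software can only emulate that with a scan. A hedged sketch of the behavior (illustrative entries, longest prefix listed first so the first hit is the longest match):

```python
# Software emulation of a TCAM routing lookup. A real TCAM compares
# the key against every entry in parallel; the scan below only mimics
# the result, not the speed. Entries are hypothetical.

def tcam_lookup(entries, addr):
    """Return the next hop of the first (i.e., highest-priority) match."""
    for value, mask, next_hop in entries:
        if addr & mask == value & mask:
            return next_hop
    return None

# Prefixes stored longest-first, as (value, mask, next_hop).
entries = [
    (0x0A010000, 0xFFFF0000, "if1"),  # 10.1.0.0/16
    (0x0A000000, 0xFF000000, "if0"),  # 10.0.0.0/8
]
print(tcam_lookup(entries, 0x0A010203))  # 10.1.2.3 -> "if1"
```

The hardware cost of that parallelism is exactly what the article describes: CAM cells are large and power-hungry, which is why an algorithmic approach that fits the table in ordinary memory can save board space.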

How great are the memory savings with the Sahasra solution? Gupta, who worked on Cisco’s early implementation of its OC192 (10 Gbit/s) interface back in 1997, says that the Cisco OC192 line card uses about one gigabyte of memory. The Sahasra solution requires only 2.5 megabytes.

What’s more, traditional chips often consume anywhere from six to eight watts of power each, and with some line cards requiring 10 chips per blade, that can add up to a chipset consuming 80 watts. Because Sahasra’s solution requires less memory, it might use a single chip consuming about 8 watts. And the lower the power consumption, the lower the heat dissipation and the less a system needs to be cooled.
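The power arithmetic quoted above can be checked back-of-the-envelope style (these are the article's figures, not measured data):

```python
# Power comparison using the numbers quoted in the article.
watts_per_chip = 8          # high end of the 6-8 W range cited
chips_per_card = 10         # "some line cards requiring 10 chips"
conventional_watts = watts_per_chip * chips_per_card
single_chip_watts = 8       # Sahasra's claimed single-chip draw
print(conventional_watts, single_chip_watts)  # 80 8
```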

Gupta says the chip, which will likely be introduced sometime next year, will support 40 Gbit/s of traffic per line card, meaning that it would be able to support up to 16 OC48 (2.5 Gbit/s) ports, four OC192 ports, or one OC768 port per line card.
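The port counts follow directly from dividing the card's 40 Gbit/s capacity by the standard SONET line rates:

```python
# Port arithmetic for a 40-Gbit/s line card at standard SONET rates.
card_gbps = 40.0
oc48, oc192, oc768 = 2.5, 10.0, 40.0
ports = (card_gbps / oc48, card_gbps / oc192, card_gbps / oc768)
print(ports)  # (16.0, 4.0, 1.0)
```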

While Gupta’s academic research in this area has been extensive, David Newman, president of Network Test Inc., wonders what the performance tradeoff might be.

“Smaller, cheaper, faster, denser is always good,” he says. “This solution might be smaller and denser, but the question is: Is it really faster?”

The second question to ask is: Will system companies actually buy this kind of technology from a third party? Traditionally, they have developed it in house. But Gupta believes they will find the benefits so compelling that they will want to buy the chips.

“All these companies need to do lookups, and we just help them do it better,” he says. “Fundamentally, because it is a totally different algorithm, it can provide an order-of-magnitude improvement. I think they’ll see our technology as complementary to their own.”

- Marguerite Reardon, Senior Editor, Light Reading

HarveyMudd 12/4/2012 | 7:54:43 PM
re: From Academics to Entrepreneurs The kinds of things described by Sahasra Networks have been discussed for many years now. The proposals have no merit at all.
sroy 12/4/2012 | 7:54:37 PM
re: From Academics to Entrepreneurs Gupta's previous research actually was quite sound, and a good number of the new research ideas on fast lookups/classification have come from either Washington University in St. Louis or Stanford, so it's quite likely that there may be something here. But it's not just fast lookups that are important in a router; it's also table update rates.
skeptic 12/4/2012 | 7:54:36 PM
re: From Academics to Entrepreneurs
None of the ideas in this article sound new. There are already well-known ideas in print about table compaction. The trade-off with table compaction for lookup is that the number of memory accesses goes up. You can shrink the total memory, but the memory has to get progressively faster, and beyond that a price is paid (often, not always) to compact the tables.

I would also caution that algorithm people like this don't necessarily understand the nature of building real hardware to implement their ideas.

The math in the article as regards current components, densities and power/heat doesn't seem correct (to me anyway) as well.
xinant 12/4/2012 | 7:54:30 PM
re: From Academics to Entrepreneurs I agree with you completely. How to design an algorithm that supports both fast lookup and fast update is a very challenging task.

In their DIR-24-8-BASIC scheme, the space required is about 16 Mb, and this is nothing with current DDR memory technology. However, how to update these tables at a rate of 100-1,000 times/s is a question to be addressed.

Moreover, the cost of communication between a control processor and the table lookup engine seems to be ignored, which might be the bottleneck for the entire system.
hi_concept 12/4/2012 | 7:54:06 PM
re: From Academics to Entrepreneurs Harvey, this direction is the only way to go and Pankaj Gupta is a top researcher in this area. To conclude that if some field is discussed for years, an advancement in the field has no merit, is sheer idiocy.