
100-GigE Takes Shape

SAN JOSE, Calif. — Even though a 100-Gbit/s Ethernet standard is still some three years from ratification, some researchers and CTOs are already looking at how such a speed could be achieved -- and with equipment practical enough to actually sell.

They convened earlier this week, spending a day at the historic Dolce Hayes Mansion -- now a conference center -- buried in the suburbs southeast of downtown San Jose. There, an engineering-minded audience packed a small meeting room to the gills for an Optoelectronics Industry Development Association (OIDA) seminar entitled, "100 Gbit/s Ethernet: The Next Challenge for Communications Systems."

The topic is getting a lot of publicity lately, particularly with the Institute of Electrical and Electronics Engineers Inc. (IEEE) approving a Higher Speed Study Group to standardize the next speed grade. (See 100-Gig Ethernet Takes First Step.)

That standard, which is expected to take three years to complete, doesn't have to be 100 Gbit/s. The HSSG, which won't meet formally until later this month, will have to entertain comments on several speed-grade proposals, including 40, 80, 120, and 160 Gbit/s. Most bets, though, seem to be on 100 Gbit/s.

The loudest cries for a higher-speed Ethernet come from places like carrier hotels, some of which are already reporting a need for 100-Gbit/s links among large ISP tenants. Link aggregation -- a method of treating multiple 10-Gbit/s links as one -- isn't expected to be efficient enough to handle traffic growth. (See Ready for 100-Gig Ethernet?)
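The efficiency problem is structural: a link-aggregation group typically hashes each flow onto a single member link, so a few heavy flows can swamp one 10-Gbit/s member while others sit nearly idle. Here's a toy Python sketch of that imbalance -- purely an illustration, with an assumed flow mix and a CRC-based hash, not anything presented at the seminar:

import random
import zlib

# Toy model: ten 10-Gbit/s links aggregated into a nominal 100-Gbit/s
# pipe. A LAG pins each flow to one member by hashing its identifier,
# so it cannot split a single big flow across links.
NUM_LINKS = 10
LINK_CAPACITY_GBPS = 10.0

random.seed(1)

# Assumed flow mix (illustrative only): mostly small flows, a few big ones.
flows = [("flow%d" % i, random.choice([0.1, 0.1, 0.1, 4.0]))
         for i in range(60)]

load = [0.0] * NUM_LINKS
for name, rate_gbps in flows:
    member = zlib.crc32(name.encode()) % NUM_LINKS  # hash-based pick
    load[member] += rate_gbps

for i, gbps in enumerate(load):
    flag = "OVERSUBSCRIBED" if gbps > LINK_CAPACITY_GBPS else "ok"
    print("link %2d: %5.1f Gbit/s  %s" % (i, gbps, flag))

# With hashing, the busiest member typically carries well above the
# mean, so one link can saturate long before the aggregate is full.
print("max member load: %.1f Gbit/s; mean: %.1f Gbit/s"
      % (max(load), sum(load) / NUM_LINKS))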

Given that the need is already surfacing, many speakers said designers should be thinking beyond 100 Gbit/s. Drew Perkins, chief technology officer at Infinera Corp. (Nasdaq: INFN), titled his presentation "TbE [Terabit Ethernet] or Bust."

"Core network links blew through 10 Gbit/s years ago, frankly," Perkins said. "Let's not stop at 100 Gbit/s."

Others agreed that there's some urgency to all this. One speaker argued that 100-Gbit/s Ethernet is already late for the high-performance computing industry, at least when compared with the timing of previous technologies like 10-Gbit/s Ethernet. "If it doesn't move quickly, other standards like InfiniBand may take its place in that market," said Petar Pepeljugoski, a scientist with IBM Research.

Unlike 10-Gbit/s Ethernet, the 100-Gbit/s variety will have to start life with multiple ports per line card, said Joel Goergen, chief scientist at Force10 Networks Inc. "If, [in 2009], you're not targeting your systems designs for 400 to 500 Gbit/s per blade, it's going to be difficult for you to compete," he said.

As for the makeup of a 100-Gbit/s port, the endgame would seem to be serial 100-Gbit/s. "Could it happen? Probably 2013," Goergen said.

Most speakers agreed, although Jack Jewell, chief technical officer of Picolight Corp., wondered aloud if serial 100-Gbit/s Ethernet, while attractive, would be worth the trouble. "I don't really see a need for a completely serial solution," he said during his presentation.

For near-term designs, the point is moot: Conventional silicon can't handle speeds much higher than 10 Gbit/s, so a 100-Gbit/s feed has to be split up somehow. But what combination works?

Goergen, Perkins, and others seemed to gravitate toward using five 20-Gbit/s lanes packed into a module. That's analogous to what's done with Xenpak 10-Gbit/s Ethernet modules, which produce four channels of 3.125 Gbit/s apiece. In fact, the 5x20 idea is attractive partly because it would fit inside a Xenpak module, Perkins said.

Another possibility is to use 10 lanes of 10-Gbit/s Ethernet, but Goergen worried that this wouldn't allow for the kind of density he thinks systems will need. "I can't get four of those types of ports on a blade," he said. Still, other speakers noted that 10x10 might be viable for some applications.

How about four lanes of 25 Gbit/s each? Goergen said he'd backed that idea in the past but now disdains it, because it's too difficult to get that kind of speed to comfortably run on mass-producible chips. "I worked with two serdes companies on this, and I can't get it to work," he said.
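Some rough numbers put the options in perspective. Assuming the 64b/66b line coding used by 10-Gbit/s Ethernet's LAN PHY -- an assumption here, since the coding for 100-Gbit/s Ethernet hadn't been decided -- the per-lane wire rates work out as follows:

# Per-lane rates for a 100-Gbit/s port under the lane splits
# discussed above. The 66/64 factor assumes 64b/66b coding, as in
# 10GBASE-R; 100-GigE's actual coding was undecided at the time.
PAYLOAD_GBPS = 100.0
CODING_OVERHEAD = 66.0 / 64.0

for lanes in (10, 5, 4):
    per_lane = PAYLOAD_GBPS / lanes
    print("%2d lanes: %.3f Gbit/s payload, ~%.3f Gbit/s on the wire"
          % (lanes, per_lane, per_lane * CODING_OVERHEAD))

# Prints roughly: 10 lanes -> 10.312; 5 lanes -> 20.625; 4 lanes -> 25.781

That ~25.8-Gbit/s wire rate for the four-lane case is why the serdes problem Goergen describes bites hardest there.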

— Craig Matsumoto, Senior Editor, Light Reading

Pete Baldwin 12/5/2012 | 3:42:28 AM
re: 100-GigE Takes Shape

Drilling down on that last paragraph -- If I understand it right, Goergen of Force10 says he got 27-Gbit/s traces to *work* ... it's just that it's very, very difficult and not something companies would be able to master quickly enough. (27 = 25 Gbit/s plus overhead; it's the target that a 4 x 25 Gbit/s system would have to hit, he thinks.)

In other words: "If you wanna limit the community to Cisco, Force10, and maybe Infinera -- that's great for us," he said. (Goergen's a pretty entertaining speaker.)

Instead of 25+, he's suggesting the oddball speed of 22.2 Gbit/s on the backplane. It sounds arbitrary, but based on what he's seen in the lab, Goergen thinks that's the breaking point -- the fastest comfortable limit that could be reached by a multitude of competitors. (And you want "fastest" because it'll limit the number of serdes you need, which will help drive density/power/etc. -- most speakers at this event focused more on the practicality than the possibility of 100-gig.)

So, Goergen's arguments against 4 x 25 Gig have more to do with economics and time to market than with the physics. (Petar of IBM, btw, made mention of having gotten 17 gig serdes going in their labs.)
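
Rough arithmetic on the serdes point (mine, not Goergen's): at his 400-to-500-Gbit/s-per-blade target, the lane rate directly sets how many serdes a blade needs, which is the density and power argument in numbers.

import math

# Serdes counts needed to reach a 500-Gbit/s blade at various lane
# rates (back-of-the-envelope figures, not from the talks).
# Lane rates shown: today's 10G, IBM's 17G lab result, Goergen's
# proposed 22.2G, and the contested 25G.
BLADE_TARGET_GBPS = 500.0

for lane_gbps in (10.0, 17.0, 22.2, 25.0):
    count = math.ceil(BLADE_TARGET_GBPS / lane_gbps)
    print("%.1f-Gbit/s lanes: %d serdes per blade" % (lane_gbps, count))

# 10.0 -> 50, 17.0 -> 30, 22.2 -> 23, 25.0 -> 20 serdes per blade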