Chips Brace for 40-Gig
The conclusion: In the transition to 40 Gbit/s, the pressure is on the electronics, not the optics.
Granted, some optics apologists were present -- the panelists were all from the chip industry, after all. But the point was this: Carriers want the network to not just handle 40 Gbit/s, but to be optimized for services such as IPTV, and to embody the network convergence they've been working on.
Many of the concerns at 40 Gbit/s stem from how to properly handle video and other applications in this environment. These jobs fall on the system electronics. "It really is more a features problem than an optics problem," said Anthony Torza, a systems architect with chip vendor Xilinx Inc. (Nasdaq: XLNX).
That's much different from the transition from 2.5 Gbit/s to 10 Gbit/s, where optics was the star. Torza was a board designer at Ciena Corp. (Nasdaq: CIEN) back then, and he recalled how the company even had to build its own fiber Bragg gratings.
In contrast, the optics industry began preparations for 40 Gbit/s during the bubble and has been waiting for its chance to deliver ever since. (See 40-Gig Begins Its Ramp.)
Torza said another difference from the 10-Gbit/s transition is the 40-Gbit/s transition is happening in the metro market and not in the core. Carriers expect broadband video to drive most of this bandwidth demand, and video often comes from local sources, not the network core. "There are players in the long haul, but where the money is, is in the metro," he said.
So, what are these difficult jobs the chips are asked to do?
One is traffic management, which involves separating packets into queues and selecting which packet gets transmitted next. This is crucial when it comes to juggling VOIP or IPTV feeds. Bigger pipes mean more user flows are juggled, making the queuing more complex. How a designer solves that problem will have implications for the rest of the system.
"Traffic management is actually a system-level problem. We're talking about chips here, and it's often easy to overlook the fact that traffic management is not a complete solution. If you had the perfect traffic manager, it could still fail at the system level," said Ofer Iny, CTO of Dune Networks.
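The queuing problem the panelists describe can be sketched in a few lines. This is a toy strict-priority scheduler, not any vendor's design; the class names (voip, iptv, best_effort) and the strict-priority policy are illustrative assumptions chosen to show why latency-sensitive flows need their own queues:

```python
from collections import deque

class TrafficManager:
    """Toy strict-priority scheduler: each traffic class gets its own
    queue, and the highest-priority non-empty queue transmits first."""

    def __init__(self, classes):
        # classes: list of class names, highest priority first
        self.order = list(classes)
        self.queues = {name: deque() for name in classes}

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def dequeue(self):
        # Always serve the most urgent non-empty queue.
        for cls in self.order:
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None  # all queues empty

tm = TrafficManager(["voip", "iptv", "best_effort"])
tm.enqueue("best_effort", "web_packet")
tm.enqueue("voip", "voice_packet")
print(tm.dequeue())  # voice traffic jumps ahead of best-effort
```

As Iny's comment suggests, the hard part isn't this per-chip logic — it's making the policy hold up across the whole system, where more flows mean more queues and more state to track.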
Similarly, interfaces become more challenging at 40 Gbit/s. There's less time available to shift data into and out of memory.
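The time squeeze is easy to quantify. The back-of-the-envelope numbers below assume an Ethernet-style 64-byte minimum packet, which is not from the panel — it's a common worst case used to size buffers:

```python
# Per-packet time budget at 40 Gbit/s (assumed 64-byte minimum packet).
LINE_RATE_BPS = 40e9        # 40 Gbit/s line rate
MIN_PACKET_BITS = 64 * 8    # 512 bits

budget_ns = MIN_PACKET_BITS / LINE_RATE_BPS * 1e9
print(f"{budget_ns:.1f} ns per minimum-size packet")  # 12.8 ns

# A store-and-forward design writes and later reads every packet,
# so the payload buffer must sustain at least twice the line rate.
buffer_bw_gbps = 2 * LINE_RATE_BPS / 1e9
print(f"{buffer_bw_gbps:.0f} Gbit/s of raw buffer bandwidth")  # 80 Gbit/s
```

Roughly 13 nanoseconds per worst-case packet — a quarter of the budget available at 10 Gbit/s — is what gives Weisinger's payload-buffer question its bite.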
Chips might also face the usual challenges of power and speed as 40-Gbit/s requirements emerge. Often, these problems solve themselves as chips shrink, but that might not be the case here, said Bill Weisinger, director of sales and business development for Bay Microsystems Inc.
"You can't just rely on the manufacturing node to get you to the next point and the next speed," Weisinger said. "There's a question of how you get 40 Gbit/s in and out of a payload buffer."
But is 40 Gbit/s really happening? According to the panelists, it's real, but they conceded that volumes remain low and prices high. What's important is that carriers are beginning to think they'll need OC768. "The market-perception perspective is happening right now. The price perspective is going to come later," Torza said.
For now, 40 Gbit/s is starting out as four lanes of OC192 rather than a single OC768 feed; systems geared for 40-Gbit/s links will have to support both approaches, panelists said.
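Supporting both approaches means the system must be able to split one stream across four lanes and put it back together in order. A minimal sketch of that round-robin striping — real gear uses framing and sequence numbers, which this toy version reduces to list positions:

```python
# Illustrative round-robin striping of one stream over four lanes,
# standing in for 4 x OC192 carrying a single 40-Gbit/s payload.
def stripe(units, lanes=4):
    """Distribute data units across lanes in round-robin order."""
    buckets = [[] for _ in range(lanes)]
    for i, unit in enumerate(units):
        buckets[i % lanes].append(unit)
    return buckets

def reassemble(buckets):
    """Interleave the lanes back into the original order."""
    out = []
    depth = max(len(b) for b in buckets)
    for i in range(depth):
        for b in buckets:
            if i < len(b):
                out.append(b[i])
    return out

stream = list(range(10))
assert reassemble(stripe(stream)) == stream  # order survives the split
```

A single OC768 feed needs none of this bookkeeping, which is one reason designs that must straddle both modes carry extra complexity.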
— Craig Matsumoto, Senior Editor, Light Reading