
Boffins Automate Backbones

Imagine a telecom network where optical connections could be set up and torn down in an instant by edge equipment, without human intervention. Fantastic, or frightening?

A protocol aimed at achieving this feat has been under development for some time. Called the "Just in Time" protocol, or JIT, it provisions lightpaths in as little as 10 microseconds. And just recently, scientists from North Carolina State University and the MCNC Research & Development Institute demonstrated to the Federal Communications Commission (FCC) that JIT isn't science fiction (see MCNC Touts JIT Optical Protocol).

The researchers make unflattering comparisons with Generalized Multiprotocol Label Switching (GMPLS), which takes between several seconds and several minutes to set up or tear down an optical connection.

JIT "is better than GMPLS," says Dan Stevenson, VP of advanced networking and IT at MCNC-RDI. "GMPLS is too complex, and it doesn't get you anywhere near the performance level you need for interactive capabilities."

Applications like grid networking and research networks like the recently announced National Lambda Rail could be first in line to benefit from JIT's faster provisioning. High-energy particle physics experiments generate huge amounts of data in bursts, which need to be sent between different computers without tying up bandwidth for long periods. The U.S. government is also interested in JIT, for reasons it won't specify.

"JIT addresses some very challenging problems in high-performance computing," Dr. Hank Dardy, chief scientist for advanced computing at the Naval Research Laboratory (NRL)’s Center for Computational Science wrote in a statement. "It can take weeks to establish an optical connection through a carrier network, and minutes to do so with generalized multi-protocol label switching, the current industry standard. With JIT, we can provision optical connections between sites in a few milliseconds through our microelectromechanical switches, and in a few microseconds when we deploy faster photonic switches."

NRL has just completed the third JIT demonstration, sending uncompressed, high-definition TV signals between host computers at various Department of Defense locations using commercial ROADMs (reconfigurable optical add/drop multiplexers) from Lambda Optical Systems Corp. Earlier demonstrations, of HDTV transmission at 1.5 Gbit/s and of IP data, took place in November 2002 and September 2003, respectively.

It's early days for JIT, which is not a standardized protocol, but things look promising.

"He [Dardy] is the guy that everyone goes to when they want a new technology adopted by the U.S. government," says Geoff Bennett, chief technologist with Heavy Reading, Light Reading's paid research division. "If they're testing with him, they're definitely on the right track."

However, Bennett has some reservations about JIT and the scientists' claims. "I don't think it's as simple as saying that JIT could be a competitor to GMPLS," he says. "GMPLS can use different signaling protocols to set up connections, and, in the future, one of those signaling protocols could be JIT."

Bennett goes on to say that the whole idea of edge equipment automatically setting up and tearing down optical connections on demand "would give the transmission guys the willies" in conventional carriers. It would mean they'd lost control of capacity planning.

Kiss and tell

So how does JIT configure networks in an instant?

Most control plane protocols, including Asynchronous Transfer Mode (ATM) and GMPLS, require handshaking: At each stage of the configuration, the network resources must send back a confirmation message to say that they are ready. Stevenson calls this "tell and wait".

JIT, in contrast, is "tell and go": It sends out a network setup message that nails up the connections as it goes, thereby avoiding all the latency associated with handshaking.
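To make the contrast concrete, here is a minimal sketch of the two signaling styles. It is purely illustrative: the Switch class, the per-hop delay figures, and the message flow are assumptions, not the actual JIT or GMPLS mechanics.

```python
# Illustrative sketch of "tell and wait" vs. "tell and go" signaling.
# The Switch class, the per-hop delay figures, and the message flow are
# assumptions for illustration, not the actual JIT or GMPLS mechanics.

PER_HOP_RTT_MS = 5.0   # assumed round-trip signaling delay per hop
PER_HOP_FWD_MS = 2.5   # assumed one-way signaling delay per hop


class Switch:
    def reserve_port(self):
        pass  # stand-in for actually configuring a crossconnect


def tell_and_wait(path):
    """Handshaking style (ATM/GMPLS): wait for a confirmation from each
    hop before declaring the connection up."""
    elapsed = 0.0
    for switch in path:
        switch.reserve_port()       # request resources at this hop...
        elapsed += PER_HOP_RTT_MS   # ...and wait for its acknowledgement
    return elapsed


def tell_and_go(path):
    """JIT style: the setup message nails up each hop as it passes
    through and never waits for acknowledgements."""
    elapsed = 0.0
    for switch in path:
        switch.reserve_port()       # configure the hop in passing
        elapsed += PER_HOP_FWD_MS   # only forward propagation counts
    return elapsed


if __name__ == "__main__":
    path = [Switch() for _ in range(6)]
    print("tell and wait:", tell_and_wait(path), "ms")
    print("tell and go:  ", tell_and_go(path), "ms")
```

The point of the toy numbers is only that tell-and-go latency grows with one-way propagation, while handshaking pays a round trip at every hop.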

Another advantage, says Stevenson, is that an application, as opposed to a user, can make a connection request, which allows provisioning to be truly automated.

All this works fine when the network is lightly used, but what happens if there's congestion? Simple answer: Data gets dropped. And that isn't acceptable to most users.

"JIT was rejected in the past," notes Heavy Reading's Bennett. "The big problem everyone had was that you could never know for sure that a connection has been set. What's changed?"

MCNC-RDI's Stevenson says that blocking probability -- the chance that a connection can't be routed -- has been reduced to an acceptable level. "Quality of service can still be maintained on a network with 60 percent loading in the access and 40 percent in the core," he claims. "That compares favorably to IP networks where utilization is typically 10 to 15 percent."

But there is a catch. To be really efficient, the network must have enough wavelengths, where "enough" is in the range of 16 to 32 wavelengths on a fiber, he adds.
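The article doesn't say how MCNC-RDI models this, but one rough way to see why the wavelength count matters is to treat each fiber as a classical loss system, with the wavelengths acting as servers. The Erlang B sketch below is an assumption made purely for illustration, not MCNC-RDI's actual analysis.

```python
# Rough illustration only: model a fiber as an Erlang B loss system, with
# each wavelength acting as a server. This is NOT the analysis MCNC-RDI
# performed; it is an assumed model to show why more wavelengths reduce
# the blocking probability at the same fractional load.

def erlang_b(offered_load_erlangs, servers):
    """Erlang B blocking probability via the standard stable recursion."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
    return b


if __name__ == "__main__":
    for wavelengths in (8, 16, 32):
        load = 0.4 * wavelengths   # 40 percent loading, as in the core figure above
        print(f"{wavelengths:2d} wavelengths at 40% load: "
              f"blocking probability ~ {erlang_b(load, wavelengths):.5f}")
```

Under this assumed model, the blocking probability at a fixed 40 percent load falls sharply as the wavelength count rises from 8 to 32, which is consistent with the "16 to 32 wavelengths" figure quoted above.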

Various techniques are used to reduce the blocking probability. MCNC-RDI has assumed in its calculations that wavelength conversion is a readily available technology -- a questionable assumption, since there aren't any wavelength converters on the market today. Another available technique is called "deflection routing": If the requested port is busy, the switch sends the data out along another random path.
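As a rough illustration, deflection routing can be caricatured in a few lines of code. The names and the deflection limit below are hypothetical and are not taken from any real optical burst switching implementation.

```python
import random
from dataclasses import dataclass

# Caricature of deflection routing: if the requested output port is busy,
# send the burst out along another free port instead of buffering it, and
# drop it if nothing is free. All names and the deflection limit are
# hypothetical, not taken from a real OBS implementation.

@dataclass
class Port:
    name: str
    busy: bool = False

    def send(self, burst):
        print(f"burst {burst.ident} -> port {self.name}")


@dataclass
class Burst:
    ident: int
    deflections: int = 0


def forward_burst(burst, preferred, ports, max_deflections=3):
    """Send on the preferred port, deflect to a random free port, or drop."""
    if not preferred.busy:
        preferred.send(burst)
        return True
    if burst.deflections >= max_deflections:
        return False                                  # drop: no buffering
    free = [p for p in ports if not p.busy and p is not preferred]
    if not free:
        return False                                  # drop: nowhere to go
    burst.deflections += 1
    random.choice(free).send(burst)                   # deflect
    return True


if __name__ == "__main__":
    ports = [Port("east", busy=True), Port("north"), Port("south")]
    forward_burst(Burst(ident=1), preferred=ports[0], ports=ports)
```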

A hardware-based protocol, JIT is built into a prototype rack-mounted 3U enclosure that Stevenson calls a JITPAC. "If you have an existing system of OADMs, you acquire some JITPACs, one per OADM. All you need to do is figure out which channel to use for signaling, and you're off to the races."

The JITPAC can interface with existing commercial off-the-shelf switches via customizable TL1, SNMP, or proprietary interfaces for the purpose of controlling the switches. MCNC-RDI is working towards standardizing the control interface. To that end, the organization is in talks with equipment vendors -- including Cisco Systems Inc. (Nasdaq: CSCO), Calient Networks Inc., and Glimmerglass Networks. In the meantime, says Stevenson, they'll keep kicking the tires.
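Presumably, a standardized control interface would hide the TL1, SNMP, or proprietary dialects behind a common abstraction. The following is purely a speculative sketch: the class and method names are invented, and the printed commands are placeholders rather than real TL1 or SNMP syntax.

```python
from abc import ABC, abstractmethod

# Purely speculative sketch of a common switch-control abstraction behind a
# JITPAC-style controller. The class names, method names, and the printed
# commands are invented placeholders, not a real JITPAC, TL1, or SNMP API.

class SwitchControl(ABC):
    """Interface the signaling engine uses to program crossconnects."""

    @abstractmethod
    def connect(self, in_port: str, out_port: str, channel: int) -> None: ...

    @abstractmethod
    def release(self, in_port: str, out_port: str, channel: int) -> None: ...


class Tl1Control(SwitchControl):
    """Backend that would translate requests into the vendor's TL1 dialect."""

    def connect(self, in_port, out_port, channel):
        print(f"TL1 (placeholder): create crossconnect {in_port}->{out_port} ch{channel}")

    def release(self, in_port, out_port, channel):
        print(f"TL1 (placeholder): delete crossconnect {in_port}->{out_port} ch{channel}")


class SnmpControl(SwitchControl):
    """Backend that would set the vendor's crossconnect MIB objects."""

    def connect(self, in_port, out_port, channel):
        print(f"SNMP (placeholder): set crossconnect {in_port}->{out_port} ch{channel}")

    def release(self, in_port, out_port, channel):
        print(f"SNMP (placeholder): clear crossconnect {in_port}->{out_port} ch{channel}")


if __name__ == "__main__":
    control: SwitchControl = Tl1Control()
    control.connect("1-1-1", "2-3-1", channel=7)
    control.release("1-1-1", "2-3-1", channel=7)
```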

— Pauline Rigby, Senior Editor, Light Reading

dljvjbsl 12/5/2012 | 1:47:38 AM
re: Boffins Automate Backbones I am not an expert in this by any means. However, I know that a protocol very much like JIT was bruited about by Nortel for ATM for the same reasons about 15 years ago.

Isn't this comparing apples and oranges? The two protocols are not incompatible. The ATM JIT (whatever it was called) was described as being able to set up paths for such short-lived connections as email messages. Longer-lived connections will have different imperatives in protocol design and different weighting on the worth of setup delay.

Just trying to find some answers by starting a discussion.
arch_1 12/5/2012 | 1:47:34 AM
re: Boffins Automate Backbones This concept was investigated by ARPA in the 1960's, before the original ARPANET. I think it was called "fast circuit-switch." The idea was to circuit-switch on a character-by-character basis.

Remember that this was when trunk speeds were 9600bps, terminal speeds were 300bps and below, and memory cost was $1/bit. Essentially, the concept almost made economic sense then, because it was extremely expensive to buffer the data at the nodes. This is essentially the same reason that it almost makes economic sense now.

What can we learn from this? Well, optical JIT switching will cease to be cost-effective when we can buffer the data cheaply enough, either OEO+RAM, or optical storage. (I'd bet on OEO+RAM.)
So, can JIT be developed and deployed before Moore's law can drive down the (electronic) buffer cost AND before someone invents an optical buffer?

Note that JIT relies heavily on wavelength conversion, so the costs are already higher than for GMPLS-type setups. Fast-programmable wavelength converters are not likely to be much cheaper than the OEO optics.
dangerous-dan 12/5/2012 | 1:46:02 AM
re: Boffins Automate Backbones The arguments of arch_1 don't hold up to scrutiny. First of all, there seems to be some confusion between JIT (a control plane protocol that is independent of the underlying switching and transport technology) and Optical Burst Switching (OBS).

Cost assessments done by NIST (David Griffith et al., http://www.cse.buffalo.edu/~qi... indicate that, with today's costs, Optical Burst Switching used in the core network to carry IP traffic leads to 2x to 7x capital cost improvements relative to using GMPLS and OADMs. Other work done at MCNC, not yet published, tends to confirm these results. In part, these cost savings are realized by saving on OEO, but there is also a contribution from integration: routers and OADMs can be consolidated into a single unit.

Yes, OBS relies on wavelength conversion, which historically has been expensive; however, several recent developments lead to the conclusion that this is a dated notion. Infinera has gotten a lot of attention with its photonic IC module (Lightwave, May 2004), which promises low-cost OEO and which will, if you think about it, also provide a trivial means of wavelength conversion at the same cost as OEO. Similarly, research done by Dan Blumenthal http://www.physlink.com/News/0... using integrated devices for all-optical wavelength conversion, and work being done by Nan Jokerst (now at Duke), show significant promise for low-cost all-optical wavelength conversion devices that also provide for 2R and 3R regeneration.

Moore's law is hard to beat, but within the optics community it is widely believed and asserted that optics technology is following a steeper improvement curve than electronics. In part, this is due to the very low level of integration used for optics today relative to electronics; consequently, making large improvements is not as difficult, and the pace of improvement is faster.

The other reason to not bet on OEO+RAM has to do with one of the secondary consequences of Moore's law. As RAM and CPUs get less expensive, the natural message size of applications increases. We are all familiar with e-mail message and application file bloat. With these increases come performance incentives to increase the message transmission unit size in networks. Big science applications are acutely aware of this issue. However, support for large data units in routers is not just a simple matter of RAM density; memory management issues become much more complex. In fact, the router architects I speak with view this as a very difficult task. All the performance model results show that there is little advantage to buffering once there are a couple dozen channels available. So, our perspective is that it is not worth the bother to do buffering in either optics or electronics. Keep the data in optics, save costs, save complexity, save latency, and make it possible for applications to use MTU sizes that are more efficient for their needs.

Finally, the ARPAnet work from the '60s was an investigation of data networking. Today, research and industry are focused on ways to converge all network services onto a single technology base. Packet networking in the form of IP switching has emerged as the presumptive solution, as a result of price advantages it enjoys thanks to quirks in the regulatory framework telecommunications operates within. However, service providers find that meeting the requirements of service level agreements designed to support VOIP necessitates keeping link utilizations very low; otherwise buffering has unacceptable effects on quality.

Under the circumstances, it might serve industry and the research community well to rethink the fundamentals rather than simply continue to follow the herd. Buffering has bad consequences in a multi-service network and furthermore is not needed.
bobcat 12/5/2012 | 1:46:00 AM
re: Boffins Automate Backbones Good argument and good post, but several points you make need clarification:

>>Moore's law is hard to beat, but within the optics community it is widely believed and asserted that optics technology is following a steeper improvement curve than electronics.

Ok fine I'll concede that one.

>>In part, this is due to the very low level of integration used for optics today relative to electronics; consequently, making large improvements is not as difficult, and the pace of improvement is faster.

Ok, point taken...

>>As RAM and CPU gets to be more inexpensive, the natural message size of applications increases. We are all familiar with e-mail message and application file bloat.
>>With these increases come performance incentives to increase the message transmission unit size in networks.

Yup.

>>Big science applications are acutely aware of this issue.

Big science? Oh, alright.

>>However, support for large data units in routers is not just a simple matter of RAM density, memory management issues become much more complex. In fact, the routers architects that I speak with view this as a very difficult task.


Ok, so buffering is a necessary evil..?


>>So, our perspective is that it is not worth the bother to do buffering in either optics or electronics.


>>Keep the data in optics, save costs, save complexity, save latency and make it possible for applications to use MTU sizes that are more efficient for their needs.


So simpler is better. All well and good.
Your argument is (still working on the first coffee) that this will be as, or more, efficient than current buffering applications..?
By what magnitude and cost? Is it worth it in the here and now? And if not now, how far distant should this be a practical exercise?


>>Under the circumstances, it might serve industry and the research community well to rethink the fundamentals rather than simply continue to follow the herd.


Ok, good point (think outside the box), and good engineers do this, but "keep the data in optics" and "MTU sizes that are (larger) more efficient" still say, IMHO, that you need a buffer at the head end due to a greater capacity there. The translation to an upper layer still seems like a problem.
Correct me on this.


>>Buffering has bad consequences in a multi-service network and furthermore is not needed.


Hmmmmm... I'll have to step back on that and think a bit. More coffee.

MAD ;>