Optical/IP Networks

Siemens Claims 160-Gbit/s Milestone

Never mind 40 Gbit/s. Researchers from Siemens AG (NYSE: SI; Frankfurt: SIE) say that 160-Gbit/s technology could be ready for deployment in a couple of years' time.

Siemens, together with BTexact Technologies and the Technical University of Eindhoven, has recently demonstrated 160-Gbit/s switching on a real network in the U.K. (see Siemens, BT Demo Fast Optical Switch). This is the culmination of several years of work under the auspices of a European Commission-funded project codenamed FASHION (ultra FAst Switching in HIgh-speed OTDM Networks).

"We have made the big step from a pure laboratory technique to equipment that is capable of overcoming the various difficulties under real conditions," says Dr. Gottfried Lehmann of Siemens Corporate Technology. He reckons it should be possible to develop a real system within two or three years.

If such equipment were developed commercially, it would have the potential to provide major cost savings, thanks to the reduction in the amount of DWDM equipment required: 160-Gbit/s technology would allow sixteen DWDM channels at 10 Gbit/s to be replaced by a single wavelength.

Of course, this is a rather rosy view of the situation. In practice, more research is needed on other aspects of 160-Gbit/s systems before commercial development begins, according to Lehmann. And, if no new technological gremlins pop up, there are still other issues to contend with, such as whether the technology will be needed in the foreseeable future. Forecasts of huge increases in bandwidth requirements have evaporated in recent years, and 10-Gbit/s equipment has come down in price so much that a lot of 40-Gbit/s development projects have been mothballed.

It's also worth pointing out that Siemens isn't alone in developing 160-Gbit/s transmission systems: Japan's Mitsubishi Electric Corp., Germany's Heinrich Hertz Institute (HHI), and America's own Bell Labs have also done work in this area (see Mitsubishi Looks to 160-Gbit/s Future). Bell Labs' demonstration was over four years ago (see the original press release).

But let's put skepticism aside for a moment and look at the nuts and bolts of the 160-Gbit/s system. For a start, 160 Gbit/s isn't directly comparable to 40 Gbit/s. Unlike OC768 (40-Gbit/s Sonet), which provides a clear channel for communication, 160-Gbit/s transmission relies on a multiplexing technique called OTDM (optical time division multiplexing). Individual 10-Gbit/s channels are squashed onto a single wavelength by interleaving them. The result is a data stream where every sixteenth bit belongs to the same channel.
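The bit-interleaving described above can be sketched in a few lines of Python. This is an illustration of the multiplexing idea only, with our own function names, not Siemens code:

```python
# Hypothetical sketch of OTDM bit interleaving (illustrative only):
# sixteen 10-Gbit/s tributaries are interleaved bit by bit onto one
# 160-Gbit/s stream, so every sixteenth bit belongs to the same channel.

def otdm_multiplex(channels):
    """Interleave equal-length bit lists round-robin into one stream."""
    length = len(channels[0])
    assert all(len(ch) == length for ch in channels)
    stream = []
    for i in range(length):
        for ch in channels:
            stream.append(ch[i])
    return stream

def otdm_demultiplex(stream, num_channels, index):
    """Recover one tributary: take every num_channels-th bit."""
    return stream[index::num_channels]

channels = [[(c + i) % 2 for i in range(4)] for c in range(16)]
stream = otdm_multiplex(channels)
assert otdm_demultiplex(stream, 16, 5) == channels[5]
```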

This presents quite a challenge at switching and distribution points, says Lehmann. "Of the 160 billion bits which now hit the distribution point every second, each sixteenth belongs to the data flow which is due to be deflected," he notes. This means one individual bit must be pulled out of the data stream every 100 picoseconds (one ten-billionth of a second).
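A quick sanity check of the timing figures quoted above, using the article's numbers:

```python
# Timing arithmetic for a 160-Gbit/s OTDM line carrying 10-Gbit/s tributaries.
bit_rate = 160e9                 # aggregate line rate, bit/s
channel_rate = 10e9              # one tributary, bit/s
slot = 1 / bit_rate              # one bit slot on the line: 6.25 ps
gate_period = 1 / channel_rate   # one bit per channel every 100 ps

assert abs(slot - 6.25e-12) < 1e-18
assert abs(gate_period - 100e-12) < 1e-18
```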

The only way to do this is with all-optical switching -- it's too fast for anything mechanical or electronic. Siemens has developed a switching device based on a process called four-wave mixing (FWM) in Semiconductor Optical Amplifiers (SOAs).

FWM is a non-linear effect in which two wavelengths interact to create a new wavelength. A control laser, pulsed at the right interval to pick out every sixteenth bit, is fed into the SOA along with the data stream. When both a data bit and a control pulse are present, that data bit is reproduced on a new wavelength, which can then be filtered out. Other data bits pass through the SOA unaffected. Full details of this device can be viewed in this research paper presented at ECOC 2003.
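The gating logic can be modeled as a toy example, assuming the gate acts as a logical AND between the data stream and the control-pulse train. This is an idealization with our own naming; the real device is an analog nonlinear process, not digital logic:

```python
# Toy model of the FWM gate: a bit appears on the new wavelength only
# when a data bit and a control pulse coincide; all other bits pass
# through on the original wavelength, untouched.

def fwm_gate(data, control):
    """Output on the new wavelength: data AND control, slot by slot."""
    return [d & c for d, c in zip(data, control)]

num_slots, num_ch = 64, 16
data = [i % 2 for i in range(num_slots)]                    # arbitrary bits
control = [int(i % num_ch == 3) for i in range(num_slots)]  # pulse on slot 3

dropped = fwm_gate(data, control)

# only channel 3's bits survive on the new wavelength
assert dropped[3::num_ch] == data[3::num_ch]
assert all(b == 0 for i, b in enumerate(dropped) if i % num_ch != 3)
```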

BTexact tested the technology over four 70-km fiber links between the U.K. towns of Ipswich and Newmarket. The all-optical add-drop mux, which switched individual 10-Gbit/s channels out of the 160-Gbit/s data stream, was installed at the halfway point.

— Pauline Rigby, Senior Editor, Light Reading

Jukke 12/4/2012 | 11:13:25 PM
re: Siemens Claims 160-Gbit/s Milestone No worries. That is typical Siemens.
Great technology, but no clue what to do with it.

Physical_Layer 12/4/2012 | 11:13:25 PM
re: Siemens Claims 160-Gbit/s Milestone This is hilarious. Good research but totally impractical. Anybody care to comment on the chromatic dispersion problem they'll have to overcome if they try to use 160 Gbit/s over any reasonable distance? 16x the speed = 256x the CD problem, right? 16 times worse because of the shorter bit period, and 16 times because of the increased spectral width of the optical signal. I don't think the fibers we've got buried today can even begin to handle this type of CD problem.

Besides ... what is really being saved if the signal has to be PRODUCED by 16 separate lasers anyway, and then received by 16 separate photodiodes? Saving some DWDM gear doesn't seem to really help when you're not saving any lasers or photodiodes. Where are the potential cost savings? I only see additional complexity with no economic benefit.
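A back-of-the-envelope check of the scaling argument in the comment above, assuming simple on-off keying in the dispersion-limited regime (our own function, not from the article):

```python
# CD-induced broadening scales with the signal's spectral width, and
# the tolerable broadening is a fraction of the bit period, so raising
# the line rate by a factor k cuts the dispersion-limited distance by
# roughly k squared.

def cd_distance_penalty(rate_factor):
    """Relative reduction in CD-limited reach for a k-fold rate increase."""
    return rate_factor ** 2

assert cd_distance_penalty(16) == 256   # 10 Gbit/s -> 160 Gbit/s
assert cd_distance_penalty(4) == 16     # 2.5 Gbit/s -> 10 Gbit/s
```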

dodo 12/4/2012 | 11:13:24 PM
re: Siemens Claims 160-Gbit/s Milestone Isn't this just a HERO experiment even though it is described as a field test?

Wonder whether there was live traffic (actual transmission) being pumped all along (not only data from the test source).

Agree - the dispersion issues would be interesting! Try to debug them for a commercial product in 2 years time. Yee hah!
Pauline Rigby 12/4/2012 | 11:13:22 PM
re: Siemens Claims 160-Gbit/s Milestone WRT the dispersion issue, I believe there is something called pseudo-linear dispersion that comes into play at high bit rates like 160 Gbit/s. I don't know any more than that, like when and why it happens, and if it's any use in the real world, but if anyone reading this does, it would be good if they could explain.

[email protected]
Arne_S 12/4/2012 | 11:13:21 PM
re: Siemens Claims 160-Gbit/s Milestone I think you mean "pseudo-linear transmission" or the "pseudo-linear regime":

At high bit rates above 40 Gbit/s, dispersion leads to pulse broadening, and pulses overlap with their neighbours. This leads to so-called intra-channel effects (intra-channel cross-phase modulation, IXPM, or intra-channel four-wave mixing, IFWM). These intra-channel effects depend on the nonlinear coefficient of the fiber and on the fiber input power and its variation in time. If you further increase the bit rate up to 160 Gbit/s, the pulses broaden dramatically due to their wide signal spectrum. Then the broadened pulse is not really a pulse anymore, and if you add up all the broadened pulses you get something like constant signal power. Consequently, the intra-channel effects are very low.

Sometimes low bit rates of 10 Gbit/s are called the "soliton regime" because dispersion is negligible, no pulse overlap occurs, and the pulse keeps its original shape.
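The scaling behind this can be made concrete with the usual dispersion length, L_D = T0^2 / |beta2|. This is our own back-of-the-envelope, assuming a typical beta2 of about -21 ps^2/km for standard single-mode fiber at 1550 nm:

```python
# The dispersion length L_D = T0**2 / |beta2| -- the distance over
# which a pulse of width T0 broadens appreciably -- scales with the
# square of the pulse width, i.e. as 1/B**2 for bit rate B. At 160
# Gbit/s it shrinks to a couple of kilometres, so pulses smear over
# many neighbouring slots almost immediately: the pseudo-linear regime.

def dispersion_length_km(pulse_width_ps, beta2_ps2_per_km=-21.0):
    """L_D = T0^2 / |beta2|; beta2 of about -21 ps^2/km is typical for
    standard single-mode fiber at 1550 nm (an assumed value)."""
    return pulse_width_ps ** 2 / abs(beta2_ps2_per_km)

ld_10g = dispersion_length_km(100.0)    # ~100 ps slots at 10 Gbit/s
ld_160g = dispersion_length_km(6.25)    # ~6.25 ps slots at 160 Gbit/s
assert abs(ld_10g / ld_160g - 256) < 1e-9
```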
jimmy 12/4/2012 | 11:13:19 PM
re: Siemens Claims 160-Gbit/s Milestone So which is more useful?

Lucent's recent breakthroughs dissecting optical propagation characteristics of undersea sponges

or

Siemens' 160-Gbit/s transmission milestone.

I think it's a tie......
boozon 12/4/2012 | 11:13:14 PM
re: Siemens Claims 160-Gbit/s Milestone no problem about CD compensation?
In theory yes, in practice it is one more complication to deal with, and not a minor one. Is it worth it?
As far as the other cost issues are concerned, it's true that you only need one laser, but you still need 16 data modulators (plus the associated electronics). You also need a time-domain mux and a time-domain demux. How cheap can you make them compared to WDM muxes and demuxes (which are just passive devices)?
To some extent, 160G is the Formula 1 of optical comms: good fun, a lot of excitement, but it costs a lot of money and needs a lot of sponsors to survive!
If Siemens has money for this...why not?!

hrohde 12/4/2012 | 11:13:14 PM
re: Siemens Claims 160-Gbit/s Milestone Chromatic dispersion has to be, can be, and was compensated quite precisely, no problem about that.

As all signals are on the same wavelength, ONE laser is enough to provide light for all channels. You are right, we need 16 photodiodes, but compared to all the other equipment the price of those is peanuts. You also save WDM gear such as multiplexers and filters.

Cost savings are a complex topic which I will not discuss here in detail, but we think that ONE 160 Gbit/s channel can be cheaper than 16 10 Gbit/s DWDM channels.
purna 12/4/2012 | 11:13:12 PM
re: Siemens Claims 160-Gbit/s Milestone I think that PMD (polarization mode dispersion) will be the limiting factor. Carriers engineer fiber links to tolerate a maximum PMD before regeneration. For today's networks, this limit is typically 10 ps, to accommodate OC-192 systems. When the bit rate is 16 times higher, the tolerance decreases by a factor of 16, which is equivalent to a reduction by a factor of 16 in the distance between regenerators. Two solutions:
- Deploy PMD compensators that bring the tolerance back to 10ps. To the best of my knowledge, those do not yet exist.
- Deploy new fibre that has extremely low PMD. Not an option for most carriers.

Does anyone know the PMD value of the links used for the Siemens/BT experiment?
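A hedged back-of-the-envelope on the PMD budget discussed above, with our own numbers and function names. Note that since mean DGD grows with the square root of fiber length, a 16x tighter DGD tolerance actually cuts the PMD-limited distance by 16^2 = 256 for a given fiber PMD coefficient:

```python
# PMD-limited reach: the tolerable mean DGD is taken as a fixed
# fraction of the bit period (10% here, i.e. 10 ps at 10 Gbit/s),
# and mean DGD scales as pmd_coeff * sqrt(length).

def pmd_limited_length_km(bit_rate_gbps, pmd_coeff_ps_per_sqrt_km,
                          tolerable_fraction=0.1):
    """Longest span whose mean DGD stays under a fixed fraction
    of the bit period (assumed engineering rule, not a standard)."""
    bit_period_ps = 1000.0 / bit_rate_gbps
    max_dgd_ps = tolerable_fraction * bit_period_ps
    return (max_dgd_ps / pmd_coeff_ps_per_sqrt_km) ** 2

l_10g = pmd_limited_length_km(10, 0.5)     # 400 km at 0.5 ps/sqrt(km)
l_160g = pmd_limited_length_km(160, 0.5)   # ~1.6 km: drastic
assert abs(l_10g / l_160g - 256) < 1e-9
```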
Dr_Moose 12/4/2012 | 11:13:12 PM
re: Siemens Claims 160-Gbit/s Milestone my first reaction was skepticism too. But these systems are surprisingly do-able.

Here are the typical responses to this (and most other major technical progress):

1. can't be done
- feasibility shown in lab

2. only works in labs / with Ph.D.s
- field demonstration done

3. not practical / cost effective / who needs it
- non-bubble start-up makes profitable business

Okay, so we are still missing step 3. This may take a few years. But remember - WDM went through -exactly- the same steps 1 & 2. So did 10 Gbit/s ("no WAY that chromatic dispersion can be compensated compared to 2.5 Gbit/s! It is 16X worse!").

I still remember when I got kind of nervous about 100 MB hard disks - who needs that much space? That was more than 100 floppies! No way!

