All-Optical Switching Tutorial, Part 1

A down-to-earth description of all-optical switches:

  • What they are

  • What they do

  • How they work

October 24, 2001


This is the first of a pair of technology tutorials on all-optical switching by Geoff Bennett, vice president of technology advocacy at Marconi PLC (Nasdaq/London: MONI).

This tutorial covers the all-optical switches themselves – the various types, how they differ from electronic switches, where they sit in networks, what functions they perform, how they're controlled, and what they can and can't do.

The second tutorial covers optical switching fabric. In particular, it shows how different sizes and types of switch require different methods of routing light through their cores.

Both of these tutorials are based on a presentation given by Bennett at Opticon 2001, Light Reading's annual conference held in San Jose, Calif., in August of this year. Bennett would like to acknowledge the help of Peter Duthie, senior technical specialist, Marconi Optical Components, in preparing this presentation.

As a rule, Light Reading doesn't accept editorial contributions from manufacturers, but Bennett's tutorials provide valuable vendor-neutral insight into issues that have often been muddied by marketing hype.

In Bennett's view, the key to understanding all-optical switches is to consider the following issues in order:

Applications: Identifying the purpose of an all-optical switch pinpoints key requirements in terms of scale, functions, and performance.

Techniques: This covers how traffic is directed through the switch (the control plane) and the way in which it's handled (on its own dedicated wavelength or multiplexed with other traffic).

Technologies: Dealt with in the second tutorial, this covers the fabric that routes optical pulses from input ports to output ports.

Here's a hyperlinked summary of this report:

Page 2: Basics

  • How optical and electronic switches work

  • Optical > analog > strange

Page 3: OEO vs OOO

  • Why OEO is nothing new

  • How some optical cores have OEO add-ons

Page 4: GMPLS Context

  • New taxonomy

  • Latest MPLS lingo

Page 5: Applications

  • What switch goes where

  • Lambda, burst, and packet switching

Page 6: Really Difficult Things

  • Reading at extreme speeds

  • Buffering optical packets

Page 7: Lambda Switching

  • Manual versus automatic

  • Why "dynamic" is different

Page 8: Optical Burst Switching

  • How it works

  • Lambda versus burst switching

Page 9: Optical Packet Switching

  • An impossible dream?

  • Researchers push the limits

Introduction by Peter Heywood, Founding Editor, Light Reading
http://www.lightreading.com

Next Page: Basics

    A switch has a very simple job. It takes traffic from an input port or connection and directs it, over a fabric or backplane, to an output port.

    Electronic switches already exist that handle variable-length packets, fixed-length cells, and synchronous timeslots:

    An optical switch, on the other hand, works with light. It directs a single wavelength, or perhaps a range of wavelengths, from an input port to an output port:

    A switch needs some kind of information to make this switching decision. In electronic switches, this information is carried inside packets:

    An Ethernet, or MAC Layer switch, reads the destination MAC (media access control) address on the frame and makes its forwarding decision based on this information.

    An IP switch, or router, uses the destination IP address to make its decision.

    In an MPLS (multiprotocol label switching) Label Switch Router, once Label Switched Paths have been established in the network, the outermost label is used to make a forwarding decision.
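As a concrete (and entirely hypothetical) illustration, the label-swapping behavior of an LSR can be sketched in a few lines of Python; the label values and port names below are invented:

```python
# Minimal sketch of MPLS label swapping (hypothetical labels and ports).
# Each LFIB entry maps an incoming label to an outgoing port and a new label.
lfib = {
    17: ("port2", 42),   # swap label 17 -> 42, forward on port2
    18: ("port3", 99),   # swap label 18 -> 99, forward on port3
}

def forward(in_label):
    """Swap the outermost label and pick an output port."""
    out_port, out_label = lfib[in_label]
    return out_port, out_label

print(forward(17))  # ('port2', 42)
```

Note that the decision needs only the outermost label, not the IP header buried inside the packet — which is exactly why label switching is cheap for electronics, and why an analog optical switch (which sees no bits at all) cannot do it.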

    In an optical switch, however, we’re dealing with an analog device that cannot see bits, never mind frames or packets. The only criterion this type of switch has for making a forwarding decision is the value of the wavelength of the light:

    Since most optical switches will be used in DWDM installations, we need to think about separating the various wavelengths from an inbound fiber in order to make per-wavelength switching decisions.

    In addition, in this diagram I’ve shown the light arriving at a detector, where the photons will be converted back into electrons and then received by the electronic part of the switch as frames, packets, cells, or timeslots.

    Receivers are wideband; in other words, if you allow signals from two or more wavelengths to strike a specific receiver, then the resulting signal will be a garbled combination of all of the channels.
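A toy simulation makes the wideband-receiver problem concrete: if a detector simply sums the power of two on-off-keyed channels, intermediate readings become ambiguous. The bit patterns below are arbitrary:

```python
# Sketch: a wideband detector sums the power of all wavelengths that hit it.
# Two on-off-keyed channels arriving together produce an ambiguous signal:
# a reading of 1 could mean (1,0) or (0,1), so neither channel is recoverable.
ch1 = [1, 0, 1, 1, 0]   # bits carried on lambda-1
ch2 = [0, 0, 1, 0, 1]   # bits carried on lambda-2

detector = [a + b for a, b in zip(ch1, ch2)]
print(detector)  # [1, 0, 2, 1, 1] -- the readings of 1 are ambiguous
```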

    Further reading:

  • Beginner's Guide: Optical Networks

  • Beginner's Guide: Protocol Basics

  • Beginner's Guide: Ethernet

  • Beginner's Guide: Internet Protocol (IP)

  • Beginner's Guide: Multiprotocol Label Switching (MPLS)

  • Beginner's Guide: Wavelength Division Multiplexing (WDM)

  • Beginner's Guide: Optical Crossconnects

    Next Page: OEO vs OOO

    In the months before the Technology Crash of 2000, adding the word “optical” to your product description was a great way to increase your stock price, and so marketing departments became very "imaginative" in the way they referred to switches.

    The most amusing attempt at hyping the topic was the introduction of the term “OEO”, which stands for optical-electrical-optical. In this kind of switch, the I/O (input/output) modules are optical, but receivers turn the photons back into electrons for their journey over an electronic backplane. At the output module, the electrons are converted back into photons.

    If this method of operation sounds familiar, then don’t be surprised. This is how Sonet/SDH crossconnects, FDDI (fiber distributed data interface) switches, ATM (asynchronous transfer mode) switches, Ethernet switches (with fiber optic interfaces), and routers (with FDDI, ATM, or POS interfaces) have worked for more than a decade.

    OEO is a label that is often attached to “new age” Sonet/SDH equipment.

The advantages of OEO are that it's a well-established technology. We get full digital regeneration (reamplification, reshaping, and retiming of signals) “for free” as part of the optical/electronic conversion process. And we have the opportunity to include sub-lambda muxing and grooming functions.

    The downside is that electronics may not be able to keep pace with the growth in capacity of optics in the near future, although this claim has been questioned increasingly in recent months.

As digital devices, OEO switches are also bit-rate dependent, and they need to include the appropriate protocol stacks in order to operate with a specific protocol, such as IP in POS (packet over Sonet), ESCON (IBM's Enterprise System Connection protocol, used for high-speed connections in data centers), or Fibre Channel.


    The all-optical switch is a more recent development. This is an all-analog device, where both the I/O modules and the backplane are optical.

    The primary benefit of all-optical devices may be their greater scalability over OEOs. In fact, all-optical switches are completely bit-rate transparent. They are also protocol transparent.

    Against this we have to realize that most of the technologies of the all-optical switch are still emerging and usually exist in a sub-optimal form today. We only have wavelength, or lambda-level, granularity and cannot perform sub-lambda muxing and grooming.

Because it is a pure analog device, an all-optical switch gives us no visibility of bit error rate (BER), which makes SLA (service-level agreement) monitoring something of a challenge. Finally, we must use amplification, not regeneration, to boost signals in the switch. This may be an advantage in terms of cost, but it poses interesting challenges for network design.


    One of the most imaginative efforts is probably the OEO-O-OEO switch:

    This is an interesting compromise. Instead of using an electronic backplane, the electronic conversion is placed in the I/O module. This makes the scalability limits – if they are really an issue – less pressing, as the capacity of an individual I/O module will always be much lower than the overall backplane capacity.

But by putting the electronic stage in the I/O module we get 3R regeneration – and, potentially, the ability to do wavelength translation. It may be possible to do BER monitoring in the electronic part of the I/O module, and we may be able to choose an OEO or an all-optical I/O module on a per-port basis if the design is sufficiently advanced.

    Against this we have to say that this design is clearly more complex than either an OEO or OOO switch. Also, as the electronic stages are isolated on the I/O module (which may be a single-port module), the opportunity to do muxing and grooming is limited.

    Further reading:

  • Report: Optical Illusions

  • Beginner's Guide: Sonet (Synchronous Optical NETwork) and SDH (Synchronous Digital Hierarchy)

  • Beginner's Guide: Asynchronous Transfer Mode (ATM)

  • Beginner's Guide: Optical Amplification

  • Beginner's Guide: Optical Detection & Regeneration

  • Optical Networking Glossary

    Next Page: GMPLS Context

    The Internet Engineering Task Force (IETF) is currently working on an architecture called Generalized MPLS (GMPLS).

    Multiprotocol Label Switching (MPLS) provides a way of setting up logical connections over packet-based networks, to create the equivalent of express lanes. GMPLS will expand the control protocols defined in MPLS so they can be used to add dynamic control capabilities to Sonet/SDH and DWDM equipment.

    In the GMPLS draft, a taxonomy is defined to allow us to set a level of expectation about the capabilities of legacy Sonet/SDH and DWDM switches.

    The draft lists four types of interface on a switch: PSC, TDM, LSC, and FSC.

    A Packet Switch Capable, or PSC, interface can recognize bits, and it can recognize the packet or cell boundaries that are present in this bit-stream. The switch can therefore make forwarding decisions based on the contents of various address fields, as described earlier (see Basics).

    An important additional assumption is that this kind of interface is capable of receiving and processing control plane messages (such as routing and signaling protocol messages) that are transmitted in-band with the data.

    So PSC interfaces would be those found on conventional Ethernet, IP, and ATM switches.

    A Time Division Mux, or TDM, interface is also assumed to recognize bits. But in this case it operates on the assumption of a regular, repeating frame structure being present in a synchronous bit-stream.

    TDM switches can then forward data, or perform circuit grooming, on the basis of the position of information within the timeslot.

    Like PSC interfaces, a TDM interface is also capable of receiving and processing control plane information that is sent in-band with the data.

    In a Lambda Switch Capable, or LSC, interface we move into the pure analog world. We assume that this kind of interface will not be capable of recognizing bits, or any higher-level structure such as a frame or packet.

    Forwarding occurs by switching a lightstream on the basis of its wavelength (for an individual DWDM channel), or a range of wavelengths (a waveband consisting of two or more channels).

    A pure analog device, the LSC interface is not assumed to be capable of receiving control plane messages in-band with the data.

    Finally, the Fiber Switch Capable, or FSC, interface.

    This is really an intelligent fiber patch panel switch. It doesn’t recognize bits, nor does it have any conception of wavelengths or wavebands.

    Data is forwarded between ports based on their position in real-world physical space.

    FSC interfaces are also assumed to be incapable of receiving control plane messages in-band.
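The four interface types can be summarized as a small lookup table. This is simply a restatement of the taxonomy above in code, with field names invented for illustration:

```python
# Sketch of the GMPLS interface taxonomy (field names are invented).
# Each interface type is characterized by what it can recognize and by
# whether it can terminate control-plane messages sent in-band with the data.
TAXONOMY = {
    "PSC": {"sees_bits": True,  "granularity": "packet/cell", "inband_control": True},
    "TDM": {"sees_bits": True,  "granularity": "timeslot",    "inband_control": True},
    "LSC": {"sees_bits": False, "granularity": "wavelength",  "inband_control": False},
    "FSC": {"sees_bits": False, "granularity": "fiber",       "inband_control": False},
}

def needs_out_of_band_signaling(iface):
    """Analog interfaces (LSC, FSC) cannot see in-band control messages."""
    return not TAXONOMY[iface]["inband_control"]

print(needs_out_of_band_signaling("LSC"))  # True
print(needs_out_of_band_signaling("PSC"))  # False
```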

    Connections across GMPLS-based networks are called Label Switched Paths, or LSPs. This term embraces:

    • Virtual circuits or paths in packet/cell-switched networks

    • Circuits or channels in Sonet/SDH networks

    • Strings of wavelengths that might be called "optical channel trails" in DWDM networks

    • Strings of fibers that might be called "fiber paths" in the physical layer of telecom nets.

    While this is a helpful simplification of terminology, we can’t avoid the issue that an LSP that runs from an LSC interface to a PSC interface is an inconsistent concept.

So the draft states that a GMPLS LSP must start and end on the same type of interface.

    Further reading:

  • Column: The IP Priesthood

  • Column: The Monster Memo

  • News Analysis: Poll: Is MPLS BS?

  • Beginner's Guide: Multiprotocol Label Switching (MPLS)

    Next Page: Applications

    This figure shows the major application areas for all-optical switches.

  • Protection switches are already well-established. Typically built as a 1x2 unit, they are used to protect individual fibers against catastrophic failures such as a break or connector failure. They may be controlled by an external monitoring system, but may also have built-in detection for loss of signal of the primary fiber. However, in general we can assume that protection switches are fairly simple devices.

  • Optical add-drop multiplexers, or OADMs, are the access points to the optical network. It is here that individual wavelengths are added or dropped, and the signals may originate from PSC- or TDM-capable interfaces within the OADM. In other words, the OADM may also offer a sub-lambda muxing and grooming capability.

  • Optical Crossconnects, or OXCs, are the crossroads of the optical network. Their role most closely resembles that of routers or ATM switches. True all-optical OXCs, however, don't have sub-lambda muxing and grooming. In the future, OXCs may evolve into three types of switch:

  • Lambda switches, which enable telecom operators to set up strings of wavelengths across a network, in the same way as today's OXCs.

  • Optical burst switches, which handle flows of packets associated with a particular dialogue.

  • Optical packet switches, which handle individual optical packets in the same way as routers handle electronic packets today.

    Next Page: Really Difficult Things

    Optical switching presents a lot of challenges, but two things stand out as being really difficult:

    Challenge No. 1: Reading and processing bits at extremely high speeds

    With traditional electronic switches, such as IP routers and ATM switches, the device acts in a store-and-forward mode. When the inbound data unit arrives at the switch, the entire unit is stored very briefly in an input buffer.

    While the input buffer holds the packet or cell, the address or label information can be processed so that the switch knows which outbound port is required.

    Today’s fastest router interfaces operate at 10 Gbit/s, despite the fact that 40-Gbit/s interfaces have been available for Sonet/SDH equipment for some months. 10 Gbit/s is the current speed limit because this is the fastest rate that network processor chips inside the router can process addressing information.

    Remember that the imperative for moving to an all-optical switch was to break free of these kinds of speed restrictions. But optical switches still need to be able to set up lambda paths through the network, and this implies the need to process routing and signaling messages.

    Challenge No. 2: Buffering optical packets so they can be multiplexed statistically

    In an electronic switch, we frequently encounter a situation in which a packet arrives at its outbound interface only to find that there is a queue of packets already waiting for transmission ahead of it. Electronic switches are built with large output buffers in order to deal with this issue, and to allow statistical multiplexing, with overbooking of capacity.

    In ATM switches the situation is even more sophisticated. These devices are built with very intelligent output buffers that allow different queues for different connections to be maintained. These queues can be serviced in different ways in order to offer guaranteed bandwidth, or priority queuing, and a range of other specific behaviors. This is part of the reason ATM switches are uniquely capable of implementing a network-wide quality-of-service scheme.
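A minimal sketch of strict-priority output queuing, the simplest of the servicing disciplines mentioned above (queue names are invented; real ATM schedulers add weighted fairness, policing, and much more):

```python
# Sketch of a strict-priority output scheduler of the kind ATM switches use.
# The high-priority queue is always drained before the low-priority one.
from collections import deque

queues = {"high": deque(), "low": deque()}

def enqueue(prio, cell):
    queues[prio].append(cell)

def dequeue():
    """Serve the highest-priority non-empty queue."""
    for prio in ("high", "low"):
        if queues[prio]:
            return queues[prio].popleft()
    return None

enqueue("low", "cell-A")
enqueue("high", "cell-B")
print(dequeue(), dequeue())  # cell-B cell-A
```

Even this trivial scheduler depends on cheap random-access memory to hold the queued cells — precisely the component that has no good optical equivalent.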

    So, electronic memory has evolved over many years to offer all of these capabilities in a compact chip that is economical to produce and highly effective in service.

    But if we move to optical switching, how do we build a buffer for photons? Aside from laboratory experiments where photons have been “frozen” at liquid hydrogen temperatures, the only practical method is a fiber delay line. Since light takes a finite time to travel down a fiber (approximately 200,000 km per second in silica glass), it should be possible to “store” a data unit in a sufficiently long piece of fiber. An Ethernet frame is about 10,000 bits long. If we transmit this frame at 10 Gbit/s, it would occupy about 200m of fiber. At 40 Gbit/s it only needs 50m.
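The delay-line arithmetic above can be captured in a few lines (assuming the ~200,000 km/s figure for light in silica fiber):

```python
# Worked version of the fiber-delay-line arithmetic: the fiber length needed
# to "hold" one data unit is (bits / bitrate) * propagation velocity.
V_FIBER = 2.0e8  # m/s, approximate speed of light in silica glass

def delay_line_length_m(bits, bitrate_bps):
    return bits / bitrate_bps * V_FIBER

print(delay_line_length_m(10_000, 10e9))  # 200.0 m at 10 Gbit/s
print(delay_line_length_m(10_000, 40e9))  # 50.0 m at 40 Gbit/s
```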

    Fiber delay lines are directly analogous to the delay line memory used in the first generations of computers, and they suffer from the same drawback. Once a packet has been inserted into the delay line, the only way to get it out is to wait. In other words, this is not a random access technique. Delay lines can be built as hierarchical trees, with fast 1x2 switching elements between them to direct a packet through a given incremental set of delays. But this technique is highly complex and will require expensive and bulky components for each “buffer queue."
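A sketch of the hierarchical-tree idea: with fiber stages of 4, 2, and 1 slot delays and a fast 1x2 switch in front of each, any delay from 0 to 7 slots can be composed. The stage sizes here are illustrative:

```python
# Sketch of a hierarchical delay-line buffer. Each stage has a 1x2 switch
# that routes the packet either through that stage's delay fiber or through
# a bypass fiber, so the total delay is a sum of the stages taken.
STAGES = [4, 2, 1]  # per-stage delay in slots (binary-weighted)

def route(delay_slots):
    """Return, per stage, whether the packet takes the delay fiber."""
    path = []
    for stage in STAGES:
        take = delay_slots >= stage
        path.append(take)
        if take:
            delay_slots -= stage
    return path

print(route(5))  # [True, False, True] -> 4 + 1 = 5 slots of delay
```

Each "buffer queue" built this way needs one fiber coil and one switching element per stage, which is exactly why the text calls the approach bulky and expensive.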

    Next Page: Lambda Switching

    This is a vastly simplified view of a typical lambda-switched network of today.

    The objective is to establish a wavelength path between the two MPLS LSRs (label switch routers) on the left and right hand side of the optical network.

    At each of the numbered points, a human operator must create a lambda between the boxes and assign this lambda to the path. When we reach Point 5 we have a complete path and the LSRs can be told to bring up their connections.

    This kind of provisioning gives the carrier the ultimate control over its network resources, but it does require considerable configuration and monitoring, as backup paths have to be pre-configured at each stage to cope with failures. Typically, an offline routing package is used to calculate and re-optimize paths periodically, but it is always a human operator (using a network management system) who makes the actual changes.

    There have been several attempts to automate the lambda switching process. The simplest comes with the 1x2 protection switch described earlier (see Applications). By designing a signal detection circuit into the switch we can failover automatically if the primary cable is broken. In more sophisticated designs, a monitoring unit is associated with one or more protection switches to detect BER (bit error rate) degradation and switch to the secondary path.

    So how do the manual and automated lambda switches overcome the two Really Difficult Things?

    Q: How do we read bits at very high speeds?

    A: We don’t. Once the wavelength is set up, we just switch the light. The switch never tries to interpret bits within the stream of photons.

    Q: How do we buffer traffic for statistical muxing?

    A: We don’t. Once the wavelength is established, it is used exclusively by one input stream, and no statistical muxing is possible.

    In order to address the issue of manual connection setup, the industry is now working towards a much more dynamic form of lambda switching in which MPLS control protocols are extended and generalized to operate with TDM and LSC interfaces. Generalized MPLS is the term for this initiative.

    In a GMPLS network, electronic devices connect into the optical core over an Optical User Network Interface (O-UNI). In-band control information is assumed to be able to pass over the O-UNI.

    Inside the optical cloud, however, the switches are assumed to be LSC only and, therefore, not capable of processing in-band control plane messages. The first-pass solution to this problem is to build an out-of-band signaling network to interconnect LSC devices, typically using fast Ethernet as the physical interface.

    So how does the dynamic lambda switching overcome the two Really Difficult Things?

    Q: How do we read bits at very high speeds?

    A: We don’t. The data plane is still “service transparent.” The control plane can operate at a much lower data rate.

    Q: How do we buffer traffic for statistical muxing?

    A: We don’t. Once the wavelength is established, it is used exclusively by one input stream, and no statistical muxing is possible.

    So in a GMPLS network, we don’t need to worry about reading bits at high speed in the data plane because we’re deliberately building an out-of-band signaling network for the control plane.

    We also don’t need to worry about buffering, as the wavelengths are not statistically multiplexed once established.

    With lambda switching, the only information a switch will use to make a forwarding decision for a given stream is the value of its wavelength.

This means the wavelength has to be measured. In particular, if we want to convert the photons from a given stream back to electrons at some point, then we certainly need to separate out the individual stream, because all receivers used today are wideband.

    Most wavelength separation technology relies on the fact that different wavelengths travel through a given medium at slightly different velocities. This fact allows us to use a variety of techniques to achieve a spatial separation of wavelengths. If we know what wavelength a given channel is using when it enters the network, then we can predict the angular separation its photons will encounter through a refracting device, for example.
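As an illustration of the principle (using a diffraction grating rather than a refracting device), the first-order grating equation theta = asin(m * lambda / d) predicts a distinct exit angle for each DWDM channel. The grating period below is an invented value chosen for illustration:

```python
import math

# Sketch: first-order diffraction angles from a grating with period d,
# theta = asin(m * lambda / d). Period and channel wavelengths are
# illustrative values, not a real component design.
def diffraction_angle_deg(wavelength_nm, period_nm=2000.0, order=1):
    return math.degrees(math.asin(order * wavelength_nm / period_nm))

a1 = diffraction_angle_deg(1550.12)  # one DWDM channel
a2 = diffraction_angle_deg(1550.92)  # a neighboring channel, 0.8 nm away
print(round(a2 - a1, 4), "degrees of angular separation")
```

The separation is tiny — a small fraction of a degree — which is why real demultiplexers need long path lengths or high-resolution gratings to resolve adjacent channels.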

    Devices in widespread use include prisms, filters, and gratings. In addition, we can see that some of these devices are static in nature and require an engineer to change the component in order to change the wavelength that is selected.

    Dynamic technologies are appearing all the time, and when we design optical switches we need to keep in mind the requirement to perform lambda separation as a part of the switching process.

    Next Page: Optical Burst Switching

    A potential disadvantage of lambda switching is that, once a wavelength has been assigned, it is used exclusively by its “owner.” If 100 percent of its capacity is not in use for 100 percent of the time, then clearly there is an inefficiency in the network.

One solution to this problem is to allocate the wavelength only for the duration of the data burst being sent. Historically, this has been recognized as a challenge because the time spent setting up and tearing down connections is typically very large compared to the time the wavelength is “occupied” by the data burst. This is because traditional signaling techniques (e.g., ATM, RSVP, X.25, ISDN) have tended to use a multi-way handshaking process to ensure that the channel really is established before data is sent. These techniques cannot be applied to optical burst switching because they take far too long.

    For this reason, a simplex “on the fly” signaling mechanism is the current favorite for optical burst switching, and there is no explicit confirmation that the connection is in place before the data burst is sent. Given that, at the time of writing, most optical burst research has been confined to computer simulation, it’s still not totally clear what the impact of this unreliable signaling will be on real network performance.
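Under one simple assumption, the timing of this one-way ("tell and go") scheme can be sketched: the edge device must hold the burst for at least the sum of the per-hop setup times on the path ahead of it. The per-hop figure below is invented for illustration:

```python
# Sketch of one-way burst signaling timing: the control packet goes first,
# and the burst follows after an offset long enough for every hop along the
# path to finish its setup processing. Per-hop setup time is an assumption.
def burst_offset_s(per_hop_setup_s, hops):
    """Minimum time the edge device must hold the burst before sending it."""
    return per_hop_setup_s * hops

offset = burst_offset_s(per_hop_setup_s=50e-6, hops=4)
print(offset)  # 0.0002 s, i.e. 200 microseconds for a 4-hop path
```

If any hop fails to complete its setup within its budget, the burst arrives at an unconfigured switch and is simply lost — the "unreliable signaling" risk mentioned above.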

    Here's a more detailed comparison of lambda switching and optical burst switching (OBS):

    In a lambda switch, which we can also describe as an LSC interface with a GMPLS control plane, the goal is to reduce the time taken to establish optical paths from months to minutes. Once established, the wavelengths will remain in place for a relatively long time – perhaps months or even years. In this timescale, it’s quite acceptable to use traditional, reliable signaling techniques – notably RSVP (resource reservation protocol) and CR-LDP (constraint-based routing-label distribution protocol), which are being extended for use in GMPLS. Signaling can be out of band, using a low-speed overlay such as fast Ethernet.

In OBS, the goal is to set up lambdas so that a single burst of data can be transmitted. For example, a 1-Mbyte file transmitted at 10 Gbit/s requires a lambda for less than 1ms. The burst has to be buffered by the OEO edge device while the lambda is being set up, so the signaling has to be very fast indeed, and it looks as though we won’t have time for traditional handshakes.
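The burst-duration arithmetic is straightforward:

```python
# How long a burst occupies its lambda: (bytes * 8) / bitrate.
def burst_duration_s(size_bytes, bitrate_bps):
    return size_bytes * 8 / bitrate_bps

print(burst_duration_s(1_000_000, 10e9))  # 0.0008 s, i.e. 0.8 ms for 1 Mbyte
```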

The signaling itself can be out of band, but it must follow the topology required by the lambda path. If this seems confusing, think of a primary rate ISDN line. In this technology we use a single D-channel (a signaling channel) to control up to 30 B-channels (the channels carrying payload). The B and D channels share the same physical cable, and therefore the same topology. In the optical context we could use a dedicated signaling wavelength on a given fiber, and run this wavelength at speeds where economic network processors are available (e.g., Gigabit Ethernet).

    So how does optical burst switching overcome the two Really Difficult Things?

    Q: How do we read bits at very high speeds?

A: We don’t. The data plane is still “service transparent.” The control plane can operate at a much lower data rate, but must follow the path of the data plane (e.g., using an optical supervisory channel).

    Q: How do we buffer traffic for statistical muxing?

    A: By holding the burst at the ingress and allowing for setup processing delays, it may be possible to build a switch that does not need buffers.

    There are lots of uncertainties with OBS – in particular, the difficulty of creating a robust but simple signaling system.

    But the benefits are very clear. By holding the LSP open only for the duration of the burst, we achieve a statistical multiplexing of the wavelength in the time domain. This should increase the efficiency of backbone utilization dramatically when compared to lambda switching.

    The term “Lambda Tax” is already being used to describe this potential inefficiency.
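A rough sketch of the "Lambda Tax" in numbers, with invented traffic figures: the fraction of its holding time that a lambda actually spends carrying data.

```python
# Sketch of the "Lambda Tax": data-carrying time as a fraction of the time
# the lambda is held. Burst sizes, counts, and holding times are invented.
def lambda_utilization(burst_s, num_bursts, holding_s):
    return burst_s * num_bursts / holding_s

# Lambda switching: the path stays up for an hour to carry 1000 x 0.8 ms bursts.
print(lambda_utilization(0.8e-3, 1000, 3600.0))  # ~0.00022 -- almost all tax
# Burst switching: the path is held only while each burst is in flight.
print(lambda_utilization(0.8e-3, 1000, 0.8))     # 1.0
```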

    Next Page: Optical Packet Switching

    An Optical Packet Switch (OPS) is the true, optical equivalent of an electronic packet switch, reading the embedded label and making a switching decision using this information.

    OPS devices could potentially operate in true, connectionless fashion (using destination IP addresses, for example).

    They could also operate in connection-oriented mode by using GMPLS control plane protocols to signal a path setup, and then embedded labels to allow switching to take place.

    But this brief description should sound the warning bell for you that we may be forced to read bits at high speeds. The big question is whether this is unavoidable.

    A practical OPS experiment has already been performed. It’s known as the Keys to Optical Packet Switching (KEOPS) pilot, a European project involving a bunch of research establishments, led by Alcatel SA (NYSE: ALA; Paris: CGEP:PA).

    KEOPS addressing headers are transmitted at a lower bit rate than the actual data payload.

    In order to operate with multiple bit rates in the same channel, KEOPS uses two techniques: first, a synchronization pattern that allows the clock circuits to latch onto the new bit rate; and second, a guard time between the fixed-size time slots. KEOPS used a rather generous 14-byte header. If we assume that 32-bit MPLS labels will represent the address information, this would give us room to stack up to 3 levels of label and still have 2 bytes left for status or control information.

The payload duration of 1.35 microseconds is enough to carry a full-sized Ethernet frame at 10 Gbit/s.
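The KEOPS figures quoted above are easy to check (assuming a 1518-byte maximum Ethernet frame):

```python
# Checking the KEOPS arithmetic: does a 1.35-microsecond payload slot at
# 10 Gbit/s hold a maximum-size (1518-byte) Ethernet frame, and does a
# 14-byte header leave room for three 4-byte MPLS labels plus 2 spare bytes?
slot_bits = 1.35e-6 * 10e9      # 13500.0 bits per payload slot
frame_bits = 1518 * 8           # 12144 bits in a full Ethernet frame
header_spare = 14 - 3 * 4       # bytes left after a 3-deep label stack

print(slot_bits >= frame_bits)  # True -- the frame fits with room to spare
print(header_spare)             # 2
```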

    So how does optical packet switching overcome the two Really Difficult Things?

    Q: How do we read bits at very high speeds?

    A1: For a connectionless OPS network, we send the packet header in-band with the data, but at a lower bit rate than the data.

A2: For a GMPLS OPS network, we send signaling messages and the label headers on data packets at a lower bit rate than the data.

    Q: Do Optical Packet Switches still need buffers?

    A: Yes, there’s no way around this one.

OPS devices must make use of input buffers in order to give the address-processing circuits time to do their job, much as Ethernet “cut-through” switches had to buffer the first 64 bytes of a frame.

    OPS devices must make use of output buffers so that they don’t drop packets.

    Click here to jump straight to the second part of this tutorial, covering optical switching fabric.
