Tutorial on Grooming Switches

The first of a package of reports on optical switches * Technology explained * Why it's hot * VCs and VTs

February 4, 2002


Over the next few weeks, Light Reading will be publishing a package of reports covering optical switches.

This report provides a technology tutorial on grooming, a topic that’s become a key issue when reviewing optical switches with electrical cores. Two other reports will follow. One is an update of All-Optical Switching Tutorial, Part 2, featuring animated diagrams of switching fabric. The other is a full-blown survey of vendors and products in this field.

Geoff Bennett, distinguished engineer, technology strategy, for Marconi PLC (Nasdaq/London: MONI), wrote this tutorial with help from colleagues at Marconi, including John Ash, Ken Guild, Lieven Levrau, and Steve Ferguson.

The tutorial sets the scene for our other reports on optical switches – a scene in which grooming has become a central issue.

The main reason for this is that carriers have gotten a lot more down-to-earth in the past year or so. Instead of racing to cater to huge volumes of unprofitable Internet traffic, they’ve gone back to basics. They’ve recognized that, for instance, providing T1 (1.5 Mbit/s) or E1 (2 Mbit/s) leased lines is a big and very profitable business. So they’re trying to do that, while cutting costs.

That’s where grooming switches come in. They offer carriers ways to achieve this goal – by improving provisioning times while cutting capital and operating costs. The cost-cutting comes mainly from consolidating multiple tiers of separate add/drop multiplexers into a single box, so there’s less equipment to buy, less space to occupy, and less gear to maintain.

Grooming can also ensure that bandwidth is used more efficiently. The network management systems that come with the latest grooming switches aim to make it much easier to improve the packing of smaller connections within wavelengths. Right now, a lot of bandwidth is wasted because it’s less expensive to waste it than it is to buy more add/drop multiplexers. Similarly, the software that comes with grooming switches makes it easier to “unwind” capacity from connections that are no longer being paid for.

So, what actually is grooming? In a nutshell, it’s marshalling connections over networks in an efficient way, and – although that may sound simple – it’s actually fiendishly complicated, for a number of reasons covered in this report. Here’s a quick summary:

Page 2: Complex Issues

Grooming is a multifaceted issue. Carriers have to decide how granular to get with their grooming – and the economics vary in different parts of telecom networks. Regulations can also play an important role.

Page 3: ADMs and DXCs

Grooming isn't about aggregating traffic at the edge of networks. It's about reshuffling it into new channels at network intersections. Digital crossconnects (DXCs) do a better job of it than early add/drop multiplexers (ADMs) did, but inefficiencies are sometimes unavoidable.

Page 4: VTs and VCs

The smallest Sonet pipe, the STS1 (52 Mbit/s), and the smallest SDH pipe, the STM1 (155 Mbit/s), are way too big for carrying popular T1 (1.5 Mbit/s) and E1 (2 Mbit/s) leased lines. They have to be carried in Sonet “virtual tributaries” (VTs) and SDH “virtual containers” (VCs).

Page 5: Switch Granularity

There's no single answer to the question of what size of channels switches should groom. It all depends on what sort of traffic is being handled, where the switch sits in the network, and what multiplexing hierarchy is in force.

Page 6: The GMPLS Hierarchy

Multilayer switch architectures may make the choice of grooming granularity less agonizing in the future. Key to this could be generalized multiprotocol label switching (GMPLS) developments in the IETF.

Page 7: Opaque vs Transparent

Grooming whole wavelengths using transparent, all-optical switches has some attractions, particularly for big-bandwidth applications like storage networking. However, the immaturity of network management developments is a significant problem.

Introduction by Peter Heywood, Founding Editor, Light Reading
http://www.lightreading.com

Next Page: Page 2: Complex Issues

In order to understand the issues involved in grooming, a little background on today’s telecom infrastructure is necessary. In particular, it’s important to realize that these complex networks are carved up into a variety of hierarchies based on bandwidth, topology, and regulations.

Bandwidth

Service provider networks are built as bandwidth hierarchies. At the user, or CPE (customer premises equipment), end of the hierarchy the bandwidth is low, while in the core the bandwidth is very high. As traffic is aggregated and directed upwards in the hierarchy, it's essential to make sure that it's assigned to the most efficient physical path at each stage. The aggregation, or multiplexing, process follows a very clear hierarchy:

  • Electronic Time Division Multiplexing (TDM)

  • Wavelength Division Multiplexing (WDM)

  • Fiber-level (or spatial) multiplexing

As we move down this list (or up the hierarchy), the capacity of the bearers increases and the cost/capacity factor improves, but the granularity of multiplexing decreases. Carriers have to balance the need for grooming granularity with the cost of implementing it.

Vendors are responding by creating multilayer switching architectures. What this means is that a vendor of an OEO switch (one with an electrical core) will add an all-optical core to its grooming switch, whereas an all-optical vendor will add a grooming capability for those channels that require sub-wavelength switching or grooming.

Topology

There are also topological, or geographical, hierarchies at work in service provider networks. In other words, the topology, or shape, of the network affects the way you do grooming – grooming in a mesh works differently from grooming in a ring. But how do you decide whether you have a ring or a mesh?

In Figure 1 you can see an abstract network that consists of an access region, a metro region, and a core region. This diagram represents the “fashionable thinking” of many vendors today – the physical topology used in access is generally point-to-point, while in the metro it's rings, and in the core it's a mesh.

This is an over-simplification (as is the following argument). In North America, where distances between metropolitan centers are great, a mesh core may be more efficient. In Europe, rings are still used quite effectively for national backbones: the overall distances are great, but the density of access points is much higher, so rings are most efficient. In pan-European networks, where distances are longer and the connection points are sparse, meshes come back into favor.

To complicate the matter further, we're only discussing physical topology here. In TDM and DWDM networks, there is a logical topology overlaid on top of this. For example, in a metro ring, connections may be provisioned to a set of customers in a logical mesh. Conversely, in a long-haul mesh, logical rings may be created to implement ring-based fast-restoration protocols such as BLSR/4 and MS-SPRing. Not only do these protocols offer the sub-60-ms restoration required by standards from the International Telecommunication Union (ITU), but they also allow 1:n sparing of bandwidth (not the 1:1 number generally quoted by the anti-Sonet/SDH lobby).
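To put a rough number on that sparing advantage, here's a back-of-envelope sketch (in Python, with invented channel counts) comparing the spare-capacity overhead of dedicated 1:1 protection with shared 1:n sparing:

```python
# Illustrative arithmetic only: spare-bandwidth overhead of 1:1 vs 1:n protection.

def protection_overhead(working: int, protect: int) -> float:
    """Protection capacity as a fraction of working capacity."""
    return protect / working

# 1:1 protection: a dedicated protect channel for every working channel.
print(f"1:1 overhead: {protection_overhead(working=1, protect=1):.0%}")  # 100%

# 1:4 sparing: four working channels share a single protect channel.
print(f"1:4 overhead: {protection_overhead(working=4, protect=1):.0%}")  # 25%
```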

Further reading:

  • Light Reading's Beginner's Guide on Sonet (Synchronous Optical NETwork) and SDH (Synchronous Digital Hierarchy)

  • Introduction to SONET by Shu Zhang of the University of Nebraska

Regulations

Finally, there are regulatory or geopolitical reasons that impede service providers from building networks in a homogeneous way.

In Figure 2 you can see the classic problem caused by the 1996 Telecommunications Act in the U.S. A business in San Francisco can use a local exchange carrier (LEC) to transport voice or data within a local region, but in order to connect to its Boston office, the voice and data has to travel over an inter-exchange carrier (IXC) network. There are similar examples from Europe that involve national and international carriers.

In the heterogeneous networks caused by regulatory restrictions, carriers may find that they need to groom traffic at the regulatory boundaries. This is based on simple economics.

For example, if the two LECs shown in Figure 2 were able to connect over their own bandwidth, they might be happy to operate large pipes that are inefficiently filled. Let's say that they have two 2.5-Gbit/s links from coast to coast, and each is filled to 25 percent capacity. To groom this traffic into a single link at 50 percent capacity would apparently be more efficient, but the grooming costs money, and since they own the bandwidth anyway, the inefficiency costs them nothing.

However, if they have to go through an IXC network, they'll be charged for the bandwidth they're using. It's then probably economically acceptable to pay for the grooming equipment, because they'll be paying the IXC only half as much. In real networks the situation is much more complex: network operators have to balance the cost of leasing extra bandwidth against their service-restoration requirements (possibly dictated by SLAs) and the level of “spare” bandwidth that their capacity planners demand at any given time. A sophisticated grooming capability is the answer.
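To make the economics concrete, here's a minimal sketch of the comparison above; the dollar figures are entirely hypothetical, chosen only to show why leased IXC bandwidth changes the calculation:

```python
# Hypothetical monthly costs: the point is the comparison, not the prices.
LINK_COST = 100_000    # lease of one coast-to-coast 2.5-Gbit/s link ($/month, assumed)
GROOMER_COST = 30_000  # amortized cost of the grooming equipment ($/month, assumed)

# Without grooming: two links, each filled to 25 percent.
without_grooming = 2 * LINK_COST

# With grooming: one link at 50 percent fill, plus the groomer itself.
with_grooming = 1 * LINK_COST + GROOMER_COST

print(f"Without grooming: ${without_grooming:,}/month")  # $200,000/month
print(f"With grooming:    ${with_grooming:,}/month")     # $130,000/month

# If the carriers owned the bandwidth, LINK_COST would be near zero and the
# groomer would never pay for itself -- the regulatory boundary changes the economics.
```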

Next Page: Page 3: ADMs and DXCs

It's time to dig into Sonet and SDH issues, so let's start with a definition of terms (data rates are rounded in this tutorial):

Table 1: Sonet and SDH Data Rates

Sonet Signal   SDH Signal   Bit Rate (Mbit/s)
STS1           –            51.84
OC3            STM1         155.52
OC12           STM4         622.08
STS24          –            1,244.16
OC48           STM16        2,488.32
OC192          STM64        9,953.28
OC768          STM256       39,813.12



In a Sonet/SDH network, access connections are time-division multiplexed into a timeslot on a higher-speed trunk. For instance, an OC1 (52-Mbit/s) connection might be multiplexed with two other 52-Mbit/s connections to form an OC3 (155 Mbit/s).

In Figure 3 you can see this happening in Mux 1. The process of combining tributary signals into a higher-speed trunk is called Time Slot Assignment (TSA). This is not grooming!

But in Mux 2 you can see that the OC1s are being split apart and sent down different outgoing OC3s. This is called Time Slot Interchange (TSI). TSI is required to do grooming.

TSI is more complex than TSA and needs more sophisticated electronics in the ASICs (application-specific integrated circuits). One reason it's important to understand the difference between TSA and TSI is that the first generation of Sonet ADMs was introduced without a TSI capability.
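One way to make the TSA/TSI distinction concrete is a toy model in which a signal's position is a (port, timeslot) pair. The sketch below is purely illustrative; the port and timeslot numbering is invented:

```python
# Toy model of TSA vs TSI. A signal's position is a (port, timeslot) pair.

def tsa_mux(tributaries):
    """Time Slot Assignment: tributary i always lands in trunk timeslot i."""
    return {slot: trib for slot, trib in enumerate(tributaries)}

def tsi_switch(inputs, crossconnect_map):
    """Time Slot Interchange: arbitrary (in_port, in_slot) -> (out_port, out_slot)."""
    outputs = {}
    for (in_port, in_slot), signal in inputs.items():
        out_port, out_slot = crossconnect_map[(in_port, in_slot)]
        outputs[(out_port, out_slot)] = signal
    return outputs

# TSA: three OC1s combined into one OC3 trunk -- aggregation, not grooming.
print(tsa_mux(["red", "green", "blue"]))  # {0: 'red', 1: 'green', 2: 'blue'}

# TSI: keep red and green on output port 0, divert blue to port 1 -- grooming.
inputs = {(0, 0): "red", (0, 1): "green", (0, 2): "blue"}
xc_map = {(0, 0): (0, 0), (0, 1): (0, 1), (0, 2): (1, 0)}
print(tsi_switch(inputs, xc_map))  # {(0, 0): 'red', (0, 1): 'green', (1, 0): 'blue'}
```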

In contrast, because SDH ADMs tended to appear somewhat later, TSI was a standard feature from the very beginning.

As a matter of technical trivia, this is one reason why U.S.-generated presentations claim that provisioning a coast-to-coast TDM or DWDM channel takes so long (“months” is a timescale I've seen quoted). European carriers are surprised by this figure, because the additional functionality of TSI across the whole network means that they're able to perform end-to-end provisioning or service changes from a single operational support system (OSS).

Another consequence of the lack of TSI in ADMs is the need to "backhaul" traffic that needs to be groomed.

Here's an example, illustrated in Figure 4, of how backhaul might be used to solve a grooming issue in a typical carrier network:

Six months ago, two business subscribers, shown by the red and the green traffic flows, set up their offices in the same business park and contacted the local service provider. As it happened, their traffic flows both headed off towards Point X in the service provider's network. The service provider installed an access device, Mux A, which was connected to Mux B.

Last week a third subscriber, shown by the blue traffic, contacted the service provider. Its traffic starts in the same business park, and so is sent into Mux A and then onwards to Mux B. But, ideally, at that point its traffic should head off towards Point Y. The question is: does Mux B have the ability to do this?

If Mux B were a TSA-only device, then the blue traffic could not be groomed off from the red and green traffic. To perform this grooming, Mux B would need to implement TSI.

As Sonet networks began to grow, the need for TSI became obvious, and these features were implemented in interconnection devices called digital crossconnects (DCSs or DXCs).

In Figure 5 you can see an extended view of Figure 4 that now includes a DXC. At the DXC, the red and the green traffic continues on its journey to Point X, while the blue traffic is actually sent back towards Mux B, but this time assigned to a timeslot that can be directed towards Point Y.

As you can imagine, backhaul is generally a wasteful exercise in terms of network bandwidth, but it's often useful for a service provider as a way to simplify equipment requirements (and therefore cost) or to centralize control and administration.

Next Page: Page 4: VTs and VCs

Another hot debate in the industry today is what the granularity of grooming for a DXC or an optical crossconnect (OXC) should be. Let's look at some terminology first.

Sonet and SDH are synchronous multiplexing schemes. They actually replaced a system with the Jurassic-sounding name of Plesiochronous Digital Hierarchy (PDH), sometimes called the Asynchronous Digital Hierarchy in North America.

The PDH network was designed to carry digital telephony signals, so it's based around the fundamental building block of a 64-kbit/s signal (the bandwidth required by a single PCM-encoded telephone call). As PDH muxes climb their bandwidth hierarchy, we end up with the signal names and bit rates shown in the first two columns of Table 2.

{Table 2}

The second two columns illustrate an important fact: to carry PDH signals in Sonet or SDH, you must first put them into a Virtual Tributary (VT) or a Virtual Container (VC). The VTs and VCs can then be fitted into the Sonet or SDH multiplexing scheme. In the case of Sonet, T1 signals are first mapped into VT1.5s, which are then mapped into an STS1/OC1. VT6 does exist for T2 signals but is rarely used, and T3 is mapped directly into the STS1 signal, so it doesn't need a VT.



So the basic building block of a Sonet network is the STS1 rate of 51.84 Mbit/s. Lower-rate payloads (legacies of PDH such as DS3, DS1, and DS0 services) are mapped into the STS1 using VTs, and higher-rate Sonet signals are created by synchronously multiplexing STS1s to form an STS-n. There is no additional overhead added to an STS-n signal, so the line rate for a given STS-n is n times that of the STS1.

SDH is based on a superset of Sonet and is designed to operate more efficiently with the E-carrier PDH services used outside North America and Japan. In particular, it was inefficient to map the 34-Mbit/s E3 services in use at that time into the 51.84-Mbit/s STS1 signal used as the basis of Sonet. So for SDH networks, the basic building block begins at the 155.52-Mbit/s STM1 signal.
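The VT packing arithmetic behind those Sonet mappings is worth spelling out. An STS1 payload carries seven VT groups, and each group holds four VT1.5s (for T1s) or three VT2s (for E1s), so:

```python
# VT packing inside one STS1, per the Sonet VT structure:
# 7 VT groups per STS1 payload; each group holds 4 x VT1.5 or 3 x VT2.
VT_GROUPS_PER_STS1 = 7
VT15_PER_GROUP = 4   # a VT1.5 carries one T1 (1.544 Mbit/s)
VT2_PER_GROUP = 3    # a VT2 carries one E1 (2.048 Mbit/s)

t1s_per_sts1 = VT_GROUPS_PER_STS1 * VT15_PER_GROUP   # 28 T1s
e1s_per_sts1 = VT_GROUPS_PER_STS1 * VT2_PER_GROUP    # 21 E1s
print(f"One STS1 carries up to {t1s_per_sts1} T1s or {e1s_per_sts1} E1s")

# Fill efficiency when carrying T1s: 28 x 1.544 Mbit/s in a 51.84-Mbit/s STS1.
print(f"T1 payload fill: {28 * 1.544 / 51.84:.0%}")  # roughly 83%
```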

Crossconnect Terminology

If you've read articles or seen presentations on DXCs, you may have encountered the terminology “4/3” or “4/1” or other descriptions of the crossconnect. What do these numbers actually mean? The general concept is that the first number describes the maximum aggregate link speed of the switch (or crossconnect), and the second number the granularity of switching.

The first references to a DXC 1/0 came from North America, and referred to a crossconnect with a DS1 (1.5 Mbit/s) maximum link speed and the ability to see inside this link to a granularity of a DS0 (64 kbit/s), of which there are 24 in a DS1. This terminology was accepted by the European Telecommunications Standards Institute (ETSI), but the DS1 is replaced there by the E1. So an ETSI DXC 1/0 has a maximum link speed of 2 Mbit/s and a granularity of 64 kbit/s.

So far, so good, but as we move to higher-order muxes the pattern starts to get a bit messy, because references are made to VC or VT numbers.

Purists continue with a description such as “DXC 16/4/3/1.” This indicates a DXC with an STM16 (2.5 Gbit/s) aggregate and granularities described by the VC numbers: VC4, VC3, and VC1 (see Table 2, above).

Of course, VC numbers have nothing to do with STM (or, for that matter, OC) numbers – that would be far too easy! VCs or VTs were designed to carry legacy PDH signal rates inside the new synchronous hierarchy. So most PDH signal rates travel in a VC or a VT; the exception is T3, which fits nicely into a Sonet STS1 (T3 needs to travel in a VC over SDH).

Here is a list of Sonet and SDH container equivalents:

  • VC1: No such thing! In SDH, VC11 is used to carry T1 (1.5 Mbit/s) signals; VC12 is used to carry E1 (2 Mbit/s) signals. In pure Sonet, VT1.5 is used to carry T1s.

  • VC3: Used to carry T3 (45 Mbit/s) or E3 (34 Mbit/s) signals in SDH. In Sonet, T3s are carried directly in an STS1.

  • VC4: Used to carry a C4 container, running at 139.264 Mbit/s.

Note that in pure Sonet documents virtual containers are known as virtual tributaries, but, strictly speaking, the ITU SDH documents cover both bandwidth hierarchies. However, given that Sonet and SDH don't operate in the same geographies, it makes sense that technology-specific terminology is retained.

The purist descriptions of DXCs became both cumbersome and confusing if you tried to understand them in too much detail! More importantly, the terminology didn't necessarily convey the function of the device in the network. But since it was still important to understand the type of switching and grooming granularity available in the network, the accepted shorthand seems to have become: DXC 4/4 and DXC 4/1 for SDH; DXC 3/3 and DXC 3/1 for Sonet. In these descriptions there's no specific reference to the aggregate rate.

More recently, RHK Inc. has introduced a new taxonomy for optical switches of various kinds in its market reports, and the “DXC M/n” terminology seems to be restricted to classifying conventional DXCs.
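As a sanity check on the notation, here's a small decoder sketch that expands SDH-style DXC descriptions into rates. The rate table is condensed from the definitions above, and the mapping of the trailing "1" to VC12 is an assumption based on the VC1 caveat in the list:

```python
# Rough decoder for SDH-flavor "DXC M/n" descriptions; rates in Mbit/s.
VC_LEVEL_RATES = {
    "4": 155.52,   # VC4 level (STM1 payload)
    "3": 51.84,    # VC3 level (roughly STS1-sized)
    "1": 2.048,    # taken here to mean VC12, i.e. an E1 (assumption)
    "0": 0.064,    # DS0, 64 kbit/s
}

def describe_dxc(notation: str) -> str:
    levels = notation.split("/")
    aggregate, granularity = levels[0], levels[-1]
    return (f"DXC {notation}: {VC_LEVEL_RATES[aggregate]}-Mbit/s aggregate level, "
            f"grooms down to {VC_LEVEL_RATES[granularity]} Mbit/s")

print(describe_dxc("4/1"))  # the classic SDH grooming crossconnect
print(describe_dxc("4/4"))  # switches whole VC4s only -- no sub-rate grooming
```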

Next Page: Page 5: Switch Granularity

It's clear that switches in different parts of the network, at different points in the bandwidth hierarchy, will need different levels of grooming granularity. In addition, the types of traffic supported by a particular service provider will have a big influence.

For example, an ISP that connects router interfaces at 2.5 and 10 Gbit/s is unlikely to need 52-Mbit/s grooming. Carving up the wavelength into smaller channels would make it tougher to carry large flows of traffic efficiently, and so would be counterproductive. In contrast, carriers with large numbers of low-rate connections (where low rate is a debatable figure – less than 620 Mbit/s?) obviously have a strong requirement to groom at these rates.

Figure 6 shows an optical switch that grooms at 2.5-Gbit/s granularity. As you can see, the switch can handle space-switching of streams from an input to any given output, but it cannot sub-multiplex traffic from any of the 2.5-Gbit/s inputs in order to switch it onto the lower-speed, 620-Mbit/s connections. The argument is that in the core, or near-core, of the network this level of granularity is not needed. This kind of switch can, however, groom 2.5-Gbit/s signals into a higher-speed channel such as 10 Gbit/s or even (one day) 40 Gbit/s.

One argument used against coarse-granularity switches is that 2.5 Gbit/s is already up to an entire wavelength's worth of bandwidth – so why not use all-optical switches, rather than the more expensive and power-hungry OEO switches, for this application?

Once again, the main advantage of any OEO switch is grooming, and an all-optical switch would not be able to combine 2.5-Gbit/s channels into a higher-speed signal.

We can carry this argument down the hierarchy, too. If a carrier has voice traffic in its network (to help pay for the loss-making IP traffic), then 2.5 Gbit/s is far too coarse.

The tradeoff for the switch manufacturer is that adding granularity means more electronics, which takes up more space, costs more money, and uses more power. ASICs can help here, of course, by cramming the functions of many general-purpose chips into a smaller space.
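A back-of-envelope way to see that tradeoff is to count the entities the fabric must manage. For a hypothetical switch with 64 ports at 2.5 Gbit/s:

```python
# How many switchable entities an N-port fabric must track at each granularity.
PORTS = 64  # 64 x 2.5-Gbit/s (OC48/STM16) ports; port count is hypothetical

GRANULARITIES = {
    "2.5 Gbit/s (whole port)": 1,   # entities per 2.5-Gbit/s port
    "620 Mbit/s (OC12/STM4)": 4,    # four OC12s per OC48
    "52 Mbit/s (STS1)": 48,         # forty-eight STS1s per OC48
}

for name, per_port in GRANULARITIES.items():
    print(f"{name:26s}: {PORTS * per_port:5d} switchable entities")
# 64 at whole-port granularity, 256 at OC12, 3,072 at STS1 -- the state and
# crosspoint count grow with granularity, hence more silicon, space, and power.
```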

The first generation of 2.5-Gbit/s crossconnect chips did not have any lower-order granularity, which is where the 2.5-Gbit/s granularity switch came from in the first place. It's only recently that 52-Mbit/s granularity functions have been announced in merchant silicon. So some vendors have been caught at the wrong end of the chip development cycle and have had to market their way out of the problem.

In Figure 7 you can see an intermediate granularity: the ability to groom 620-Mbit/s (STM4/OC12) signals. Each connection is still made at 2.5 Gbit/s, but now the switch is able to “see” the embedded 620-Mbit/s channels and switch them as required. Note the color coding for the original input port, and the pattern to indicate a specific 620-Mbit/s signal. This could represent a drop operation, or a switch from a high-speed mesh trunk onto a lower-speed regional ring.

But the finest granularity of grooming in optical switches today has to be the 52-Mbit/s (STS1/STM0) level. Now that off-the-shelf ASICs are becoming available with STS1 grooming as standard, we're sure to see more of this capability appearing.

In Figure 8 you can see the effect of STS1 grooming. In the case of the red signal entering on the upper-left 2.5-Gbit/s port, we're able to switch it onto an appropriate timeslot on the lower-right 620-Mbit/s port. Meanwhile, we're grooming the green 620-Mbit/s signal onto another 2.5-Gbit/s outbound channel.

So what is the “right” answer for grooming granularity? In fact, there isn't one single answer. Carriers must choose the appropriate granularity for their bandwidth hierarchy, the nature of their traffic, and the kind of protection they need. But the good news is that emerging multilayer switch architectures may provide a more graceful transition up and down the grooming hierarchy than is possible today.

Next Page: Page 6: The GMPLS Hierarchy

So far I've described the taxonomies of optical switches from the old DXC days, along with the emerging terminology used by industry analysts.

There's also a taxonomy defined within the Generalized MPLS (GMPLS) Architecture Internet Draft, which is intended to define the capabilities of the different forms of “optical switch” covered by the scope of GMPLS.

The details of this hierarchy were covered in Scott Clavenna's report on Optical Signaling Systems, but the hierarchy has been updated a little since then:

  • PSC: Packet Switch Capable

  • L2SC: Layer 2 Switch Capable

  • TDM: Time Division Multiplexing capable

  • LSC: Lambda Switch Capable

  • FSC: Fiber Switch Capable

One difference between this and Scott's diagram is the addition of the Layer 2 Switch Capable (L2SC) interface, which was added to the architecture draft when it moved into the CCAMP (Common Control and Measurement Plane) working group. (One of the dangers of describing Internet Draft content is that, by Internet Engineering Task Force (IETF) definition, any Internet Draft has a life of only six months.)

Using this hierarchy, it's possible to understand why the next generation of optical switches is often described as “multilayer.” Each of these hierarchy layers becomes optimal at a specific level of the network hierarchy.

For instance, basic fiber grooming can be achieved using an automated fiber patch panel – an FSC switch. Wavelength grooming requires LSC switches, which have visibility into selected wavebands or wavelengths, and so on.
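Because each layer can carry connections from the finer layers beneath it, the hierarchy implies a natural nesting order. Here's a minimal sketch of that ordering; the numeric values are arbitrary and only encode finest-to-coarsest:

```python
from enum import IntEnum

# GMPLS interface switching capabilities, ordered finest to coarsest,
# as listed in the architecture draft. Numeric values are illustrative only.
class SwitchCap(IntEnum):
    PSC = 1   # Packet Switch Capable
    L2SC = 2  # Layer 2 Switch Capable
    TDM = 3   # Time Division Multiplexing capable
    LSC = 4   # Lambda Switch Capable
    FSC = 5   # Fiber Switch Capable

def can_nest(inner: SwitchCap, outer: SwitchCap) -> bool:
    """A finer-grained connection can be carried inside a coarser-grained one."""
    return inner < outer

print(can_nest(SwitchCap.TDM, SwitchCap.LSC))  # True: an SDH circuit rides a lambda
print(can_nest(SwitchCap.FSC, SwitchCap.TDM))  # False: a fiber can't ride a timeslot
```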

The difference we see by applying a GMPLS control plane to this network is that, for the first time, we may be able to automate grooming across several functional layers – and among different vendors' equipment in the network.

A major limitation of the IETF's approach to GMPLS has been the lack of emphasis on inter-layer or inter-carrier control interfaces. For example, in a coast-to-coast Sonet link, or even a full wavelength service, how far does a connection travel before it hits a carrier boundary? Both the Optical Internetworking Forum (OIF) and the ITU are working in this area, and their work will probably be incorporated into the IETF protocols in the future.

Next Page: Page 7: Opaque vs Transparent

One of the attractions of transparent, all-optical networks is that they can carry payloads that don't conform to standard transmission formats. Examples of such payloads are ESCON (Enterprise System Connection) and Fibre Channel, which are not compatible with most equipment available for public network transmission. But customers are increasingly interested in moving them across metro areas and beyond, without waiting for standards to evolve. In an all-optical network, as long as we can shine the optical signal into the network, it will be carried across it without any attempt to process or interpret the digital information.

Transparency carries a price in terms of network management. If the system doesn't know what it's carrying, then monitoring becomes difficult. And, of course, grooming becomes impossible – how do you combine or separate signals if you don't understand them?

A possible answer created by the ITU is the Digital Wrapper, defined in G.709. This provides for the encapsulation of the payload information, “digitally wrapping” it with an additional set of overhead bytes that carry information for optical performance monitoring, section trace, forward error correction, and so on, in a way analogous to the SDH Section Overhead (SOH). The forward error correction option in G.709 allows the receiver to detect, and recover from, errors in transmission. But this comes at a price: a slightly higher transmission rate is needed to carry the same amount of payload, which may have an impact on the amplifier/regenerator design of the network.
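The rate expansion is easy to quantify from the G.709 frame structure: each frame row is 4,080 bytes, of which 3,808 carry payload, so wrapping an STM16/OC48 raises the line rate by a factor of 255/238. A quick check:

```python
# G.709 digital-wrapper rate expansion for an STM16/OC48 client (OTU1).
# Each OTU frame row is 4,080 bytes: 16 overhead + 3,808 payload + 256 FEC.
FRAME_BYTES_PER_ROW = 4080
PAYLOAD_BYTES_PER_ROW = 3808

STM16_RATE = 2.48832  # Gbit/s client rate

otu1_rate = STM16_RATE * FRAME_BYTES_PER_ROW / PAYLOAD_BYTES_PER_ROW  # x 255/238
print(f"Wrapped line rate: {otu1_rate:.3f} Gbit/s")  # about 2.666 Gbit/s
print(f"Rate expansion: {FRAME_BYTES_PER_ROW / PAYLOAD_BYTES_PER_ROW - 1:.1%}")  # ~7.1%

# That roughly 7 percent faster line rate buys the FEC coding gain, and is why
# the wrapper can affect amplifier and regenerator design.
```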

An extension of the digital wrapper provides a higher-order multiplexing structure, which would allow multiple independent optical formats, such as SDH/Sonet, PoW (packet on wavelength), and Gigabit Ethernet, to be combined onto a single wavelength. This arrangement potentially supports a “leased-line/private-wire” service, in which the underlying traffic may or may not be IP, but the nature of this traffic need not be visible to the network operator.

One of the results of this evolution will be the need to support multiplexing and grooming within a G.709 multiplexer hierarchy, in addition to both legacy Sonet/SDH multiplexing and emerging wavelength multiplexing. Note that G.709 is a very new standard from the ITU, and there isn't much real-world experience with it as yet. It's not clear which of the FEC and multiplexing features will be used, or how widely, or what changes they will cause in the way we design optical networks.
