Optical Signaling Systems

Can GMPLS help carriers automate their networks? * Technology tutorial * Who's doing what? * How far's it got?

January 8, 2002


Optical networking may have gone from boom to bust in the past year, but the need for a totally new way of controlling carrier networks has never been stronger.

In order to survive and prosper, service providers need to make more money from their existing infrastructures, and they also need to slash costs. Well, guess what? That's exactly what new signaling technologies like GMPLS (generalized multiprotocol label switching) and standardized interfaces like the Optical UNI (user network interface) aim to make possible.

In essence, these protocols promise to automate the operation of telecom networks so that capacity can be used more efficiently and services can be provisioned much more rapidly, from remote consoles or via requests from client gear. That promises to radically reduce the need to send engineers out into the field to manually reconfigure equipment.

Of course, delivering on this promise isn't going to be easy. A lot of the technology is still in its infancy, and what sounds great in theory often turns out to be problematic in practice – not least because of the huge amount of existing infrastructure that will need to be brought under the control of new signaling protocols.

All the same, it's worth remembering that the world's telecom infrastructure is undergoing radical and unstoppable change. It's shifting from being voice- to data-centric, and that's going to require a massive structural overhaul anyhow. This will have to include new signaling systems.

It's also worth noting that today's voice-centric telecom infrastructure is already automated. Signaling System No. 7 (SS7) makes it possible to make phone calls without any human intervention on the part of telephone companies – although the "No. 7" bears witness to the fact that coming up with a widely accepted signaling system is likely to take several attempts.

GMPLS aims to do more than SS7, in that it promises to automate the layers beneath Internet Protocol (IP), including the optical layer, which is now manually (and painstakingly) configured in today's telecom networks.

It's an ambitious goal, and the jury's still out on whether it's achievable. This report provides a status update on the considerable progress that's already been made. A hyperlinked summary follows:

Page 2: Why Signaling Is Sexy

  • Why today's multilayer telecom nets can't cut it

  • How signaling might solve the problem

  • How it addresses a key issue for carriers – making more money while spending less

Page 3: Switching Is Everything

  • How carrier sites became a mess of multiplexers

  • Why networks have to become more dynamic

  • How the idea of automation using optical switches evolved

Page 4: GMPLS and the Optical UNI

  • How GMPLS builds on MPLS and has a much wider remit

  • Distinguishing among data, signal, and control planes

  • Why the optical Internet needs signaling as well as routing

Page 5: History Lesson: Sycamore

  • Why it focused on software rather than hardware for its optical switch

  • The choices it faced when picking its control plane architecture

  • Why it used Linux as an operating system for its switch

Page 6: A GMPLS Taxonomy

  • How GMPLS embraces every layer in telecom nets

  • How different types of interface are nested within label switched paths

  • Why GMPLS is likely to be adopted step by step, as carriers automate different aspects of their networks

Page 7: The Control Plane Defined

  • What carriers want

  • How this boils down into six basic control plane functions

  • Options for implementing those functions



Page 8: Applications

  • Releasing capacity locked up in the network

  • Eliminating labor intensive provisioning processes

  • Bandwidth on demand and optical VPNs



Page 9: Optical Architecture I: Overlay

  • Opposing architectural models emerge: overlay vs peer

  • Pros and cons

  • Where the UNI and NNI fit in



Page 10: Optical Architecture II: Peer to Peer

  • Key differences with the overlay model

  • Simplifies coordination and fault handling

  • But can it scale?



Page 11: Optical Architecture III: Hybrid

  • Peer-to-peer within separate domains

  • Best of both worlds?

  • GMPLS facilitates a mixed environment



Page 12: Specs and Standards: OIF

  • Founded by Cisco and Ciena

  • Aims to bring together IP and optics

  • Biggest achievement to date: the Optical UNI



Page 13: Specs and Standards: ODSI

  • Founded by Sycamore

  • Got things going on the Optical UNI

  • Mission completed



Page 14: Specs and Standards: IETF

  • Driving force behind GMPLS

  • Feeds work into "big" standards bodies like ITU

  • Cisco very influential



Page 15: Specs and Standards: ITU

  • Slow moving but highly influential

  • Redefining some stuff already dealt with by the IETF and OIF

  • Dithering about GMPLS



Page 16: The Future

  • GMPLS adoption likely to be piecemeal

  • GMPLS's success isn't linked to MPLS's future

  • GMPLS-based services could be generating revenues in 2002



Page 17: Who's Doing What

  • A sampling of developments from 10 vendors

  • Hot startups identified

  • Developments from Ciena, Cisco, Nortel, and others featured



Next Page: Why Signaling Is Sexy

We often hear about an Optical Internet, but, in fact, one doesn’t yet exist. In today's service provider networks, very distinct layers cohabit, each managed separately, each designed to perform one function as well as possible.

Routers process packets, ATM switches manage connections and quality of service for these packets, and optical transport systems create the pipes through which these packets flow. Each, therefore, has its own unique network management system and communications protocols.

This all works well enough until bandwidth demands start doubling each year and customers start clamoring for faster circuits, delivered yesterday. These networks simply can’t scale fast enough if they must deliver services that cross distinct management domains, each of which speaks a different language.

Operators need a solution that scales and that allows them to reconfigure the network to match changing traffic patterns. This is a fact all carriers must live with in the Internet era: Traffic patterns are unpredictable, and traffic volumes are huge. Add to this the fact that many different types of traffic are now entering the network at optical speeds (including ATM, gigabit Ethernet, high-speed TDM circuits, and IP), and a carrier needs a way to manage all these protocols and lambdas from simplified control structures – if not a single unified control plane, then perhaps two: one for the packet layer, another for the transport layer.

Where today a carrier may have to enlist three different management systems to provision a single IP connection across the country, with many manual interventions required along the way, tomorrow the hope is that a real “point-and-click” operation can select conduit, fiber, wavelength, optical switch ports, router ports, and a variety of restoration paths in a single step.

The important question facing engineers developing this technology is how far to push it. Should routers, dynamically provisioning lambdas for themselves, be making million-dollar decisions on their own? Can data equipment be expected to manage restoration in the transport layer? Is optical signaling really the silver bullet that finally allows a migration to mesh networking in the optical core?

Enter the optical control plane and its promise of a new unifying networking architecture that is equally adept at managing connections all the way from the fiber beneath the streets to individual packet flows within a single circuit.

What, in short, does this optical control plane do? And what’s the big deal about GMPLS (generalized multiprotocol label switching)?

The simple answer is this: It can make carriers money, and it can save carriers money. First, carriers can start serving customers' requests for capacity more quickly, and in multiple "grades," unlocking new revenue streams. Second, they can get more out of their existing networks, unlocking idle capacity and reducing operational expenditures.

The technology behind all this is quite complex, and debates are currently raging within the networking community about which protocols to use, but it all comes back to a simple requirement: to turn optical bandwidth into revenue.

There will be lots of talk about mesh-based restoration, IP and optical integration, optical VPNs (virtual private networks), and the like, but it comes back to the simple fact that large carriers, particularly wholesalers, have a difficult time remaining solvent in this market if they can’t differentiate themselves – and with optical bandwidth the only way to do that is by adding intelligence.

Sure, big carriers can continue to survive without GMPLS or optical signaling, and many will for years while they wait for mature standards; but the fact remains that carriers need this in one form or another, and eventually they will adopt it. What we plan to examine here is what’s on the table now and what's cooking in the oven, ready to come out soon.

An important point to make here is that this transition to optical signaling doesn’t have to take place in one profound leap, but can progress in stages, according to a carrier’s needs.

The first step is already taking place in many carrier networks today. It consists of improving operational support systems (OSSs) to allow for real point-and-click provisioning of optical bandwidth.

The second step a carrier may take moves it beyond the management of optical bandwidth and begins to collapse the boundary between the data services layer of the network and the optical layer. This step involves implementing a user network interface, or UNI, that allows network equipment to “ask” for connectivity across the optical network by signaling for it. This requires specifications for signaling and provisioning, which are on the way from the Optical Internetworking Forum (OIF). In this step, signaling inside the optical "cloud" remains proprietary, whereas outside the cloud a standardized interface allows client devices attached to optical systems to talk to each other as though they were neighbors, speeding service creation and improving resource management.

The third step – and the big payoff – is the complete standardization of signaling and control planes. With this in place, carriers can begin to collapse layers of the network, either into two (a packet layer and a transport layer) or into one, adopting a unified control plane that touches all the equipment in a carrier network, allowing it all to communicate in real time, dynamically, asking for bandwidth, connections, grades of restoration, and just about anything else from any layer of the network. This sounds like a panacea, and it is probably a half-decade off, but much of the work is already being accomplished in standards bodies, and so far people like what they see.

There is much to digest when looking into the standardization of the Optical Internet, and this report intends to flesh out the rapidly evolving effort to allow the transport and services networks to start talking.

It’s as if a psychologist has come in to fix a bad marriage, with a long troubled history. The first step is communication, but that’s only the beginning of the real work. The idea of the Optical Internet is in fact the dream of a perfect marriage between unlikely mates.

One is left-brained, thinking very linearly: “A few more OC48s between New York and Boston will meet our capacity requirements for 18 months.” The other is classic right-brained, thinking in abstractions, assembling all manner of disparate information into coherent wholes: “Wouldn’t it be nice if I could have an unprotected OC12 for a few hours in the afternoon when we launch this auction, then a GigE for a few hours tonight to back up the data center to the mirror in New Jersey?”

It’s not as though these two dislike each other; it’s just that they lack a common language.

This is obviously a major undertaking that is just getting started. What’s remarkable, however, is how well it’s progressing.

Have a look at Vinay Ravuri's GMPLS/MPLS Page and see all the papers published and collected there.

All the big vendors are chiming in, and with such massive stakes, one could imagine major political battles in the standards organizations, the stalwart vendors flexing their muscles and forcing out the startups. But things are different these days. For one, this is the age of the Internet, and the lay of the land is a bit more “Wild West.”

The Internet Engineering Task Force (IETF) is involved, and this group doesn’t act much like a formal standards body, making it much easier for all voices to be heard.

Secondly, this is the age of the startup, and most of the key talent in the telecommunications industry is widely distributed, not concentrated in New Jersey or Ottawa.

That said, the Intelligent Optical Network is at hand, with a promise of delivering new service types, distributed restoration, automated network provisioning, and much improved OSS scaleability.

The challenge is getting carriers to accept and adopt these new standards and new ways of managing their transport networks.

This is no small task, considering carriers have built these transport networks over decades, have designed their own custom network management systems, and work from business models that have no place holder for optical services, as of yet. This will be a long process, but fortunately it's one that can incrementally add value to a network.

What the ultimate architecture of optical networks looks like remains open for speculation, but we can be certain that the transport network will never be the same. Optical layer switching and signaling has arrived, and with it the ingredients for a transformation, the latest “paradigm shift” in networking.

Next Page: Switching Is Everything

Optical signaling, in short, puts the intelligence in “Intelligent Optical Networks.”

Heretofore, optical networks were little more than high-priced plumbing, configured in point-to-point connections between central offices and network hubs, and thus rather “dumb.” Optical fibers were strung between Sonet or SDH multiplexers from town to town or country to country and never moved. Restoration couldn’t be simpler; for each path over which an optical channel was sent, another waited idly by in case of an errant backhoe or equipment outage.

This transmission network was quite stable, reliable, and scaled into the terabits of capacity per fiber using dense wavelength-division multiplexing (DWDM).

It was difficult to see when this model would become stressed, but the Internet took it by surprise. With little warning, carriers found themselves provisioning lots of fibers and wavelengths to ISPs and data-oriented service providers in a competitive, fast-moving marketplace.

The large network hubs the major carriers managed in their backbone became a mess of multiplexers, digital crossconnects, and fiber patch panels, with hundreds of yellow fiber cables strung throughout them in a tangled web.

Their new customers wanted capacity fast, and worse yet, they wanted it to be more “flexible,” meaning they could not predict with the accuracy associated with voice services where and when demand would arise.

Internet traffic flies wildly about: An email sent to a next-door neighbor may travel a thousand miles to get there as it hops from one ISP network to another and back; Napster (well, at least a year ago) and Gnutella traffic engulfs campus networks with file transfers; AOL users send digital pictures to grandparents; and market analysts watch broadcasts of earnings announcements on RealPlayer.

This unpredictability of traffic made most ISPs ask for more bandwidth than they needed, if only to prevent massive congestion when activity flared up around hot Internet sites. This fueled the massive capacity explosion of the past five years, feeding the fortunes of numerous optical network vendors and carriers and creating a frenzy around all things optical.

But this model didn’t (and doesn’t) scale. It was too costly to overprovision networks to carry low-revenue data services, and it became apparent that traditional ADM-based Sonet/SDH rings lack the appropriate kind of connectivity to support rapid provisioning of high-speed circuits.

In just a few months in late 2000 the whole house of cards came tumbling down. Carriers spent too much for too little in return, and the many aggressive ISPs and data services providers found themselves without financing for further network builds. The question arose: how to scale an optical network cost effectively in the Internet era?

The answer? Optical switching.

In 1998, the team at Lightera (now part of Ciena Corp. [Nasdaq: CIEN]) was damn smart – prescient, even. They saw this coming long before 2000 and built an optical switching system to sit at these congested network nodes and manage all these formerly manual interconnections dynamically. Large-scale "Bandwidth Managers" had been created by a few vendors, but these had very low density and often also included data switching matrices, trying to be all things to all parts of the network. They simply weren't appropriate for core nodes with hundreds of optical connections.

Instead of having to terminate a fiber first on a Sonet multiplexer, then patch it to a DCS (digital crossconnect), then patch it to another Sonet multiplexer, then patch it to a DWDM system, without a hope of ever changing the configuration once it’s established, a carrier could run a fiber from each output on a DWDM system to Lightera’s switch and manage connectivity in that hub from a console, not a patch panel.

That application makes so much sense that the success of the optical switching market is a foregone conclusion.

Further applications abound, however, because these switches can communicate with each other in a network using optical signaling. This is key. Once these switches start communicating (describing their resources, connectivity, and network topology to others) the optical network goes from being dumb to intelligent. The first step toward intelligent optical networking takes place.

With optical switches and optical signaling in place, a carrier can evolve its core network protection from traditional Sonet ring-based schemes to a full or partial mesh. The transport network, therefore, is able to evolve from isolated islands of connectivity, to an intelligent network that is reconfigurable across a broad geographic area and many network domains.

The benefits of mesh-based protection include improved network resource efficiencies and the ability for carriers to offer multiple “grades” of optical services at different cost points. Mesh networks are complex and difficult to manage, and the one thing they need more than anything else is “intelligence,” thus, a control plane.

Figure 1: The many layers of a telecom network present challenges to provisioning, and require multiple network management systems.

MPLS to the rescue, maybe…

While Lightera was designing its optical switch and Tellium Inc. (Nasdaq: TELM) was building its for the government-sponsored MONET (multiwavelength optical network) project, Internet routing engineers were developing Multiprotocol Label Switching (MPLS) to improve the way IP packets were routed in the Internet.



Scaling IP networks was looking to be just as hard as scaling optical networks, so a method was required that would ease the burden on core routers, where traffic volumes are highest, by relieving these routers of the task of examining each packet header in its entirety; instead, they simply read a “label” and pass the packet along according to label-switching rules.

In short, MPLS was created as a combination of a forwarding mechanism (label switching), connection establishment protocols, and defined mappings onto Layer 2 technologies. Thus, MPLS could behave as a next-gen ATM, improving routing in the core of the network by putting traffic where the bandwidth is, and enabling a range of new services, including network-based VPNs, circuits over MPLS, and differentiated data services.

The foundation of MPLS is constraint-based routing, which gives IP devices the ability to establish and maintain paths through the network that are optimal with respect to a predetermined set of metrics and constraints. These constraints can be either resource-related, such as bandwidth, or administrative, such as restricting paths to particular links.
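
To make this concrete, here's a minimal sketch of how constraint-based path selection can work: prune the links that violate the constraints, then run an ordinary shortest-path computation over what survives. The topology, capacities, and function names below are invented for illustration, not taken from any vendor's implementation:

    import heapq

    # Toy topology: (node, node) -> available bandwidth and routing metric.
    links = {
        ("A", "B"): {"bw": 10, "cost": 1},
        ("B", "C"): {"bw": 2,  "cost": 1},
        ("A", "C"): {"bw": 10, "cost": 5},
    }

    def constrained_path(src, dst, min_bw, forbidden=()):
        # Resource constraint: drop links with too little free bandwidth.
        # Administrative constraint: drop explicitly forbidden links.
        ok = {l: a for l, a in links.items()
              if a["bw"] >= min_bw and l not in forbidden}
        adj = {}
        for (u, v), a in ok.items():           # bidirectional adjacency map
            adj.setdefault(u, []).append((v, a["cost"]))
            adj.setdefault(v, []).append((u, a["cost"]))
        heap, seen = [(0, src, [src])], set()  # plain Dijkstra from here on
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, c in adj.get(node, []):
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
        return None                            # no path satisfies the constraints

    # A 5-unit request can't use the B-C link (only 2 units free), so the
    # computation falls back to the costlier direct A-C link.
    print(constrained_path("A", "C", min_bw=5))   # (5, ['A', 'C'])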

Without MPLS, routers tend to operate in “hot-potato” mode, forwarding packets to their nearest neighbor with little regard for the effect on network congestion. With MPLS, instead of just interacting with neighbors, a router can participate in the network as a node with the dynamic capability of provisioning Label Switched Paths (LSPs) in real time to improve traffic engineering. Rather than being pushed around until they pop out at their destination, packets can now travel along routes that are known in advance to have certain transmission characteristics. Sometimes this can be a bit of a pain to implement, but it's a great idea nonetheless.

The Internet has demonstrated that the decentralized control plane of MPLS is required to reduce provisioning and planning costs while remaining extremely robust. The essential feature of MPLS is to apply virtual circuit notions (such as those used in frame relay) to IP networks to support quality of service (QOS) and traffic engineering (how to get more out of your network and reduce congestion).

The technologies that comprise MPLS:

  • Link state routing protocols, which are used to obtain network topology information – i.e., what links and nodes are in the network and what their essential characteristics are.

  • Signaling/label distribution protocols, which are used to set up virtual circuits (or LSPs) across the MPLS network.

As in ATM and Frame Relay, these labels have only local significance to the switch, called a label switching router (LSR), and do not require the router to perform any time-consuming route-table lookup.
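
The label-swap operation itself is almost trivially simple – which is the point. Here's a rough sketch of an LSR's forwarding table; the port numbers and label values are invented for illustration:

    # Label forwarding table: incoming (port, label) -> (outgoing port, label).
    # Labels have only local significance, so each LSR keeps its own table and
    # the forwarding path needs one exact-match lookup, not a longest-prefix
    # search through an IP routing table.
    lfib = {
        (1, 17): (4, 92),   # label 17 arriving on port 1 leaves port 4 as 92
        (2, 17): (3, 55),   # the same label value on another port is distinct
    }

    def forward(in_port, in_label, payload):
        out_port, out_label = lfib[(in_port, in_label)]
        return out_port, out_label, payload    # swap the label and send it on

    print(forward(1, 17, b"ip-packet-bytes"))  # (4, 92, b'ip-packet-bytes')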

Fundamentally, this means that in MPLS the forwarding information is separate from the content of the IP header; and because forwarding is decoupled from the data being carried, any kind of data can be mapped into LSPs (ATM or Frame Relay, for example), which is already making MPLS as attractive to edge/aggregation systems as it is to core routers.

This, in essence, makes core routers more like switches, which are known for their speed.

Additionally, MPLS evolved the fundamental architecture of routed networks by separating the forwarding plane (moving packets based on label lookups) from the control plane (deciding where they should go).

With this separation in place, a “best-effort” IP network can now support a variety of protection and restoration functions, provide some measurable level of QOS, reduce or eliminate the need for an ATM layer in the network, and improve the IP network’s long-term scaleability.

Next Page: GMPLS and the Optical UNI

With all the work underway developing Multiprotocol Label Switching (MPLS) (some would argue too much work) and its control plane, it became rather obvious to some folks in the optical networking business that the same control plane could be abstracted to the lower layers of the network, namely Sonet/SDH and the DWDM layer.

If core routers are being simplified with a new switching scheme that relies on a separate control plane, and optical transmission networks are being simplified with the addition of a switch, then perhaps this new control plane for routers could be applied equally well to both routers and optical switches.

Sure enough, it can – and the Internet Engineering Task Force (IETF) has been off and running, developing its Generalized MPLS (GMPLS) for just this purpose, applying MPLS control plane techniques to optical switches and IP routing algorithms to manage lightpaths in an optical network.

The key distinction to understand between MPLS and GMPLS is that, whereas the MPLS control plane is logically separate from the data plane, in GMPLS it can also be physically separate from the signals it controls. This allows a GMPLS control plane to manage connectivity and resources across multiple layers of the network, from fibers to wavelengths to Sonet circuits.

Thus, it is critically important to recognize two aspects of the rise of intelligent optical networks:

  • 1: Intelligence is derived from a distinct control plane, with the ability to manage a variety of connection functions within the transport network; and

  • 2: The optical switch is the key network element where this intelligence will initially be embedded.

Think of Sycamore Networks Inc. (Nasdaq: SCMR). Despite the bruising they’ve taken in the public market lately, they must be given credit for giving life to the idea of intelligent optical networking. Other vendors may have thrown the term about, but no other vendor was so entirely committed to it.

The two key elements of optical signaling today – an optical UNI and GMPLS – are now on everyone’s lips and PowerPoint slides. But Sycamore came out of the gate with these, calling its version Broadleaf (a lite version of GMPLS in “branded” disguise). Sycamore also founded the Optical Domain Service Interconnect Coalition (ODSI), which developed a precursor to the optical UNI.

Sycamore’s gambit may have backfired or succeeded, depending on whom you ask. In one sense, Sycamore's enthusiasm for signaling got the market moving forward toward developing standards, while at the same time eliminating the possibility that its home-grown signaling schemes would become the de facto standards. The big vendors weren’t going to let this upstart steal the spotlight without a fight.

It’s important to note here that the emerging Optical Internet will require both signaling and routing. The signals provide the communication between formerly disparate network domains; and the routing, in the form of extensions to existing IP routing protocols, is there to provide enhanced capabilities in managing optical network resources, such as lambdas and Sonet channels. Optical signaling and routing remains in its infancy, but the fact that it is drawing upon recent advances in MPLS control plane technology has accelerated its momentum considerably.

The idea to keep in mind throughout this report is that optical signaling and routing is only part of a larger movement toward unifying control planes for multiple network layers, which is at the foundation of the GMPLS effort. Classical MPLS (meaning, derived from IP network elements) and optical MPLS (those extensions to classical MPLS that control optical network elements) are only subsets of GMPLS. Importantly, all future efforts in MPLS and optical layer management will now be developed under the rubric of GMPLS.

So the payoff of all this work on optical signaling and routing is clear: Carriers will be able to manage their networks with much greater efficacy. Improved provisioning should follow, along with improved network resource utilization and an increasing number of optical services. Simply put, optical signaling improves a carrier’s bottom line and top line. It saves them from themselves.

Next Page: History Lesson: Sycamore

Looking at how Sycamore Networks Inc. (Nasdaq: SCMR) went about selecting its control plane architecture is, at least in the small circle of telecom historians, a rather fascinating story. It illustrates in no small way how startups have radically altered this market, moving technological developments forward at previously unheard-of speeds – applying the mix of talent, competitive urgency, and intellectual daring and hubris that characterizes all entrepreneurial ventures to solving a problem as rapidly as humanly possible.

Sycamore was third to the optical switching market, behind Lightera and Tellium Inc. (Nasdaq: TELM), and needed something to distinguish itself. The answer was obvious – software – given it had already coined the term "intelligent optical networking," and many of the founding team had established their credibility at Cascade, an Asynchronous Transfer Mode (ATM) switch startup acquired (indirectly) by Lucent Technologies Inc. (NYSE: LU).

According to the folks at Sycamore, the development of its protocol and control plane choices started in the spring/summer of 1999, with the design of its optical switch. At that stage, it looked at a few options for its architecture:

  • Proprietary. This meant the operating system and protocols could be custom designed to support unique features. However, this had to be set against difficulty hiring and training engineers to learn this new scheme.

  • Existing carrier protocols – namely, the private network-to-network interface (PNNI). This was considered but rejected: first, because it is ATM-centric, and ATM was thought to be a “legacy” technology on its way out; and second, because its addressing scheme is E.164-based (an ITU-T specification), not IP, which would necessitate translation at any IP border. There were also concerns about scaling PNNI, because it is a hierarchical architecture in which the domains, both routing and signaling, are broken into layers. Layered architectures tend to add complexity, which breeds software problems in interpretation and design, which leads to “undocumented features” – i.e., bugs.

  • TCP/IP, which offered the benefits of standardization while also being IP- rather than ATM-centric.

Having settled on TCP/IP, Sycamore had a few requisites to satisfy. It needed economies of scale in both engineering and network terms, and everything possible needed to be standards-based.

The most important decision Sycamore made was that the network would be based on IP. At the time, the most influential players in the market were Cisco Systems Inc. (Nasdaq: CSCO), Juniper Networks Inc. (Nasdaq: JNPR), and Cerent (subsequently acquired by Cisco), and there were lots of software developers who knew IP, making it easier to staff the operation and take advantage of a growing number of development tools based on TCP/IP.

Choosing the routing protocol was fairly straightforward once Sycamore had settled on IP. Open Shortest Path First (OSPF) is a robust and widely deployed protocol, so many engineers were already familiar with it. In addition, John Moy, a principal architect at Sycamore, happened to be the primary author of the OSPF specification.

The additions made to plain-vanilla OSPF were attributes specific to a lightpath. Using standard Opaque Link State Advertisements (LSAs), the switches exchange information about the lightpath, the trunk between the switches, and the switches themselves – e.g., channel, port, slot, restoration, bit error rate, circuit bandwidth, the node’s IP address for topology discovery, etc. These were the things that made automated provisioning and discovery possible (see RFC 2370 for more information on OSPF Opaque LSAs).
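
As a rough illustration of how such attributes might be packed into an Opaque LSA: RFC 2370 defines only the opaque envelope, so the TLV codes, attribute names, and simplified two-byte header below are inventions for the example, not Sycamore's actual format:

    import struct

    # Invented type codes for lightpath attributes carried as TLV triples.
    ATTRS = {"port": 1, "channel": 2, "bandwidth_mbps": 3, "restoration": 4}

    def tlv(attr, value: bytes) -> bytes:
        return struct.pack("!HH", ATTRS[attr], len(value)) + value

    body = b"".join([
        tlv("port", struct.pack("!H", 12)),
        tlv("channel", struct.pack("!H", 3)),
        tlv("bandwidth_mbps", struct.pack("!I", 2488)),  # an OC48's worth
        tlv("restoration", b"mesh"),
    ])

    # Type 10 is an area-scoped opaque LSA; the (invented) opaque type 137
    # would mark this as a lightpath-attribute advertisement. Flooding then
    # delivers it to every node in the area like any other LSA.
    lsa = struct.pack("!BB", 10, 137) + body
    print(len(lsa), "bytes of lightpath state, ready to flood")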

As for signaling in the optical switch, choosing MPLS seemed a logical extension of the choice of TCP/IP; it would add IP to the Sonet/SDH layer in much the same way MPLS adds IP to ATM’s fast-forwarding capability.

A crossconnect needs to know which time slot to use for mapping a circuit. The answer is to build a table of channel/port/slot identifiers, similar in concept to ATM VPI/VCI mapping, and add constraint-based IP routing to decide where to place the circuit (i.e., lightpath placement).
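
In table form, the idea might look something like this (all identifiers invented):

    # TDM crossconnect map: (in port, channel, time slot) -> (out port, channel, slot).
    # Conceptually the same exact-match lookup as an ATM VPI/VCI table or an MPLS
    # label table; constraint-based routing decides which entries get installed.
    xc = {
        (1, 1, 3): (8, 1, 3),   # pass slot 3 of port 1 straight through to port 8
        (1, 1, 4): (9, 2, 1),   # remap slot 4 onto a different channel and slot
    }
    print(xc[(1, 1, 4)])        # (9, 2, 1)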

As for the label distribution protocol, the choice was between constraint-based routing label distribution protocol (CR-LDP) and resource reservation protocol (RSVP).

The benefits of CR-LDP lie in its simplicity. It is a hard-state protocol, meaning that a circuit, once established, stays up until something takes it down: a fiber break (detected by a different layer, e.g., Sonet/SDH), a failure elsewhere in the path, or an explicit message from an operator to change state.

RSVP, on the other hand, is a soft-state protocol, meaning that it requires a refresh message to maintain the circuit. This refresh mechanism requires soft-state maintenance and design considerations for timer variations, which leads to complexity in the software design. An additional consideration, according to Sycamore, was that the RSVP specification wasn’t well documented, while CR-LDP was.
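
The contrast can be sketched in a few lines (the timer value is invented; real RSVP derives state lifetimes from the advertised refresh period):

    import time

    class HardStateCircuit:
        # Stays up until an explicit teardown, or until a lower layer
        # (e.g., Sonet/SDH) reports a break.
        def __init__(self):
            self.up = True
        def teardown(self):
            self.up = False

    class SoftStateCircuit:
        # Must be refreshed periodically or it silently times out.
        LIFETIME = 30.0   # seconds (illustrative)
        def __init__(self):
            self.expires = time.time() + self.LIFETIME
        def refresh(self):
            self.expires = time.time() + self.LIFETIME
        def up(self):
            return time.time() < self.expires

    c = SoftStateCircuit()
    print(c.up())   # True now; False later unless refresh() keeps arriving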

The final decision was the underlying operating system. Here the choice was between proprietary and off-the-shelf.

Proprietary creates a difficult development environment as mentioned above, and there were only a few off-the-shelf vendors to choose from, none with a sufficient debug environment.

Linux was chosen because of the availability of talent familiar with it and the stability of the platform. A universal problem in developing a new product is the availability of hardware – a problem that is even more acute in the optical space, given the cost and availability of components.

The software developers needed a way to emulate the network of switches for control traffic. Writing the switch code for a Linux operating system meant the switch software could run on a hardware platform that was readily accessible and relatively inexpensive: a desktop PC. With this in place, software development could proceed in parallel with hardware development, because the PCs could emulate the actual optical switch – which ultimately became a useful demonstration tool for customers.

Next Page: A GMPLS Taxonomy

After the new crop of optical switch companies emerged and introduced the concept of optical crossconnects and switching, efforts toward creating a standardized control plane gained momentum.

Lucent Technologies Inc. (NYSE: LU) had its LambdaRouter in the works, and AT&T Research Labs, along with Nortel Networks Corp. (NYSE/Toronto: NT) and other large equipment providers, was working on developing a standardized method of communications between switches.

Much of the work in these companies centered on bringing that control plane out of the packet realm and into the transport network. In this regard, the term "MPLambdaS" was coined, given that the initial work dealt primarily with wavelength switching. After some study, however, it was clear that this control plane could be abstracted further to all layers of the network. Thus, a “generalized” concept of MPLS was formed – hence, GMPLS.

GMPLS assumes a single, common control plane, derived from MPLS, that is extended to cover network elements that make forwarding decisions based not on the information carried in packet or cell headers, but rather on time slots, wavelengths, or physical ports. The current drafts of GMPLS signaling before the IETF describe four types of interface:

  • 1: Packet Switch Capable (PSC)
    Interfaces that make forwarding decisions based on information in the packet or cell header. These interfaces recognize packet or cell boundaries and can make forwarding decisions based on the content of the appropriate MPLS header. Importantly, they are also capable of receiving and processing routing and signaling messages on in-band channels. Examples include interfaces on routers, ATM switches, and Frame Relay switches that have been enabled with an MPLS control plane.

  • 2: Time Division Multiplexing Capable (TDMC)
    These interfaces also recognize bits, though they focus on the repeating, synchronous frame structure of Sonet/SDH. They forward data on the basis of a time slot within this structure and are capable of receiving and processing control plane information sent in-band with the synchronous frames. Examples are interfaces on Sonet/SDH add/drop muxes, digital crossconnects, and OEO optical switching systems.

  • 3: Lambda Switch Capable (LSC)
    These interfaces do not need to recognize bits or frames. They forward data based on the wavelength on which the data is received, as in an optical crossconnect or lambda switch, though they can also deal with wavebands. They are not assumed to be capable of receiving and processing control plane information on an in-band channel. Examples are interfaces on an all-optical add/drop mux (OADM) or optical crossconnect (OXC).

  • 4: Fiber-Switch Capable (FSC)
    These interfaces do not need to recognize bits or frames, and do not necessarily have visibility of individual wavelengths or wavebands. They forward data based on the position of the data in real-world physical space, such as interfaces on an optical crossconnect that operate at the level of single or multiple fibers. These would be found on automated fiber patch panels, fiber protection switches, or photonic crossconnects that operate at the level of a fiber.

The nice thing about GMPLS is the wide variety of label-switched paths it can support. Whereas with MPLS, an LSP begins and ends on MPLS-enabled routers, in a GMPLS network LSPs can range from a Sonet/SDH circuit to a lambda-based optical channel trail to a physical fiber path. Additionally, each one of these can be nested within another, allowing a great deal of flexibility when setting up and tearing down LSPs within a network.
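
One way to picture that nesting is as a strict ordering: packet LSPs ride inside Sonet/SDH circuits, circuits inside lambdas, lambdas inside fibers. A toy sketch (GMPLS expresses this through interface switching capabilities, not code like this):

    # The four interface classes, ordered from finest to coarsest granularity.
    ORDER = ["PSC", "TDMC", "LSC", "FSC"]

    def can_nest(inner: str, outer: str) -> bool:
        # A finer-grained LSP can be nested inside a coarser-grained one.
        return ORDER.index(inner) < ORDER.index(outer)

    print(can_nest("PSC", "LSC"))    # True: a packet LSP inside a wavelength
    print(can_nest("FSC", "TDMC"))   # False: a fiber can't ride in a time slot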

For more on GMPLS taxonomy, see All-Optical Switching Tutorial, Part 1.

If it sounds complex, it is, but this kind of administration already occurs today; it’s just that it occurs manually, and that’s what takes so long when creating a service that traverses a large network. With GMPLS, the idea is that through a single administrative order, a service can be created from the fiber through the router interface automatically.

If it sounds too good to be true, it probably is – but carriers will be able to add these automated provisioning features incrementally, starting in the places where communication between network management systems is poorest.

Many carriers do this today by paying software houses millions to customize their network management systems and provisioning systems. With GMPLS, the fundamental tools will be in place to speed implementation of automated provisioning and support multivendor networks. That’s worth getting excited about.

Next Page: The Control Plane Defined

The optical control plane is rather difficult to define, because it intends, ultimately, to control much more than the optical layer. At first, it will be designed to manage connections among optical switches within an optical “cloud,” but much of what we are discussing here clearly indicates that the optical control plane may evolve into a super control plane that manages connections across many layers.

Daniel Awduche, VP, network architecture, at Movaz Networks Inc. and an acknowledged guru of MPLS and GMPLS, identifies the following carrier requirements for a new control plane:

  • Leverage network assets to deliver competitive service

  • Increase service quality while minimizing costs

  • Respond to evolving needs of customers

  • Harness optical bandwidth resources

  • Automate operational processes



Getting to these goals requires a new control plane, but not necessarily one built from scratch. Awduche has argued that the design of a control plane for optical networks should adapt and reuse IP and MPLS traffic engineering control protocols. Additionally, the transport of IP traffic through optical networks creates a series of internetworking, control, and coordination issues among different network domains.

This use of IP and MPLS extensions was not as intuitive as it may seem today. Optical switch vendors had, at the onset of their product developments, other options. The most obvious was the ATM control plane, which used its own standardized NNI, called ATM PNNI. It was widely deployed in most carrier networks, resident on the embedded base of Lucent Technologies Inc. (NYSE: LU), Nortel Networks Corp. (NYSE/Toronto: NT), Newbridge (acquired by Alcatel SA [NYSE: ALA; Paris: CGEP:PA]), and Cisco Systems Inc. (Nasdaq: CSCO) ATM switch gear. Lightera (acquired by Ciena Corp. [Nasdaq: CIEN]), for its part, chose the ATM-PNNI as the basis for its control plane, dubbed OSRP. Monterey (acquired by Cisco) also based its WaRP control software on the ATM PNNI control plane. Sycamore Networks later came out saying that it would use MPLS and OSPF, as described in the previous section.

Figure 3 (below) illustrates a network model that defines the various network domains.

In this scenario, GMPLS is limited to the area encompassing the core optical switched network, managing optical layer connectivity between switches. MPLS is used as a control plane for the IP network, from metro to core, while ATM is often used within its own area of deployment, from metro to core. The bottom of the illustration shows how ultimately the entire network will be controlled by MPLS – classical and optical.

An optical control plane is there to manage the new dynamic optical network with a common language, lest it fall prey to vendor-specific network management systems that tend to make carriers rather uncomfortable, while complicating provisioning.

Creating a standardized control plane for optical networks gives carriers comfort that they won’t be tied into a single-vendor solution (as they were in the ATM days) and provides them with a toolkit to support a variety of protection and restoration schemes, traffic engineering in the optical layer, and provisioning of optical channels across their networks.

How does a control plane accomplish all this? There are six basic functions in a control plane:

1: Neighbor Resource Discovery

  • In today’s optical networks, each optical crossconnect or add/drop multiplexer must be configured individually through a network management interface to provide the necessary connection of optical links.

    Automated provisioning of an optical network starts with a process by which each node discovers neighboring nodes and their capabilities. This will be accomplished through a Link Management Protocol (LMP) that allows neighboring nodes to exchange identities and link information and to negotiate the functions to be supported between them.
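
    In caricature, the exchange looks like the sketch below: each node announces who it is and what its links can do, and the two ends keep only the functions they both support. The message structure and field names are invented; the real LMP defines its own Config, Hello, and link-verification procedures:

        # Each node advertises its identity and per-link capabilities.
        def hello(node_id, links):
            return {"node": node_id, "links": links}

        # Both ends agree to run only the functions they have in common.
        def negotiate(mine, theirs):
            return sorted(set(mine) & set(theirs))

        b_msg = hello("oxc-b", [{"port": 7, "rate": "OC48"}])
        neighbors = {b_msg["node"]: b_msg["links"]}   # what oxc-a has learned
        print(neighbors)
        print(negotiate({"hello", "link-verify", "fault-isolation"},
                        {"hello", "link-verify"}))    # ['hello', 'link-verify']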

2: Link Status Dissemination

  • Resource discovery is a good start, but it does not provide enough information to route connections across a network. Routing protocols such as OSPF step in at this point to distribute current information about the topology of the network to each node. In GMPLS, extensions are being defined to allow OSPF to be used for disseminating routing information for optical networks.

3: Topology State Information Management

  • This function allows optical switches and network elements to disseminate information about network topology and resource availability. According to Greg Bernstein, senior scientist of Ciena's core switching division, topology information can be exchanged only using a link-state protocol, while reachability information (what end stations or nodes can be reached) can be distributed via either link-state or distance-vector protocols.

    Basically, there are two different ways for performing distributed routing:

    • Link-state protocols involve reliably flooding all changes in network topology to each network node, which then uses this information to calculate its routing tables;

    • Distance-vector protocols involve the network nodes participating in a joint calculation of the least-distance path to a destination.

    Link-state protocols have superior speed of convergence and freedom from routing loops, while distance-vector protocols are somewhat less complex.



4: Path Management and Control

  • A path setup and control protocol allows a connection to be created through switch-to-switch signaling, without the need for network management intervention at intermediate nodes.

    There are many protocols in use today that provide an analogous function in packet and circuit networks. In circuit networks, this function is provided by SS7 or QSIG protocols, in ATM networks by the PNNI or INNI protocols. In IP networks, a similar function is provided by RSVP, which uses signaling to reserve resources across the IP network to support a new information flow. RSVP has not been extensively deployed, due to scaleability concerns, but extensions have been defined to improve scaleability.

    Participants in the IETF group dealing with this issue were unable to reach a unified approach to setup signaling and wound up with multiple signaling standards, leaving it to the market to decide. This situation has unfortunately extended to GMPLS, where equivalent modifications have been defined for both RSVP and LDP.

    The OIF has similarly been unable to resolve the issue and has incorporated both RSVP and LDP into its UNI specification. The extensions required for both RSVP and LDP in order to support optical network signaling are significant. New parameters or formats have been defined to take into account the need to specify time slots, wavelengths, and wavelength ranges, instead of packet header labels. New parameters have also been defined to allow connection requirements such as protection and diversity to be specified.
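
    In outline, setup signaling walks the route hop by hop, committing a resource at each node and unwinding everything if any hop refuses – the skeleton shared by RSVP-TE and CR-LDP, however different their messages. A toy sketch with invented node names:

        # Ask each node on the explicit route to commit capacity; if one
        # refuses, tear down the partial reservations and report failure.
        def setup(route, free, demand):
            reserved = []
            for hop in route:
                if free[hop] < demand:
                    for done in reserved:   # unwind what was already reserved
                        free[done] += demand
                    return False
                free[hop] -= demand
                reserved.append(hop)
            return True

        free = {"nyc": 4, "phl": 4, "dca": 1}
        print(setup(["nyc", "phl", "dca"], free, demand=2))  # False: dca refuses
        print(free)   # {'nyc': 4, 'phl': 4, 'dca': 1} – capacity fully restored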



5: Link Management

  • Just as in MPLS, the optical control plane will include capabilities for establishing, maintaining, and tearing down optical channels, in much the same way MPLS-enabled routers establish label switched paths (LSPs).

    An LMP will be necessary to allow adjacent OXCs to determine each other’s IP addresses and port-level local connectivity information, such as which port on one optical switch is connected to which port on a neighbor.

    One of the immediate benefits of an LMP is the ability to support “Link Bundling,” according to John Evans, consulting engineer at Cisco Systems. Link Bundling allows multiple parallel links (wavelengths in a DWDM connection) to be advertised as a single link to the IGP (Interior Gateway Protocol). This improves routing scaleability by reducing the amount of information handled by the IGP so that it only sees the bundle, not the component links.
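
    The effect of bundling can be shown in miniature (link names and figures invented):

        # Four parallel wavelengths between the same pair of switches...
        component_links = [
            {"id": "lambda-1", "free_mbps": 2488},
            {"id": "lambda-2", "free_mbps": 2488},
            {"id": "lambda-3", "free_mbps": 0},
            {"id": "lambda-4", "free_mbps": 2488},
        ]

        # ...advertised to the IGP as one adjacency whose attributes summarize
        # the components. The IGP database shrinks fourfold; choosing the
        # actual component link stays a local decision at the bundle's ends.
        bundle = {
            "id": "oxc-a<->oxc-b",
            "free_mbps": sum(l["free_mbps"] for l in component_links),
            "max_lsp_mbps": max(l["free_mbps"] for l in component_links),
        }
        print(bundle)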



6: Path Protection and Restoration

  • Optical networks differ from packet networks in that they can provide true protection and restoration of links. The typical Sonet/SDH ring uses overhead bytes to signal protection. Such rings have the disadvantage of being highly inflexible, and they require additional equipment when tying two or more rings together.

    Intelligent optical networking adds the option of supporting more flexible protection mechanisms. Two examples are mesh restoration and virtual rings.

    In mesh restoration, each connection does not necessarily get its own dedicated restoration capacity. Instead, when a failure occurs, a connection does not fail over to a dedicated end-to-end backup; it is re-established from the originating node, taking advantage of whatever network paths are still available. This is significantly more efficient and survivable than a dedicated end-to-end protection path.

    According to Ciena’s Bernstein, virtual rings are just that, sets of nodes and links that behave as rings for protection purposes but are not physically arranged as rings. Instead, nodes can be configured within the network as ring neighbors so that they behave for protection purposes with the same failure response as if they were on a pre-installed, dedicated ring.

    Optical network signaling can be used to control protection and restoration, and also to constrain routing of paths so that they meet a target level of reliability, for example, requiring that the path use only links that have been set up with 1+1 link protection.
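
    Schematically, mesh restoration is a re-route from the head end over whatever links survive. A toy sketch (topology and names invented):

        # A square mesh: A-B, A-C, B-D, C-D.
        topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

        def find_path(src, dst, dead_links):
            stack, seen = [(src, [src])], set()
            while stack:                       # depth-first search is enough here
                node, path = stack.pop()
                if node == dst:
                    return path
                if node in seen:
                    continue
                seen.add(node)
                for nxt in topo[node]:
                    if frozenset((node, nxt)) not in dead_links:
                        stack.append((nxt, path + [nxt]))
            return None

        working = find_path("A", "D", dead_links=set())
        print(working)                               # ['A', 'C', 'D']
        # A backhoe finds the first link of the working path; the head end
        # simply recomputes over the links that remain.
        cut = {frozenset(working[:2])}
        print(find_path("A", "D", dead_links=cut))   # ['A', 'B', 'D']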



The question these functions pose, therefore, is how to implement them using the toolkit handed down from MPLS. The answers are many, and the process is clearly going to take a while – at least three years – though some of the most important work in establishing architectural models and developing standardized interfaces has already taken place.

Some of the basics in implementing an optical control plane, according to Dimitrios Pendarakis, Tellium's principal architect, include:

  • Each OXC is considered the equivalent of an MPLS Label-Switching Router (LSR)

  • The MPLS control plane is implemented in each OXC

  • Lightpaths are considered similar to MPLS LSPs

  • Selection of lambdas and OXC ports is considered similar to selection of labels

  • MPLS signaling protocols (e.g., RSVP-TE, CR-LDP) adapted for lightpath establishment

  • Interior Gateway Protocols (e.g., OSPF, IS-IS) with “optical” extensions used for topology and resource discovery

Many of these functions are available today in nonstandard form in optical switches – but carriers like standardization, and they like having at least two vendors in their network so they don’t fall prey to a single-vendor solution as they did with frame relay and ATM. Carriers are continually pushing optical vendors to improve their standardization, and, now that the market has slowed, there is more time to move prudently towards standards-based network implementations.

Startups aren’t done pushing this development forward. A new crop of “wavelength switched optical transport” vendors has emerged to bring the benefits of an optical control plane to the metro. Companies such as Movaz Networks, Meriton Networks, Polaris Networks, Firstwave Intelligent Optical Networks Inc., Opthos Inc., and Internet Photonics are demonstrating the economic benefits of an automatically switched optical network in the metro. This market segment is emerging as one of the most exciting to arrive in the last few years.

Next Page: Applications

If optical signaling and routing were only about improving operational efficiencies, the incentive to deploy it would be limited by carrier reluctance to apply fixes to things that aren’t quite broken. Operational improvements always take time, especially in big networks.

Granted, optical signaling has some nice operational benefits for carriers, particularly the ability to "unwind" capacity once a service has been canceled. This is a unique pain for carriers these days, especially those that wholesale a great deal of bandwidth. Say they provide a coast-to-coast OC48 connection for an ISP that goes belly up a few months later. Making that capacity available to the network again is time consuming and often just plain impossible, given today’s deployment processes.

With optical signaling in place, it’s much easier to simply turn off that connection and make that link, or any segment of it, available to new customers.

Other, less tangible but still useful benefits of optical signaling include improving the provisioning workflow, which today is labor intensive and largely manual. Automating it would be a godsend to carriers looking to get capacity to customers as quickly as possible. This isn’t just good resource management; it’s good for business.

But what are the real applications, especially the ones a carrier can bill for? In a recent seminar organized by IIR Ltd., Yong Xue, of the network architecture and technology planning group at UUNet/WorldCom Inc. (Nasdaq: WCOM), identified three compelling optical services that will drive GMPLS:

1: Provisioned Bandwidth Service, which allows point-and-click, near-real-time provisioning of static optical circuits through a management interface. Operators do not have to search beforehand for available time slots or ports, nor do they need to go into each node to make the crossconnect. In this case, the customer has no network visibility and depends on network intelligence, and thus has a client/server relationship with the network. This, fundamentally, is the basic service provided by the Optical UNI and the overlay architecture.

2: Bandwidth-on-Demand Service, whereby a signaled connection is established via a request through an Optical UNI interface to the network. The key values are: any-size connections (lambda, OC-n, STS-n); service velocity (fast end-to-end connection management for reduced service turn-up time); QOS for flexible service-level agreements; and support for arbitrary network architectures. This model supports true dynamic, real-time provisioning of optical circuits in seconds or subseconds. The customer can have no, limited, or full network visibility, and the service relies on network or client intelligence, depending on the interconnection and control model used.

3: Optical Virtual Private Network (OVPN) is the one optical switch vendors have been touting since day one. Customers contract for a specific network resource such as link bandwidth, wavelength, and/or optical connection ports. The optical connection can be based on signaled or static provisioning within the OVPN sites. Customers may have limited visibility and control of contracted network resources and, in many cases, specify the type of restoration they prefer.

Next Page: Optical Architecture I: Overlay

Creating a control plane that manages both routers and optical switches was bound to raise some hackles within carriers. The data network folks want their routers to manage network resources, since they believe IP is taking over the world, while the transport network managers bristle at the thought of routers (especially those owned by a customer) having the ability to “see” the topology and resources of the optical network and make changes.

The result is rather predictable. Two camps have formed, offering up two architectural models for the Optical Internet, one that favors routers, and another that favors optical switches. They are called, respectively, the overlay and the peer. Here’s what they’re about:

Overlay or Domain Services Model

In this model, the optical network “cloud”, made up of Sonet/SDH, DWDM, and optical switching systems, provides connection services to IP routers and other “client” devices attached to the network.

In this client-server network architecture, different layers of the network remain isolated from each other, but dynamic provisioning of bandwidth is made possible, though entirely on the optical network’s terms.

In this model, routers or switches “ask” the optical network for a connection, and the optical network either grants it or denies it. These requests can be fairly sophisticated, asking for a certain size circuit with a particular grade of restoration. The key here is that these devices can’t see into the network. They’re talking to a doorman with firm instructions to keep outsiders where they belong.

The benefits of this model have led to its early endorsement by the Optical Domain Service Interconnect Coalition (ODSI), the Optical Internetworking Forum (OIF), and the International Telecommunication Union (ITU). Chief among the values of the overlay model are, according to its proponents:

  • The optical layer comprises subnetworks with well defined interfaces to client layers

  • It allows each subnetwork to evolve independently

  • Innovation can evolve in each subnet independently

  • It does not strand “older” infrastructure

  • It provides IP, ATM, and Sonet interoperability using open interfaces

  • Optical network topology and resource information is kept secure



To build the overlay, standard network interfaces are required. Network interfaces typically come in two forms: those used within the network and those used at its edge.

The User Network Interface (UNI) provides a signaling mechanism between the user domain and the service provider domain, while the Network-to-Network Interface (NNI) provides a method of communication and signaling among subnetworks within an optical network.

The UNI allows attached clients of an optical network to establish optical connections dynamically across the optical cloud, using a neighbor-discovery mechanism and a service-discovery mechanism. Thus, devices attached to an optical network will be able to quickly identify other attached devices, build reliable connection maps, and automatically discover the service resources of any optical network. This, put simply, speeds the provisioning of services and dramatically reduces operational expenses associated with optical networks.
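
In spirit, the UNI is a request/response exchange across an administrative boundary: the client names the endpoints and the service it wants, and the network answers without revealing how it got there. A toy sketch – the actual OIF UNI messages and attributes are richer and different:

    # The client talks to the doorman: it asks for a circuit by endpoints,
    # rate, and protection grade, and never sees the topology behind the door.
    class OpticalCloud:
        def __init__(self, capacity):
            self.capacity = capacity   # opaque to clients

        def connect(self, src, dst, rate, protection):
            if self.capacity.get((src, dst), 0) >= rate:
                self.capacity[(src, dst)] -= rate
                return {"status": "granted",
                        "circuit": (src, dst, rate, protection)}
            return {"status": "denied"}   # no reason or topology revealed

    cloud = OpticalCloud({("router-nyc", "router-bos"): 2488})
    print(cloud.connect("router-nyc", "router-bos", rate=622, protection="mesh"))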

The NNI is the control plane over which the network’s connections are orchestrated, involving lightpath routing, signaling, status reporting, and scheduling. In the context of the optical switched network, the NNI refers to a connection between any of the following:

  • Different service provider networks

  • Subnetworks of the same provider

  • Different vendors’ switches within a subnetwork

The definition of the NNI in the optical network remains in the very early stages of development. NNI routing options under consideration include static routing, default routing (applicable only to the single-homed scenario), and dynamic routing.

NNI signaling options include Constraint-based Routing Label Distribution Protocol (CR-LDP) and Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE), both of which are IETF drafts. GMPLS extensions to both CR-LDP and RSVP-TE have been proposed to adapt the respective protocols to the optical domain.

The overlay model, based on the UNI and NNI, makes sense today because it is well suited for an environment that consists of multiple administrative domains, which most carrier networks have.

This is particularly useful in large carrier networks, where the group that controls the transmission network does not necessarily like cooperating with the group that controls IP services. The large IXCs all fit into this camp and will likely move first into the overlay model, using it to control their network of optical switches, once they get deployed. With a standardized UNI in place, large IXCs will be able to offer some bandwidth-on-demand services and improve the management of their optical networks.

The OIF is largely responsible for developing the specification for an Optical UNI, though the ODSI, established by Sycamore Networks and a lot of other vendors, should be given credit for pushing hard to get the industry excited about an Optical UNI back in 1999. The work accomplished in the ODSI meetings and interoperability trials has been offered to the OIF for adoption as part of their Optical UNI specifications.

The overlay model has its limitations, however, and most people in the industry feel it's just a step in the right direction, not the ultimate model. The debate seems to have come to this: Do we stop at two control planes (one for the transport layer and one for the packet layer), or is a unified control plane the ultimate goal of these efforts? The big issue at the heart of this debate is scalability – namely, can a unified control plane scale to support every device in a network?

According to Bill St. Arnaud, senior director of network projects at CANARIE Inc., one limitation of the overlay model is the loss of synchronization between the signaling layer and the control/switch layer. "This is a problem that has bedevilled the Internet industry for years. It was a frequent problem with ATM UNI and is largely why netheads are skeptical of an overlay model. When you separate the signaling plane from the switching plane you can get all sorts of anomalous behavior - ships in the night connections, forwarding of invalid paths, etc. Personally I think we will have to have an overlay model but be extremely careful of synchronization."

According to Dr. Yakov Rekhter of Juniper Networks (one of the inventors of MPLS), there is no fundamental technical reason to stop at two control planes simply because transport systems use a different technology than packet-forwarding systems. The overlay, by itself, is neither sufficient nor necessary for scalability, because the scalability of a control plane is a function of the number of devices it controls – any control plane must deal with many devices, whether they are of one technology or many.

In the case of the overlay, its simplicity does come with the trade-offs of potentially less efficient use of resources due to information hiding at the domain boundaries, and a susceptibility to a single failure within one domain causing multiple seemingly unrelated failures in other domains. This can be overcome, it seems, with proprietary "tweaking" of signaling within an optical network, so it remains to be seen if this represents a fatal flaw for the overlay model, or just part of its character.

Next Page: Optical Architecture II: Peer to Peer

In the peer-to-peer model, not surprisingly, optical switches and routers act as peers, using a uniform and unified control plane to establish label-switched paths across these devices with complete knowledge of network resources.

In this model there is little or no distinction among UNI, NNI, and router-router (MPLS) control planes; all network elements are direct peers and fully aware of topology and resources. IP-optical interface services are folded into end-to-end MPLS services, meaning label-switched paths could traverse any number of routers and optical switches.

This is a key distinction. In the peer model a single instance of a control plane can span multiple technologies/network elements, provided that the control plane can support each of the technologies.

This allows a network operator to create a single network domain composed of different network elements, giving it greater flexibility than the overlay model, in which an optical cloud is created as a domain unto itself.

The work around GMPLS and the peer model allows complex layered networks to scale by building a forwarding hierarchy of interfaces, from fibers all the way up to routers. Label-switched paths (LSPs) can be established within each layer and “nested” within others so that an LSP beginning and ending on optical switch interfaces may contain many LSPs within it that begin and end on routers.
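
To see what that nesting amounts to, consider the toy model below. It is a schematic Python sketch, not a GMPLS encoding; the layer names and containment structure are assumptions made for illustration.

    # Toy model of the GMPLS forwarding hierarchy: LSPs nested inside LSPs.
    # Layer names and the containment structure are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class LSP:
        layer: str       # e.g., "packet", "tdm", "lambda", "fiber"
        head: str        # ingress interface
        tail: str        # egress interface
        nested: list = field(default_factory=list)   # LSPs carried inside

    # A lambda LSP between two optical switch interfaces...
    lambda_lsp = LSP("lambda", "oxc-1", "oxc-2")

    # ...carrying packet LSPs that begin and end on routers.
    lambda_lsp.nested.append(LSP("packet", "router-a", "router-c"))
    lambda_lsp.nested.append(LSP("packet", "router-b", "router-d"))

    def describe(lsp, depth=0):
        print("  " * depth + f"{lsp.layer} LSP: {lsp.head} -> {lsp.tail}")
        for inner in lsp.nested:
            describe(inner, depth + 1)

    describe(lambda_lsp)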

The unified IP/MPLS-based control plane in the peer model would certainly simplify control coordination and fault handling among network elements of different technologies, though it would also require significantly more work to integrate each technology properly into that control plane.

Additionally, this model offers the benefits of end-to-end protection and failure restoration, traffic engineering based on MPLS concepts, and efficient use of resources in a network composed of multiple technologies. The concern here is obvious: This is biting off an awful lot to chew, and that scares carriers.

In the peer model, a significant amount of state and control information flows between the IP and optical layer, making the development of this model more time consuming and complex.

Important to companies like Cisco and Juniper, routers may control the end-to-end path using traffic-engineering-extended routing protocols deployed in IP and optical networks, giving them ultimate power over network utilization and resource management. Because it favors the intelligence of routers over optical switches, the peer model of the Optical Internet is being pursued most aggressively by router vendors, whose voices are heard most favorably within the IETF.

The peer model does, however, present a scalability problem because of the amount of information to be handled by any network element within an administrative domain. It is easy to see any one network element getting choked by a constant barrage of network state updates.

In addition, non-optical devices must know the features of optical devices, which can be an operational nightmare in many traditional networks today, where the boundaries between the transport network and data network are as impassable as the Chinese Wall.

The peer model may well take hold in the future, but it is likely to be a rather distant future.

For the time being, carriers are much more likely to reach a comfort level with the overlay model, particularly if the OIF UNI is adopted by a large number of vendors and the ability to support new optical services and restoration schemes is provided.

The added benefits of the peer model may, indeed, not be sufficient to justify the complexity of implementation. Most carriers today, when presented with the basics of the peer model, respond with a simple question: “Would I want routers (or my customer's routers) making million-dollar decisions on their own?” The answer is always no. And that perception will make the process of developing standards to suit a peer model very challenging over the next few years.

Next Page: Optical Architecture III: Hybrid

Even though much of the debate has centered around overlay vs. peer, some people are now beginning to propose hybrid solutions.

The hybrid model represents a middle ground between overlay and peer. From the overlay model the hybrid takes the support for multiple administrative domains. From the peer model, hybrids take support for heterogeneous technologies within a single domain.

Ideally, this avoids limitations of the peer and the overlay while combining their benefits and gives a carrier a wide degree of flexibility in how to design its core network. Some areas it may want to keep entirely separate for security reasons, and use the UNI to segregate them, while other areas may benefit from having a mix of optical switch and IP routers acting as peers. These domains can be stitched together with a standardized NNI.

The most likely scenario for this model is one in which IP and optical networks retain their clear demarcations and exchange only reachability information. For simplicity’s sake, separate instances of routing protocols would run in the IP network and in the optical network; any network domain could still accommodate different technologies.

It certainly sounds nice on paper, and it's up to vendors and carriers to work together to define what is best for given functions or services. The one thing these architectures have in common is that GMPLS supports them all equally well. By providing a standardized method of signaling across a variety of technologies, GMPLS can open up equipment markets currently populated with proprietary solutions. Carriers always like this, and it almost always drives increased deployments, which is good for everyone.

Next Page: Specs and Standards: OIF

The Optical Internetworking Forum (OIF) was launched in April 1998 by Ciena Corp. (Nasdaq: CIEN) and Cisco Systems Inc. (Nasdaq: CSCO) to bring together IP and optical networking in a way that would foster both services growth and network efficiencies. Founding members also included AT&T Corp. (NYSE: T), Hewlett-Packard Co. (NYSE: HWP), Qwest Communications International Corp. (NYSE: Q), Sprint Corp. (NYSE: FON), Telcordia Technologies Inc., and WorldCom Inc. (Nasdaq: WCOM).

The OIF is not a formal standards body, per se, but does create detailed specifications, or Implementation Agreements, that are presented to formal standards bodies for adoption. According to Greg Bernstein, senior technology director of Ciena's core switching division, “The OIF as an organization tends to have more optical expertise than the IETF and more formal procedures. In particular, the OIF goes by a ‘one company, one vote’ majority mechanism, and the IETF goes by a ‘rough consensus and running code’ mechanism, where the measure of consensus is based more on individuals than companies. With the IETF's protocol expertise these two organizations can complement one another.”

The working groups within the OIF include Architecture, Carrier, OAM&P (operations, administration, maintenance, and provisioning), Physical & Link Layer, and Signaling, each of which has participated to some degree in the development of the Optical UNI specification.

The OIF’s most important contribution to the world of optical signaling and routing to date has been the Optical UNI. The Optical UNI is meant to support rapid provisioning of circuits between clients of an optical network, with various levels of circuit protection and restoration. It includes signaling for connection establishment; automatic neighbor discovery; automatic service discovery; and fault detection, localization, and notification. In this regard, the Optical UNI is fundamental to the implementation of the overlay model of the Optical Internet.

The focus within the OIF UNI work is currently on IP clients, but, as the specification is developed, work will also include ATM, Ethernet switches, Sonet ADMs, and crossconnects.

The components of the Optical UNI, as defined by the OIF, include:

  • UNI-N: The User Network Interface – Network, which is implemented on an optical switch, and

    UNI-C: The User Network Interface – Client, which is implemented on access switches and routers.



The applications associated with the Optical UNI are the fundamentals of intelligent optical networking. They range from simple point-and-click provisioning of optical circuits to dynamic bandwidth allocation based on traffic levels. The key requirements that any Optical UNI specification must support include:

  • Rapid provisioning of circuits between clients

  • Various levels of circuit protection and restoration

  • Signaling for connection establishment

  • Automatic topology discovery

  • Automatic service discovery

  • Fault detection, localization, and notification

Most groups within the OIF have some input to the UNI specification, though the Architecture and Signaling Working Groups are most active. The Architecture Working Group is developing a framework architecture and UNI 1.0 requirements. Currently, three services based on Sonet/SDH framing with STS1/STM1 bandwidth (or higher) have been defined (a rough sketch of the three actions appears after the list):

  • Connection creation: This action allows a link (lightpath) with the specified attributes to be created between a pair of termination points.

  • Connection deletion: This action allows an existing connection (lightpath), referenced by its ID, to be deleted.

  • Connection status enquiry: This action allows the status of certain parameters of the connection, referenced by its ID, to be queried.
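
Here is a rough Python sketch of those three actions. The method names, parameters, and in-memory "switch state" are our own illustrative assumptions; only the three actions themselves come from the specification.

    # Hypothetical sketch of the three UNI 1.0 services. Names and
    # parameters are invented; only the three actions come from the spec.

    import itertools

    class UniAgent:
        def __init__(self):
            self._ids = itertools.count(1)
            self.connections = {}          # connection ID -> attributes

        def create(self, a_end, z_end, bandwidth="STS-1"):
            # Connection creation: a lightpath between two termination points.
            conn_id = next(self._ids)
            self.connections[conn_id] = {"a": a_end, "z": z_end,
                                         "bandwidth": bandwidth, "state": "up"}
            return conn_id

        def delete(self, conn_id):
            # Connection deletion: tear down an existing lightpath by its ID.
            del self.connections[conn_id]

        def status(self, conn_id):
            # Connection status enquiry: query the connection's parameters.
            return self.connections.get(conn_id, {"state": "unknown"})

    uni = UniAgent()
    cid = uni.create("router-a/port-1", "router-b/port-3", "STM-1")
    print(uni.status(cid))    # shows endpoints, bandwidth, and state "up"
    uni.delete(cid)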

The Signaling Working Group is developing the real fundamentals of the UNI, including the key features of service discovery, end-system discovery, signaling protocol definition, and the relationship of NNI to UNI. It is also working through a number of issues with the current UNI, as well as additional scope and requirements for the UNI's next phase.

The OIF UNI work is in lockstep with that of the Internet Engineering Task Force (IETF). There is little bickering between the two, and the work remains very complementary, which bodes well for the specification's prospects.

The OIF is concerned primarily with the UNI specification and will eventually tackle the NNI specification as well. Carriers are quite active within the OIF, giving it much credibility, and the implementation agreements arising out of the OIF will likely find their way into carrier networks once they are published and accepted.

Next Page: Specs and Standards: ODSI

The Optical Domain Service Interconnect Coalition (ODSI) was founded in 1999 by Sycamore Networks Inc. and a number of other startup vendors to address the need for an open interface to the optical network.

The ODSI Coalition went about defining an optical UNI and generated a significant amount of interest around the concept of intelligent optical networking, but the initiative suffered from a lack of participation by the large, established telecom networking vendors and router vendors.

Many vendors, such as Cisco Systems Inc., Juniper Networks Inc., and Nortel Networks Corp., saw the coalition as little more than a marketing effort undertaken by startups and decided to put their stock in the Optical Internetworking Forum (OIF) and Internet Engineering Task Force (IETF) instead, where their own currency was strongest.

According to representative members of the ODSI coalition, it completed its optical UNI specification in early 2001. This specification and its accompanying protocol suite, together with the results from the December 2000 interoperability test, have been shared with the official standards organizations via formal submissions and informal dialogue.

The ODSI coalition was never intended to be an official standards-making organization; rather, it served as an industry catalyst to sharpen focus on the optical UNI (not much was happening on that front in late ’99 when the group was formed).

Following the submission of ODSI progress to the OIF during the course of 2000, the basic structural framework for the optical UNI was adopted by the OIF signaling workgroup.

While the latter organization continues to work through the specifics of its UNI specification, the ODSI framework remains pretty much intact. Many structural components of the two documents are identical (in fact, the docs share a number of editors), but one key difference is the choice of signaling protocol: ODSI developed a TCP-based protocol, whereas the OIF UNI selected MPLS signaling (e.g., RSVP, CR-LDP).

In late 2000, the editors of the ODSI signaling specification (including contributors from Alcatel SA, Lucent Technologies Inc., Redback Networks Inc., Sycamore, and Tellium Inc.) submitted their work to the G.ASON committee of the ITU (see Specs and Standards: ITU) for informational purposes; they also shared the work-in-progress with relevant IETF work groups.

As it currently stands, the ODSI coalition has completed the objectives set forth in the initial meeting, namely:

  • Develop a recommendation for an open UNI interface to enable electrical layer devices to signal the optical switched network to request bandwidth on demand.

  • Conduct multivendor interoperability testing.

  • Hand off the functional specification and interoperability test to official standards organizations.

With these objectives achieved, the coalition was wound up.

Individuals and companies from the ODSI coalition are continuing to contribute to, and drive the development of, a standards-based Optical UNI within the official standards organizations. It is generally agreed that ODSI played an important role in kick-starting the work on the optical UNI, forcing the OIF to move forward much more quickly than it had been accustomed to.

For more on ODSI:

  • Crunch Time for Signaling Standard
    Sycamore in Standards Setback
    Sycamore Faces "Moment of Truth"
    Sycamore Stuck on Signaling Standard
    Vendors Demo Signaling Synergy



Next Page: Specs and Standards: IETF

The Internet Engineering Task Force (IETF) is a large, open, international community of network designers, operators, vendors, and researchers that aims to develop Internet standard specifications.

The IETF standards are "open"; the specifications are non-proprietary and freely available. Adherence to these standards is voluntary, yet they've been widely adopted because they allow the global Internet to function as a unified system.

The IETF developed and standardized the core technology used in the Internet and has recently focused on the definition of control protocols for intelligent optical networks, through its GMPLS and other related groups.

Like many other standards groups or industry forums, the IETF divides its labors among a number of Working Groups in which the technical work around standards development occurs. Working groups are further organized into technology areas (see www.ietf.org/iesg.html). A temporary technology area has been established to oversee the development of GMPLS and other "sub-IP" technologies (see www.ietf.org/IESG/STATEMENTS/sub_area.txt).

The IETF's IP over Optical Working Group, in its "Sub-IP" area, is led by Daniel Awduche and James Luciani. Its goals and milestones include producing a framework document for IP/optical networking and developing features and requirements for carrier optical services; IP-over-optical traffic engineering, restoration, and protection; and IP-based protocols for signaling and disseminating network topology state information in IP/optical networks. The purpose of the group is to ensure that the specific needs of optical networks are satisfied by the generic technology developed by the rest of the Sub-IP area.

The Working Group, aware of the monumental task at hand, has laid out a timeline that stretches to December 2002. The reason behind the formation of the Sub-IP area is a perceived need for coordination across areas and a focus on ensuring that adopted Internet drafts loosely adhere to an architectural vision. That architectural vision is GMPLS, whose ultimate goal is the evolution of the Internet to one in which all control protocols are based on IP technology.

GMPLS is primarily being standardized within the Common Control And Management Plane Working Group (CCAMP WG) of the Sub-IP area, which is defining a common control plane and common measurement plane for core IP tunneling technologies. Most of the work here is now focused on GMPLS, the use of GMPLS with Sonet, tunnel trace, and extensions to LMP and IGP.

The most important thing to understand here is that the IETF will be largely responsible for pushing GMPLS before any of the large standards bodies begin their formal work. The IETF differs from the International Telecommunication Union (ITU) in that the latter body is generally concerned with standardizing elements of complete architectures, whereas the IETF builds the toolkits and protocol suites that underpin larger architectures. Since the IETF has been driving the development of MPLS, it follows that GMPLS will arise from that work. Simply put, GMPLS is based on the premise that MPLS can be used as the control plane for different switching applications, including the following (a schematic sketch of such generalized labels appears after the list):

  • TDM where time slots are labels (e.g., Sonet)

  • FDM where frequencies (or lambdas) are labels (e.g., WDM)

  • Space-division multiplexing where ports are labels (e.g., OXCs)
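
In code terms, the generalization amounts to letting a "label" name whatever a given box actually switches. The sketch below is a schematic Python model of that idea; the class names are ours, and this is not the generalized-label encoding defined in the drafts.

    # Schematic view of generalized labels: one control mechanism, with the
    # label's meaning set by the switching technology. Illustrative only;
    # not the on-the-wire GMPLS label formats.

    from dataclasses import dataclass

    @dataclass
    class PacketLabel:      # classic MPLS: a shim-header value
        value: int

    @dataclass
    class TimeSlotLabel:    # TDM (e.g., Sonet): the time slot is the label
        slot: int

    @dataclass
    class LambdaLabel:      # FDM/WDM: the wavelength is the label
        wavelength_nm: float

    @dataclass
    class PortLabel:        # space-division (e.g., an OXC): the port is the label
        port: int

    def cross_connect(in_label, out_label):
        # A GMPLS-controlled element maps an incoming label to an outgoing
        # one, whatever "label" means on that particular box.
        print(f"map {in_label} -> {out_label}")

    cross_connect(PacketLabel(100), PacketLabel(200))           # MPLS router
    cross_connect(TimeSlotLabel(3), TimeSlotLabel(7))           # digital crossconnect
    cross_connect(LambdaLabel(1550.12), LambdaLabel(1552.52))   # wavelength switch
    cross_connect(PortLabel(1), PortLabel(8))                   # photonic switch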



To be clear, Generalized MPLS is a set of protocols rather than a single protocol. It is an extension of previous IETF work on MPLS for traffic engineering of IP networks. GMPLS generalizes the MPLS signaling protocols to allow the same protocols, with extensions, to be used to control optical switches as well as packet switches. The set of GMPLS protocols includes:

  • Link Management Protocol for neighbor discovery

  • Extensions to OSPF and ISIS for link-status dissemination

  • Extensions to RSVP-TE and CR-LDP for path management and control

The Sub-IP area was formed to handle items that really don’t belong in the realm of IP, a Layer 3 protocol. The Working Groups established within the Sub-IP area include:

  • CCAMP (Common Control And Management Plane Working Group): Profiled above.

    TEWG: Internet Traffic Engineering, concerned with the optimization of traffic handling in operational networks.

    IP over Optical: This group is chartered with determining the control plane requirements and issues that are unique to optical networks. There is no new protocol work within the IPO WG, though there are discussions about an OIF UNI derivative of GMPLS, which is largely a usage profile of GMPLS signaling. The likely scenario is that the group will simply adopt whatever the OIF specifies.

    GSMP: General Switch Management Protocol, which will define a protocol to control ATM and optical switches.

    IPORPR: IP over Resilient Packet Ring, producing a framework and requirements to be used as input to IEEE 802.17, which is developing RPR standards.

    MPLS: Multiprotocol Label Switching, defining the base MPLS technology (essentially Layer 2.5). This group standardizes the architecture, signaling, encapsulations, and restoration, and defines label-switched paths over ATM, Frame Relay, LAN technology, and Sonet. The initial goals of this group have mostly been realized, and MPLS is well on its way to being the heir to the throne of ATM in core networks.

The standardization of GMPLS will arise out of the work of all these groups. The work under way here is primarily around creating extensions to existing routing and signaling protocols; the coordination of this work is conducted by the CCAMP WG.

Since the concept of time or specific bandwidth doesn’t exist in routing – only the notion of getting from point A to point B most efficiently – extensions are required for the routing descriptor to carry a time slot and other useful data such as protection scheme, the size of the bandwidth, bit error rate, etc. Since two label distribution methods have been defined for routing in IP networks, the optical extensions are being made to both CR-LDP and RSVP-TE. Related base extensions have already been standardized by MPLS for signaling and routing.
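
As a purely illustrative example of what those extensions must carry, the sketch below models the extra attributes an optical-aware link advertisement might include. The field names are our assumptions for clarity, not the actual OSPF-TE or ISIS-TE encodings.

    # Illustrative model of the per-link data optical routing extensions
    # must carry beyond plain reachability. Field names are invented; the
    # real extensions are defined in the OSPF/ISIS GMPLS drafts.

    from dataclasses import dataclass

    @dataclass
    class OpticalLinkAdvertisement:
        link_id: str
        free_time_slots: int      # remaining TDM capacity
        bandwidth_gbps: float     # size of the bandwidth on offer
        protection: str           # e.g., "unprotected", "1+1", "shared mesh"
        bit_error_rate: float     # signal quality, usable as a path constraint

    def usable(link, needed_gbps, max_ber):
        # Constraint-based routing: prune links that can't meet the request.
        return (link.free_time_slots > 0
                and link.bandwidth_gbps >= needed_gbps
                and link.bit_error_rate <= max_ber)

    adv = OpticalLinkAdvertisement("oxc-1/oxc-2", 12, 2.5, "1+1", 1e-12)
    print(usable(adv, needed_gbps=2.5, max_ber=1e-9))   # True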

Current Status

GMPLS is still a work in progress, and some functions – especially inter-network signaling functions – remain items for further study in the IETF. The Path Management and Control specifications are most complete and have already been through Working Group Last Call, a two-week review of the specifications at the Working Group level prior to asking for their approval as IETF standards. The next step is an IETF-wide Last Call, followed by final editing prior to receiving an RFC number; the signaling documents should be handed off for IETF Last Call soon.

A good starting point from the IETF is the document titled "A Framework for Generalized Multi-protocol Label Switching (GMPLS)," which provides an excellent overview of the IETF's justification for GMPLS and outlines the work that needs to be accomplished. Also see "Generalized Multi-Protocol Label Switching (GMPLS) Architecture" for the most up-to-date description of GMPLS applications, as presented in the latest, most complete draft on GMPLS.

The most stable components of GMPLS are those that support the UNI interface functions and have been incorporated into the OIF UNI spec. This is sufficient to provide a user interface into the Intelligent Optical Network for requesting bandwidth allocation by an optical user device such as a high-speed router or multiservice access device.

LMP specifications are somewhat further away from completion. Routing specifications are also further out and have not been given a specific timeframe for completion. For UNI purposes, only the Connection Setup and LMP protocols are required.

Once the protocols have gone through the approval process, they become IETF Proposed Standards. The IETF and other groups like the Optical Internetworking Forum (OIF) will begin the process of interoperability testing to identify holes or inconsistencies in the standard. Results of interoperability testing are then incorporated into a revised specification that can subsequently be elevated to IETF Draft Standard.

Next Page: Specs and Standards: ITU

The International Telecommunication Union, Standardization Sector (ITU-T) is largely responsible for the formation and creation of international standards in telecommunications to which nearly every developed nation adheres. The ITU, therefore, has significant influence in the market, but at the same time moves at a much slower pace than any one individual market or technology.

In the realm of optical signaling, The ITU-T has been deeply involved in the creation of what is best referred to as a standardized architecture for optically signaled networks. Though it often appears to be moving at a snail’s pace, this is one mollusk that can easily win the race, as myriad battles rage within looser organizations like the IETF. Additionally, carriers listen to the ITU, while often regarding other bodies with justified skepticism. The ITU-T has been instrumental in laying the standardized foundation for optical networking (including the essential "ITU-grid" of DWDM channels), so it is important to appreciate how significant any work conducted here is.

The ITU-T differs importantly from other standards bodies in that it is approaching this work from an architectural point of view, regarding the entire network first, then deciding how to implement that vision. That said, the ITU standards have rather weighty-sounding names, beginning with the obvious but essential Architecture of Optical Transport Networks, which lays out the fundamentals of how the ITU-T envisions an optical network being designed. The recommendations currently finalized or given a formal number include:

  • G.709 (2001), Interface for the optical transport network (OTN): specifies the interfaces for interconnection among service providers/network operators and facilitates mid-span meet between equipment from different vendors. This is, in effect, an optical NNI.
    G.705 (2000), Characteristics of Plesiochronous Digital Hierarchy (PDH) equipment functional blocks
    G.707 (2000), Network node interface for the synchronous digital hierarchy (SDH)
    G.959.1, which specifies physical layer interfaces for the OTN
    G.783 (2000), Characteristics of synchronous digital hierarchy (SDH) equipment functional blocks
    G.803 (2000), Architecture of transport networks based on the synchronous digital hierarchy (SDH)
    G.805 (2000), Generic functional architecture of transport networks
    G.871 (2000), Framework of Optical Transport Network Recommendations
    G.872 (1999), Architecture of Optical Transport Networks: This is the mother ship of optical networking architecture standards. It defines an optical transport network consisting of optical channels within an optical multiplex section layer within an optical transmission section layer network. The optical channels are the individual lightpaths, and they are what matter to the Optical Internet: managing those optical channels from a standardized control plane.
    G.8070 (2001), formerly G.astn: details the requirements for the Automatic Switched Transport Network (ASTN). The ASTN provides a set of control functions for setting up and releasing connections across a transport network. The requirements in this recommendation are technology independent; the architecture of switched transport networks meeting them, and the technical details required to implement those networks for particular transport technologies, will be found in other recommendations. This currently represents the overall requirements document for ASTN networks.
    G.8080, formerly G.ason: describes the reference architecture for the control plane of the Automatically Switched Optical Network (ASON), which supports the requirements identified in Recommendation G.8070 using a client-server model of optical networking. This reference architecture is described in terms of the key functional components and the interactions among them. The recommendation describes the set of control plane components used to manipulate transport network resources in order to set up, maintain, and release connections. The use of components allows the separation of call control from connection control, and of routing from signaling. G.8080 takes path-level and call-level views of an optical connection and applies a distributed call model to the operation of these connections. What's important to note here is that, with a call model in place, a network operator may bill for calls and, based on the class of service requested for the call, select connections with the type of protection or restoration required to meet that class of service.

    • Other recommendations associated with G.8080 include:
      G.7713/Y.1704 Distributed Call and Connection Management
      G.7714/Y.1705 Generalized Automatic Discovery Techniques
      G.7712/Y.1703 Architecture and Specification of Data Communication Network



Here's a good link to check the status of these standards: Study Group 15 Status

The fact is, Optical Transport Network (OTN) specifications are far from complete. For example, G.709, which defines the OTN, still requires considerable work in defining the multiplexing and transport overheads. G.872, the OTN architecture document, is undergoing massive revision and expansion. G.798, which defines the OTN functions needed for mid-span meet, has not been completed yet. G.841 and G.842, which deal with OTN protection, have not even been started. Finally, G.874 and G.875, which provide the network management information model and functional requirements, are also incomplete. It is expected that the first issue of G.ason will be based on a Sonet/SDH transport network.

The ITU's work is most complementary to the OIF's, which is taking on the task of defining the specifications and implementation of a user network interface (UNI) and a network-to-network interface (NNI). Regarding the IETF, there can be overlap but not complete cooperation, because the ITU is not entirely wedded to IP and MPLS for signaling – as the IETF is – and is considering alternatives.

The important thing to note about the ITU is that this is the standards body carriers really listen to. All the major carriers worldwide get their transmission network standards from the ITU, so monitoring their progress is key. The IETF may believe they have the keenest sense of the evolution of the Internet, but the ITU makes the rules for implementing the entire network, IP and optical alike.

Right now, the ITU has not made up its mind about GMPLS. It is considering three different approaches to signaling, with GMPLS being only a contender, not a foregone conclusion as it is in the IETF. The other two contenders are a signaling scheme based on ATM PNNI and a proposal for an entirely new set of protocols.

It’s arguable that GMPLS is the front runner because of the general enthusiasm for leveraging control plane work accomplished thus far with MPLS, but only time will tell if it reigns supreme. MPLS has gotten knocked around a bit lately, and if it falls out of favor, the ITU may give more credence to proposals for a completely new scheme. The ITU moves at a rather deliberate pace, so this is going to take years, no matter what.

Next Page: The Future

Predicting when standards will get adopted may be a futile – not to say foolish – exercise. Yet the momentum behind creating a standardized control plane and interfaces for an optically signaled network is strong today, and a number of carriers are at least interested in seeing how it can benefit their networks. This does not mean they will deploy it as soon as specifications are published, but it does mean it addresses some of their requirements for lowering operating expenses, supporting automated provisioning, and improving network restoration and resource management.

The questions remain, however, as to what will drive this move to GMPLS. Will it be new revenues or opex and capex savings? One would hope it’s new services, because something that simply improves carrier networks can take a very long time to get adopted (witness the ITU's TMN (Telecommunications Management Network) efforts). If services are driving it, then carriers will have a much more compelling reason to push ahead. The challenge here is convincing transport network operators that they should automate their part of the network. It sounds interesting to them, but they are often the slowest part of a carrier to adopt any radical new solutions and their familiarity with signaling protocols is often nil.

A second and very important question is whether the success of GMPLS depends on the success of MPLS. In the last few months a volume of rather negative commentary has begun to emerge on MPLS, and many industry observers are beginning to see some alarming parallels between the development effort around MPLS and that of ATM years ago (see MPLS Gets Lukewarm Reviews, The Monster Memo, and Poll: Is MPLS BS?).



If MPLS work gets too muddled by hundreds of vendors submitting hundreds of drafts for everything-over-MPLS it’s possible carriers will throw up their hands and say “Forget it, I’ll just over-provision bandwidth between my routers and stick with what I know.” If MPLS is abandoned, does GMPLS get thrown out too?

“I don’t think so,” says John Drake, chief network architect of Calient Networks Inc. and a key player in the MPLS standardization effort. “The original draft of GMPLS is very clear about what GMPLS is and is not and is still very fresh today. It’s about expedience and leveraging the work already accomplished with MPLS and OSPF. Even if routers do implement MPLS, they can still use GMPLS to interface with the optical network. The ops staff within a carrier can understand GMPLS because it’s based on common protocols, making it easy to implement, even if MPLS isn’t present in the IP network.”

Drake sees three key steps underway for GMPLS. First, the DWDM layer must be brought into the control plane through standardization of the link management protocol for DWDM networks. Second, work is underway at the IETF traffic engineering workgroup to generate requirements for span and path protection for GMPLS, a key feature carriers will require when they deploy optical switches. And third, over the long term all the standards bodies will be considering how all-photonic networking will influence GMPLS.

That will not be a quick process, but debate is already under way. From here on out, there remains plenty of work to be done around developing specifications for supporting optical VPNs and associated specifications for inter-area routing and the Optical NNI.

Carriers may be in a funk lately, taking lumps for expanding their networks too rapidly with optical gear and not building in ways to get real return on that investment, but adoption of new architectures always takes time and patience.

Amy Copley, until recently a senior product manager at Sycamore, made the following comments on the likely adoption of new signaling developments, in an email message to Light Reading:

  • I’m of the mind that carriers we’ve talked to so far want to start testing the applications this year [2001] or early next year [2002], generally at this point in an R&D or test lab environment. From the interest we’ve seen I think they would like to see some sort of revenue-generating service from the intelligent optical network some time next year. Clearly the automated provisioning aspects have an impact day one. Once that is in place you can get more creative with the service offerings.

    It really depends on what kind of carriers we are talking about. But for the big ones we’ve talked to, there is a lot of architecture activity going on to get to the next generation of their network. For some it is 2nd generation, for others it is 3rd. I am also of the opinion that there will be a mixed environment of those applications that will need the traffic engineering and robust peering capabilities of the IP devices and need those devices to control the optical core (peer-to-peer). There are also those applications where the trust factor between the networks will necessitate a ‘black box’ approach where the edge devices ask for bandwidth and the core makes the connection.



Another big question centers around carriers' willingness or ability to adopt GMPLS (or some derivative of it) when they have built custom network management systems with different types of provisioning features. Some carriers have said they can already support point-and-click provisioning through their NMS, while others see a clear value in GMPLS.

Mathew Ooman, formerly of Williams Communications Group (NYSE: WCG), is a proponent of the Optical UNI and GMPLS. He observes: “It is okay to customize an NMS, since there are provisions in the standard to accommodate that, but it is critical that the management systems and the network elements support an integrated control plane that is standardized. This may require network equipment to run multiple control planes, however, which is quite a challenge.”

So the future for GMPLS and the optical control plane certainly isn’t clear. In the world of 1999 and 2000 it would have been easier to say carriers were going to adopt this rapidly to exploit their next-gen optical networks, but in the world of 2002 not much looks next-gen, and the risk-taking carriers have gone either quiet or out of business.

The important question to answer, then, is: How will the incumbents take to GMPLS? The answer, common to all questions directed at incumbents, is: slowly and cautiously.

The major IXCs and ILECs are continuing their expansion into high-speed data services and wavelength services, so the market for optical network equipment and software is by no means extinguished, but GMPLS and the Optical UNI certainly feel a bit “fluffy” right now – offering certain operational benefits to carriers, but not necessarily providing immediate return on investment. It will take most of 2002 just to prove the efficacy of adopting an Optical UNI.

Proving the value of GMPLS is a much greater undertaking, and though many carriers express real interest in “playing” with the technology, making real commitments takes time. It’s important to acknowledge here that GMPLS proposes an entirely new way of managing network resources and provisioning, so it cannot be adopted as easily as putting a few new boxes into a limited metro rollout.

It will come down to economics. If early adopters are able to prove that the new control plane provides them with a competitive edge, while improving their bottom line, large carriers will take notice.

Like MPLS, it may be used initially behind the scenes, as a traffic management solution in the core, then find its way into broader deployment, tied directly to services and therefore tied directly to new revenue streams.

What will be interesting to watch is just where it is adopted. Some carriers may try it within isolated network areas – say, a particular metro to provide managed wavelength services – while others may use it to stitch their core optical switches together in an effort to improve resource management or migrate to mesh-based networking. Both will likely happen simultaneously, as the stage has been set by the growth of the optical switching market. As carriers grow comfortable with the performance of these switches in the network as optical-speed digital crossconnects, they will then be more willing to try out software features, taking the first steps towards transforming their networks, rather than simply optimizing them.

Next Page: Who's Doing What

Because GMPLS and the Optical UNI are signaling and interface standards, it isn’t easy to talk about which vendors are building this type of equipment, since all networking vendors can easily adopt the spec once it’s finished. At Supercomm last year there were more than 20 vendors demonstrating the OIF’s optical UNI, and lots of vendors are working on GMPLS, some with more fervor than others. Of course, since it is a collaborative effort, no one of them can claim to have invented it.

What we have decided to do here is just pick ten vendors that are the most aggressive, or at least the most vocal, in their development of GMPLS-like features for their systems or software platforms. This list includes most of the optical switch vendors out there, some software vendors, and a few router companies.

These profiles, therefore, aren’t meant as our “picks” for GMPLS winners or leaders, but rather ten illustrations of how vendors are already advancing the development of GMPLS by making it a fundamental part of their product designs.

Accelight Networks Inc.

Accelight is a vendor taking GMPLS very seriously, and has thought harder about how to create new services based on the capabilities GMPLS offers than most other vendors interviewed for this report.

Accelight has over time been modifying and evolving its positioning, from what at first appeared to be an ultimate “God box” for the core, with everything from optical crossconnect to IP routing capabilities, to a more service-focused platform for the edge of the core. Today, the system is meant not solely to displace a number of network elements at large carrier POPs, but instead to provide a platform from which optical services are provisioned that exploit a unified control plane and what Accelight terms "photonic burst" switching.

Utilizing a lithium niobate optical switch fabric that switches at nanosecond speeds, Accelight has built an optical switching system that can forward packets, switch TDM circuits, and switch lambdas across a common fabric. This is Accelight's key distinction: With a unique fabric scheduling algorithm and an optical fabric that can switch as fast as a group of packets, it proposes an answer to the question of how to truly unify the data and transport networks in a single network element.

This creates an important distinction from many other proposed God boxes. This is not cramming multiple switch fabrics and interfaces into a single rack, effectively trying to collapse four or five network elements onto individual cards and unify them by plugging them into a common backplane. It's devising a system that can take in traffic from its optical ports and make forwarding decisions at any layer of the network through the use of GMPLS. It can be said to be the only example today of a true “GMPLS box,” because every proposed benefit of GMPLS can be realized within this single system, rather than across many systems in a network.

This certainly is a heady undertaking, and no such projects are without significant risk these days. The point Accelight is trying to make here is a service one. It says that its system will enable carriers to improve core bandwidth utilization and provide a much broader suite of core network services to end users. For example, if MPLS paths can now be directly associated with unique optical layer connections (TDM circuits, lambdas, or even fibers), they can be priced differently and “bundled” differently within a carrier network, thus improving network optimization and potentially improving carrier economics.

In its GMPLS developments, Accelight is decidedly in the “peer model” camp, believing that for GMPLS to create rich service platforms, network elements need to support a full complement of required routing protocols, namely OSPF and ISIS.

To that end, the company, based in Ottawa, has made some key hires in the Pittsburgh area. Key IP protocol engineers from Marconi and Carnegie Mellon work today in a separate facility there, while transport engineers and overall system software and hardware designers work in Ottawa.

More about Accelight:

  • AcceLight Scores Major Change
    Terabit MPLS Switches in the Works
    Accelight Networks Inc.



Calient Networks Inc.

Calient Networks always intended optical signaling and routing to play a key role in its optical switch design, and it was quite prominent in developing the specifications and working drafts for GMPLS.

Since 1999, Calient's lead engineers have played primary authorship roles within the IETF network working committee developing the GMPLS standards suite. John Drake, chief network architect; Jonathan Lang, senior systems engineer; and Ayan Banerjee, lead network systems engineer, have co-authored many protocol extensions to guide the formation of a new link management protocol, as well as adaptations to the OSPF/ISIS routing protocols and to the RSVP/LDP signaling protocols.

Calient Networks has also collaborated with other IETF working committee members from Ciena, Cisco, Juniper, Level 3, Movaz, Redback, and Tellium to produce working documents and co-author papers in leading journals.

Since Q4 2000, Calient’s photonic switching systems have moved from long-term lab testing in the Internet Exchange Centers of Equinix Inc. (Nasdaq: EQIX), and facilities of Juniper Networks and Tellabs, to interoperability testing in Marconi’s European trial networks, Hitachi’s test facilities, and other carrier sites. In these trials, Calient has already begun to implement early features of its GMPLS protocol stack, which the company developed in 2000-2001 with its signaling protocol software partner, London-based Data Connection Ltd.

The software features of this stack and Calient’s DiamondWave architecture support both peer-to-peer and overlay network topologies, in the gigabit router/IP domain as well as the photonic switching/DWDM transport domain. These include:

  • Drag-and-drop provisioning of wavelength services

  • Multi-service Sonet and Gigabit Ethernet delivery

  • Dynamic call setup of lambda paths over a fully photonic network, initiated by either gigabit core routers or Calient’s DiamondWave switches

  • Basic Link Management Protocol procedures between photonic switches

  • OSPF-TE routing between the photonic switch and core router OS, e.g., Juniper JunOS

  • Photonic switch interoperability between gigabit/terabit routers and metro network grooming platforms



Under controlled release, Calient made its GMPLS software stack more broadly available on the DiamondWave photonic switching system during Q3 2001. This step is anticipated to be the first field implementation of a GMPLS protocol stack on an all-photonic system.

Calient is preparing to co-launch GMPLS Test Bed activity in Q4 2001, involving a range of DWDM, gigabit router, and photonic switching players. The goal of this effort is to prove that wavelength routing and connection management can be achieved across multiple network elements from multiple vendors. This effort will be done in concert with the IETF committee work and constitutes one of the first cross-industry and cross-element collaborative standards implementation efforts. Calient will also support industry adoption of its own GMPLS protocol stack by enabling its code to be licensed by other element vendors over the course of the next year. Long-term, Calient expects extensive management applications to be commercially available, to deliver SLA monitoring, connection trending, and analysis, planning, and provisioning tools.

More about Calient:

  • Calient Captures a Contract
    Calient Networks
    Calient Achieves Top 10 Status
    Calient Networks, Inc



Ciena Corp.

Ciena was one of the early entrants with an optical switching system, so it follows that they would be quick to market with optical layer signaling and resource management solutions.

Ciena’s signaling scheme for its CoreDirector switches is called OSRP (Optical Signaling and Routing Protocol), a pre-GMPLS implementation supporting Intelligent Optical Network signaling requirements. OSRP shares some basic principles with GMPLS but is currently more complete and stable for use within optical networks, particularly for critical functions like crankback, protection, and restoration.

This optical signaling platform underpins Ciena’s Lightworks OS, which, like other optical switch operating systems, offers automated end-to-end point-and-click provisioning; multiple protection and restoration mechanisms, including Line, VLSR ring, and FastMesh; automated grooming; automatic topology discovery; flexible routing mechanisms, either automated or user-defined; and the capability to provision circuits and protection groups.

Ciena took a certain amount of heat from competitors early on for its OSRP, because it is based on ATM’s PNNI signaling scheme. However, it has arguably been a useful tool in the early stages of the market because most carriers are quite familiar with PNNI.

One caution, though. Nine times out of ten, when you mention PNNI to a carrier engineer they begin to groan and roll their eyes. It is quite possible that OSRP will represent an interim step for Ciena, which will ultimately adopt a fully standardized GMPLS platform once it is adopted as a standard by the ITU – or the IETF drafts become de facto standards in the North American networking community. To be sure, Ciena has been quite active in its optical UNI efforts and has the benefit of market leadership in optical switching systems worldwide.

For standards development, Ciena was one of the founding members of the OIF, along with Cisco, and has been quite active in developing the Optical UNI and associated signaling specifications, including neighbor discovery, routing, and signaling. Ciena also coordinates efforts between the OIF and IETF.

Most recently Ciena has led in the definition of transparency services and in inter-domain routing principles and requirements. Ciena has been actively contributing to the GMPLS effort and is a co-author of all of the main IETF draft documents, including the OSPF and ISIS routing extensions, RSVP and LDP signaling extensions, and LMP.

Cisco Systems Inc.

Cisco is an interesting company to watch in the GMPLS world. It dominates the IP router market worldwide and has been quite active in standards bodies in its promotion of MPLS.

GMPLS is another thing, however. Cisco has not announced the availability of an optical switching system since it abandoned the switch that was meant to come from its acquisition of Monterey Networks (see Cisco Kills Monterey Router). That switch did have an optical control plane, dubbed WaRP, or wavelength routing protocol, yet Cisco has thus far been a bit coy about what control plane any new optical switching products will use. It has made reference to what it calls the “Cisco Unified Control Plane (UCP),” an optical control plane meant to be an integral part of the Cisco IP+Optical strategy. In its most basic form, the UCP is meant to support communication between Cisco routers and its transport products, such as the ONS 15454. This is only part of Cisco’s four-phase UCP plan, however. These phases include:

Phase 1: Single-Domain End-to-End OTN Provisioning

This phase is under way today and is meant to support “point-and-click” provisioning of optical circuits within a single optical transport network. For example, today the Cisco ONS 15454 and 15327 Metro Optical Transport systems use OSPF to automatically discover nodes and create a network topology. The benefits of this first phase include per-domain, point-and-click, end-to-end OTN provisioning and automated creation of complete circuit inventory records. Cisco has already demonstrated its capabilities in this phase by participating in the OIF’s Optical UNI demonstration at Supercomm 2001. To its credit, Cisco is the only vendor with both router and transport products in its lineup that can be deployed with a unified control plane.

Phase 2: Signaling-Based Provisioning

This phase exploits the adoption of the OIF UNI specification, which is meant to accelerate and simplify the provisioning of services between networks. According to Cisco, using the OIF standards-based UNI 1.0 mechanisms, a network element will be able to generate an IP-based provisioning request to both the IP and optical transport network elements, either from the platform itself or via a single domain manager. This will enable automated and comprehensive inventory management and support the first step in breaking the boundaries between the data network and the transport network.

Phase 3: Multidomain End-to-End Provisioning

This step in the evolution of UCP is to combine end-to-end provisioning with OIF UNI signaling. The goal is to enable the provisioning of an end-to-end optical service that spans multiple transport domains (for example, metro-core-metro) within a single service-provider network. This phase, therefore, sets out to combine the OIF UNI signaling and IP exterior gateway protocols with optical extensions. This gets Cisco to the more sophisticated benefits of GMPLS, such as enabling true optical VPNs, automating provisioning across diverse network domains, and improving OAM&P.

Phase 4: Integrated IP and OTN Intelligence

This final phase is meant to simply take advantage of a completed GMPLS standard and provide the full suite of GMPLS features and network level provisioning, management, and service creation capabilities. This scenario enables new models of restoration, protection, and distributed network management, enhanced service velocity, and the potential of new multiprovider services such as bandwidth exchange. This phase is mainly built around the peer model of GMPLS networking, in which routers and IP devices have the ability to make resource management and path selection decisions based on their “awareness” of the transport network.

Cisco already has one customer singing the praises of the unified control plane. Velocita Corp., a carriers’ carrier building a nationwide optical network with Cisco gear (and Cisco financing), plans to implement the Cisco UCP to support signaling between Velocita’s own long-haul network and other providers’ local fiber networks.

Movaz Networks Inc.

Movaz remains in stealth mode, but its team’s makeup gives away a clear focus on building in GMPLS from day one. Movaz is building a metro optical switching and transport system with the aim of transforming the economics of metro DWDM by making lambdas as affordable as Sonet’s STS1 is today.

To accomplish that, Movaz has redesigned the metro DWDM system around an optical switch (RayStar) that acts as a highly scalable hub in a metro optical network. The switch communicates with access nodes (RayExpress) and client devices through a GMPLS-like control plane.

In fact, this may turn out to be the most “GMPLS-like” control plane of all, since the people building it today include Daniel Awduche and Lou Berger, formerly of UUNet and godfathers of MPLS and GMPLS.

Because the company is still relatively stealthy, more details aren’t yet publicly available. Looking at the recent MPLambdaS and GMPLS drafts, however, makes it clear that the protocol team at Movaz has organized itself around delivering a product platform that takes GMPLS further than the incumbent metro optical vendors have.

More about Movaz:

  • Movaz Moves Up

  • Movaz Makes a Splash



NetPlane Systems Inc.

NetPlane Systems is the protocol software business of Mindspeed Technologies and has begun an aggressive effort to establish itself as a third-party source of optical control plane software for system vendors. NetPlane currently provides MPLS software, supporting both LDP/CR-LDP and RSVP-TE, to over 60 customers. That MPLS platform will now be called Classical, making room for NetPlane’s optical control plane software, LTCS (Label Traffic Control System) Optical.

NetPlane launched its optical signaling software at Supercomm 2001. The new product adds optical extensions to NetPlane's MPLS-LTCS software: support for in-band and out-of-band signaling; setup, routing, and teardown of lightpaths; multiple switching types (TDM, lambda, and fiber port, as well as packet); and the ability to offer QOS and meet the requirements of VPNs. NetPlane’s Classical MPLS software now includes extended label types, the GMPLS hierarchy and link-bundling capabilities, O-UNI Neighbor, and Service Discovery. The software currently uses RSVP-TE signaling and will support CR-LDP in future releases.
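The heart of what such software has to signal is captured by the Generalized Label Request defined in the GMPLS signaling drafts: every path setup names an encoding type and a switching type. Here's a small sketch of that structure (the codepoint values follow the IETF drafts; the surrounding classes are our illustration, not NetPlane's API):

```python
# Sketch of a GMPLS Generalized Label Request. Codepoints follow the
# GMPLS signaling drafts; the classes themselves are illustrative.

from dataclasses import dataclass
from enum import IntEnum

class SwitchingType(IntEnum):
    PSC_1 = 1   # packet switching
    L2SC = 51   # layer 2 (e.g. ATM cell) switching
    TDM = 100   # SONET/SDH time-division switching
    LSC = 150   # lambda switching
    FSC = 200   # whole-fiber (port) switching

class EncodingType(IntEnum):
    PACKET = 1
    SDH_SONET = 5
    LAMBDA = 8
    FIBER = 9

@dataclass
class GeneralizedLabelRequest:
    encoding: EncodingType
    switching: SwitchingType
    payload_id: int  # G-PID: identifies the client payload being carried

# A lambda-switched lightpath request, as an optical signaling stack
# might encode it into an RSVP-TE Path message:
req = GeneralizedLabelRequest(EncodingType.LAMBDA, SwitchingType.LSC,
                              payload_id=0)
```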

LTCS-Optical supports the signaling requirements of photonic networks, including access to those networks by traditional IP equipment using LTCS-Classical. The initial software release supports GMPLS; future releases will add the OIF Optical UNI, as well as integrated routing and signaling. Accelight and VIPSwitch were the first announced customers for the software. Pricing for LTCS-Optical begins at $240,000 for new systems, with upgrade pricing available for existing LTCS-Classical customers.

More about NetPlane:

  • Terabit MPLS Switches in the Works

  • NetPlane Opens Up IP Routing



Nortel Networks Corp.

Nortel has made quite a fuss about MPLS this past year, claiming it will be an integral part of its overall equipment strategy across five product categories: Optical Switch, Optical Ethernet, Multiservice Switch, IP Service Switch, and MPLS Router. This represents a major initiative within Nortel and includes plans to add optical control plane capabilities to its OPTera product line.

Nortel is a bit different from many of the other vendors mentioned here, because it has focused its efforts on the ANSI/ITU rather than the IETF. In this sense, Nortel is more an endorser of G.astn than of GMPLS, though the two will likely become synonymous in the future. Nortel’s optical control plane strategy has three parts today:

OPTera Smart OS

This is the fundamental operating system for OPTera optical systems, both DWDM and Sonet. OPTera Smart OS provides a wide set of optical monitoring and control engines across the OPTera Long Haul Optical Line Systems, including wave ID, tunable sources, dynamic impairment management (PMD and chromatic dispersion), embedded optical spectrum analyzers, and receiver-Q monitoring. OPTera Smart OS uses GMPLS signaling for lightpath connections, while OSPF, modified to work within an optical framework, is used for routing (a sketch of the sort of link state this implies follows the list below). The platform is meant to provide the full feature set of GMPLS and G.astn, including:

  • Auto-discovery and network awareness edge-to-edge of lines, ports, and connections

  • Embedded line intelligence for automated provisioning, monitoring, and performance optimization

  • Flexible CoS assignment for any connection, any port, any service

  • Non-associated signaling for scalability and transparency

  • Dynamic lightpath activation using optical routing and signaling (based on GMPLS)

  • Flexible restoration and protection over an arbitrary mesh topology

  • Open optical control architecture (based on G.astn) for multivendor and multicarrier networks

  • O-VPN support with single-ended provisioning
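As promised above, here is a sketch of the sort of per-link state an optically extended OSPF floods to support auto-discovery and lightpath routing. The attribute names are drawn from the GMPLS routing extensions (switching capability, protection, shared-risk link groups), but the structure is illustrative, not Nortel's implementation:

```python
# Illustrative per-link state for an optically extended OSPF. Attribute
# names follow GMPLS routing-extension concepts; values are invented.

from dataclasses import dataclass, field

@dataclass
class OpticalTeLink:
    link_id: str
    switching_capability: str  # e.g. "LSC" for a lambda-switched link
    total_lambdas: int
    available_lambdas: int
    protection: str            # e.g. "dedicated-1+1", "shared", "none"
    srlg: list[int] = field(default_factory=list)  # shared-risk link groups

def usable(link: OpticalTeLink, need_protected: bool) -> bool:
    """Constraint check a routing engine might apply when picking links."""
    if link.available_lambdas == 0:
        return False
    return not need_protected or link.protection != "none"

link = OpticalTeLink("optera-7/1", "LSC", 32, 12, "shared", srlg=[4001])
print(usable(link, need_protected=True))  # True
```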

OPTera Smart Agent

This represents the Optical UNI for Nortel, providing Layer 2 and higher client devices with the ability to signal the core optical network and make connections across an optical network with other enabled clients at the network edge. Features include: authenticated auto-discovery of client peers; policy-based bandwidth management; usage monitoring and SLA verification; cross-layer network management; and bandwidth scheduling.

Smart Management System for OPTera

The third software component fully integrates front-office and back-office network management platforms and addresses the increased network complexity that accompanies the move from static optical transport networks to agile, switched optical networks, supporting the delivery of optical services. This is basically Nortel’s Preside NMS.

Polaris Networks

Polaris is an 18-month-old startup building a metro switched optical transport system. With a heritage from StrataCom, the company is building a large switch that is lambda-, TDM-, packet-, and cell-capable. It is yet another box facing the charge of “God box,” but, again, one that arguably knows its function, and much of that function relies on GMPLS.

The Polaris optical network architecture, dubbed iMON (intelligent Multiservice Optical Network), is first and foremost an edge grooming switch, with very competitive space, power, and density performance versus other TDM-based switches. It has the additional flexibility to accommodate packet and cell traffic over a common fabric and to allow client devices to signal for the type of connectivity they require through the switch.

Polaris supports all four GMPLS functional elements (TDM, packet, cell, and lambda), a necessity, since the system is multiservice aware. Ideally, a carrier using Polaris’s system can effectively open its network to allow control and provisioning of any type of service.

In addition to the four categories, Polaris has also implemented further proprietary extensions to include ATM traffic management and path/flow trace capabilities. The ATM extension will allow service providers with cell-based backbones to migrate their networks to GMPLS.

Support for both the overlay model (via the OIF UNI) and the peer model gives service providers flexibility and choice in how they introduce GMPLS into their networks, while allowing them to leverage their existing bases.

Polaris is an interesting company coming to market at an interesting time. It has multilayer transport capabilities for the core and in many respects qualifies as a member of our Optical Taxonomy’s “Switched Optical Transport” category, yet iMON can also behave as a DCS or a CoreDirector-like optical switch.

Those are certainly challenging markets to pursue right now, but in many respects the platform Polaris has developed is impressively configurable for different applications. Polaris has also recognized the value of GMPLS to a metro core node, where a significant amount of interconnection among subnetworks occurs. That presents a clear opportunity for this product class to act as a “network arbiter,” providing on-ramps to various networks through the use of an optical control plane.

More about Polaris:

  • Polaris Builds a God Box



QOptics Inc.

QOptics is a startup devoted entirely to developing an optical control plane software platform that enables logical, or Layer 1, provisioning on optical networks with an integrated signaling solution. QOptics intends to address the market by delivering software premised on multivendor interoperability, separating it from the vendor-specific implementations of GMPLS seen thus far. Since QOptics’ founder also founded Arbinet, a bandwidth exchange, it seems safe to assume that QOptics’ software will be designed with the requirements of bandwidth trading in mind, as well as interconnection of multiple carriers via optical circuits.

These capabilities will be embedded in its Intelligent Provisioning Node, a distributed, software-controlled solution residing on high-performance servers (e.g., Sun UltraSPARC III machines). Its IP-centric control plane infrastructure exists as a network element alongside existing optical network elements (e.g., core OEO switches, DWDM transport nodes, routers) in a client-server and/or peer-to-peer relationship.

In future implementations, that infrastructure will be integrated into other network elements and designed to work with next-generation, all-optical networks of all kinds, as well as legacy systems such as Sonet, ATM, and optical DACS systems. QOptics’ distributed data processing and provisioning solution is being designed to support real-time network inventory, route mapping, and price and SLA rationalization, plus sub-100-millisecond provisioning of virtual optical circuits across networks.

According to QOptics, the Optical Control Plane is middleware resident on a network of interconnected Intelligent Provisioning Nodes (IPNs), each a high-performance hardware platform. The Optical Control Plane exists as a network element integrated with the switch or router and is based on a multithreaded, distributed architecture that can accommodate end-to-end Layer 1 provisioning in real time, as well as facilitate network signaling. In the future, the Optical Control Plane may be functionally and physically integrated with the network node itself.

Initially, QOptics will focus its efforts on developing the Optical Control Plane and Intelligent Provisioning Node for deployment in the “core” optical network. The company is building a test bed to emulate a multivendor, multiprotocol environment, allowing cross-network signaling, provisioning, and transport on a next-gen core network. Beyond its control plane ambitions, the company also plans to decouple the provisioning and inventory management software that is proprietary to each equipment vendor and service provider and standardize it across all “meet me” points.

With a target market of bandwidth traders, colocation providers, enterprise customers, and ASPs, QOptics certainly faces a challenging environment in the coming year. But if service providers find that the optical control plane solutions delivered by equipment vendors limit their service capabilities, QOptics will have an opportunity to address those requirements separately from any single infrastructure contract.

Sycamore Networks Inc.

Sycamore Networks, drawing on its founders’ heritage at Cascade Communications, made a point of focusing on software as the key differentiator from the beginning. Each product launch was coupled with a “soft optics” story that leveraged the concept of intelligent optical networking, which in many ways reflects the goals of the movement towards standardizing an optical control plane.

Sycamore’s software platform includes two key products: SILVX, its network and element management system; and Broadleaf, a network operating system for the SN16000 optical switching system. Broadleaf is the more relevant of the two to this profile, since it represents the optical control plane, with MPLS-based signaling and OSPF-based routing.

Broadleaf is designed to operate as “pre-GMPLS” for the Sycamore optical switches and is meant to deliver all the benefits of GMPLS, including point-and-click provisioning, mesh-based restoration, optical virtual private networks, customer network management, network awareness, and support for heterogeneous networks. Broadleaf will also incorporate the OIF Optical UNI. Though Sycamore at first pushed for its own implementation of a UNI (via the ODSI coalition) to be adopted as a standard, it is now endorsing and implementing the OIF UNI 1.0 as part of Broadleaf.

An interesting part of Sycamore’s Broadleaf strategy was its network emulator, which ran the UNIX-based software on PCs to demonstrate for a carrier how the software platform would operate in a network of many switches. The emulator was instrumental in landing the 360networks contract, though in the end the financial instability of 360networks turned that important win into a bust. As the market goes through its “optimization” phase in the coming year, it remains to be seen how important a factor Sycamore’s emulator and its optical signaling and routing features will be in landing customers, particularly in North America, where the big buyers have suddenly gone very conservative.

Sycamore has in many ways bet the farm on its vision of GMPLS, claiming from the beginning that software (“soft optics,” to use its phrase) would drive the next phase of optical networks’ evolution. The market, however, has not cooperated with that vision as quickly as Sycamore might like, preferring for the time being optical switching systems that extend the function and capacity of broadband digital crossconnects, and transport systems that are simply faster and cheaper. But as GMPLS develops into an international standard, vendors with a history in optical signaling and routing and deep software talent in-house will in many ways have a leg up on those that must adopt GMPLS from scratch.

More about Sycamore's Broadleaf:

  • Sycamore Demos Software Scalability



Tellium Inc.

Tellium is the original optical switching system company, tracing its roots back to 1997, when it was launched with a package of intellectual property and 13 employees from the Bellcore Optical Networking team that had been part of the LambdaNet, ONTC (Optical Networking Technology Committee), and MONET (Multiwavelength Optical Networking Consortium) projects at Bellcore. From that team Tellium inherited both optical crossconnect hardware design expertise and network management and signaling software. Krishna Bala, Tellium’s CTO, has been a prominent voice in the development of switched optical network standards and has driven Tellium’s control plane efforts since the company’s inception.

Though it was beaten to market by Ciena/Lightera, Tellium has been developing optical switching hardware the longest. Once the company decided to focus exclusively on optical switching and shed its DWDM systems, it set to work on its optical control plane.

Like Sycamore’s, Tellium’s control plane was based on MPLS and OSPF before those became the prime candidates for the GMPLS standard. Tellium’s signaling platform, StarNet OS, is another “pre-GMPLS” platform that supports many of the features and benefits planned for the GMPLS standard. StarNet offers various protection methods, known as Classes of Protection Service: dedicated, shared, unprotected, and pre-emptible. StarNet OS uses distributed signaling between Aurora Optical Switches for node discovery and link status, developing a real-time view of the network topology that is updated continuously without operator assistance.
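The four classes lend themselves to a simple illustration. The class names below come from Tellium's published list; the request structure and preemption check are our assumptions, sketched to show how a restoration engine might use them:

```python
# Sketch of per-lightpath protection classes. Class names come from
# Tellium's published list; the structures and preemption rule are
# assumptions made for illustration, not StarNet's implementation.

from dataclasses import dataclass
from enum import Enum

class ProtectionClass(Enum):
    DEDICATED = "dedicated"      # dedicated backup path, e.g. 1+1
    SHARED = "shared"            # backup capacity shared across lightpaths
    UNPROTECTED = "unprotected"  # no backup; restored best-effort, if at all
    PREEMPTIBLE = "pre-emptible" # may be torn down to restore a higher class

@dataclass
class LightpathRequest:
    src: str
    dst: str
    protection: ProtectionClass

def can_preempt(victim: LightpathRequest) -> bool:
    """Only pre-emptible lightpaths may be bumped during restoration."""
    return victim.protection is ProtectionClass.PREEMPTIBLE

req = LightpathRequest("aurora-nyc", "aurora-chi", ProtectionClass.SHARED)
print(can_preempt(req))  # False
```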

Tellium in many ways seems in a situation similar to other vendors in the optical switch space today. Optical signaling software is important to demonstrate, but carriers are not yet deploying mesh-based optical networks and, therefore, are making decisions based on features other than optical lightpath provisioning and management.

Tellium lacks the STS1 grooming of Ciena’s CoreDirector, which to date has been the key enabler of Ciena’s dominance in this market (along with, of course, being first to market with a product that worked).

Tellium’s choice to evolve its platform in the core has led it to add all-optical switching, rather than sub-OC48 grooming. This choice will leverage its optical control plane but, again, is predicated on market acceptance of a mesh-based core optical network. While this evolution in many ways appears inevitable, timing is everything. If that transition does not begin until well into 2003, then optical switch vendors such as Tellium will have to fight with the incumbents and a few startups for limited market share.
