GMPLS Showcased in Demo

Participants in a multivendor demo at the McLean, Va., offices of Isocore say they've proven that Generalized Multiprotocol Label Switching (GMPLS) can be used to streamline the management and provisioning of multivendor IP/MPLS applications.

This isn't the first GMPLS demo by a long shot. But according to Isocore's Bijan Jabbari, it's a particularly ambitious one. Instead of running GMPLS in isolation, the test was aimed at showing that IP/MPLS applications, including Layer 2 and Layer 3 VPNs and videoconferencing via VPLS, could be managed via GMPLS (see Isocore Validates IP-Optical). Also included were tests of GMPLS interoperability using multivendor implementations of the UNI (user network interface) defined by the Optical Internetworking Forum (OIF).

The live event followed the MPLS 2003 International Conference in Washington, D.C., and the vendors involved were only just starting to wind down late this afternoon. "It's difficult to stop them," quips Jabbari. [Ed. note: the zany madcaps!]

At least one carrier expressed interest in the proceedings earlier this week. "We already have a lot of transport gear, so we're still in the exploratory phase when it comes to GMPLS," said Christian Noll, research director of science and technology with BellSouth Corp. (NYSE: BLS). "But it's good to see multivendor interoperability testing going on. If we were to go with GMPLS, we'd stay away from any proprietary implementations."

The testbed included an "optical core" of routers and switches that generated optical bandwidth via GMPLS from Cisco Systems Inc. (Nasdaq: CSCO), Fujitsu Ltd. (OTC: FJTSY), Juniper Networks Inc. (Nasdaq: JNPR), Movaz Networks Inc., Sycamore Networks Inc. (Nasdaq: SCMR), and Tellabs Inc. (Nasdaq: TLAB; Frankfurt: BTLA).

Avici Systems Inc. (Nasdaq: AVCI; Frankfurt: BVC7) and Tellabs also generated optical traffic using the OIF UNI.

The optical control plane for the IP/MPLS applications was included via gear from Alcatel SA (NYSE: ALA; Paris: CGEP:PA), Extreme Networks Inc. (Nasdaq: EXTR), Foundry Networks Inc. (Nasdaq: FDRY), Laurel Networks Inc., and Redback Networks Inc. (Nasdaq: RBAK) as well as from Cisco, Juniper, and Tellabs.

Also included in the test was software from Furukawa Electric Co. Ltd., NEC Corp. (Nasdaq: NIPNY; Tokyo: 6701) and Japanese carrier NTT Communications Corp.

Test gear used in the demo included the InterWatch platform from Navtel, which, according to Jabbari, was the only tester capable of emulating both GMPLS and MPLS traffic. Testers from Ixia (Nasdaq: XXIA) and Spirent Communications were used for detailed MPLS validation and traffic generation, he says, but only Navtel could offer GMPLS, too.

So what's next? Jabbari is clear there's more testing in store, probably starting in January. "It's still early for carriers to deploy GMPLS," he says. "But to be really ready for when that time comes, we need to be testing now. We've demonstrated that more work needs to be done."

— Mary Jander and Marguerite Reardon, Senior Editors, Light Reading

mvissers 12/4/2012 | 11:17:43 PM
re: GMPLS Showcased in Demo An overview of some of the elements involved in automatically switched optical networks (ASON/GMPLS).

The transport network is stated to have three planes: data/user/transport plane, control plane and management plane. The management plane may be further decomposed into network management and services management.

Network management is concerned with FCAPS: Fault management, Configuration management, Accounting management, Performance management and Security management. Configuration management includes connection management, and it is this FCAPS element that is partly delegated to the control plane.

In the past 10 years operators have tried to interconnect their (vendor-specific) network management systems and achieve seamless interworking between them for all five FCAPS elements. In general this hasn't come through yet (though some operators do already have it), and it is understood that interworking is most desired/required at the connection management level. A control plane will help to realise this, as it has done in other transport layer cases (e.g. voice, IP, ATM).

So far the existing control planes deal with a single layer. The control plane (ASON, GMPLS) for the transport network may have to deal with at least 4 (switching) layers (see also http://www.maartenvissers.nl/a...
1) SDH LOVC/SONET VT (digital 1M..100M)
2) SDH HOVC/SONET STS (digital 50M..1G [10G])
3) OTN ODUk (digital 2.5G..40G [160G])
4) OTN OCh (optical 10G, 40G)
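Roughly, the bandwidth ranges above determine which layer(s) can serve a given connection request. A minimal sketch, with layer names and ranges taken from the list above (the function name and data layout are hypothetical, not any real ASON/GMPLS interface):

```python
# Sketch of the four transport switching layers and their bandwidth ranges.
# Layer names and rates follow the list above; the code itself is purely
# illustrative, not a real control-plane implementation.

LAYERS = [
    # (name, min_bw_gbps, max_bw_gbps)
    ("SDH LOVC / SONET VT",  0.001, 0.1),
    ("SDH HOVC / SONET STS", 0.05,  1.0),
    ("OTN ODUk",             2.5,   40.0),
    ("OTN OCh",              10.0,  40.0),
]

def serving_layers(bw_gbps):
    """Return the layers whose bandwidth range covers a requested rate."""
    return [name for name, lo, hi in LAYERS if lo <= bw_gbps <= hi]

print(serving_layers(0.155))  # an STM-1/OC-3 sized request
print(serving_layers(10.0))   # a 10G request, servable by ODUk or OCh
```

Note that the ranges overlap, which is part of why multi-layer coordination (which layer should serve a given call) becomes an issue.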

The addition of a control plane to any of the above layers doesn't change the data/user/transport plane of those layers. It only adds a second (and important) means to manage the connections within such a layer: e.g. network-edge-to-network-edge connection setup (unprotected or protected), recovery of connections after link failure by means of restoration, Layer 1 VPN service, and modifying connection bandwidth (virtual concatenation).

The control plane activity is also being used to push for a different ordering means (via the UNI) for connections, suggesting dial-up capability for e.g. 10G connections. It may take some time - for reasons mentioned in other posts - before this happens at those high speeds. For lower speeds this is either already available in some networks (via web-based interfaces connected to service management systems (SMS)) or may be added in the near/medium term. Note: UNIs may be connected to SMSs where there is not yet a control plane.

Transport networks have a large number of nodes, often much larger than IP networks (e.g. 30..100x). It may thus take some time before each of the 4 layers is converted into a switched layer. Currently most ASON/GMPLS development/deployment activity is concentrated on intra-layer connection management in the HOVC/STS layer. Next steps are interworking of network management controlled and control plane controlled subnetworks, and multi-layer interworking.

An important concept in the transport plane is layer independence. A call/connection request in layer X will be served within the infrastructure/topology constraints (number of nodes and links, bandwidth of links) of layer X. The call may thus fail (e.g. if not enough bandwidth is available on a link).

A layer X topology manager is responsible for the layer's infrastructure. If, e.g., additional link bandwidth or additional links between nodes in layer X are needed, the topology manager of X should order this from its server layer(s) (note - this is decoupled from connection setup in layer X). I.e. it places a call on server layer Y to establish a connection in layer Y between two Y/X ports in layer X nodes. Layer X is thus another type of customer for layer Y.
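The decoupling described here - a call admitted only against layer X's own link capacity, with the topology manager separately ordering new capacity from server layer Y - can be sketched roughly as follows (all class and method names are invented for illustration; this is not any real ASON/GMPLS API):

```python
# Rough sketch of layer independence: a call is admitted only against the
# layer's own link capacity; when capacity runs out, the topology manager
# (not the call itself) orders a new link from the server layer.
# All names here are invented for illustration.

class Layer:
    def __init__(self, name, server=None):
        self.name = name
        self.server = server          # server layer (e.g. Y under X)
        self.links = {}               # (a, b) -> free bandwidth (Gbit/s)

    def setup_call(self, a, b, bw):
        """Admit a connection only if a link with enough free bandwidth exists."""
        free = self.links.get((a, b), 0)
        if free < bw:
            return False              # call fails within layer constraints
        self.links[(a, b)] = free - bw
        return True

class TopologyManager:
    """Orders extra layer-X links from the server layer, decoupled from calls."""
    def __init__(self, layer):
        self.layer = layer

    def add_link(self, a, b, bw):
        # Place a call on server layer Y between the Y/X ports at a and b.
        if self.layer.server and self.layer.server.setup_call(a, b, bw):
            self.layer.links[(a, b)] = self.layer.links.get((a, b), 0) + bw
            return True
        return False

och = Layer("OCh")
och.links[("A", "B")] = 40            # 40G of OCh capacity between A and B
oduk = Layer("ODUk", server=och)
tm = TopologyManager(oduk)

print(oduk.setup_call("A", "B", 10))  # False: ODUk has no A-B link yet
print(tm.add_link("A", "B", 10))      # True: a 10G link ordered from OCh
print(oduk.setup_call("A", "B", 10))  # True: the call now succeeds
```

The point of the sketch is the separation of roles: the failed call does not itself trigger server-layer provisioning; that is the topology manager's job.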

A layer X restoration manager may also request additional layer X link bandwidth or links to be setup (temporarily) to perform its task of restoring a set of layer X connections.

To have the layer X topology manager and layer X restoration manager perform their tasks, these managers should have knowledge of layer X's nodes (fabrics) and the available ports on those fabrics. This information should be automatically discovered or configured via network management. When two Y/X ports are connected by means of a layer Y connection, a layer X link is created, and its characteristics should either be automatically discovered/verified or configured. The addition of control planes is pushing this discovery as well.

If a network consisted of 10 OCh nodes, 100 ODUk nodes, 500 HOVC/STS nodes and 2500 LOVC/VT nodes, would it then be reasonable to have LOVC connection setup messages popping up in the OCh, ODUk and HOVC nodes (just to be forwarded again)? As there are many more LOVC connections than OCh connections, this may put an unreasonable load on the OCh control plane processors. Direct LOVC control plane processor communication seems the best choice here for LOVC connection management (i.e. overlay).

This doesn't have to imply that overlay is also the best alternative for restoration. E.g. when a fiber breaks, all four layers will see one or more of their links fail. Passing this signal-fail information from equipment to equipment (ignoring the layers supported in each piece of equipment) may be the best alternative, as e.g. the total number of messages in the network is minimised... i.e. peering. Note that control plane messages are typically routed independently of the transport plane signals (i.e. out of band). Reaching the "next" equipment (at the end of the fiber) may require a control plane message to hop through multiple data comm nodes.
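The scaling argument can be made concrete with back-of-the-envelope arithmetic using the node counts from the post (the connection churn and hop counts below are invented purely to illustrate the argument):

```python
# Rough illustration of where LOVC signalling load lands under overlay vs
# a flat model. Node counts are from the post; the churn and hop figures
# are hypothetical.

nodes = {"OCh": 10, "ODUk": 100, "HOVC": 500, "LOVC": 2500}

lovc_setups_per_hour = 10_000    # hypothetical LOVC connection churn
avg_hops = 5                     # hypothetical average setup path length

# Overlay: LOVC setup messages are exchanged only between LOVC controllers,
# spreading the load across the 2500 LOVC nodes.
load_per_lovc_controller = lovc_setups_per_hour * avg_hops / nodes["LOVC"]

# Flat: the same messages would also transit (just to be forwarded by)
# the ten OCh controllers, concentrating the load there.
load_per_och_controller = lovc_setups_per_hour * avg_hops / nodes["OCh"]

print(load_per_lovc_controller)  # 20.0 messages/hour per LOVC controller
print(load_per_och_controller)   # 5000.0 messages/hour per OCh controller
```

Whatever the actual figures, the 250:1 ratio of LOVC to OCh nodes is what makes forwarding LOVC signalling through the OCh layer unattractive.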

Mark Seery 12/4/2012 | 11:17:43 PM
re: GMPLS Showcased in Demo Hi Maarten,

>> Direct LOVC control plane processor communications seems to be the best choice here for LOVC connection management (i.e. overlay). This doesn't have to imply that this is also the best alternative for restoration. <<

Which I think speaks to the issue that a control plane uses a user plane just like any other traffic, and that the user plane that is used need not be assumed or fixed architecturally; and that a control plane has its own attributes/requirements independent of the user plane that it happens to use.
Mark Seery 12/4/2012 | 11:17:42 PM
re: GMPLS Showcased in Demo Hi mdwdm,

>> In the end, as most people here said, SONET world could care less about GMPLS since "dynamical provisions" are addressed by the likes of coredirector and MEMS coming up. <<

but coredirector uses a control plane, it just happens to be one that is different from GMPLS - or is that your point?

>> ......fogot it is now a DWDM core out there and they had not clue about the complicity in implementing a DWDM mesh.<<

here do you refer to dealing with dynamic compensation, gain flattening, and other optical issues, or do you refer to something different?
mdwdm 12/4/2012 | 11:17:42 PM
re: GMPLS Showcased in Demo Hi Mark,

"but coredirector uses a control plane, it just happens to be one that is different from GMPLS - or is that your point?"

Right, and what else should I say? I hope we can change the -48V power supply to -50V as well. I've got a good reason for that. No, just kidding.

"here do you refer to dealing with dynamic compensation, gain flattening, and other optical issues, or do you refer to something different?"

Well, partly yes. I won't get into more details about that mesh story. I get a headache every time I do.

gea 12/4/2012 | 11:17:41 PM
re: GMPLS Showcased in Demo "One more striking thing is, the people talking about old mesh experiments(SONET)fogot it is now a DWDM core out there"

This depends on what you mean by a "DWDM core".
Of course, in the mid-90s it became "obvious" to those of us in the field that building out LH using DWDM would save money, even with plenty of fiber. As a result, the DWDM buildout went extremely quickly. However, the VAST majority of DWDM out there is still point-to-point. If 1% of OC-48s/192s were switched in the wavelength domain, I'd be very surprised.

No, despite the fact that LH is DWDM wherever possible, the switching is still not only electronic but, you can bet, down at the STS-1 level or below.

My point was and is that the largest immediate opportunity for GMPLS lies not in multiwavelength optical meshes, nor in dynamic bandwidth provisioning, but in merely cutting the provisioning time from months down to hours/minutes. That market would be huge.

keflin 12/4/2012 | 11:13:51 PM
re: GMPLS Showcased in Demo GMPLS will NOT reduce provisioning time from months to minutes. The bottleneck of provisioning is adding transmission capacity: new fiber builds, DWDM links, racks, shelves, transponders, inside-plant cabling, and the good old 24-hour BER circuit testing.

Furthermore, many SONET vendors' equipment already provides internodal connections, which reduces provisioning time where spare capacity exists.
