Optical/IP

GMPLS Showcased in Demo

Participants in a multivendor demo at the McLean, Va., offices of Isocore say they've proven that Generalized Multiprotocol Label Switching (GMPLS) can be used to streamline the management and provisioning of multivendor IP/MPLS applications.

This isn't the first GMPLS demo by a long shot. But according to Isocore's Bijan Jabbari, it's a particularly ambitious one. Instead of running GMPLS in isolation, the test was aimed at showing that IP/MPLS applications, including Layer 2 and Layer 3 VPNs and videoconferencing via VPLS, could be managed via GMPLS (see Isocore Validates IP-Optical). Also included were tests of GMPLS interoperability using multivendor implementations of the UNI (user network interface) defined by the Optical Internetworking Forum (OIF).

The live event followed the MPLS 2003 International Conference in Washington, D.C., and the vendors involved were only just starting to wind down late this afternoon. "It's difficult to stop them," quips Jabbari. [Ed. note: the zany madcaps!]

At least one carrier expressed interest in the proceedings earlier this week. "We already have a lot of transport gear, so we're still in the exploratory phase when it comes to GMPLS," said Christian Noll, research director of science and technology with BellSouth Corp. (NYSE: BLS). "But it's good to see multivendor interoperability testing going on. If we were to go with GMPLS, we'd stay away from any proprietary implementations."

The testbed included an "optical core" of routers and switches that generated optical bandwidth via GMPLS from Cisco Systems Inc. (Nasdaq: CSCO), Fujitsu Ltd. (OTC: FJTSY), Juniper Networks Inc. (Nasdaq: JNPR), Movaz Networks Inc., Sycamore Networks Inc. (Nasdaq: SCMR), and Tellabs Inc. (Nasdaq: TLAB; Frankfurt: BTLA).

Avici Systems Inc. (Nasdaq: AVCI; Frankfurt: BVC7) and Tellabs also generated optical traffic using the OIF UNI.

The optical control plane for the IP/MPLS applications was included via gear from Alcatel SA (NYSE: ALA; Paris: CGEP:PA), Extreme Networks Inc. (Nasdaq: EXTR), Foundry Networks Inc. (Nasdaq: FDRY), Laurel Networks Inc., and Redback Networks Inc. (Nasdaq: RBAK) as well as from Cisco, Juniper, and Tellabs.

Also included in the test was software from Furukawa Electric Co. Ltd., NEC Corp. (Nasdaq: NIPNY; Tokyo: 6701) and Japanese carrier NTT Communications Corp.

Test gear used in the demo included the InterWatch platform from Navtel, which, according to Jabbari, was the only tester capable of emulating both GMPLS and MPLS traffic. Testers from Ixia (Nasdaq: XXIA) and Spirent Communications were used for detailed MPLS validation and traffic generation, he says, but only Navtel could offer GMPLS, too.

So what's next? Jabbari is clear there's more testing in store, probably starting in January. "It's still early for carriers to deploy GMPLS," he says. "But to be really ready for when that time comes, we need to be testing now. We've demonstrated that more work needs to be done."

— Mary Jander and Marguerite Reardon, Senior Editors, Light Reading

keflin 12/4/2012 | 11:13:51 PM
re: GMPLS Showcased in Demo GMPLS will NOT reduce provisioning time from months to minutes. The bottleneck of provisioning is adding transmission capacity such as new fiber builds, DWDM links, racks, shelves, transponders, inside-plant cabling, and the good old 24-hour BER circuit testing.

Furthermore, many SONET vendors' equipment already provides internodal connections, which reduce provisioning time where capacity already exists.

gea 12/4/2012 | 11:17:41 PM
re: GMPLS Showcased in Demo "One more striking thing is, the people talking about old mesh experiments (SONET) forgot it is now a DWDM core out there"

This depends on what you mean by a "DWDM core".
Of course, in the mid-90s it became "obvious" to those of us in the field that building out LH using DWDM would save money, even with plenty of fiber. As a result, the DWDM buildout went extremely quickly. However, the VAST majority of DWDM out there is still point-to-point. If even 1% of OC-48s/192s were switched in the wavelength domain, I'd be very surprised.

No, despite the fact that LH is DWDM wherever possible, the switching is still not only electronic but, you can bet, down at the STS-1 level or below.

My point was and is that the largest immediate opportunity for GMPLS lies not in multiwavelength optical meshes, nor in dynamic bandwidth provisioning, but in merely cutting the provisioning time from months down to hours/minutes. That market would be huge.

Mark Seery 12/4/2012 | 11:17:42 PM
re: GMPLS Showcased in Demo Hi mdwdm,

>> In the end, as most people here said, SONET world could care less about GMPLS since "dynamical provisions" are addressed by the likes of coredirector and MEMS coming up. <<

but coredirector uses a control plane, it just happens to be one that is different from GMPLS - or is that your point?

>> ......forgot it is now a DWDM core out there and they had no clue about the complexity in implementing a DWDM mesh.<<

here do you refer to dealing with dynamic compensation, gain flattening, and other optical issues, or do you refer to something different?
mdwdm 12/4/2012 | 11:17:42 PM
re: GMPLS Showcased in Demo Hi Mark,

"but coredirector uses a control plane, it just happens to be one that is different from GMPLS - or is that your point?"

Right, and what else should I say? I hope we can change the -48V power supply to -50V as well. I've got a good reason for that. No, just kidding.

"here do you refer to dealing with dynamic compensation, gain flattening, and other optical issues, or do you refer to something different?"

Well, partly yes. I won't get into more details about that mesh story. I get a headache every time I do.


mdwdm 12/4/2012 | 11:17:43 PM
re: GMPLS Showcased in Demo Mark,

I agree with most of what you said but would like to comment on the following:

The whole GMPLS business got started to address the defects in MPLS in dealing with wavelength setup/teardown, one of the hottest issues during the bubble days. It has evolved to deal with every corner case and every protocol on earth (which new standard doesn't?) but these extra issues are becoming more and more irrelevant.

In the end, as most people here said, SONET world could care less about GMPLS since "dynamical provisions" are addressed by the likes of coredirector and MEMS coming up.

G709 remains the only relevant/practical protocol WRT wavelength setup/teardown as far as GMPLS is concerned in the CORE. I cannot imagine any carrier-class CORE player using only an out-of-band (OSC) control plane to deal with issues like express channels, fancy topologies (mesh etc.), and protection and restoration, which got all these people excited about GMPLS in the first place. One more striking thing is, the people talking about old mesh experiments (SONET) forgot it is now a DWDM core out there and they had no clue about the complexity in implementing a DWDM mesh.
mvissers 12/4/2012 | 11:17:43 PM
re: GMPLS Showcased in Demo An overview of some of the elements involved in automatically switched optical networks (ASON/GMPLS).

The transport network is stated to have three planes: data/user/transport plane, control plane and management plane. The management plane may be further decomposed into network management and services management.

Network management is concerned with FCAPS: Fault management, Configuration management, Accounting management, Performance management and Security management. Configuration management includes connection management, and it is this FCAPS element that is partly delegated to the control plane.

In the past 10 years operators have tried to interconnect network management systems and get seamless interworking between those (vendor-specific) network management systems for all five FCAPS elements... it hasn't come through yet in general (some operators do already have it), and it is understood that interworking is most desired/required at the connection management level. A control plane will help to realise this, as it has done in other transport layer cases (e.g. voice, IP, ATM).

So far the existing control planes deal with a single layer. The control plane (ASON, GMPLS) for the transport network may have to deal with at least 4 (switching) layers (see also http://www.maartenvissers.nl/a...
1) SDH LOVC/SONET VT (digital 1M..100M)
2) SDH HOVC/SONET STS (digital 50M..1G [10G])
3) OTN ODUk (digital 2.5G..40G [160G])
4) OTN OCh (optical 10G, 40G)
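Maarten's four switching layers can be captured in a small data model. A hypothetical sketch (layer names and rate bounds are taken from the list above; everything else is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class SwitchingLayer:
    name: str
    kind: str            # "digital" or "optical"
    min_rate_bps: float  # lower bound of the rates switched in this layer
    max_rate_bps: float  # upper bound (bracketed figures from the post)

LAYERS = [
    SwitchingLayer("SDH LOVC / SONET VT",  "digital", 1e6,   100e6),
    SwitchingLayer("SDH HOVC / SONET STS", "digital", 50e6,  10e9),
    SwitchingLayer("OTN ODUk",             "digital", 2.5e9, 160e9),
    SwitchingLayer("OTN OCh",              "optical", 10e9,  40e9),
]

def layers_for_rate(rate_bps):
    """Return the layers whose rate range could carry a requested rate."""
    return [l.name for l in LAYERS
            if l.min_rate_bps <= rate_bps <= l.max_rate_bps]
```

For a 10G request, for example, this returns the three upper layers, which mirrors the overlap Maarten describes between the HOVC/STS, ODUk and OCh layers.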

The addition of a control plane in any of the above layers doesn't change the data/user/transport plane of those layers. It only adds a second (and important) means to manage the connections within such layer. E.g. network edge to network edge connection setup (unprotected or protected), recovery of connections after link failure by means of restoration, Layer 1 VPN service, modifying connection bandwidth (virtual concatenation).

The control plane activity is also used to push for a different ordering means (via UNI) for connections, suggesting dial up capability of e.g. 10G connections. It may take some time - for reasons mentioned in other posts - before this will be happening at those high speeds. For lower speeds this is either already available in some networks (via web based interfaces connected to service management systems (SMS)) or may be added in the near/medium term future. Note: UNIs may be connected to SMSs if there is not yet a control plane.

Transport networks have a large number of nodes, often much larger than IP networks (e.g. 30..100x). It may thus take some time before each of the 4 layers is converted into a switched layer. Currently most ASON/GMPLS development/deployment activity is concentrated on intra-layer connection management in the HOVC/STS layer. Next steps are interworking of network management controlled and control plane controlled subnetworks, and multi-layer interworking.

An important concept in the transport plane is the layer independence. A call/connection request in layer X will be served within the infrastructure/topology constraints (number of nodes and links, bandwidth of links) of layer X. The call may thus fail (e.g. not enough bandwidth on a link available).

A layer X topology manager is responsible for the layer's infrastructure. If e.g. additional link bandwidth or additional links between nodes in the layer X are needed the topology manager of X should order this from its server layer(s) (note - this is decoupled from connection setup in layer X). I.e. place a call on server layer Y to establish a connection in layer Y between two Y/X ports in layer X nodes. Layer X is thus another type of customer for layer Y.
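The client/server relationship described above - a layer X topology manager ordering a connection from its server layer Y to create a new layer X link - might be sketched like this. All class and method names are hypothetical; a real implementation would signal the server-layer call rather than just record it:

```python
class ServerLayer:
    """Server layer Y: serves connection requests from client layers."""
    def __init__(self, name):
        self.name = name
        self.connections = []

    def setup_connection(self, a_port, z_port):
        # In a real network this would be a signaled call; here we
        # simply record the connection between the two Y/X ports.
        conn = (a_port, z_port)
        self.connections.append(conn)
        return conn

class TopologyManager:
    """Manages the infrastructure (links) of a client layer X."""
    def __init__(self, server_layer):
        self.server = server_layer
        self.links = []

    def add_link(self, a_node, z_node):
        # Order a server-layer connection between the Y/X adaptation
        # ports on two layer-X nodes; the result appears to layer X
        # as a new link (decoupled from any layer-X connection setup).
        conn = self.server.setup_connection((a_node, "Y/X"), (z_node, "Y/X"))
        link = {"ends": (a_node, z_node), "server_conn": conn}
        self.links.append(link)
        return link
```

Layer X is thus just another customer of layer Y, as the post says.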

A layer X restoration manager may also request additional layer X link bandwidth or links to be setup (temporarily) to perform its task of restoring a set of layer X connections.

To have the layer X topology manager and layer X restoration manager perform their tasks, these managers should have knowledge of the layer X's nodes (fabrics) and available ports on those fabrics. This information should be automatically discovered or configured via network management. When two Y/X ports are connected by means of a layer Y connection, a layer X link is created, and it and its characteristics should either be automatically discovered/verified or configured. The addition of control planes is pushing this discovery as well.

If a network consisted of 10 OCh nodes, 100 ODUk nodes, 500 HOVC/STS nodes and 2500 LOVC/VT nodes, would it then be reasonable to have the LOVC connection setup messages popping up in the OCh, ODUk, HOVC nodes (just to be forwarded again)? As there are many more LOVC connections than OCh connections, this may put an unreasonable load on the OCh control plane processors. Direct LOVC control plane processor communication seems to be the best choice here for LOVC connection management (i.e. overlay). This doesn't have to imply that this is also the best alternative for restoration. E.g. when a fiber breaks, all four layers will see one or more of their layer links fail. Passing this signal-fail information from equipment to equipment (ignoring the layers supported in each equipment) may be the best alternative (as e.g. the total number of messages in the network is minimised)... i.e. peering. Note that the control plane messages are typically routed independently of the transport plane signals (i.e. out of band). To reach the "next" equipment (at the end of the fiber) may require a control plane message to hop through multiple data comm nodes.
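The scaling argument in the node-count example can be made concrete with back-of-the-envelope arithmetic. The node counts are the ones given in the post; the per-node setup rate is an invented illustrative figure:

```python
# Node counts per switching layer, as given in the post.
NODES = {"OCh": 10, "ODUk": 100, "HOVC/STS": 500, "LOVC/VT": 2500}

# Assumed, purely for illustration: each LOVC node originates a few
# connection setups per hour.
setups_per_node_per_hour = 4
lovc_msgs = NODES["LOVC/VT"] * setups_per_node_per_hour  # 10,000 msgs/hour

# Overlay: LOVC control processors talk directly to each other, so the
# OCh processors see none of this load.
load_on_och_overlay = 0

# Strict layer-by-layer peering: the few OCh processors would also have
# to forward the many LOVC setup messages.
load_on_och_peering = lovc_msgs / NODES["OCh"]  # msgs/hour per OCh node
```

Even with these modest assumptions, each of the 10 OCh processors would forward a thousand LOVC messages an hour that it has no interest in, which is the "unreasonable load" the post refers to.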

Maarten
Mark Seery 12/4/2012 | 11:17:43 PM
re: GMPLS Showcased in Demo Hi Maarten,

>> Direct LOVC control plane processor communications seems to be the best choice here for LOVC connection management (i.e. overlay). This doesn't have to imply that this is also the best alternative for restoration. <<

Which I think speaks to the issue that a control plane uses a user plane just like any other traffic, and that the user plane that is used need not be assumed or fixed architecturally; and that a control plane has its own attributes/requirements independent of the user plane that it happens to use.
mdwdm 12/4/2012 | 11:17:45 PM
re: GMPLS Showcased in Demo Clueless...
Mark Seery 12/4/2012 | 11:17:45 PM
re: GMPLS Showcased in Demo mwdm,

there has been a history of conflict WRT several G.709 issues etc., so I understand the sensitivity. That said, GMPLS has evolved beyond lambdas and has had/is having extensions added for signaling SONET/SDH circuits, for example at STS-1 granularity.

also, while G.709 (or an equivalent wrapper) is useful in a non-all-optical world, it is still largely an in-band management and multiplexing user-plane mechanism (as I understand it - correct me if I am wrong). GMPLS on the other hand was envisioned initially as an out-of-band signaling mechanism with the assumption of all-optical networking. Therefore, while GMPLS may benefit from G.709 if it wants to do in-band signaling (wants to make use of a supervisory channel), architecturally the layers should be considered disjoint.

so I think there is a miscommunication going on here leading to a misagreement (a disagreement that would not occur if both parties were on the same page).
bigtaildog 12/4/2012 | 11:17:46 PM
re: GMPLS Showcased in Demo oh, mdwdm, I am afraid you misunderstand GMPLS. What is more, G.709 is an ITU OTN protocol, not exactly GMPLS.....
turing 12/4/2012 | 11:17:56 PM
re: GMPLS Showcased in Demo Signmeup expressed a valid opinion that short duration demand for hi-cap circuits is already here (e.g. Superbowl).
---------------

But the Superbowl is actually a bad example I think. Like the Olympics, it's an event where the bandwidth required is so great, for such a short time, that hardware has to be installed - at least at the access side. No one's going to leave OC-48 interfaces sitting idle so that once a year they can be used (and the superbowl moves location too). That's why it takes so long to happen: stuff has to be installed/moved-around. And it probably goes through some locally owned sonet that hands it off to another carrier.

Now it may help the sonet/wdm core move wavelengths around faster to compensate for the extra load, once it has been ordered, paid for, and verified, but that seems like such a small bit of this equation. Most of that equation is application back-end integration, if anything.
Mark Seery 12/4/2012 | 11:17:57 PM
re: GMPLS Showcased in Demo >> Actually, mesh protection has existed on the sidelines for years. Some SONET vendors made it available in the mid-90s <<

studies done at one major carrier I know of showed that at that time, the CPUs available did not have sufficient horsepower for the job. a number of people believe this situation has changed.

issues worth discussing on this subject (IMHO) include:

-multi-layer fate sharing (aka disjoint layer networks)
-the value of automation (a commonly expressed desire by a growing number of carriers). it is valuable and proper to point out the need to actually install physical equipment. there is still an automation discussion to be had beyond that, as the same equipment is reconfigured for different purposes.
-the potential to get insights into spare capacity in the network
-no other layer in the network runs without a control plane (voice has one, data has one...)
-the importance of reliability at the transmission layer, and how to achieve this
-the bandwidth savings of mesh networks
-scalability of mesh networks
-that OSSs and control planes do compete with each other on some level, so a) where do the tradeoffs exist, and b) can better reuse be made by putting a subset of functionality in a commonly reused control plane.
-the amount of legacy SONET/SDH equipment installed that can not support GMPLS, and what impediment this does/does not create to widespread GMPLS deployment
-convergence between major standards bodies
-and of course the old peer vs overlay thing.
jim_smith 12/4/2012 | 11:17:57 PM
re: GMPLS Showcased in Demo Great posts. Pretty much says everything there is to say on this topic.

My experience is that even though there might be a business case for OC-N or wavelength "dialtone" services, today service providers have a million other things to worry about.

GMPLS is *JUST ANOTHER WAY* of solving a small part of the on-demand provisioning puzzle. GMPLS zealots are either ignorant of the big picture or they have a vested interest in promoting GMPLS as the panacea to all service provider problems.
straightup 12/4/2012 | 11:17:57 PM
re: GMPLS Showcased in Demo Signmeup expressed a valid opinion that short duration demand for hi-cap circuits is already here (e.g. Superbowl). I agree that there are some examples ... but are there enough Superbowl-like events to make the demand volatile enough for the AVERAGE connect time for large capacity circuits to drop to days or hours? Think: even the Superbowl is scheduled years in advance, with ample lead time to fit within a two-month provisioning window. Are carriers going to convert their entire network to GMPLS just so FOX can leave the Superbowl to the last minute? Would Fox trust the carriers to perform? For GMPLS to take off, there have to be large volumes of demand. Just because people working on GMPLS wish it were so does not make it so.

The more challenging problem is performing provisioning well in OSSs. Unfortunately, this restarts a debate over which computing technology will make TIRKS automate this, and all the zealots come out to talk about EAI and XML and god knows what else. I have heard and seen all the promises about control planes, databases, new computing hardware with more horsepower, standards for protocols, standards for object definitions, standard naming conventions, (insert your favourite panacea du jour here).

Instead of fighting religious wars over which protocol is best or which standard to follow, try to figure out what the real limiting factors are; you will find they have little to do with technology and a lot to do with economics and incentives.

Cheers!
mdwdm 12/4/2012 | 11:17:58 PM
re: GMPLS Showcased in Demo Ho Ho Ho, talking about brain damage. Fighting with yourself now? OK, I give up and go back to my real hardware now.

----------
"So tell me: What does QoS have to do with GMPLS applied to SONET circuit setup/teardown?"

Wow. Your brain does not work properly...major logical jumps and gaps. Either that or you're trolling me...I'm starting to suspect the latter.
gea 12/4/2012 | 11:17:58 PM
re: GMPLS Showcased in Demo "Such a restoration service, while possibly slower in restoration time, promises to be more capacity efficient than traditional 1+1 or ring type protection mechanisms (20-30% more efficient in studies)."

Well, if the restoration path is pre-computed it does not have to be slow.
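The pre-computed restoration point can be sketched as a simple lookup: path computation happens at planning time, so failure handling is just a table fetch plus signaling. Topology and city names below are invented for illustration:

```python
# Hypothetical pre-planned restoration table: for each working circuit,
# a backup path computed offline, long before any failure occurs.
PRECOMPUTED_BACKUPS = {
    ("NYC", "CHI"): ["NYC", "DC", "STL", "CHI"],
    ("NYC", "BOS"): ["NYC", "ALB", "BOS"],
}

def restore(failed_circuit):
    """On failure, fetch the pre-planned path instead of computing one.

    The expensive path computation was done at planning time, so the
    restoration-time work is a lookup plus the switchover signaling.
    """
    path = PRECOMPUTED_BACKUPS.get(failed_circuit)
    if path is None:
        raise LookupError("no pre-planned restoration path; must compute")
    return path
```

This is why pre-computed mesh restoration need not be slow: the per-failure cost is dominated by signaling the switchover, not by routing.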

Actually, mesh protection has existed on the sidelines for years. Some SONET vendors made it available in the mid-90s, but it never took off (that, despite the fact that it can clearly reduce capex if done properly). The problem with mesh protection ended up being a very practical one: the big carriers just couldn't figure out how to do proper troubleshooting, maintenance and inventory work with mesh protection.

Don't get me wrong...I think mesh protection will eventually return, particularly when the network starts supporting a wider variety of services (including optical subnetwork services). I also think GMPLS will be the way that mesh protection eventually happens. But I don't think the big carriers are clamoring for this right now.
gea 12/4/2012 | 11:17:59 PM
re: GMPLS Showcased in Demo "So tell me: What does QoS have to do with GMPLS applied to SONET circuit setup/teardown?"

Wow. Your brain does not work properly...major logical jumps and gaps. Either that or you're trolling me...I'm starting to suspect the latter.

GMPLS can (theoretically) do a LOT of things, in the packet domain, in the optical domain, in the ATM domain (which is where MPLS had its start, by the way), and in the SONET domain.

My focus here is on where there's a clear and immediate market need: provisioning SONET circuits. When used to provision SONET circuits GMPLS has little to do with QoS.

As for...

"GMPLS is implemented by G709 which wraps SONET/whatever."

OK, you clearly know little about what you're talking about. GMPLS is in the CONTROL PLANE. Do you have any clue what this means?

At this point it's obvious I'm wasting my time responding to you. You need to start doing some homework and stop thinking you know something because you read it on a bulletin board or whatever.

rpm23 12/4/2012 | 11:17:59 PM
re: GMPLS Showcased in Demo I agree with all the posts saying fast provisioning may not be a strong enough selling factor/value proposition for GMPLS deployment. One other aspect of GMPLS that does not seem to have been mentioned is the possibility of dynamic restoration in case of failures. Such a restoration service, while possibly slower in restoration time, promises to be more capacity efficient than traditional 1+1 or ring-type protection mechanisms (20-30% more efficient in studies). On the flip side, CAPEX reduction (as a result of capacity efficiencies) may not be a burning issue with carriers at this point given excess inventories. But it is a different aspect of GMPLS that may prove its value over time.
mdwdm 12/4/2012 | 11:17:59 PM
re: GMPLS Showcased in Demo You are as clueless as ever.

Go do a search on this board; it is you who said again and again that MPLS (not GMPLS) is for "dynamic provisioning".

Now for GMPLS: it has provisions to support fast wavelength setup and teardown. It is an optical-layer thing and has little to do with SONET. In fact, it is meant to do away with SONET. GMPLS is implemented by G709, which wraps SONET/whatever. This brings up another joke from you: G709 is another SONET.

----------
So tell me: What does QoS have to do with GMPLS applied to SONET circuit setup/teardown?
turing 12/4/2012 | 11:18:00 PM
re: GMPLS Showcased in Demo The problem is that today the duration of the circuit is measured in terms of days and not hours. Why? Because it takes that long to provision and unprovision a circuit to use.
-----------

I think I understand everything you're saying, but I still don't get why it takes a control plane protocol to do it. You say Vyvx takes days to provision it, and that they'd rather use a web-based system to automate it. I believe that can be done through automated scripts or better integration of config, but I fear the real slowdown is not the configs - it's the ownership/business-process side, and hardware availability (i.e., who would have an OC-48 sitting idle waiting for a once-a-year Superbowl that changes location?)

Those problems are not solved by a control plane protocol.
turing 12/4/2012 | 11:18:01 PM
re: GMPLS Showcased in Demo There is a pressing business need to provide on-demand high-speed transport for limited time slots TODAY. How do you think real-time video transport occurs? Today it occurs by calling a transport provider like Vyvx and having them set up a circuit from point A to point B for a specified amount of time. How do you think the Superbowl gets from the stadium to the network?
----------

Good example. One good example. But do you think Fox/Vyvx would trust an automatic protocol to save a few hours of some lackey's time, for millions of dollars in investment?
Don't get me wrong - there are plenty of cases where an automatic protocol makes sense over hand-configuration (routing protocols, for example). But those are for cases where many things change very often - where it literally can't be statically configured.
signmeup 12/4/2012 | 11:18:01 PM
re: GMPLS Showcased in Demo turing wrote:
"Good example. One good example. But do you think Fox/Vyvx would trust an automatic protocol to save a few hours of some lackey's time, for millions of dollars in investment?"

But they do it today! The difference is that instead of a packet-based solution, it is a circuit-based one. They already provision short-use circuits across a transport network. It's not a question of how many lackeys it takes - they want to remove the lackeys completely from the equation by having the end user "provision" the circuit. For example, today if I needed X amount of bandwidth at Y date for Z amount of time, I would call Vyvx to book the service. They would then turn that into a work order for a circuit. A transport provider would then provision a circuit to transport the information. The problem is that today the duration of the circuit is measured in terms of days and not hours. Why? Because it takes that long to provision and unprovision a circuit for use. Now multiply that by 500 circuit requests a day to see how much capacity is being wasted, as well as the number of people required to make the system work...

A better way would be to have a so-called "automated" provisioning service where the end user could access a provisioning utility on the web and specify how much bandwidth is required, for how long, and from point A to point B. This provisioning system would then generate an automated provisioning request that would build the circuit using GMPLS at the required time, bandwidth, and duration. No more lackeys and no more inefficient circuit utilization!
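A minimal sketch of that automated-provisioning flow - a user request turned into scheduled setup and teardown actions handed to the control plane - treating the GMPLS signaling interface as a black box. All names here are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BandwidthRequest:
    """What the end user enters in the web utility."""
    src: str
    dst: str
    rate: str            # e.g. "OC-48"
    start: datetime
    duration: timedelta

def provision(req, signal_circuit):
    """Turn a user request into a scheduled setup/teardown pair.

    `signal_circuit` stands in for the GMPLS signaling interface
    (in practice a Path/Resv-style exchange, e.g. via the OIF UNI);
    here it is just a callable supplied by the caller.
    """
    setup = (req.start, lambda: signal_circuit("SETUP", req))
    teardown = (req.start + req.duration,
                lambda: signal_circuit("TEARDOWN", req))
    return [setup, teardown]
```

The point of the sketch is the shape of the system, not the signaling: the circuit exists only for its booked window, so the capacity wasted by multi-day manual provisioning disappears.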

Regardless of whether the technologies are ready for prime time (yes, that was a joke..) or not, the business case IS there and people are looking at doing it within the next 1-3 years. Trust me, it makes business sense.



signmeup 12/4/2012 | 11:18:02 PM
re: GMPLS Showcased in Demo straightup wrote:
"Until customers start ordering STS-48s for 8-hour sessions, GMPLS has no business case. Chalk it up as a misread of the real factors at play."


This is EXACTLY where we are headed. There is a pressing business need to provide on-demand high-speed transport for limited time slots TODAY. How do you think real-time video transport occurs? Today it occurs by calling a transport provider like Vyvx and having them set up a circuit from point A to point B for a specified amount of time. How do you think the Superbowl gets from the stadium to the network?

This is reality - there is need.

gea 12/4/2012 | 11:18:02 PM
re: GMPLS Showcased in Demo "Until customers start ordering STS-48s for 8 hour sessions, GMPLS has no business case. Chock it up as a misread of the real factors at play."

Well, that's worth thinking about. My original comments weren't really referring to this case...I thought (and still think) that there would be a strong demand for GMPLS to do exactly what it takes humans so long to do now, and down at the good ole' DS1/DS3/STS-3 level.

This would of course assume that the appropriate circuit packs are in place at the endpoints of the circuit (and of course the capacity between those points). But instead of someone manually entering the provisioning commands (even if remotely) to set up a DS3 across multiple BLSRs (even given a planning tool that spat out that set of orders), it would seem to me far more desirable for this to be done automatically.

Your argument is not irrelevant here, though, I suppose. But I would counter-suggest that the reason there's no demand for a month of DS3 (or OC-48) service cross-country, say, is 1) price and 2) provisioning time. If both came down dramatically (say, through the use of GMPLS) then I'd bet you'd see some action. But I could be wrong....
gea 12/4/2012 | 11:18:02 PM
re: GMPLS Showcased in Demo mdwdm wrote...

"Geez, why is it so hard to understand that MPLS is all about QOS,"

Uh...what on EARTH are you talking about?

The term "MPLS" basically refers to the packet world. It's an additional shim header inserted between the Layer 2 and Layer 3 headers. It's also a set of control plane protocols used to set up the 'circuits' that are defined by the exchange of the MPLS headers.

GMPLS is almost completely different, and as usual you are mouthing off on a subject you clearly have zero knowledge of. G (see that 'G' there?) GMPLS takes many of the control-plane protocols proposed for use with MPLS and extends them into both the new-ish optical domain as well as SONET (and everything else, for that matter).

So tell me: What does QoS have to do with GMPLS applied to SONET circuit setup/teardown? It's still a circuit, and once the circuit is set up then whatever is inside the circuit will experience the exact same QoS as if that circuit were provisioned by traditional means.
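For concreteness, the "shim header" in question is the 32-bit MPLS label stack entry of RFC 3032 - a 20-bit label, 3 experimental/CoS bits, a bottom-of-stack flag, and an 8-bit TTL. Packing and unpacking it is straightforward bit arithmetic:

```python
def pack_label_entry(label, exp, s, ttl):
    """Build a 32-bit MPLS label stack entry (RFC 3032 layout)."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label_entry(word):
    """Split a 32-bit label stack entry back into its fields."""
    return {
        "label": word >> 12,         # 20-bit label value
        "exp": (word >> 9) & 0x7,    # 3 experimental/CoS bits
        "s": (word >> 8) & 0x1,      # bottom-of-stack flag
        "ttl": word & 0xFF,          # 8-bit time to live
    }
```

Note that the 3 Exp bits are the only QoS-related field in the entry, which supports gea's point: when GMPLS signals a SONET circuit there is no shim header at all, so QoS marking simply doesn't enter the picture.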

jim_smith 12/4/2012 | 11:18:06 PM
re: GMPLS Showcased in Demo "Let's forget YOUR ignorance and just concentrate on GMPLS for SONET provisioning."

My my... aren't we ticked off...

"I'll grant those are enormous ifs, but the market is clearly there."

Yes... also, there is a market for a "cancer cure".

I hope you read the other posts. As you can see, other people are also "ignorant" like me.
mdwdm 12/4/2012 | 11:18:06 PM
re: GMPLS Showcased in Demo A year ago I first tried to tell gea that fast provisioning is all about network connectivity through MEMS switches and fully routed backplanes.

He kept talking about MPLS, clearly something he has no clue about.

Geez, why is it so hard to understand that MPLS is all about QOS, another ATM for circuit emulation and all the goodies? Even an idiot should understand it by now after trolling lightreading every day.

----------------
This "fast provisioning" stuff is irrelevant.
road__runner 12/4/2012 | 11:18:07 PM
re: GMPLS Showcased in Demo Straightup, you said it man.

This "fast provisioning" stuff is irrelevant.

Software tools already exist to provision circuits in minutes AFTER THE EQUIPMENT AND NETWORK IS ALREADY IN PLACE. GMPLS cannot fly in a network if one doesn't exist and if one does exist then services can be turned up quick enough even today without GMPLS.
straightup 12/4/2012 | 11:18:09 PM
re: GMPLS Showcased in Demo Think: the reason why provisioning takes months is not because it takes months of continuous labour... it is because for carriers are unwilling to invest the money in equipment for large capacity circuits before they have a signed order. If the equipment was already there, the provisioning wouldn't take months!

For GMPLS to improve the economics, carriers have to have enough volatility in demand that provisioning is a larger share of the overall cost. For services that have contract lengths measured in years, a provisioning cycle of months is reasonable. Until customers start ordering STS-48s for 8-hour sessions, GMPLS has no business case. Chalk it up as a misread of the real factors at play.
turing 12/4/2012 | 11:18:09 PM
re: GMPLS Showcased in Demo ------
What kinds of circuits?
------

Access lines (DS3) through one SONET ring (I assume a BLSR) to Cisco, and trunk-side OC-3 and higher through a Nortel Optera ring to Nortel/Bay BCNs. I didn't touch the SONET ring part - that was done by the SONET gear owner. And on the Nortel Optera I did my half and someone else did theirs.
gea 12/4/2012 | 11:18:10 PM
re: GMPLS Showcased in Demo "I have provisioned circuits in a small backbone"

What kinds of circuits?

In a worst-case scenario imagine provisioning a DS3 across 7 dual-node-interconnected BLSRs.

Of course, some of the tools that spit out the provisioning orders shield the user from SOME thinking, but this can still be a very tricky proposition.
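The "7 dual-node-interconnected BLSRs" worst case can be made concrete by counting the network elements that need cross-connect work for one end-to-end circuit: an add/drop at each customer end, plus two gateway NEs on each side of every ring-to-ring handoff (that's what dual-ring interconnect means). A rough sketch, with the counting model as an assumption rather than a Telcordia-exact rule:

```python
# Rough sketch of why this is "tricky": count NEs needing manual
# cross-connect provisioning for a circuit crossing several
# dual-node-interconnected BLSRs. Counting model is an assumption.

def touch_points(rings, dual_node=True):
    """NEs needing cross-connect work for one end-to-end circuit."""
    per_side = 2 if dual_node else 1  # DRI uses two gateway NEs per side
    handoffs = rings - 1              # ring-to-ring interconnects
    endpoints = 2                     # add/drop at each customer end
    return endpoints + handoffs * per_side * 2  # both sides of each handoff

print(touch_points(7))  # 2 + 6*2*2 = 26 NEs to configure by hand
```

Even if each NE takes only minutes to configure, 26 separate configuration actions across different ring owners is where the coordination overhead (and the months) comes from.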
turing 12/4/2012 | 11:18:10 PM
re: GMPLS Showcased in Demo --------
Basically, provisioning in the core is a MAJOR problem, and is the main reason it currently takes months for a circuit to be set up.
--------

Yes, I have heard that, but it confuses me. I have provisioned circuits in a small backbone, and it didn't take long to do the config part. What took long was getting the hardware in place (if we needed more), and getting to the right owner/admin. Once they were in place and verified the authorization of the job it got done in minutes.

GMPLS won't solve the ownership model problem, or the hardware part (obviously).
gea 12/4/2012 | 11:18:10 PM
re: GMPLS Showcased in Demo "Careful... your ignorance is showing..."

Uh...how many central offices have YOU been in?

How many NEs have you personally had to configure?

Have you ever attempted to provision dual-node interconnected SONET rings?

Let's forget YOUR ignorance and just concentrate on GMPLS for SONET provisioning.

Theoretically, this would just be a software load w/o the need for extra hardware.

Don't think that just because your knowledge of GMPLS and provisioning is limited to merely rejecting marketing hype that mine is the same.

If GMPLS can live up to its promises, get standardized, AND integrate with existing OSSs (such as TIRKS), it will experience widespread, wildfire adoption. I'll grant those are enormous ifs, but the market is clearly there.
jim_smith 12/4/2012 | 11:18:11 PM
re: GMPLS Showcased in Demo "... is the main reason it currently takes months for a circuit to be set up..."

"... If the provisioning process could be reliably automated..."

Careful... your ignorance is showing...

I'm really excited to find out how GMPLS is going to automate "provisioning"!

Does GMPLS have specifications for operational robots that will deploy the plugins?

Does GMPLS use Woodoo-Over-Witchcraft (WOW) to automagically manufacture hardware so that service providers don't have to wait for weeks to get equipment from the vendors?

gea 12/4/2012 | 11:18:12 PM
re: GMPLS Showcased in Demo Turing:

Well, although I appreciate the sentiment, I don't agree with the conclusions you seem to imply.

Basically, provisioning in the core is a MAJOR problem, and is the main reason it currently takes months for a circuit to be set up.

If the provisioning process could be reliably automated (a big if, I know), then GMPLS provisioning gizmos would sell like.... SONET network elements.

As for QoS, remember that if this is applied to SONET or optical pathways, the complexity/reliability of the GMPLS control plane protocols will have no impact there. It's still a circuit; it just got there via software. Of course, if GMPLS protection capabilities are leveraged, that may be a completely different story, and here I agree with your Cisco analogy. I doubt the ILECs of the world will want this function done by software any time soon.
turing 12/4/2012 | 11:18:13 PM
re: GMPLS Showcased in Demo Let's see now... transmission equipment works well and doesn't crash and doesn't need much maintenance/upgrades. Router equipment has frequent software issues and needs upgrades monthly, because it is complex software. So let's take the complex software and put it on transmission equipment!

Yeah, that way we can automate and speed up the provisioning time, because there is soooo much provisioning going on in core transports. And an hour vs. a minute is going to save us lots of money, like $50/hour! (of course we'll have to pay the GMPLS expert $100/hour, but that's good for the economy)

And the operators are sure to trust our automagic protocol for provisioning, vs. the silly user-configured/well-known/easily-debuggable way, because the core transport is not that important anyway... it's a sandbox for testing our latest fad protocol.