
40G: Time for the Third-Party Candidate to Bow Out?

There's no question that 100G is gathering momentum this year. During this past week alone, there were two significant announcements aimed at 100G:



The 100G messages from these vendors resonate with the messages we heard from network operators at Light Reading's Packet-Optical Transport Evolution conference last week, including Telus Corp. (NYSE: TU; Toronto: T) VP of Technology Strategy Zouheir Mansourati, who called 40G a "stopgap measure" on his company's road to the endgame.

We think that the industry support behind 100G has reached a critical mass so that – for the first time – we are beginning to question the future utility of 40G DWDM technology. Infinera president and CEO Tom Fallon described the problem 40G faces as "the 40G squeeze" – meaning that 40G is destined to get squeezed between the continuing popularity of 10G transport and the future uptake of 100G transport.

We like this description, and we agree that 40G is increasingly looking like a technology without a permanent home – the third-party candidate in a two-party political race, to use an analogy. In the near future, operators with big bandwidth needs will choose 100G, and those that don't require 100G will stick with 10G. The biggest problem for 40G is that, so far, it has been unable to compete with 10G on price – as 10G cost curves have been too steep for 40G to match. Forget about 40G pricing at 2.5x 10G transponder pricing. From what we've been told, 40G can't even compete with 10G at 4x 10G pricing!
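To put rough numbers on the squeeze, here is a quick back-of-the-envelope cost-per-bit comparison. The transponder prices in this sketch are illustrative placeholders, not figures quoted to us; the point is the arithmetic, not the dollar amounts.

```python
# Illustrative cost-per-bit comparison for 10G vs. 40G transponders.
# Prices are hypothetical placeholders chosen only to show the arithmetic.

PRICE_10G = 10_000  # assumed price of one 10G transponder (arbitrary units)

def cost_per_gbps(price, capacity_gbps):
    """Cost per Gbit/s of capacity for a single transponder."""
    return price / capacity_gbps

baseline = cost_per_gbps(PRICE_10G, 10)

# Traditional rule of thumb: 4x the capacity at 2.5x the price beats the
# older generation on cost per bit.
print(cost_per_gbps(2.5 * PRICE_10G, 40) / baseline)  # 0.625 -> 37.5% cheaper per bit

# Break-even: a 40G transponder only matches 10G on cost per bit when it is
# priced at exactly 4x the 10G transponder.
print(cost_per_gbps(4 * PRICE_10G, 40) / baseline)    # 1.0 -> parity, no advantage

# The complaint above: actual 40G pricing has sat above even the 4x mark,
# i.e. a cost-per-bit premium over simply lighting four 10G waves.
```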

Returning to the political party analogy, third-party candidates often serve a significant purpose in bringing issues to the forefront that would otherwise be buried in the debate. Here, 40G has indeed served a useful purpose in compelling the industry to take a much more standardized approach to 100G development – work that is now being spearheaded by the Optical Internetworking Forum (OIF) and includes dual-polarization QPSK modulation and coherent detection as well as standardized form factors to which components vendors can build.

However, with 100G now well on its way, perhaps the optical networking industry as a whole would be best served by retiring work on 40G and moving full steam ahead on bringing 100G to market. The clear benefit of doing so is concentrating limited industry resources on a single purpose and, as a result, speeding 100G to market at economical price points (possibly even profitable ones). On the other hand, what is the real value of pressing forward with 40G development, knowing that it can't compete with 10G on pricing or with 100G on capacity?

— Sterling Perrin, Senior Analyst, Heavy Reading

Stevery 12/5/2012 | 4:34:48 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

The biggest problem for 40G is that, so far, it has been unable to compete with 10G on price – as 10G cost curves have been too steep for 40G to match. Forget about 40G pricing at 2.5x 10G transponder pricing. From what we've been told, 40G can't even compete with 10G at 4x 10G pricing!


And how does 100G coherent solve this?


Until there is a solid answer for that, nobody is going to believe that 100G is squeezing 40G.  (The expectation is that 100G will be even worse for pricing:  Coherent receiver + mux schemes are not known for being cost-saving measures.)


Sterling Perrin 12/5/2012 | 4:34:47 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

Stevery,


I have 2 points on this issue:


1. 100G initially won't have to compete with 10G exclusively on transponder price. There are operators with high capacity requirements that will go with 100G over 10G, based on immediate capacity needs. For this reason, there is a market for 40G today, even though it's not cheaper at the transponder level compared to 10G. My point in the article is that 100G will take the high capacity market away from 40G immediately, leaving 40G in the unenviable position of having to beat 10G transponder pricing to survive.


2. I'm assuming lack of volume is one of the reasons that 40G pricing can't come down steeply enough to compete. If the high capacity market continues to be divided between 40G and 100G, then component volumes continue to be split between the two. If there is one high capacity option, and it's relatively standard, then, it seems, the components industry has a much better shot at price declines through volume.
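To illustrate the volume point with a toy model (the learning rate and volumes below are assumptions made up for the sake of the math, not market data): if component prices fall a fixed percentage with every doubling of cumulative volume, splitting the high-capacity market across two line rates slows those doublings.

```python
# Toy learning-curve model: price falls 20% for every doubling of cumulative
# volume. All numbers here are illustrative assumptions, not market data.
import math

LEARNING_RATE = 0.20   # assumed price decline per doubling of cumulative volume
START_PRICE = 1.0      # normalized starting price
BASE_VOLUME = 1_000    # cumulative volume at which START_PRICE applies

def price_after(cumulative_volume):
    doublings = math.log2(cumulative_volume / BASE_VOLUME)
    return START_PRICE * (1 - LEARNING_RATE) ** doublings

total_demand = 32_000  # hypothetical total high-capacity port demand

# One standardized high-capacity option captures all the volume...
print(price_after(total_demand))       # ~0.33 of the starting price

# ...versus the same demand split 50/50 between 40G and 100G.
print(price_after(total_demand / 2))   # ~0.41 of the starting price
```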


Sterling

Sterling Perrin 12/5/2012 | 4:34:46 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

No, not at all. The problem I see with 40G and 100G is that they are timed so closely together - something we did not have with past speed generations (also there's not a 4x separation in speed between them).


What if, in 1998, a 5G speed had been introduced just as 10G was starting to move (or a 20G speed)? They wouldn't have both survived - there'd be one winner.


Sterling

Stevery 12/5/2012 | 4:34:46 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

So applying your logic to the next node:  We will have 400G squeezing out 100G for the high-end links, and then 10G for the rest.  After that, 1.6T takes the high end from 400G, and 10G for the rest.  And after that...


I don't buy it.


Here's a question:  What is the 10G to 2.5G ratio?  (And since 10G is seriously confused (still) by investment from the bubble, what is the OC-48 to OC-12 number?)


EDIT: I should have written "What is the current 10G to 2.5G ratio?" because it will show you something about the mythical 2.5x

Stevery 12/5/2012 | 4:34:45 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

The problem I see with 40G and 100G is that they are timed so closely together


Really?


The first OC-768 was in the late nineties. The first integrated transponders (i.e., not internal-use-only for LU or NT) were around 2003. Are there even any serial 100G links generating revenue yet?


How long do you think the gap was for previous OC generations?





paolo.franzoi 12/5/2012 | 4:34:44 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

 


You guys are talking about the value of a single 10G or 40G or 100G interface and conclude that this alone is the deciding factor in migration. 10G became popular because 10G interfaces became prevalent and cost-effective on the devices that they connected to.


40G ports as router upgrades seem like a half step. So, the reason that 40G is challenged is that it is really a mux of 4x10Gs. One would do this only in places that are challenged today - where adding another wavelength is not possible. 100G seems like a much better upgrade, as it will be either a transport for 100G customer interfaces or for 10G customer interfaces with high efficiency.


I think the real question here is how prevalent Layer 2 multiplexing becomes in optimizing these networks. I think this presents a HUGE problem for higher-speed optics, as oversubscription in the transport lowers the demand for higher speed.


seven

Pete Baldwin 12/5/2012 | 4:34:43 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

I've talked to a few people lately who side with Sterling. The timing issue is residue from the downturn, the result of the length of time carriers waited before upgrading.  It's not something that would happen at every speed grade, just the luck of the draw for 40G - some carriers have avoided an upgrade long enough that they're deciding they can just skip to 100G.


That's been the theory for a while, and it seems to be catching on. But obviously not everyone buys that.

Pete Baldwin 12/5/2012 | 4:34:42 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

7 writes: I think the real question here is how prevalent Layer 2 multiplexing becomes in optimizing these networks. I think this presents a HUGE problem for higher-speed optics, as oversubscription in the transport lowers the demand for higher speed.


I can see that having an effect, but do you think it would be that big an effect? That would be interesting.

paolo.franzoi 12/5/2012 | 4:34:42 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

Abso-frickin-lutely.


The whole idea here has been that we are connecting router ports through a layer 1 network.  We are growing this network linearly with the speed of the router ports.


Now, there are really 3 kinds of traffic here. First is business and residential traffic that is best effort. Second is business traffic under an SLA. Third is planned and actual high-value services like IPTV.


The reality is that carriers want to build a shared network that can handle all 3 of these traffic types so that they don't have to manage and maintain 3 separate networks. There has been a traditional issue in sharing these networks that is a cultural one (don't eliminate the stuff I support). There have been (and will continue to be) challenges with offering these traffic types on a single infrastructure.


So, how does this impact Optical and Layer 2? If they can move Layer 2 into the Optical Network (POTS, right?), then the number of high-speed ports can be dramatically lowered for type 1 traffic and somewhat for type 2 traffic. This is standard grooming behavior in traditional transport networks, where you can think of Ethernet replacing SONET. The good news with Ethernet is that you can oversubscribe (you can build a 24-port GigE switch with a GigE uplink, as an example).
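Here is a rough sketch of that grooming math. The traffic mix and oversubscription ratios are made-up assumptions, just to show how far the wavelength count can drop:

```python
# Back-of-the-envelope Layer 2 grooming math. All inputs are assumed values.

client_ports = 240       # assumed number of 10G router/client ports to carry
port_rate_gbps = 10
wave_rate_gbps = 100     # assuming 100G transport wavelengths

# Without grooming: every client port maps 1:1 onto transport capacity.
ungroomed_gbps = client_ports * port_rate_gbps

# With grooming: best-effort traffic (type 1) oversubscribed heavily,
# SLA traffic (type 2) lightly, premium traffic like IPTV (type 3) not at all.
traffic_share = {"best_effort": 0.6, "sla": 0.3, "premium": 0.1}
oversubscription = {"best_effort": 8, "sla": 2, "premium": 1}

groomed_gbps = sum(
    ungroomed_gbps * share / oversubscription[kind]
    for kind, share in traffic_share.items()
)

print(ungroomed_gbps / wave_rate_gbps)  # 24.0 waves without grooming
print(groomed_gbps / wave_rate_gbps)    # 7.8 waves with grooming
```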


So, there will be LOTS of high speed 100G Data Center links.  How that will translate to the metro and the long haul may no longer be anywhere near 1:1 the way that things have been built in the past.


Back to my commentary about Cisco....Cisco HATES this.  This whole layer 2 sub-layer has the added benefit that the number of new router ports can be held flat with only a higher speed required.  These router ports would then support multiple virtual sub-ports connected over a layer 2 network.


So, the whole question is whether we can effectively build a Layer 2 grooming network between the routers in carrier networks. If this can be done, then the number of OPTICAL connections can be lowered rather dramatically.


seven


 

Pete Baldwin 12/5/2012 | 4:34:38 PM
re: 40G: Time for the Third-Party Candidate to Bow Out?

OK, that's certainly true -- that's why Nokia Siemens was talking so much about 40G at OFC (they've got a 40G installed base that isn't going away).  & that's an opportunity Infinera missed, which is why they see 100G as the "real" viable market.


Not sure you're being fair about the timeframe.  Yes, it's been a decade for 40G, but for a lot of carriers, the first half or more of that decade doesn't count -- they weren't gonna spend no matter what.
