
Going Beyond 100G? Not So Fast…

Sterling Perrin
3/10/2014

In October 2006, at Light Reading's Optical Expo conference in Dallas, renowned AT&T Labs Vice President Simon Zelingher (now retired) made a passionate case for the need for 100G transport, predicting: "We will need 100 Gbit/s by the end of the decade."

Despite heavy skepticism, Zelingher's prediction proved amazingly accurate. The first commercial 100G cards were shipped at the end of 2009 -- for Verizon Communications Inc. (NYSE: VZ), not AT&T Inc. (NYSE: T) -- setting the 100G migration in motion.

Fast forward to the present, and 100G has already overtaken 10G in terms of capacity shipped in long haul networks. Meanwhile, 40G transport, which held promise just a couple of years ago, is now on a path of sharp decline, because it cannot match the capacity and cost-per-bit advantages of 100G.

Today the optical industry buzz is all about "beyond 100G" bit rates. As OFC takes place this week, announcements and discussions focus on the topics of 400G transport versus 1 Terabit transport and how to get to the B100G (Beyond 100G) end game as quickly as possible.

While we welcome progress, Heavy Reading sounds a strong note of caution regarding the widespread commercialization of B100G. B100G is an appropriate topic for forward-looking optical conferences such as OFC and ECOC, but the industry risks getting well ahead of actual demand for B100G adoption.

In researching our newest Heavy Reading report -- The Rise of 100G & Terabit Transport Networks -- we found that the drivers for B100G are simply not in place, and that 100G is by far the best tool for the job for long haul networks during the next five years.

Here are our main arguments to counter the B100G commercialization hype:

  • Historically, telecom bit rates have increased in 4x increments (i.e., 155 Mbit/s, 622 Mbit/s, 2.5 Gbit/s, 10 Gbit/s). While some operators did move to 40 Gbit/s, the vast majority did not. For most service providers, the 100G migration therefore marks the first 10x jump in network capacity, as they move directly from legacy 10 Gbit/s networks to 100G. This 10x jump in bit rates is giving service providers an unprecedented boost in capacity.
  • Although it is not often discussed in our industry, the rate of Internet traffic growth is slowing, governed by the law of large numbers. A quick look at Cisco's widely cited Visual Networking Index (VNI) findings reveals the trend. Cisco's VNI forecasts that global IP traffic will increase at a 23% CAGR from 2012 to 2017. Cisco's 2011-2016 forecast CAGR was 29%, the 2009-2014 forecast CAGR was 34%, and a few years prior to that the CAGR was in the 50% range. The Internet is not shrinking, but its growth rates are slowing (the compounding sketch after this list makes the difference concrete).
  • Economics will ultimately dictate the timing of the next bit rate adoption. This simple truism is often overlooked as our technology-driven industry focuses first on "Can it be done?" and then on "Can it be done economically?" The real question for B100G, however, must be "Can it be done more economically than 100G?" With 100G entering the volume production phase and component companies focusing their efforts on 100G cost reductions, we strongly believe that B100G technologies will not be able to compete with 100G on a cost-per-bit basis over the next five years.
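
To make the compounding arithmetic in the second bullet concrete, here is a minimal Python sketch of our own (illustrative only; the percentages come from the blog text above, not from Cisco's data files) that converts each cited CAGR into the cumulative traffic multiplier it implies over a five-year window:

    # Illustrative sketch: cumulative five-year traffic growth implied by each
    # of the forecast CAGRs cited in the bullet above.
    YEARS = 5

    def cumulative_multiplier(cagr, years=YEARS):
        """Total growth factor after `years` of compounding at rate `cagr`."""
        return (1.0 + cagr) ** years

    forecasts = [("earlier forecasts", 0.50), ("VNI 2009-2014", 0.34),
                 ("VNI 2011-2016", 0.29), ("VNI 2012-2017", 0.23)]
    for label, cagr in forecasts:
        print(f"{label}: {cagr:.0%} CAGR -> "
              f"{cumulative_multiplier(cagr):.1f}x over {YEARS} years")

The multipliers work out to roughly 7.6x, 4.3x, 3.6x and 2.8x respectively, which is why a single 10x capacity step from 10G to 100G comfortably covers the forecast window without a further bit-rate jump.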

To be clear, work on B100G must occur now in order to build the technology innovations and the sustainable ecosystem required for a commercial B100G future. What we hope to dispel in this blog and in our related research is the myth that B100G is urgent today.

Taking a realistic view of B100G timing does raise an interesting point that requires thought and discussion. Suppliers have rallied around 400G as the next-generation bit rate because it is the most achievable option on a near-term horizon. But if the need for B100G is not so urgent, does 400G really make the most sense? Or should suppliers set their sights on another 10x jump, to 1 Terabit transport?

We don't believe this issue has yet been settled.

— Sterling Perrin, Senior Analyst, Heavy Reading

SachinEE,
User Rank: Light Sabre
3/23/2014 | 4:17:58 AM
Re : Going Beyond 100G? Not So Fast…
@kq4ym, what looks like an obvious rule of thumb today may not always hold true. In fact, most carriers' direct jump from 10G to 100G, a 10x step, has already broken this rule. In technology, we have seen that once a certain threshold is crossed, there is no stopping capacity from multiplying quickly and at a much faster rate.
SachinEE,
User Rank: Light Sabre
3/20/2014 | 2:55:58 PM
Re : Going Beyond 100G? Not So Fast…
There has always been a tendency to over-hype something upcoming that might be years away from becoming reality. I think tech companies sometimes do it intentionally to create a market for an upcoming technology well before its arrival. It is alright, though, to emphasize upcoming technologies, because conferences like these are platforms for sharing ideas and have proven to be drivers of innovation.
ipv456,
User Rank: Light Beer
3/17/2014 | 8:48:37 AM
400Gb/s Ethernet vs >1Tb/s ?
If 400Gb/s Ethernet is as wildly successful as 40Gb (lots of sarcasm), there is a good chance that >1Tb/s may not be Ethernet.

Long haul PtP links are not dependent on Ethernet functionality; low-cost, high-volume framer technology was the driver at lower speeds.

Data centers want infinite bandwidth at $0 cost. "Fabrics" are how they want to connect compute, storage and external network connectivity. They don't seem to be religious about Ethernet as a frame format.

MEF is driving ATM-like complexity into Ethernet for timing, virtual circuits/paths and OA&M. Will this slow the development of higher-speed interfaces, increase cost/lower volumes, or split Ethernet into MEF vs. basic Ethernet chipsets?

It seems like 400Gb/s Ethernet will be a critical test.
mvissers,
User Rank: Light Beer
3/14/2014 | 12:25:42 PM
Re: Need for speed
Sterling,

Optical transport network equipment increasingly includes an electrical switch fabric, which supports L2 packet switching (MPLS-TP and Carrier Ethernet) and L1 ODU switching, and an optical switch fabric, which supports L0 OTU/OCh switching. Operators will make use of these three switching layers in their optical transport networks by allocating traffic to the best-fitting layer.

Transport SDN may make the switching capability in these three layers available to clients/users. At least, I assume that operators are going to offer virtual network services in which the client/user controls L2, L1 and/or L0 switched connections. If this occurs, then 100GE or 400GE interface types between the client/user and the optical transport network may be the wrong interface types in this SDN era... we may need interface types that support a mix of these switched connection types. Perhaps something to investigate further...

Maarten
sterlingperrin,
User Rank: Lightning
3/14/2014 | 11:28:09 AM
Re: Need for speed
Maarten,

Interesting commentary, as usual. The flexible line rate does seem the most sensible option moving forward.

At OFC, there was also some interest raised in a flexible client-side option -- the first I had heard of this. The logic was the same as that used for the line side: more flexibility and not having to convene new standards constantly.

Sterling
sterlingperrin,
User Rank: Lightning
3/14/2014 | 11:20:29 AM
Re: Going Beyond 100G?
Jramelia,

It was not clear to me how the statements you've listed disagree with what we found in the research. I do not see how B100G technologies will be lower in price than 100G over the next five years.

As far as defining LH, there are different ways to do this, I agree. For HR research purposes, we do divide up by distance: roughly 1,000 km or greater. We view 600-1,000 km as regional, though this could be an LH network for European operators.

Sterling
mvissers,
User Rank: Light Beer
3/13/2014 | 5:39:58 PM
Re: Need for speed
Assuming a factor of 4 every 5 years would give us the following client-side bit rates: 2017: 400 Gb/s, 2022: 1.6 Tb/s, 2027: 6.4 Tb/s, 2032: 25.6 Tb/s.

Actual service layer rates of clients are much lower: in 2000, 150 Mb/s; in 2009, 1 Gb/s; and today the highest client bit rate is most often 10 Gb/s.

In the SDN era, virtual network services and orchestration between layers will make these service layer rates more relevant than in the past. With SDN, each transport network client has full control over its service layer connections in its virtual transport network, so there is no longer a need to put up a big transport tunnel (which often runs half empty) between the customer edge nodes.

So the client no longer has to groom its service layer traffic into a big, fat tunnel in its customer edge nodes and have the transport network carry that tunnel. Instead, the client gets service layer grooming control inside the transport network (i.e., inside its virtual transport network) and connects to the transport network with lower-rate interfaces.

Maarten
jramelia,
User Rank: Light Beer
3/13/2014 | 5:12:02 PM
Going Beyond 100G?
Disagree with your statement -

In researching our newest Heavy Reading report -- The Rise of 100G & Terabit Transport Networks -- we found that the drivers for B100G are simply not in place, and that 100G is by far the best tool for the job for long haul networks during the next five years.

 

What typically defines long haul is crossing a LATA boundary, not distance.

What drives the market is price.

Other technologies such as VPLS will also drive demand.

There is always an engineering tradeoff to be made, such as capacity vs. reach, reach vs. spectral efficiency, etc.

Another consideration is variable channels, super groups and superchannels.

June 2011 - HR Review of ZTE's 11.2Tb/s OFDM Superchannel Experiment.

 
Phil_Britt,
User Rank: Light Sabre
3/13/2014 | 3:24:54 PM
Re: Need for speed
It won't be too long before that's considered too slow, too. It wasn't all that many years ago that dial-up was great. Now I'd almost rather use a typewriter (what's that? my children wonder). Even in hotspots offering broadband, the connections are just too slow.

The bigger the pipes get, the more data people will want to download. Pretty soon we'll need what a Saturday Night Live skit called Einstein (maybe it was Newton) Delivery: "When it absolutely, positively has to be there yesterday."

 
mvissers,
User Rank: Light Beer
3/13/2014 | 2:47:34 PM
Re: Need for speed
1972: 8 Mb/s, 1976: 34 Mb/s, 1980: 140 Mb/s, 1984: 565 Mb/s electrical, 1986: 565 Mb/s optical, 1990: 2.5 Gb/s, 1996: 10 Gb/s, 2000: 40 Gb/s, 2010: 100 Gb/s

Specifications of the 40Gb/s STM-256/OC-768 interface were completed at the end of 2000. Normally, systems with this interface would have entered the network two years later (end of 2002/beginning of 2003)... but at that point in time the world had a little bit of a crisis. Technology development nonetheless moved on during the downturn, and in the absence of 40G in the networks, 100G technology became more compelling than 40G. If 40G had been in the networks in 2007, 100G would have been considered too small an increase and not an economical next step; instead, the market would have waited for 160G.

40G was the turning point in optical transmission: single-carrier transmission was replaced by multi-carrier and/or coherent transmission. Furthermore, client-side optical technology now differs from line-side optical technology, and client-side bit rates will differ from line-side bit rates.

The next developments on the line side will follow an n x 100G approach. The next line-side bit rate is 200G, and this will be followed by 300G, 400G, 500G, etc. These n x 100G line rate signals will be used to carry larger service signal aggregates through metro, core and backbone networks.

The next development on the client side will follow a 4x approach, so the next client-side bit rate will be 400G. As there are hardly any 100G clients today, it will take some years before the first true 400G clients appear -- possibly two years after the 400G line interfaces enter the networks.

Maarten