No Sure Bets in NG PONs

The standards are complete. Vendors are rolling out their shiny new gear. And it's all over the conference program at next week's Broadband World Forum in Paris. But is the world actually ready for next-generation PONs?

That's a question we set out to answer recently in a survey of almost 100 network operator executives, and the results – just published in Heavy Reading's new report, "Next-Gen PONs & Fiber Access: A Market Perception Study" – make for thought-provoking reading.

We defined next-gen PONs to include the ITU 10G standard (XG-PON), the IEEE's 10G-EPON standard, and WDM PON. And respondents to our survey reported strong interest in all three variants – at least in principle – with more than half saying they were investigating or actively monitoring developments in these areas.

The key catalysts for moving forward, respondents said, were getting more bandwidth to high-end residential users and to business customers, with end-user bandwidth demand expected to grow at 40 percent per annum over the next five years (though this average conceals wide variations in opinion). Most took the view that demand would look after itself.

But that does not mean a transition to NG PONs is imminent. The big obstacle – one that came up time and again in our survey – is cost. Unless costs can be driven down sharply, NG PON is a non-starter for now, except in a few niche markets.

As a result, we think that widespread deployment of NG PONs is pretty unlikely before 2013 – and the longer it takes, the stronger the potential position of WDM PONs, a more radical alternative that is not yet standardized. Survey respondents told us 10G looks like the way to go for now, especially for FTTH, but sentiment can change fast.

And as in other areas of the FTTB and FTTH market, the situation will vary enormously from country to country, with a few pioneers – KT Corp., NTT Communications Corp. (NYSE: NTT), China Telecom Corp. Ltd. (NYSE: CHA), perhaps Verizon Communications Inc. (NYSE: VZ) – moving earlier than most, but others holding back or using simpler fixes for high-end residential customers, such as 1G "active" or point-to-point Ethernet solutions.

For both wireline operators and vendors, it's a bit of a tricky bet, and becoming trickier by the month. Among the many variables that must be built into any business decision: the potential of G.vector and DSM to delay initial FTTH deployment; deep fiber investment by cable MSOs, which could both promote and hinder the case for NG PON; the imminent deployment in some places of LTE, which can also be both positive and negative for PON investment; and the future regulatory and political environment. Not least: Where are the big, fat services to justify NG PON? Can 3DTV or consumer telepresence really drive all that investment – and if not, what can?

I look forward to investigating these issues further in Paris (and discussing them with those of you attending), and reporting back here in a couple of weeks. Watch this space for more.

— Graham Finnie, Chief Analyst, Heavy Reading

paolo.franzoi 12/5/2012 | 4:20:49 PM
re: No Sure Bets in NG PONs


Okay, you have a wonderful little article here, but there is a fundamental problem with your one hard data point. And it is a doozy.

A GPON system can burst 1 Gb/s to a user on the PON. It could actually burst all 2.5G downstream, but I am going to stick to a useful interface rate. So, when you talk about your 40% year-over-year growth in traffic, you keep thinking you need to grow the access bit rate.

BUT, is it really the case that the oversubscription on the PON is a bottleneck?  Or is it MORE likely that deeper in the network there is a bigger bottleneck?

Let me roll out a few facts to you from the FiOS BPON deployment. The first systems in Keller ran a 155 Mb/s uplink for the entire OLT - all 50 PONs shared that connection. This was then upgraded to about 155 Mb/s per 20 PONs, and later to 1 Gb/s per 20 PONs.
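To put rough per-subscriber numbers on those three uplink stages, here is a small sketch. The 32-subscribers-per-PON split is an assumption for illustration, not a figure from the post:

```python
# Sustained uplink capacity per PON and per subscriber at each of the three
# FiOS BPON backhaul stages described above.
# Assumption: 32 subscribers per PON split (illustrative only).

SUBS_PER_PON = 32

stages = [
    ("155 Mb/s per 50 PONs", 155.0, 50),
    ("155 Mb/s per 20 PONs", 155.0, 20),
    ("1 Gb/s per 20 PONs", 1000.0, 20),
]

for name, uplink_mbps, pons in stages:
    per_pon = uplink_mbps / pons          # uplink share of one PON
    per_sub = per_pon / SUBS_PER_PON      # uplink share of one subscriber
    print(f"{name}: {per_pon:5.1f} Mb/s per PON, "
          f"{per_sub:.2f} Mb/s per subscriber")
```

Even the final stage works out to well under 2 Mb/s of sustained capacity per subscriber under that assumed split - a long way below the 1 Gb/s the PON itself can burst, which is the point about where the bottleneck really sits.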

These uplinks are then multiplexed again in a service edge router that is shared between all the OLTs in the CO.  In the old days these were ERX1440s and are now E320s.

Is it not more likely that the problem is not burst performance, but rising sustained bit rates? If this is true (and I believe quite strongly that it is), then are not the bottlenecks in the OLT uplinks and routing more likely to be problems than those on the PON itself? If so, are the carriers not better off just upgrading to 10 Gb/s uplinks on their GPON systems?

I think that is why NG PONs will sit there for a very, very long time. Unless you are willing to build them cheaper than GPON/EPON, there really are other places to improve first. The only reason people talk about these systems is the theoretical revenue improvement from increasing the bit rate at the demarc.



Duh! 12/5/2012 | 4:20:43 PM
re: No Sure Bets in NG PONs

I'm of two minds on this. A few years back, I was predicting roughly 2013 for NG-PON FOA. That was based on projected per-subscriber traffic growth of about 30% per year (i.e., the AT&T Labs traffic measurement slide), leading to roughly a 30% per year increase in service rate, assuming the operators would try to maintain the oversubscription factor on the access. That seemed to represent a consensus of the folks I was talking to at the time. I haven't seen anything to indicate that traffic growth has declined, and if anything, the discussion of OTT pretty strongly suggests continued growth.
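The "30% per year, hold the oversubscription factor" logic can be sketched as a compound-growth calculation. The starting demand figure and the GPON per-subscriber ceiling below are illustrative assumptions, not figures from this thread:

```python
# Compound-growth sketch: if per-subscriber busy-hour demand grows ~30% per
# year, how many years until it outgrows a given access capacity?
# The specific rates used in the example call are illustrative assumptions.

def years_to_outgrow(start_mbps, ceiling_mbps, growth=0.30):
    """Years of compound growth until demand reaches the ceiling."""
    years = 0
    demand = start_mbps
    while demand < ceiling_mbps:
        demand *= 1 + growth
        years += 1
    return years

# e.g. 2 Mb/s of sustained demand per subscriber vs a 32-way share of a
# 2.5 Gb/s GPON downstream (2488 Mb/s / 32 ~= 78 Mb/s)
print(years_to_outgrow(2.0, 2488.0 / 32))  # prints 14
```

The point of the sketch is only that the deployment date such a projection yields is extremely sensitive to the assumed starting demand and to whether the PON or the backhaul is taken as the binding constraint.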

On the other hand, I'm not seeing as many announcements of new, super-premium service tiers as I was a few years back.    One of the MSOs (Time-Warner, was it?) is rolling out 100M service selectively on some DOCSIS 3.0 systems.  But no announcement from Verizon, and they seem to have stopped talking about it to the trade press.  It appears that the take rate for very high speeds is low enough not to be worth the cost of marketing and engineering for them.  So maybe there's so much headroom in the basic 10/5 or 15/5 service that consumers don't really see a need to pay for more.

As to backhaul, I partially agree. Operators have gone to multiple GigE or single 10GE interfaces from OLT to service edge router, and an XG-PON OLT chassis would probably have to be looking at multiple 10GE interfaces. On the other hand, the Law of Large Numbers is your friend when you're working with oversubscription - it's the same reason that classical teletraffic engineering tables for trunk provisioning called for proportionately more trunks for a small switch than for a large one. So if you think about the math, they'll probably have to reinforce the backhaul, but probably not by a factor of as much as 4:1.
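A quick Monte Carlo sketch of that Law of Large Numbers point (all traffic parameters here are invented for illustration): the ratio of 99th-percentile aggregate demand to mean aggregate demand shrinks as the subscriber pool grows, which is why backhaul does not need reinforcing in proportion to the access-rate increase.

```python
# Statistical-multiplexing sketch: the busy-hour peak of an aggregate grows
# more slowly than the sum of individual peaks. Models each subscriber as an
# independent on-off source (unit rate when active); parameters are invented.
import random

random.seed(1)

def peak_to_mean(n_subs, p_active=0.1, trials=4000):
    """99th-percentile / mean of aggregate demand for n_subs on-off sources."""
    samples = []
    for _ in range(trials):
        active = sum(1 for _ in range(n_subs) if random.random() < p_active)
        samples.append(active)
    samples.sort()
    p99 = samples[int(0.99 * trials)]
    mean = n_subs * p_active
    return p99 / mean

for n in (32, 128, 512):
    print(f"{n:4d} subscribers: p99/mean = {peak_to_mean(n):.2f}")
```

The ratio falls steadily as the pool grows, so a 4x jump in per-subscriber access rate translates to something well short of a 4x backhaul upgrade once enough subscribers share the link.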

As far as where the bottlenecks really are in these networks, those who know aren't talking, and those who talk probably don't know.

paolo.franzoi 12/5/2012 | 4:20:23 PM
re: No Sure Bets in NG PONs



I don't really disagree with you.  Where I was heading with the backhaul bit is that the services that are being predicted are these very high bandwidth video services.  There are really three ways to approach these depending on take rate:

1 - Unicast them to various endpoints as they subscribe to the stream.  This is very problematic for backhaul as take rate increases.

2 - Deep multicast (aka pull all the streams to the CO). This relieves the pressure on the backhaul considerably, as only one copy of any stream shows up in the backhaul. The biggest cost will be upgrading the uplinks and switching capacity in the CO.

3 - Shallow multicast (aka think PIM sparse mode). This is probably a "happy" medium to get started with until the take rate of HD 3D movies is high. Given that you might start with PPV-type streams, this is a good saver of bandwidth, and you can likely overcome the channel-switch-time issues.
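The backhaul load under the three options above can be sketched by counting stream copies crossing the CO uplink. All the figures below - subscriber counts, channel counts, viewing numbers, and the 8 Mb/s HD stream rate - are illustrative assumptions:

```python
# Rough backhaul load for the three delivery options above, counted as
# stream copies crossing the CO uplink. All figures are illustrative.

STREAM_MBPS = 8  # assumed per-stream HD rate

def gbps(streams):
    """Aggregate backhaul load in Gb/s for a given number of stream copies."""
    return streams * STREAM_MBPS / 1000

def unicast_streams(active_viewers):
    return active_viewers            # one copy per active viewer

def deep_multicast_streams(total_channels):
    return total_channels            # every channel pulled to the CO once

def shallow_multicast_streams(watched_channels):
    return watched_channels          # one copy per channel with >= 1 viewer

viewers, channels, watched = 2000, 300, 120
print("unicast:          ", gbps(unicast_streams(viewers)), "Gb/s")
print("deep multicast:   ", gbps(deep_multicast_streams(channels)), "Gb/s")
print("shallow multicast:", gbps(shallow_multicast_streams(watched)), "Gb/s")
```

Unicast load scales with take rate while deep multicast is capped by the channel lineup, which is why the choice between them hinges on how many viewers end up subscribing to the same streams.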

I keep coming back to the view that what is happening is that people are using more of their bandwidth more often - Netflix videos, uploading pictures, etc. None of this requires more access bit rate, but it puts more pressure on the oversubscription points.



Duh! 12/5/2012 | 4:20:22 PM
re: No Sure Bets in NG PONs


You're right, I'd forgotten to mention multicast gain (which, incidentally, is another strong advantage of TDM-based architectures). Note, however, that the bulk of PON deployments in North America and (if I recall) Japan presently use an overlay for broadcast video and separate MPEG-over-GigE backhaul from regional headend to CO.

Otherwise, I think we agree to a point.  Our difference is that I argue that in TDM/TDMA access networks, the access network is an oversubscription point.  As consumers use more of "their bandwidth" (as you put it),  queue depths will grow during busy hour and when that becomes unacceptable, the access network will need to be reinforced.   I view this use as secular growth that can be projected from historical trends, rather than in terms of specific applications, but the net result is the same.

Of course, the same applies in the intra-office,  backhaul, metro-regional and peering links.  I think you're arguing that these are likely to need reinforcement sooner.  That would be a very interesting question for the enterprising journalist.
