
AT&T Testing Virtualized GPON

Carol Wilson
5/15/2015

Having already deployed NFV in almost 70 of its central offices, AT&T is now testing an approach to virtualizing its optical access, with the intent of reducing costs and adding flexibility. Tom Anschutz, Distinguished Member of Technical Staff at AT&T, told the audience at Light Reading's Carrier SDN Networks event this week that the company's approach will also work for copper-based G.fast networks and probably support cable DOCSIS networks as well.

AT&T Inc. (NYSE: T) presented its virtualization plan for Gigabit passive optical networks (GPON) to the Open Compute Project earlier this year, where it was accepted as a working project, Anschutz said; it will also be demonstrated at the upcoming Open Networking Summit in June. The idea is for this to be an open, industry-wide approach.

Bringing virtualization and its benefits -- lower-cost hardware, greater flexibility and programmability, and faster service delivery on open systems -- to the access network is important because that's where much of AT&T's cost lies.

"Talking about big routers is sexy but let's face it, all the money goes here," he said, indicating the access network. "If I make a difference in this part of AT&Ts network, I get to pull the big lever in terms of the economic game to the company."

The AT&T engineer, introduced by Heavy Reading Senior Analyst Sterling Perrin as a co-author of AT&T's Domain 2.0 manifesto, gave the audience a detailed look into what he calls "Refactoring Communications for SDN and NFV." That process focuses on looking inside today's special-purpose and proprietary boxes to see how their functions can be broken down and handled by IT-grade data center infrastructure. (See AT&T Puts SDN/NFV in Driver's Seat.)


In the case of GPON access, the optical line terminals (OLTs) that provide much of that functionality are most commonly deployed today in central offices, not in the outside plant, although the latter is still a candidate for virtualization, Anschutz said. Most of the OLT's functions can be directly and relatively easily mapped to standard NFV infrastructure, he said.

"I've got basic switching and routing I can do in my fabric, I've got management I can do in x86 [standard off-the-shelf computing]," he commented. Line cards, interfaces to the fabric and processing can be done using standard fabric switches, storage, servers and Ethernet connections.

"The only thing here that I couldn't do in standard data center infrastructure is to create the physical layer for PON," Anschutz said. That physical layer, the PON MACs, will require "something new, I don't get that in a top-of-rack switch today, so let's do something about that."

The GPON PHY, on what would be a new hardware platform, is mapped to the NFV fabric, he says, using separate paths for forwarding data and for control. Forwarded data is routed to the Internet through a series of what he calls spine-and-leaf switches, then through a virtual broadband network gateway and out to the metro core. Control and management via SDN are also handled by the same standard hardware, but on a different path.
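
A rough sketch of the two paths described here, with hop names invented for illustration rather than taken from AT&T's actual topology:

```python
# Illustrative only: the separate forwarding and control/management paths
# described above, each ending on standard data-center hardware.

DATA_PATH = [
    "PON PHY (new hardware)",
    "leaf switch",
    "spine switch",
    "virtual broadband network gateway (vBNG)",
    "metro core / Internet",
]

CONTROL_PATH = [
    "PON PHY (new hardware)",
    "leaf switch",
    "SDN controller (x86)",
    "management and orchestration apps",
]

def path_for(traffic: str) -> list[str]:
    """Pick the path a frame takes: subscriber data vs. control/management."""
    return DATA_PATH if traffic == "data" else CONTROL_PATH

if __name__ == "__main__":
    for kind in ("data", "control"):
        print(kind, "->", " -> ".join(path_for(kind)))
```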

This approach looks very much like the software architecture used for SDN control of standard top-of-rack switches in the data center environment, Anschutz says. It uses OpenFlow in the abstraction layer between the physical network and the upper layers, including an OpenFlow agent and controller, along with NETCONF/YANG to handle configuration. AT&T can then run standard IT applications on top of this virtualized access platform.
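
As a sketch of that layering, the snippet below builds a NETCONF edit-config payload against a hypothetical YANG model and represents an OpenFlow-style rule as plain data. The model name, leaf names and port names are invented for illustration; they are not from any published standard or from AT&T's implementation.

```python
# Sketch of the split described above: NETCONF/YANG carries configuration,
# while OpenFlow-style match/action rules program forwarding.

import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"  # standard NETCONF base namespace

def build_edit_config(vlan_id: int) -> bytes:
    """Build a NETCONF <edit-config> payload against a hypothetical YANG model."""
    rpc = ET.Element(f"{{{NC_NS}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC_NS}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC_NS}}}target")
    ET.SubElement(target, f"{{{NC_NS}}}running")
    config = ET.SubElement(edit, f"{{{NC_NS}}}config")
    # Hypothetical model: a PON port carrying one subscriber VLAN.
    port = ET.SubElement(config, "pon-port")  # not a real published YANG module
    ET.SubElement(port, "name").text = "pon-0/0/1"
    ET.SubElement(port, "subscriber-vlan").text = str(vlan_id)
    return ET.tostring(rpc)

# An OpenFlow-style rule the controller might push to the agent, shown as
# plain data rather than a call into any particular controller API.
flow_rule = {
    "match": {"in_port": "pon-0/0/1", "vlan_vid": 100},
    "actions": [{"type": "output", "port": "spine-uplink-1"}],
    "priority": 100,
}

if __name__ == "__main__":
    print(build_edit_config(100).decode())
    print(flow_rule)
```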

"Based on my IT people telling me what they'd rather see it look like, I made it look like a big Ethernet switch," Anschutz said. "I can make G.fast look like a big Ethernet switch and I think I could make DOCSIS look like a big Ethernet switch and the IT people wouldn't care. So when they write the app that goes at the top that does 802.1X authentication or IGMP snooping or another SDN control function, they would be written once and work on GPON, XG-PON and G.fast, and maybe DOCSIS. That is going to save me a load in IT costs."

What AT&T is testing for optical access builds on what it has already deployed in many central offices to support its NetBond service, which provides secure VPN connections to a wide variety of cloud services. (See AT&T Spotlights Early SDN Efforts, AT&T Adds HP to Cloud Ecosystem, SDN Powers AT&T, IBM On-Demand Cloud Connections and AT&T NetBond Getting Amazon Ties.)

Prior to talking specifically about optical access, Anschutz presented a more general view of breaking down network gear into virtualizable parts, based on AT&T's adopting the approach to infrastructure that many Web 2.0 players are taking. All of this is being done in the name of business imperatives to make AT&T successful in 2020.

"We have to open our network, become modular and be able to allow not just ourselves but other people to program it," he said. "We have to simplify and scale and we think that NFV infrastructure and new operational paradigms borrowed from the Web 2.0 folks is a great way to get that done."

— Carol Wilson, Editor-at-Large, Light Reading

dwx,
User Rank: Light Sabre
5/15/2015 | 1:58:51 PM
Testing?
Reading the article, it doesn't sound like they are testing anything. Saying something like "virtualized GPON" makes little sense since the bulk of the cost and complexity is the MAC layer, which no one is virtualizing. Really what they are doing is trying to virtualize some of the management functions of the OLT, which makes sense. At its lowest layer, an OLT today just converts the GPON MAC/PHY to Ethernet or Ethernet/IP. The GPON MAC/PHY is specialized hardware; there isn't a whole lot you can virtualize there. If you really wanted to commoditize it end to end, you should have used Active Ethernet...
cnwedit,
User Rank: Light Beer
5/15/2015 | 2:13:38 PM
Re: Testing?
Anschutz would disagree that they aren't testing anything and most of the folks in the room would as well. This presentation was the hottest topic of the day. 

He admitted they need something new for the MAC - a standalone piece of hardware. But being able to virtualize the other functions is a significant step because if they can do that, there is no need for siloed applications for different methods of access. 

He also showed the same approach to disassembling gateway routers that connect data center infrastructure to the WAN, breaking down functions and using IT-grade hardware in place of highly specialized boxes. 

I guess it's easy to be dismissive at this stage but it sure comes across as real. 
brooks7,
User Rank: Light Sabre
5/15/2015 | 4:35:45 PM
Re: Testing?
"Saying something like "virtualized GPON" makes little sense since the bulk of the cost and complexity is the MAC layer which no one is virtualizing. "

 

The MAC is actually pretty simple and about the same complexity as an Ethernet MAC. The optics on the OLT side are a pain; only the 3-wavelength version on the ONT side has any challenges at all.

I think they are talking about virtualization at the ONT. That would be pretty simple if you create the actual GPON function as an optical module attached to a MAC with a single Ethernet connection. You could restrict the OMCI to inside that module and manage just the MAC and the Ethernet interface.

After that, it's just a computer. Run whatever software on it you want.

But here is the thing. None of that is the real cost of even the electronics of a GPON network as deployed at something like FiOS. It is all in the electro-mechanical work and construction. I think you could make the electronics (not the optics) inside the ONT cost 0 and it would still be a challenge.

seven
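
As an illustration of the split brooks7 describes above -- a GPON optics/MAC module that keeps OMCI internal and exposes only an Ethernet interface to a general-purpose host -- here is a minimal Python sketch. The class and method names are invented for illustration.

```python
# Illustrative only: an ONT modeled as a self-contained GPON module that
# hides OMCI internally and presents a single Ethernet interface to a host
# that is otherwise "just a computer".

class GponOpticalModule:
    """Hypothetical pluggable module: optics plus GPON MAC, OMCI kept inside."""

    def __init__(self) -> None:
        self._omci_state = {}          # OMCI state stays private to the module

    def handle_omci(self, message: str) -> None:
        # OMCI terminates here; nothing upstream needs to see it.
        self._omci_state[message] = "acked"

    def ethernet_frame_out(self, payload: bytes) -> bytes:
        # From the host's point of view, this is just an Ethernet port.
        return payload

class OntHost:
    """The rest of the ONT: a computer with one Ethernet connection to manage."""

    def __init__(self, module: GponOpticalModule) -> None:
        self.eth = module              # managed like any Ethernet interface

    def send(self, payload: bytes) -> bytes:
        return self.eth.ethernet_frame_out(payload)

if __name__ == "__main__":
    ont = OntHost(GponOpticalModule())
    print(ont.send(b"subscriber traffic"))
```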

 
dwx,
User Rank: Light Sabre
5/16/2015 | 12:45:00 PM
Re: Testing?
Vendors have made ONT SFPs for years now, which put what's needed for PON into a very small form factor and convert it to Ethernet on the electrical back end. There are also pretty simple OLTs today, operating at L2, that don't really do much other than the needed PON functions and act as a simple bridge between PON and Ethernet. Some would argue that if you put more intelligence in the OLT itself, you may not need a bunch of other "SDN" boxes.

Management functions you can of course offload from the OLT to make it even simpler, but generally those things run on low-power general-purpose CPUs on the device anyway. The real-time processes that run at the MAC layer, like DBA, encryption, etc., are going to be in specialized hardware.

Of course, as you said, boxes like the OLT/ONT add up in cost at scale but are just a small fraction of the total cost, the main one being getting the fiber to the home to begin with. I see this as a neat exercise, but I'm not sure it has a real business case at this point. More of a benefit is breaking these access technologies into building blocks and reusing VNFs, so what handles upper-layer functions for DSL also works for DOCSIS, GPON, etc.
cnwedit,
User Rank: Light Beer
5/17/2015 | 5:06:10 PM
Re: Testing?
So how is what you described - "So what handles upper layer functions for DSL works for DOCSIS vs. GPON etc" - different from what AT&T's Anschutz said: 

"Based on my IT people telling me what they'd rather see it look like, I made it look like a big Ethernet switch," Anschutz said. "I can make G.fast look like a big Ethernet switch and I think I could make DOCSIS look like a big Ethernet switch and the IT people wouldn't care. So when they write the app that goes at the top that does 802.1X authentication or IGMP snooping or another SDN control function, they would be written once and work on GPON, XG-PON and G.fast, and maybe DOCSIS. That is going to save me a load in IT costs."
patricknmoore,
User Rank: Lightning
5/18/2015 | 2:03:43 PM
Re: Testing?
Your first paragraph pretty well states one of the big pro/anti SDN debates. I hear people all the time say, "if we just put the intelligence in the box we don't need SDN". This is true, if we want to maintain the rigidly structured environment we have today that takes months/years to shift to any change in technology and/or customer demand.

Your last paragraph is exactly what I think they are doing. They realize that you can't get anywhere by just making a box (whatever kind of box it is) a virtual component. There is a huge difference in "making something virtual" and "virtualizing" it.  

Making something virtual is trying to create an exact duplicate of the physical thing you have out in the network today and hosting it in a cloud somewhere. This can lower your costs a little, and it can make it easier to deploy said devices, but it does not introduce real innovation into the network that allows for agile and dynamic networks. You may be using a virtual instance of that router/switch/etc, but if all you did was make an exact duplicate of it all of the weaknesses are still there.

Virtualizing something is identifying the functions a thing does and implementing VNFs that do those same things. This allows a granularity of dynamic management that not only saves you money but allows evolution of your network. For the hardware side, you keep a less intelligent box (presumably also less expensive) out in the world doing only those things you will never be able to virtualize, such as the optical functions, plus somewhere to actually plug in a wire/fiber.

These may seem like the same thing to some people, but if you really study and consider the differences you can see the potential here.

The future will involve new business models that we haven't even thought of yet. In "current thought" there may not be a clear business case in your mind, but what does this allow that we have never been able to do before? That is where the business case will come from...
brooks7,
User Rank: Light Sabre
5/18/2015 | 2:23:18 PM
Re: Testing?
The problem with virtualizing GPON at the level of de-installing boxes is that GPON is tied to specific hardware.

*rant on*

This is one of my biggest problems with folks not really understanding the access network. You cannot virtualize the physical plant. GPON cannot be run on an Ethernet port. It can emulate multiple Ethernet connections from an architectural standpoint, but at the end of the day you need a chip (ASIC/FPGA/ASSP) that handles the GPON protocol, and that chip needs to be connected to specific optics.

The second problem is troubleshooting. Verizon had us build a super complex ATM-to-Ethernet interworking point inside the OLT for BPON on FiOS. The operations people in Tampa looked at it and gasped. They simply could not model that complexity and asked us to map VCCs to VLANs. Before that we had traffic class modeling and all kinds of complex scheduling algorithms (which we left in the product, just removed from the OSS). Part of the problem is that an actual customer issue needs to be isolated; highly mixed traffic at the edge of the network leads to problems in troubleshooting.

Which is my third piece of this rant. All of this stuff exists in the IT domain: VMs running on Intel servers and IP on Ethernet. Now try calling Google from your home to complain about a problem with Gmail. Now try Yahoo!. I think if you have not gotten my point, you will when you try to find the 24x7 support number.

So you transport and switching engineers - go get your hands dirty and knock on some residences and install some stuff.  Count the number of doors you knock on till somebody answers in the following way:

1- With a gun

2 - Drunk

3 - Naked (normally a subset of 2)

*rant off*

seven

 
patricknmoore,
User Rank: Lightning
5/18/2015 | 2:34:16 PM
Re: Testing?
You are correct, the physical plant will always be the physical plant...period. That physical layer will always require special chipsets, at least as long as the current technologies are what are in use. This is a fact.

There are functions that can be virtualized to "dumb down" that physical layer to some extent and allow for more dynamic management. This is a fact.

And, from experience, you left off the dog you sometimes get met by: either a friendly one that jumps on you with muddy paws and messes up your clothes, or a mean one that tries to eat you. Been there, done that, so don't assume none of us have.

You access engineers need to step back and consider the future a little; it is coming whether you want it to or not. Your points are variations of the same ones I have heard for virtual CPE (albeit the issues get more complicated on this level), and we are doing that today.
brooks7,
User Rank: Light Sabre
5/18/2015 | 3:40:03 PM
Re: Testing?
Virtual CPE has been around for a few years.  Just about every UMTS vendor has a version.

But the point is (and read Carol's comments) that essentially the rest of the OLT is an Ethernet switch.  Not a lot of specific management functionality at all.  So since you need to have a box with a processor, maybe you should figure out how much money you are going to save by having the same box with a processor there.

seven

 
Duh!,
User Rank: Blogger
5/15/2015 | 4:49:56 PM
Re: Testing?
I'm trying to wrap my head around what a virtualized GPON OLT blade would look like. 

Obviously, there are a bunch of functions that have to reside in an ASSP (or FPGA?). There are generic Ethernet switch functions, which are obviously NFV applications. There are some GPON-specific functions that seem to lend themselves to virtualization, like OMCI and the device management capabilities it supports. And there are a few gray areas, like dynamic bandwidth allocation and some parts of physical layer OAM, that have real-time components yet would benefit from virtualization.


Have to admit to a twinge of envy for the folks working on this.
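
A short sketch of the partition Duh! outlines above, tagging each function as virtualizable, hardware-bound, or a real-time gray area. The lists are drawn from the comment and are illustrative only, not an authoritative breakdown.

```python
# Illustrative classification of GPON OLT functions following the comment
# above; not a complete or authoritative list.

FUNCTION_PLACEMENT = {
    "Ethernet switching / bridging": "virtualize (generic NFV application)",
    "OMCI and device management": "virtualize (GPON-specific, not real-time)",
    "dynamic bandwidth allocation (DBA)": "gray area (real-time component)",
    "parts of physical layer OAM": "gray area (real-time component)",
    "GPON MAC/PHY framing": "stays in an ASSP or FPGA",
}

if __name__ == "__main__":
    for function, placement in FUNCTION_PLACEMENT.items():
        print(f"{function:<40} -> {placement}")
```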

 
Ben Saggio,
User Rank: Light Beer
5/15/2015 | 6:47:50 PM
OLT Virtualization
From looking at the details, it seems that PMC-Sierra (the virtual OLT part of this trial) did a hell of a job here with this virtualization concept.

As one of the commenters said, there's a lot under the hood of GPON that differentiates it from the much simpler Ethernet: fragmentation and defragmentation of packets, PtMP and dynamic TDM DBA algorithms, the specific OMCI control language and QoS elements.

If I captured it correctly, PMC represented the GPON MAC as Ethernet, and this is an interesting approach.

Virtualization of the OLT indeed means that the control plane now runs in a VM (nobody meant that the MAC will be emulated on an x86 CPU), but in fact it is much more than that -- it includes virtualization of the traffic engineering, which was completely taken out of the OLT and moved to standard (cheaper :) SDN switches and routers.

It seems that AT&T's approach of keeping just the PHYs at the edge and taking all the rest out enables high-density blades.

I am not sure about all the details, but there should be some quite interesting new OLT hardware here.

 
cnwedit,
User Rank: Light Beer
5/15/2015 | 7:11:34 PM
Re: OLT Virtualization
Unfortunately, I don't have permission to share Tom Anschutz's slides publicly, which would make things clearer. This is a public working demo within the OCP, however, so they aren't trying to hide anything.
GregW333,
User Rank: Lightning
5/18/2015 | 1:48:45 PM
vGPON
Anything that minimizes the amount of intelligence in the CO is goodness. Less intelligence equates to fewer people, less management, lower opex, et al. Plus, COs -- 20,000 of them in city centers across the USA -- are a huge hidden real estate asset.