
Timeline for NFV Still Hazy

Heavy Lifting Analyst Notes
Graham Finnie, Consulting Analyst
10/31/2013

In all the hullabaloo around software-defined networking (SDN) and network functions virtualization (NFV), with new "virtualized" telco products announced almost daily, it's chastening to note that virtually nothing virtual has actually been deployed by telcos to date, and that the timeline for this much-anticipated revolution remains, for now, very much a matter of opinion. While the logic behind NFV and SDN is unassailable, and the destination unarguable, at least for leading operators, that doesn't mean they have a clear view yet of how to get from A to B, or of how long it will take.

For the time being, and probably for most of the next year, "proof of concept" will be what it's all about for the majority, and the gains from virtualization in this interim phase will largely be tactical, entailing point solutions for a few of the most easily convertible functions, such as optimization or firewall software.

Getting beyond that and applying SDN and NFV to the rest of the network -- and to the OSS that goes with it -- is a different matter altogether. At last month's big SDN & OpenFlow World Congress in Germany, for example, one of the world's leading SDN and NFV evangelists, Axel Clauberg of Deutsche Telekom AG (NYSE: DT), noted bluntly that "There is no carrier-grade SDN today." Clauberg's team is charging ahead with SDN in its TeraStream architecture, but as for the legacy DT network, "This is an architecture for the end of the decade," he said.

It's easy to see why: There are some big holes in carrier SDN that still need filling, ETSI's NFV effort remains a work in progress, and there are some major controversies still to be resolved. Should the southbound interface between control and data layers focus on OpenFlow, as the ONF hopes, or is the more hybrid environment advocated by Open Daylight and some carriers the way to go? How far can carriers use open-source approaches -- especially OpenStack -- and should they opt to use some proprietary protocols and APIs to fill in gaps, at least in the short term? What happens to the existing standards environment and its interfaces? What can be shifted onto generic COTS hardware, and what must remain the domain of specialized hardware and chipsets? None of these issues has a clear answer yet, and there are plenty of others we could add.

If there was one message from the SDN event, though, it was that the debate has moved far beyond hardware economics to a much more ambitious vision of a radically simpler network environment that will, if deployed, transform the telco industry. For DT, the aim is a network "running on autopilot" like a modern airliner. For Telefónica SA (NYSE: TEF), likewise, it's about a network that's zero-touch and fully programmable. Most of all, it's about a complete reworking of OSS.

That will be a mammoth task for the average legacy telco. It will likely require a new type of organization with a different set of skills more akin to those needed in enterprise IT. But as one operator put it, "Only a few operators see radical simplification as the aim [at present]. The rest are still watching and waiting."

In that phrase lies the main dilemma for the supply industry. It's hardly an ideal situation for strapped-for-cash vendors trying to invest and plan for the supposed revolution. Jump too early, and they risk creating a large hole in their revenue lines for the next two to three years. Jump too late, and rivals could already be embedded at leading operators before they have a product to sell.

For the big vendors, the aim must be both to provide the proofs of concept and point solutions that will dominate the first phase through the next year or two and to have a clear roadmap in place to achieve the more radical vision being debated right now in more and more big telco boardrooms.

At Heavy Reading we'll continue to track opinion through our ongoing program of online network operator surveys and interviews, and hope to be able to report back with a firmer timeline soon. Watch this space.

— Graham Finnie, Chief Analyst, Heavy Reading

Comments (10)
dapperdave
11/6/2013 | 5:51:48 PM
Re: Yet another vendor risk
There are, of course, service providers who have designed, built and deployed their own networks; Google is the example. But the business culture within Google is certainly different from the business culture within a traditional telco SP.

Google had more of a vested interest in designing and building its own networks -- rapid service delivery -- something traditional telco SPs are not good at. While TSPs promote an interest in very rapid service delivery, their execution has been all but non-existent.

There are progressive teams within these traditional TSPs that are hell-bent on avoiding their own internal ops teams' 24-36 month NPI service deployment cycles... but they are fighting a battle with powerful and ascendant operations orgs in the same TSP, which REQUIRE the longer introduction cycle to identify, manage and predict operational cost and to reduce cost risk from unknown or ill-formed operations.
victorblake
11/1/2013 | 10:37:19 AM
Re: SPIT IS IT!!
Ray -- If you are referring to OSS standards (like TMN), then I would agree that the process is broken and it is hard to call those standards at all, because they are hardly widely used. On the other hand, if you intend to include Ethernet in the "standards," then I would disagree, because that process works and works well. Ethernet is, for example, the most broadly adopted telecom standard (along with other IEEE and IETF standards).

What is (at least in part) broken about the "OSS" standards is that they have been industry-vertical (telco, etc.), while the concepts of "management" that come out of lower-level standards (like Ethernet and IP -- i.e., IEEE and IETF) are relatively blind to the problems of service providers.

A great example is automation in Ethernet. Over the years various Ethernet amendments have added automation, and the IEEE has worked closely with service provider forums like the MEF to better understand operators' needs. Nonetheless, the models still do not include basic functions like signaling security (authenticating devices before signaling requests are accepted), accounting (and I don't mean statistics, I mean accounting for billing purposes, which distinguishes traffic by customer), and service activation and testing.

As I said, organizations like the MEF have accomplished a lot to balance the input to standards in support of service providers, but the world of OSS remains highly customized and lacks successful, broad, multiservice standards.
desiEngineer
10/31/2013 | 2:54:37 PM
Re: Yet another vendor risk
"At least one,  maybe two, larger network operators are also warning vendors that if they don't deliver the NFV solutions quickly, the network operators will do themselves, in house."

ROFLMAO!

-desi
Ray@LR
10/31/2013 | 1:40:57 PM
SPIT IS IT!!
General comment -- in February 2010, Light Reading first published the Service Provider IT (SPIT) manifesto and said this was the BIG change that needed to be addressed.

Guess what? We were right. SDN and NFV are, essentially, the offspring of a telco + IT union/convergence, whatever you want to call it. And the biggest threat to all the established players in this industry is to ignore the input of experts from the IT world, to whom the concepts of SDN and NFV are much easier to grasp.

As far as I am concerned, it's official -- SPIT Is IT!
Ray@LR
10/31/2013 | 1:36:54 PM
It's OSS, Jim, but not as we know it...
The thing that strikes me from having talked with various parties engaged in the development and testing of SDN-related and NFV-related tech is that the current/legacy concept of an OSS will not be so relevant and that the current concept of 'standards' -- most particularly the current telecom standards processes -- will not be 'fit for purpose'.

That's the change that I don't see being taken on board by the current providers of OSS systems. And that's the part of the process that will, most likely, end up going in-house or being outsourced to a managed services provider.
victorblake
10/31/2013 | 11:03:45 AM
What does success mean? Calling an old dog by a new name?
Operators have been talking about complete automation of their OSS for years. Certainly, having a different interface (I would hardly call most of them APIs) into each system or box (router, switch, DNS, accounting, etc.) makes that difficult, and having a uniform, standardized set of APIs would be useful. But such an approach is not what Software Defined Networking is aimed at. It might be what's left when the other pie-in-the-sky visions fall aside (as some of them have begun to do), and when there is a realization that you cannot gather data from a box in the network, deliver it to a controller, make a decision, and expect the result to get back to the box in anything even close to forwarding-plane, real-time timescales. Sure, you can create a policy that says X and automate distribution of that policy -- but (a) that has already been done before and (b) that is not what SDN is 'supposed to be.'


As for NFV -- it's just a fancy name for what we've called L4-7, which has been around for decades -- literally. "NFV" will be successful because all of the vendors will take their existing products and rebrand them "NFV!" Brilliant. That accomplishes a lot. It's like rebranding hosting as "cloud." Gee -- it was on a remote server hosted by an SP before, and now it is on the same server doing the same thing and it is "cloud." Ohhh -- ahhh.


The REAL developments that are going on are standardized APIs, not only for cloud services but also for network appliances, so that the services can talk to those appliances. A simple example of this is the setup of "cloud" services that requires, say, server load balancing (SLB) or so-called global server load balancing (GSLB), which just means location-based rather than simply load-based. To do this as VMs are spun up, we need a simple API that allows the cloud services controllers to signal the SLBs to create new instances. This used to be done either by hand or, more likely, by script, the real problem being that the scripts were custom for each SLB box (Brocade, A10, et al.). If a simple standardized API were made available (NETCONF, for example, was supposed to do this and still could in theory), the problem would be solved. If anyone wants to call that NFV, that's fine, but 100% of the technology to do it, and the idea, existed before the term NFV -- which appears to me to be a marketing tactic more than a technology.
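
A minimal sketch of what that kind of standardized call could look like, using Python's ncclient NETCONF client. The device address, credentials, and the SLB YANG model (namespace and element names) are invented purely for illustration -- the absence of a standard model is exactly the gap being described here:

# Sketch: a cloud controller signals an SLB over NETCONF as a new VM comes up.
# The ncclient calls are real; the SLB data model below is hypothetical.
from ncclient import manager

NEW_MEMBER_CONFIG = """
<config>
  <slb xmlns="urn:example:params:xml:ns:yang:slb">  <!-- hypothetical model -->
    <virtual-server>
      <name>web-pool</name>
      <member>
        <address>10.0.0.42</address>  <!-- the newly spun-up VM -->
        <port>443</port>
      </member>
    </virtual-server>
  </slb>
</config>
"""

# The same few lines would work against any vendor's box that implemented the
# model -- which is the whole value of a standard API over per-box scripts.
with manager.connect(host="slb.example.net", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=NEW_MEMBER_CONFIG)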


All that said, the real job these organizations can do is to take the leverage and interest and develop and publish standard APIs for specific network functions. While this exists for services like DNS and web (HTTP), it does not for services in lower layers (session and transport), such as those handled by SLB. This is only one example; clearly other APIs would be useful for security, etc.

Still, the problem remains that when a new technology arises, a new API will have to be developed -- leading to a delay in the market while the API is published... But at least it will be better than the legacy API -- MIBs.
Carol Wilson
10/31/2013 | 10:42:15 AM
Re: Yet another vendor risk
Victor,

I wasn't talking about the network operators building routers and switches; I was referring to software and other things related to making NFV work and automating the processes.

You raise some good points about the danger of centralizing control, but the reality for today's network operators is that they must find ways of developing services faster and of making their network resources more flexible.

Moving to virtualized functions, and separating the data and control planes of their networks, does represent one way to meet these challenges. I don't think any approach is unassailable, but I don't think you can just dismiss NFV/SDN as the future of networks.
victorblake
10/31/2013 | 10:37:11 AM
Re: Yet another vendor risk
In response to Carol -- I highly doubt that "network operators" will build network gear (routers and switches) themselves. It requires engineering skills that differ very much from the skills required to design, build, and run networks. The only "network operator" I know of that would consider this has a show-and-tell network operation, but it is not a "telecommunications service provider." It is the same operator that used to run around with VGA monitors to manage its co-lo gear because it didn't want to spend the money on KVMs. (Note -- I'm sure spending money on labor is far more cost-effective than a KVM -- sarcastically, of course.)


As for the article -- glad to see some acknowledgement of the GAPING holes not only in the technology (the ideas), but in the products and the theory of operations. I'd go further and dispute your claim that the logic is unassailable. In fact, I'd argue it's quite the opposite. It's extremely easy to attack and poke holes in the logic of NFV and SDN. In an environment where it is EXTREMELY EVIDENT that centralized control can be and IS ABUSED (by governments, corporations, competitor corporations, users, and hackers), centralized control is the LAST thing you would want.


I'm all for centralized policy and automating the distribution of those policies, but that should be held in check with distributed routing, distributed switching, and a subscription-based model that allows distributed devices to subscribe (or not) to various 'centralized' policies. But centralized forwarding control cannot, even in theory, respond fast enough to changes in the network to meet the needs of fast re-routing and dynamic changes to state for applications like security.
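
A rough sketch of that subscription model in Python (all names invented for illustration): policies are authored centrally, each device opts in only to the policy topics it wants, and forwarding decisions stay on the device.

# Sketch of subscription-based policy distribution: central authoring,
# opt-in delivery, local enforcement. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PolicyServer:
    """Central policy author; it publishes policies but never forwards packets."""
    subscribers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic: str, policy: dict) -> None:
        for deliver in self.subscribers.get(topic, []):
            deliver(policy)


@dataclass
class Device:
    """A router/switch that applies subscribed policies; routing stays local."""
    name: str
    policies: Dict[str, dict] = field(default_factory=dict)

    def accept(self, policy: dict) -> None:
        self.policies[policy["id"]] = policy  # config state only, not forwarding


server = PolicyServer()
edge1 = Device("edge1")

# edge1 opts in to security policies only; anything else is simply never sent.
server.subscribe("security", edge1.accept)
server.publish("security", {"id": "acl-42", "match": "tcp/23", "action": "drop"})
server.publish("qos", {"id": "qos-7", "action": "police", "rate": "10m"})

print(edge1.policies)  # only the security policy was accepted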


It seems to me that a serious lack of understanding of the history of technology, of the mistakes made, and of control systems is conspiring to produce another terrible mistake (if anyone even recalls what that is).
DOShea
10/31/2013 | 10:32:04 AM
Re: Yet another vendor risk
If they do it in-house, carriers also can probably stall these projects if needed without much blood loss. If they hire vendors and later slam on the brakes, there will be blood.

(Sorry, had to take the opportunity to throw in a movie reference where I could.)
Carol Wilson
10/31/2013 | 10:22:52 AM
Yet another vendor risk
At least one, maybe two, larger network operators are also warning vendors that if they don't deliver the NFV solutions quickly, the network operators will do it themselves, in house.

That's not an option for every network operator, certainly, but the biggest players could do it, and they are certainly showing impatience.