NFV Elements

Analysts Warn of Major NFV Gaps

Despite significant new activity this fall on the NFV specification front, two industry analysts who closely follow network functions virtualization are warning that there are still major gaps in how it is being defined that will slow NFV deployment and undermine its business case.

Caroline Chappell, principal analyst for cloud and NFV at Heavy Reading, and Tom Nolle, president and founder of CIMI Corp., both say there is still no clear definition of what orchestration functionality virtual network functions (VNFs) need to run within the network, nor any clear assignment of management duties among the different layers of the NFV architecture. Without these key elements defined, network operators that want to deploy NFV will be reliant on vendor-proprietary solutions to plug those gaps.

The two analysts also agree that these two fundamental issues are not being addressed by the new open source group, the Open Platform for NFV Project Inc., nor were they adequately addressed in the publication of a third white paper from 30 operators involved in the European Telecommunications Standards Institute (ETSI) NFV Industry Specifications Group. (See Open NFV Group Uncloaks Its Platform Plan and ETSI Group to Tackle Thorny Operations Issues.)

"Management is still the black hole of NFV," Nolle tells Light Reading, in his characteristically blunt manner. He believes that the ETSI ISG NFV process itself is flawed because instead of a top-down approach that started with the benefits sought, it has been building a bottom-up architecture, an opinion he has widely shared with the ETSI participants and in his CIMI blog.

Chappell sees the ETSI NFV ISG effort as a high-level affair, lacking necessary details, and is concerned the telecom industry is about to let its history of failing to address such details repeat itself.

"If you want to be able to do the necessary level of automation and orchestration, there are details you have to tackle," she comments. "As an industry, the telecom sector has been very bad at tackling details."

As a result, Chappell notes, it has been left to vendors to tackle the details, an approach that results in proprietary deployments lacking the plug-and-play interoperability characteristics the network operators claim to be seeking in the move to virtualization.

VNFs left undefined
One of the most fundamental gaps that both analysts cite is in defining what the virtualized network functions will need to run in the expected automated fashion.

"There are still some fundamental questions that are open that are so fundamental it's often even hard to understand why they still exist," Nolle says. "You have to ask the question: What is it we provide to a VNF to allow it to run? How does that which we provide it get produced? How do limitations or characteristics of what we provide it influence the design of the VNF itself?"

There is also a lack of information about what software can easily be translated into a VNF, or a description of what VNF services are, the veteran analyst says. While there is something called a service orchestrator that can build services, there is no service data model and no clear method for building a service, he says.

Nolle himself tackled that topic twice, initially in the Cloud NFV vendor ecosystem and, after he left that effort, with his own ExperiaSphere. "I'm not telling anybody I have the right answer -- I have AN answer," he says. (See Answering the NFV Management Challenge and Analyst Unveils Open Source Model for NFV-SDN Management.)

Chappell sees the need for "a recipe book" of VNF requirements.

"I agree we need to understand more about what individual VNFs need if they are running on an infrastructure -- what does the basic requirement look like, how do they behave on that infrastructure?" she says. "What are the specific details of that? All the operators are trying to do it for themselves or through ecosystems. I haven't seen anything being published -- that is the kind of recipe book we need to get to."

The MANO dilemma
The other major challenge is in the network management and orchestration layer, where there are a number of issues still to be addressed.

Chappell has spent much of the last six months looking at the NFV process and particularly examining issues around what's called the MANO layer of the ETSI NFV ISG architecture for management and network orchestration for a report she will produce in November: "Managing and Orchestrating the Telco Cloud: Preparing the NFV Mano and OpenStack for Prime Time."

"We need a clear view of the specific responsibilities of each individual layer of orchestration, for example, what level of resource management is carried out by the VIM and what by the NFV Orchestrator and how the VNF Manager interplays with both -- the deep level of details that we need that is missing," Chappell says, referring to the ETSI NFV ISG architecture as shown below.

ETSI NFV ISG Architecture

The new Open Platform NFV open source group is looking at the VIM, as well as the NFV Infrastructure, but Chappell says their approach is limited.

"OPNFV is only looking at the VIM in the context of OpenStack (the open source cloud platform). But there are a whole lot of other NFV management considerations that OpenStack does not address, and which the community may never agree to include in scope," she notes. "At the moment, people are putting them in this bucket called 'NFV orchestrator' and there is no clarity whatsoever," the Heavy Reading analyst says. "If OPNFV is not looking at the entire managed stack, but only at the bottom level of it, where OpenStack resides, then I don't think these things are going to be easily resolved."

Looking at the same ETSI diagram, Nolle says there is no connection between the VIM and management: "The resources are part of NFV-I -- well, how do we manage anything?" he asks, leading up to his conclusion that management is still a black hole.

Cause and effect
Nolle and Chappell agree that the industry is facing a process problem. The growing number of groups addressing these complex issues is straining the ability of operators to participate, given their limited resources. Chappell says a number of operators have grumbled to her about the OPNFV -- not for its intent or purpose but because they don't have more people to send to more meetings.

The companies who can afford to participate to the greatest extent are vendors, since they stand to reap business benefits from both influencing and staying close to the development of new specifications. Chappell believes the notion that vendors are deliberately impeding the process "is a myth, it's untrue." But she and Nolle agree that contributing to either the ETSI NFV ISG or the OPNFV at a significant level requires resources that some operators and independent voices don't have.


And Nolle admits he has widely shared his concerns with operators, so they aren't overlooking the issues he raises, just ignoring them. At the same time, he points to his own recent surveys of network operators, which show that none of them currently claim to be able to justify NFV deployment in their networks on cost grounds.

That's a direct result of the way the bottom-up process has worked, Nolle says. He is less optimistic than Chappell on what the future holds for NFV. She believes 2015 will be a pivotal year in which some of the details around VNFs and network management need to be resolved, while Nolle thinks next year is when NFV should have been able to flourish.

"By the end of 2015, we could have production NFV running anywhere we wanted to run it," he says. "But what we would need to do to make it happen would not appear to be part of the processes that are under way."

— Carol Wilson, Editor-at-Large, Light Reading

TomNolle 10/23/2014 | 7:35:03 AM
Re: Minding the gaps I agree that the TMF has a stake in this, as I said in my own blog on the topic. I think the goals of ZOOM are reasonable, but I can't honestly say that I believe it represents a top-down approach. I had discussions with TMF leaders at the same time I was talking to the NFV ISG, and made the same points to them in those discussions. I proposed changes to the TMF models, primarily the introduction of what I called a "binding domain" that would reflect the dynamism in new networks created by agile service needs meeting agile resource architectures like SDN and NFV. I've not seen any indication of progress on these issues, and in fact some of the changes being proposed in the latest round of TMF material actually remove things that I'd found helpful to virtual environments. I'm also uncomfortable with having industry progress driven by bodies that charge for membership and charge for attending meetings -- and the charges are not insignificant. The NFV ISG is at least a body that's truly open to all.

I'd love to see the TMF take a lead in aligning the future of management and orchestration, but I think they need to step up their own game to do that.
carlpiva 10/23/2014 | 5:37:06 AM
Re: Minding the gaps

Hi Carol, Tom and others,

Interesting blog and follow-up discussion. Wanted to comment on it from a TM Forum perspective, as we, along with our membership, are heavily invested in the management and orchestration aspects of NFV.

As most of you know the Forum has a ZOOM program (short for Zero-touch Orchestration, Operations and Management, please refer to http://www.tmforum.org/zoom for more info), where we have a community of 100+ companies and 900+ individuals who are either participating in the program or tracking its progress.

We are working on 12 different topics supporting three major themes: DevOps Transformation Framework for the Digital Ecosystem, Blueprint for End-to-end Management and NFV Procurement and Operations Readiness including hybrid management of current and virtualised networks. 

We aim to deliver substantial progress leading up to our Digital Disruption event in December in San Jose. For instance, we will provide a number of gap filler definitions and a "snapshot" information model, with an associated Policy Management model addressing how the NFV pieces actually relate to each other from a management and orchestration perspective and integrate with current networks.

We are working to define the rules and specifications for producing Lego blocks to create virtualized services, so that when they are joined up, they interoperate and form a consistent end-to-end view. This will allow service providers and vendors to assemble their own solutions.

We have tried very hard to take a top-down approach, as we believe this is needed in order to solve e.g. the challenge of end-to-end management across administration boundaries. This is one of the reasons why we haven't yet fully addressed the orchestration challenge, as we believe we need to have an information and policy model in good standing, and the practical experience from our DD13 catalysts in order to effectively address the orchestration and policy space. 

Now, will we have all the answers? No. Will everybody agree with the ZOOM conclusions and recommendations? Probably not. Will we have taken a significant step forward to address these issues? Well, we certainly think so, but the "proof is in the pudding" and we will leave that to future catalyst PoC projects, analysts and industry leaders to judge.

Trying to solve these challenges is a sobering experience; we need to be humble in our approach but also strong in our conviction -- we do have many of the right prerequisites in place to lead the wider industry through these challenges.

To Tom's point, standards will define software, but equally, software will define standards. We are trying to learn from both worlds and yes, it is a painful experience. We had five NFV catalysts leading up to Nice in June 2014, with members working on four additional catalysts leading up to Digital Disruption in December (DD14), all of which have contributed immeasurably to our technical specification work. These Catalyst Proof of Concepts span areas such as:

- Policy controlled management and operations (including information framework integration, user interfaces to provide role-based views and policy-based management for security, optimization, compliance and governance),

- Data-driven network performance optimization for NFV and SON (building a closed loop using KPIs to enable network changes, optimization and healing),

- Harnessing the benefits of NFV to maximize profitability (exploring NFV policy orchestrator taking in fluctuating costs and metering to maximize operating margins)

- Multi-cloud, multi-administration SDN-NFV service orchestration (seamless management of services relying on virtualized resources in a public cloud).

In addition to this, there are a number of other PoCs being run, and other open source initiatives that will likely have an impact on future standards development. Sometimes I think of these things as communicating vessels.

We would love to see the key stakeholders in the NFV movement join us at Digital Disruption (we are also hosting a full day workshop where the ZOOM program will present its results and where we will also discuss next steps). Going into December we will also be launching three eBooks, one for each theme above, where we summarize our progress and conclusions.


TomNolle 10/22/2014 | 9:47:18 PM
Re: Minding the gaps I'd enjoy that chat, Seven, and I'll let you know next time I'm out there!

Your approach was identical to my approach in framing my open-source models for NFV (both in CloudNFV and ExperiaSphere).  The idea is to use more of a message-based approach with XML-structured data.  Each "object" to be orchestrated is responsible for defining its own data needs and relating them to the messages it can accept--transformations in short.  This is pretty compatible with the Linked USDL and TOSCA approach I mentioned.

I also agree on the control network notion.  One of the rules I suggested in my documents and emails on the NFV boards was that there's an axiom here: thou shalt never allow an element of infrastructure to appear or be addressable in the service data plane.  If you do, you can kiss stability and security goodbye.  My solution in CloudNFV was to use RFC 1918 addresses for all the internal stuff, segmenting the 10.x.x.x space as needed.  I'm sure there are more elegant ways of doing that, but this served its purpose and leverages more of the available software tools.
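That axiom is easy to enforce mechanically. Here's a minimal Python sketch of the idea (the function names and the validation scheme are illustrative assumptions, not taken from CloudNFV itself): it flags any RFC 1918 infrastructure address that leaks into the service data plane.

```python
import ipaddress

# RFC 1918 private ranges: internal/control traffic lives here, with the
# 10.0.0.0/8 block segmented per tenant or function as needed.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(addr: str) -> bool:
    """True if addr falls inside RFC 1918 space (infrastructure-only)."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

def check_service_plane(addresses):
    """Raise if any infrastructure address is exposed in the service data plane."""
    leaks = [a for a in addresses if is_internal(a)]
    if leaks:
        raise ValueError(f"infrastructure addresses leaked into service plane: {leaks}")
```

Running a check like this at service-activation time keeps the rule from depending on operator discipline alone.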

brooks7 10/22/2014 | 9:25:16 PM
Re: Minding the gaps Tom,

Sounds like someday we should talk, and if you ever make it to the Bay Area, let me know.  The biggest problem I found as we did these kinds of things in our Mail Service was that we were cobbling together disparately implemented technologies.  Having hard APIs that were expected to be software-programmable turned out to be rather troublesome.

What we started to do was define XML files to exchange, and then we moved on to JSON data.  Thus we moved away from strict coding APIs and onto essentially higher-level exchanges.  About the only other mechanism that we used was SQL for certain remote database operations, but those were older and we ended up wanting the level of indirection.

I agree wholeheartedly with davidfoote that direct API calls end up being problematic.  Eventually you want to be able to replace one major element with another.  As long as you have an extensible interface where you can easily ignore deprecated information, you can migrate.  My experience with many software packages is that they tend to redo APIs between versions.
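A tolerant-reader pattern captures that idea: each side extracts only the fields it knows and ignores the rest, so either end can add or deprecate fields between versions without breaking the exchange. A minimal Python sketch (the message and field names are hypothetical, not from any actual NFV interface):

```python
import json

# Fields this reader understands; anything else in the message is ignored.
KNOWN_FIELDS = {"service_id", "state", "capacity"}

def parse_status(raw: str) -> dict:
    """Tolerant reader: keep known fields, silently drop unknown or deprecated ones."""
    msg = json.loads(raw)
    return {k: v for k, v in msg.items() if k in KNOWN_FIELDS}

# A newer sender adds "latency_ms"; this older reader still interoperates.
v2_message = '{"service_id": "vFW-7", "state": "up", "capacity": 10, "latency_ms": 3}'
status = parse_status(v2_message)  # latency_ms is dropped without error
```

The same schema-free exchange works whether the payload started life as XML or JSON; what matters is that neither end hard-codes the other's full structure.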

It isn't clear from the architecture diagram, but the other thing that you may want to ponder is the use of an overlay network for the control portion of the network.  Service registration and DNS publishing can get tricky if you try to use a regular network structure.  That is, I assume you are planning to use regular web structures for a pub/sub model to be able to figure out what goes where and allow clean entrance/exit to the information flow.


TomNolle 10/22/2014 | 8:32:46 PM
Re: Minding the gaps What you're describing is what I suggested to the operators in the fall of 2012 after the NFV Call for Action and before the ISG was formed.  I said then that IMHO it would be difficult to get something like this working if the focus didn't turn very quickly to prototyping.  I followed that with a PowerPoint to the ISG in the spring of 2013 that made the same points in more detail.

I'm of the view that the basic architecture of NFV is set by its mission and that as you suggest it would be fairly straightforward to assemble existing elements (even from open source) and build something that would fit that mission.  This could be used to experiment with approaches, uncover issues, etc.  I still think that's possible even now.  Cloud tools like TOSCA have been combined with Linked USDL and SugarCRM by Jorge Cardoso in Portugal to create a proto-order-management and deployment system for the cloud.  It could be adapted to NFV easily (as I've suggested in my ExperiaSphere project).

The biggest benefit of prototype-based approaches is that they're not abstract.  You can show people what happens, how things work, and that aids considerably in the visualization of issues and opportunities.  It also helps expose fallacies in early assumptions or weaknesses in approach, letting you fine-tune quickly.  In short, it can engage people quickly.  If the problems NFV is supposed to solve are as serious as operators say, we need to promote that engagement before the issues run away from us.
brooks7 10/22/2014 | 6:27:13 PM
Re: Minding the gaps I guess my view is very simplified.

The IT folks just went out and tried some implementations.  When they found what worked for them they scaled it (see VMWare).  Things that are new here are old hat there and I still think we could all take a page out of that book to see what we could steal from that world.

I think this would save considerable time and money in trials.  I think this notion that you are going to be able to pre-do architectures for this kind of work is not going to be fruitful.  In the US, we had all of Bell Labs to make architecture who then gave it to Telcordia to specify who then gave it to Western Electric to make.  

Expecting a hodge podge of groups to make a coherent architecture just seems unlikely.  If I look at the IETF as an example, most of what gets published there dies an ugly death.  And what works gets broadly adopted.  Except for IPv6 (I kid, I kid).  

I think the only way this is going to get off the ground is grass roots in the carriers.  Somebody just starting to implement.  There are things that could be done today and they aren't for FUD reasons.


davidfoote 10/22/2014 | 5:49:38 PM
Re: Minding the gaps Thanks for the thought provoking article and discussion.

I would like to add some additional thoughts or expansion of previous comments.

Even though APIs (which are the protocols/interfaces of the software world) may not have as much inertia as protocols and interfaces do in the hardware world, they will probably eventually have significant inertia in NFV implementations. NFV moves the network towards a large, complex, interconnected, modular software system. So as NFV implementations mature and broaden, more and more software modules will be using the defined APIs, such that changing those APIs (which may be faster and easier to do than changing protocols/interfaces in the hardware world) will have quite a significant impact on the total system, and thus creates a different kind of inertia. So I would argue that defining these types of interfaces up front is very important. It therefore corroborates the article's emphasis on the potential negative impact of aspects like management not yet being sufficiently defined in the industry associations/standards.

Also, the article points out the potential negative impact of proprietary solutions. To that issue, even though we are talking mostly about software, industry associations are where non-proprietary solutions are typically defined. (of course, some small club of companies can go define their own mechanism or build their own open source or commercial solution but usually that results in several different clubs doing the same thing and confusing the market until competition eventually decides a winner). So the association/standards process is still quite valuable even when dealing with primarily software.

Finally, the impact and inertia will expand in another dimension that is not so obvious but equally important. That dimension is related to who is using the APIs (interacting with VNFs). The most obvious aspect is how APIs and VNFs work within a single service provider's network. And, as pointed out in this article, there is still some work to be done in defining APIs for certain aspects of that case (i.e., management). However, there will also be cases where service providers access each other's VNFs (analogous to today's commonly used MVNO or roaming capabilities). So having defined APIs, including for some management aspects, will be very important for that case as well. And finally, there will be the case where non-service providers access virtual functions, say, for example, some type of content provider. Again, this emphasizes the need for well thought out, defined, standard APIs that can be used to interact with virtual functions (VNFs) both within a service provider's network and outside of it.
cnwedit 10/22/2014 | 2:25:59 PM
Re: Minding the gaps Excellent point, Tom, and nice addition to what I covered in the article. 
TomNolle 10/22/2014 | 2:24:39 PM
Re: Minding the gaps I think one problem that the ISG has faced, and that other bodies like the TMF are also facing, is that we're not used to thinking of standards that define software.  As we move into our virtual future--however we get there--there's not much dispute that software plays the key role.  That's a whole different game than standards that were aimed at harmonizing hardware.  We used to worry about protocols and interfaces because both were hard to change on devices.  Now it's a world of APIs that have much less inertia.  There are many examples of how standards processes have to change to accommodate a software-centric world, but the best thing to say in summary is that software is produced by software projects, not standards or specification groups.  We'll have to decide how to accommodate that reality at some point.
cnwedit 10/22/2014 | 12:30:23 PM
Re: Minding the gaps Well, there's open and there's "open."  I had thought what was different about the ETSI NFV ISG was that it was launched by operators and continued to be led by them. But as Tom and Caroline point out, that doesn't matter all that much when the operators don't have the resources or talent to push the process forward or note where it's lacking.

It's also notable that some key people who were part of the process at the outset - Pradip Sen and Andy Malis of Verizon, now at HP and Huawei respectively, and Don Clarke of BT, now at CableLabs -- have changed jobs and, perhaps, priorities. 