NFV (Network Functions Virtualization)

Time for a Telecom Reboot

May you live in interesting times.

That apocryphal Chinese curse provides a fitting label for the 2017 telecom market, as we find ourselves mired in an unprecedented mess -- one that's bad for business for everyone in the entire supply chain of next-gen communications.

The root cause? Virtualization! (In both its NFV and SDN flavors.) For years, the industry has been "bigging up" virtualization as the next great telecom revolution, a way to build cost-effective software-based networks to support the emerging digital global economy. But so far the only CSPs that have been brave -- or foolish -- enough to deploy virtualization have found it to be an expensive and time-consuming exercise in frustration, one that has yet to deliver on either cost savings or new revenues.

To fix this situation we first need to understand what's gone wrong: For 150 years, the telecom industry has owed its success to a clearly defined and proven process for standardizing new network technologies. Then along came virtualization and for some reason we got it into our heads that it would be a great idea to just up and abandon our standards-based best practices altogether, instead developing virtualization by punting it over the fence into a baffling array of open source groups and industry consortia (each loaded with a full boat of personal, corporate, and political agendas, incidentally).

This was a particularly inauspicious start -- and here we are, after four years of intensive work on NFV, no closer to figuring this thing out than when we started, with no strong industry standard defining it. (ETSI's efforts don't count, being akin to the "Code" in Pirates of the Caribbean… more of a guideline, really.)

Why is this a problem? Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability service providers are in the same place they were in the 1990s: locked in to buying overpriced proprietary solutions from incumbent equipment vendors.

Further, without the competitive kick in the pants created by a heterogeneous, interoperable vendor environment, vendors have failed to deliver solutions that are fit for service providers' purpose. Today, it takes Tier 1 service providers an average of six months just to get the NFVi code from the industry's leading infrastructure vendors to work -- not really what anyone would call "out of the box" software. And typically service providers have to pay the systems integration division of the NFVi vendor to do that work… talk about jobs for the boys.

So that's half a year of expensive integration work just to get the NFV plumbing to work.

But wait -- there's worse! The value of virtualized networks is supposed to lie in the magnificent variety of new services that run over them. But the fact is that NFVi vendors have been paying lip service to a vague concept of "openness" while simultaneously maintaining the proprietary software and hardware hooks that have kept them profitably in business for the last two decades. Which means that at the end of that six-month installation period, carriers are finding that the only services they can run over that infrastructure are the VNFs that are sold by -- correct! -- the same company that sold them the infrastructure.

This is:

1. Not good enough!

2. The exact opposite of what service providers were told to expect from NFV.

Making things worse, issues within the virtualization sector are having a corrosive effect on the commercial prospects of the wider telecom industry. Growing frustration on the part of CSPs with the shoddy state of the technology has prompted them to push their virtualization plans back -- or shelve them altogether. That's left mega-corporations like Nokia, Cisco, and Ericsson with a big fat hole in their bookings ledgers where those sales of virtualization technology and services were supposed to sit. And that, in turn, has sent an icy chill through the rest of the telecom ecosystem, derailing growth and sending the industry into what is starting to feel more like a death spiral than a market correction.

So who's to blame for this situation?

Let's start with the open source community. Its insanely complicated, quasi-anarchic, happy-clappy, "we don't like standards" approach to developing code works fine if you're crowd-coding an Internet browser (or a spot of freeware Donkey Kong), but its effect on the telecom market has been toxic. What we need to do is take the innovation and the ideas from open source and then superimpose telecom's unique brand of discipline and standards on top of them -- something that simply has not happened yet.

Let's take a moment to wag the giant foam finger of admonishment at enterprise vendors, also. They leapt -- like a salmon! -- on virtualization as an opportunity to compete with their telco vendor counterparts to start building carrier networks, arguing that what we're really talking about here is building virtualized cloud networks. "Nobody builds cloud networks like we do," they Trumped. The problem with that line is that enterprise cloud and telco cloud have totally different requirements, and it turns out that enterprise vendors are actually a bit shit at building telco nets (see Telefónica Ditches HPE as Virtualization Lead). (This should not, perhaps, come as a huge surprise. HPE has been a bit shit at doing anything other than being a bit shit for as long as most of us can remember. The idea that it could suddenly and overnight -- hey presto! -- become not shit at building the largest and most demanding telecom networks in the world was always questionable.)

Trade publishers (ahem), analyst firms (sorry) and so-called experts in general also should be called out for hyping the possibilities without paying more attention to the realities.

But the demographic that must take most of the blame for the current very real virtualization cataclysm is, of course, the telecom community as a whole -- for allowing ourselves to be wooed by the promise of virtualization and abandoning the first principles that have successfully guided us, as an industry, since the 19th century. How do we get back on track from here? As an industry, we need to stop the crazy train and get back to basics.

That process starts with defining realistic goals. I've heard a lot of hoo-hah over the last few years about how the end point for service providers is a DevOps environment like the one found in enterprise networks. This is, to use a technical term, complete bollocks! Excepting OTTs, the vast majority of service providers and CSPs have neither the culture nor the skill set to implement DevOps -- even if they wanted to. And they don't. One of the supposed benefits of a DevOps environment is that it allows constant changes to be made to network services. That's fine in an enterprise world, if you like that kind of thing (and enterprise IT departments seem to live for it) but on a telecom network it's just about the last thing CSPs want to deal with.

What service providers actually want is what they were promised when NFV first emerged: specifically, to be able to implement the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of "best in class" services and applications, developed by specialist third parties, in the sure and certain knowledge that these VNFs are absolutely guaranteed to run over the NFV infrastructure they have in place. Creating this virtualization free market technology economy is not as hard as we've made it look. But it will require us, as an industry, to pick one API (one, yes? As in, less than two, FFS!) between the NFVi layer and the VNFs that run over it, and standardize on it.
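To make the "one API" argument concrete, here is a purely hypothetical sketch -- these class and method names are invented for illustration and are not ETSI's actual reference points or any real specification -- of what a single standardized lifecycle contract between the NFVi layer and third-party VNFs might look like:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class VnfDescriptor:
    """Metadata a marketplace VNF would publish (hypothetical schema)."""
    name: str
    vendor: str
    required_vcpus: int
    required_memory_mb: int


class NfviLifecycleApi(ABC):
    """The single, standardized contract every NFVi would implement
    and every certified VNF would target (illustrative only)."""

    @abstractmethod
    def instantiate(self, descriptor: VnfDescriptor) -> str:
        """Deploy a VNF and return its instance ID."""

    @abstractmethod
    def terminate(self, vnf_id: str) -> None:
        """Tear down a running VNF instance."""


class InMemoryNfvi(NfviLifecycleApi):
    """Toy NFVi showing that any conforming VNF runs unchanged."""

    def __init__(self):
        self._instances = {}
        self._next_id = 0

    def instantiate(self, descriptor: VnfDescriptor) -> str:
        self._next_id += 1
        vnf_id = f"vnf-{self._next_id}"
        self._instances[vnf_id] = descriptor
        return vnf_id

    def terminate(self, vnf_id: str) -> None:
        del self._instances[vnf_id]


# A third-party "marketplace" VNF needs to know nothing about the
# vendor behind the NfviLifecycleApi it lands on:
nfvi = InMemoryNfvi()
firewall_id = nfvi.instantiate(
    VnfDescriptor("vFirewall", "AnySpecialistVendor", 2, 4096))
```

The point of the sketch is that certification then collapses to conformance testing against one interface: an independent lab could certify an NFVi by exercising `NfviLifecycleApi`, and certify a VNF by running it against a reference implementation -- which is exactly the guarantee the marketplace model described above depends on.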

FYI, for the last six months, working alongside the not-for-profit New IP Agency (NIA), I've been working behind the scenes of the telecom industry to gather the support required to standardize on just such an API specification, and to launch an independent certification program based on it.

I'll be sharing more information about the NIA's plan in my column here in a couple of weeks' time but I can promise you that it will be very good news for the industry -- probably the best news we've had in this very challenging year, inshallah.

— Stephen Saunders, Founder and CEO, Light Reading

brooks7 4/28/2017 | 12:17:34 PM
Re: Its Not Virtualization -- but the Foundation @Duh!,

Good one. 

Duh! 4/27/2017 | 6:57:21 PM
Re: Its Not Virtualization -- but the Foundation "Lets go back to 1993, and reimagine how IP routing could and should work."

We did.  It was called ATM. The Betamax of networking technologies.
Kevin Mitchell 4/26/2017 | 9:21:54 PM
Focus on Business Outcomes in Choosing Your Virtualization Path If the goal is more revenue and lower cost by implementing "the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of 'best in class' services and applications", then buying NFVI this and integrating VNF that isn't necessarily the way to do it. 

We've been saying for years that operators need to look at business outcomes and evaluate the various paths to virtualization. Cloud building or cloud sourcing are the choices; look at the application in question and which path best maximizes the chance at realizing those outcomes. In many cases it will make sense to build a new virtualized network to support multiple applications.

Given the state and future direction of voice and UC, we believe that, for the vast majority of operators, cloud sourcing is the best way forward (yes, the global top 100 will build an NFV IMS network and complete it in 2023 or thereabouts). Why cloud source virtualized VoIP? It's lower cost (and no CAPEX) with a success-based business model, operationally simple, and tremendously more agile than software on premises. Oh, and CSPs get to spend dollars and people on other initiatives while still owning and delivering modern communications services for their customers.

Yes, this CSP VoIP cloud sourcing is what we do (and first to do it). But the traditional VoIP players are getting into this game too (BroadSoft BroadCloud is the fastest growing part of its business, GENBAND is a KANDY junkie and Metaswitch has joined the club).

A Heavy Reading study confirmed this too: 83% of CSP respondents said that it was somewhat or very likely that they'd use an XaaS option for replacing or augmenting network infrastructure. And, alongside building VoIP, cloud voice platforms were a top voice network evolution path. Read more here -> Heavy Reading: Cloud Defines a New Voice Strategy.


gregw33 4/26/2017 | 12:36:26 PM
Gaming Open Source and SDO's Standards Development Organizations (De Jure and De Facto) are extremely easy to "game" ...

I hope readers don't think Open Source Software initiatives cannot be "gamed" as well...

They are...


scanlanavia 4/26/2017 | 9:42:41 AM
Frustrated Analyst No doubt the author fell out of the wrong side of the bed when this was written.

However I tend to agree with most of what was said. Four years down the road and nothing much achieved except a plethora of white papers and marketing hype.

Can you tell me what on earth is the issue with poor old HPE .. They are not alone in stonewalling the exercises.

DevOps, DevOps. Devops..... this is the mantra  .. being showered at CSPs.

I agree entirely CSPs will NEVER successfully put DevOps in place... it's like the promise of the Agile Manifesto, which likewise never worked for large organisations.. It's fine if you're making a small consumer app... But you cannot scale DevOps likewise and hope that the daily update is going to smoothly avoid the occasional HLR meltdown once in a while.. (a catastrophe, in other words)

CSPs cannot afford happy clappy failures..

I actually think ETSI/NFV got off to a good start with their NFV showcases, use cases and proofs of concept...... but seemed to run out of momentum to finish the job off..

Open source is like a hamper of shiny chocolate wrappers, each offering its own distinct piece of the jigsaw. However the ecosystem is now so complex that even the PhDs can't quite figure out where to start anymore..

Well done for waking up the industry a bit... but I think it will take more than this to steer the industry from going over the cliff...
rommelb 4/25/2017 | 2:42:24 PM
Interesting article, but.... Disclaimer: This is my view.  I am constantly learning, and would love to understand other thoughts.

Interesting article, and I do have to agree there is a culture mismatch. But there are things that can be done today. Just like AT&T in its early days, Google had a journey as well. AT&T built analog switches to move away from cord boards, because there was nothing there to help them grow and optimize. Just as the OTT players had to use software to build more efficiently optimized networks and services, because there was nothing there to shore up their growth.

Building blocks for changing how to optimize business and operational processes are there, just as interface standards are there for interop. Can it be better? Yes, but there are areas to start. Boiling the ocean of orchestration across a full customer-facing service stack is difficult if the small successes of automating anything done twice haven't been built as a foundation on the infrastructure. Automated outage recovery is difficult without understanding how the network is modeled, the types of outages, and the blast radius of those types. Abstracting and automating resource-facing services on the network, both PNF and VNF, has to be done in order to orchestrate the NF relationships. This all has to be done before one can even activate services on that infrastructure for customer consumption.

My bottom line recommendation: understand the processes, risks and impacts; automate everything; abstract the information so that automation is reusable; orchestrate lifecycle where it makes sense; understand your data and optimize with that data using the automated tools built. A meal like that is best taken in bite-sized chunks.

Start with training courses on software like Ansible, maybe get programs in place like automation of the month or quarter for the network engineers that create something with some gains.  The simple win of creating a culture of automation and optimization can lead to operations efficiencies that can pay back.
reimagine_networking 4/25/2017 | 9:49:42 AM
Its Not Virtualization -- but the Foundation You are correct that we need standards for VNFs to interoperate, but the problems are in the routing plane. We are relying on tunnels, Ethernet and broadcast domains for security (L2) instead of L3. We need to rethink how IP networking works. The correct foundation for IP services is not L2, but rather L3. VLANs are not routable, are not accessible at the application layer, and do not scale. VxLANs are advisory, and utilize wasteful encapsulation. I agree there should be a call for standards. Let's go back to 1993, and reimagine how IP routing could and should work. Let's end the layer violations caused by always resorting to Ethernet solutions.
mhhf1ve 4/24/2017 | 7:14:23 PM
Re: Diagnosis > "The model used in Telcos is that nobody can be fired for screwing up except at the bottom.  Committees make decisions.  Those committees do detailed studies.  They debate fine points.  Nobody can be blamed for a bad decision.  The accountability is too diluted."

That's the stick.. Where's the carrot? If smaller telcos had the incentives to actually try to do new things (i.e. there was an upside in sight), there might be organizations that could avoid committee decisions and just go. But when municipal telcos can't even get off the ground, and smaller telcos are struggling to cobble together contiguous networks... perhaps there's a larger problem at hand?
Voluntee24126 4/21/2017 | 5:15:35 PM
Re: Diagnosis The issue the Telcos have is that if Facebook or Google have an outage, it might hit the internet news if it is big enough.  A major outage by a Telco hits all of the news outlets, and if it is a multi-region one, their CEO and/or CTO end up testifying before Congress.  The stakes are much higher for the Telcos than for Facebook and Google.  Fault recognition and switch-to-protect time in the core telecom network is 51ms.  As you have noted, there are few specifications in the Facebook/Google world.  The ones that I have seen reported are 5 or more minutes for fault recognition.  I have run across no time-limit specs for protection switching for the social media or even cloud vendors. 

The key here is competing with the OTT application vendors while maintaining the basic reliability of the core network.  Not an easy task.

creynolds32701 4/21/2017 | 9:22:05 AM
Re: Diagnosis Hi Carol - Please reach out to me. I want your advice on some new products we are introducing to help in this area. 



