NFV (Network Functions Virtualization)

Time for a Telecom Reboot

May you live in interesting times.

That apocryphal Chinese curse provides a fitting label for the 2017 telecom market, as we find ourselves mired in an unprecedented mess -- one that's bad for business for everyone in the entire supply chain of next-gen communications.

The root cause? Virtualization! (In both its NFV and SDN flavors.) For years, the industry has been "bigging up" virtualization as the next great telecom revolution, a way to build cost-effective software-based networks to support the emerging digital global economy. But so far the only CSPs that have been brave -- or foolish -- enough to deploy virtualization have found it to be an expensive and time-consuming exercise in frustration, one that has yet to deliver on either cost savings or new revenues.

To fix this situation we first need to understand what's gone wrong: For 150 years, the telecom industry has owed its success to a clearly defined and proven process for standardizing new network technologies. Then along came virtualization and for some reason we got it into our heads that it would be a great idea to just up and abandon our standards-based best practices altogether, instead developing virtualization by punting it over the fence into a baffling array of open source groups and industry consortia (each loaded with a full boat of personal, corporate, and political agendas, incidentally).

This was a particularly inauspicious start -- and here we are, after four years of intensive work on NFV, really no closer to figuring this thing out than when we started -- with no strong industry standard defining it. (ETSI's efforts don't count, being akin to the "Code" in Pirates of the Caribbean… more of a guideline, really.)

Why is this a problem? Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability service providers are in the same place they were in the 1990s: locked in to buying overpriced proprietary solutions from incumbent equipment vendors.

Further, without the kick in the pants created by a competitive, heterogeneous, interoperable environment, vendors have failed to deliver solutions that are fit for service providers' purpose. Today, it takes an average of six months for Tier 1 service providers just to get the NFVi code from the industry's leading infrastructure vendors to work -- not really what anyone would call "out of the box" software. And typically service providers have to pay the systems integration division of the NFVi vendor to do that work… talk about jobs for the boys.

So that's half a year of expensive integration work just to get the NFV plumbing to work.

But wait -- there's worse! The value of virtualized networks is supposed to lie in the magnificent variety of new services that run over them. But the fact is that NFVi vendors have been paying lip service to a vague concept of "openness" while simultaneously maintaining the proprietary software and hardware hooks that have kept them profitably in business for the last two decades. Which means that at the end of that six-month installation period, carriers are finding that the only services they can run over that infrastructure are the VNFs that are sold by -- correct! -- the same company that sold them the infrastructure.

This is:

1. Not good enough!

2. The exact opposite of what service providers were told to expect from NFV.

Making things worse, issues within the virtualization sector are having a corrosive effect on the commercial prospects for the wider telecom industry. Growing frustration on the part of CSPs with the shoddy state of the technology has prompted them to push their virtualization plans back -- or postpone them altogether. That's left mega-corporations like Nokia, Cisco, and Ericsson with a big fat hole in their bookings ledgers where those sales of virtualization technology and services were supposed to sit. And that, in turn, has sent an icy chill through the rest of the telecom ecosystem, derailing growth and sending the industry into what is starting to feel more like a death spiral than a market correction.

So who's to blame for this situation?

Let's start with the open source community. Its insanely complicated, quasi-anarchic, happy-clappy, "we don't like standards" approach to developing code works fine if you're crowd-coding an Internet browser (or a spot of freeware Donkey Kong), but the effect of the open source process on the telecom market has been toxic. What we need to do is take the innovation and the ideas from open source and then superimpose telecom's unique brand of discipline and standards over them -- something which simply has not happened yet.

Let's take a moment to wag the giant foam finger of admonishment at enterprise vendors, also. They leapt -- like a salmon! -- on virtualization as an opportunity to compete with their telco vendor counterparts to start building carrier networks, arguing that what we're really talking about here is building virtualized cloud networks. "Nobody builds cloud networks like we do," they Trumped. The problem with that line is that enterprise cloud and telco cloud have totally different requirements, and it turns out that enterprise vendors are actually a bit shit at building telco nets (see Telefónica Ditches HPE as Virtualization Lead). (This should not, perhaps, come as a huge surprise. HPE has been a bit shit at doing anything other than being a bit shit for as long as most of us can remember. The idea that it could suddenly and overnight -- hey presto! -- become not shit at building the largest and most demanding telecom networks in the world was always questionable.)

Trade publishers (ahem), analyst firms (sorry) and so-called experts in general also should be called out for hyping the possibilities without paying more attention to the realities.

But the demographic that must take most of the blame for the current very real virtualization cataclysm is, of course, the telecom community as a whole -- for allowing ourselves to be wooed by the promise of virtualization and abandoning the first principles that have successfully guided us, as an industry, since the 19th century. How do we get back on track from here? As an industry, we need to stop the crazy train and get back to basics.

That process starts with defining realistic goals. I've heard a lot of hoo-hah over the last few years about how the end point for service providers is a DevOps environment like the one found in enterprise networks. This is, to use a technical term, complete bollocks! Excepting OTTs, the vast majority of service providers and CSPs have neither the culture nor the skill set to implement DevOps -- even if they wanted to. And they don't. One of the supposed benefits of a DevOps environment is that it allows constant changes to be made to network services. That's fine in an enterprise world, if you like that kind of thing (and enterprise IT departments seem to live for it) but on a telecom network it's just about the last thing CSPs want to deal with.

What service providers actually want is what they were promised when NFV first emerged: specifically, to be able to implement the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of "best in class" services and applications, developed by specialist third parties, in the sure and certain knowledge that these VNFs are absolutely guaranteed to run over the NFV infrastructure they have in place. Creating this free-market virtualization economy is not as hard as we've made it look. But it will require us, as an industry, to pick one API (one, yes? As in, less than two, FFS!) between the NFVi layer and the VNFs that run over it, and standardize on it.

FYI, for the last six months, working alongside the not-for-profit New IP Agency (NIA), I've been reaching out behind the scenes of the telecom industry to gather the support required to standardize on just such an API specification and launch an independent certification program based on it.

I'll be sharing more information about the NIA's plan in my column here in a couple of weeks' time, but I can promise you that it will be very good news for the industry -- probably the best news we've had in this very challenging year, inshallah.

— Stephen Saunders, Founder and CEO, Light Reading

creynolds32701 4/21/2017 | 9:17:45 AM
re: Help with your initiative Hi Stephen - Please reach out to me. We have some technology that may help with the standards and getting everyone to share and use the IP developed. 

Chuck Reynolds



arifhrashid 4/21/2017 | 1:40:46 AM
Time for Telecom Reboot True, not a simple reboot but a deep-level reformat is required.
brooks7 4/20/2017 | 4:57:32 PM
Re: Diagnosis Carol,

I think they might be interested in scaling their networks.  So,  Google and Facebook have done it.  There is existence proof that NOTHING is stopping anybody from doing it.  So stop waiting and do it.  There are no barriers.  The telcos are inventing barriers to do it.

Is hiring hard?  Well, yes it is for everyone in the US.  Gosh.  What have they been doing for the last 5 years? 

My point is that the culture prevents them from starting today.  There was no barrier for them to start 5 years ago.  Stop every one of the standards groups, buy some VMWare licenses, Pick some Products and start coding.  Today.  Don't wait.  Do not do any product evaluations.  Don't bother with RFPs.  Just go.  Many reasons can be invented to not have a group of services created (let's set a goal of 200 new services per company per year).  But the real reason is culture. 

To be specific about the cultural issue, it has to do with accountability.  The model used in Telcos is that nobody can be fired for screwing up except at the bottom.  Committees make decisions.  Those committees do detailed studies.  They debate fine points.  Nobody can be blamed for a bad decision.  The accountability is too diluted.  The growth portion of the product life cycle of these services is shorter than the decision cycle in a carrier.

Now, the question to me is why do SPs bother with this when it is like Johnnie Cochran said, "If it doesn't fit, you must acquit."


briansoloducha 4/20/2017 | 4:40:47 PM
Re: Diagnosis "Facebook has no fear of pushing out new software, and if there are problems, rolling it back and fixing it in the next release, no big deal. How do you explain that to people for whom the blood/brain barrier between engineering and ops has always been a given?"

Better question - how do you try and bring that into your company when your company has a history of FIRING PEOPLE who do this? We need approvals up to VP-level to do changes planned 3-weeks in advance for a mid-week, middle-of-the-night change. If people circumvent this approval process, they do so at risk of immediate dismissal. And yet, management here wants to change the culture....

Thank you for discussing this topic.
Duh! 4/20/2017 | 4:23:38 PM
Re: Diagnosis I also concur, with a few further points.

Cargo cults (look it up) are an apt metaphor for imagining that virtualization will make CSPs like Google et al. A few of the trappings of the scale-out datacenter are necessary, but not sufficient. Taken out of their full context, they are risible. Kind of like those mock airports in Micronesia.

The cargo cult posits that massive cultural change can make it all work. I think they fail to realize the depth of the culture change that is needed, and the institutional immune responses to be overcome.  Google's engineers think like computer scientists; CSP engineers think like telecom engineers. Ordinary minds will revert to the way of thinking they grew up with. Change isn't a simple matter of courseware and threats.

DevOps and CI/CD, for example. Facebook has no fear of pushing out new software, and if there are problems, rolling it back and fixing it in the next release, no big deal. How do you explain that to people for whom the blood/brain barrier between engineering and ops has always been a given? We don't need regression testing anymore? Field trials involving unproven code get conducted with live, not-necessarily-friendly customers?

Speaking of thinking like computer scientists, consider ETSI's NFV model. Lots of functional boxes, connected by a hodge-podge of numerous and disparate interfaces. Computer science notions like abstraction, recursion, remote procedure calls, and inheritance are apparently absent. Would or could such a model be drawn for a scale-out datacenter?

On the other hand, never underestimate the determination of this industry, once it's been sold a panacea, to make it work. They'll muddle along until it does.
saints4454 4/20/2017 | 4:20:15 PM
Re: Virtualize or Automate? I agree Carol.  ONAP definitely has the potential to be beneficial for SP's.


Carol Wilson 4/20/2017 | 3:09:18 PM
Re: Virtualize or Automate? Brad,

Your comment is perfectly in line with what every major service provider says. It is the reason the Open Network Automation Platform project exists. 


Carol Wilson 4/20/2017 | 3:07:01 PM
Re: Diagnosis Seven,

There was a variety of companies at the Open Networking Summit - smaller ones including startups along with Google, Facebook, AT&T, etc. - the one thing on which they all agreed is that there is much more demand for software engineers than exist today on the planet. Are a lot of the smarter ones more interested in starting their own companies than working in the bowels of a telecom giant? I'm sure that's true.

For telecom players to try to scale their networks in the way that Facebook and Google do isn't ludicrous and it isn't about them trying to be an OTT player. It is about trying to meet bandwidth demand in a way that scales. And the traditional approaches don't do that. 

But this discussion seems to be much more about bashing telecom in the most outrageous way for other purposes. 
Steve Saunders 4/20/2017 | 2:43:20 PM
Re: Virtualize or Automate? very interesting point, Brad!
saints4454 4/20/2017 | 2:03:20 PM
Virtualize or Automate? I really liked your article and agree with many aspects.  For the past few years, I've stressed the need for investment more so in Automation of Telecom networks vs. the virtualization of network functions.

Dramatic efficiencies in Operations can be realized by starting with transforming how SP's operate their networks.  

Over the past 4 years, if 50% of the virtualization investments had been made in automation initiatives, I believe that the SP's capabilities to adopt virtualization would be far better and they would be in a better position to compete with the innovation being seen from OTT players.

Automate and Augment first.
