
Time for a Telecom Reboot

Steve Saunders

May you live in interesting times.

That apocryphal Chinese curse provides a fitting label for the 2017 telecom market, as we find ourselves mired in an unprecedented mess -- one that's bad for business for everyone in the entire supply chain of next-gen communications.

The root cause? Virtualization! (In both its NFV and SDN flavors.) For years, the industry has been "bigging up" virtualization as the next great telecom revolution, a way to build cost-effective software-based networks to support the emerging digital global economy. But so far the only CSPs that have been brave -- or foolish -- enough to deploy virtualization have found it to be an expensive and time-consuming exercise in frustration, one that has yet to deliver on either cost savings or new revenues.

To fix this situation we first need to understand what's gone wrong: For 150 years, the telecom industry has owed its success to a clearly defined and proven process for standardizing new network technologies. Then along came virtualization and for some reason we got it into our heads that it would be a great idea to just up and abandon our standards-based best practices altogether, instead developing virtualization by punting it over the fence into a baffling array of open source groups and industry consortia (each loaded with a full boat of personal, corporate, and political agendas, incidentally).

This was a particularly inauspicious start -- and here we are, after four years of intensive work on NFV, really no closer to figuring this thing out than when we started, with no strong industry standard defining it. (ETSI's efforts don't count; they're akin to the "Code" in Pirates of the Caribbean… more of a guideline, really.)

Why is this a problem? Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability service providers are in the same place they were in the 1990s: locked in to buying overpriced proprietary solutions from incumbent equipment vendors.

Further, without the kick in the pants created by a competitive, heterogeneous, interoperable environment, vendors have failed to deliver solutions that are fit for service providers' purpose. Today, it takes an average of six months for Tier 1 service providers just to get the NFVi code from the industry's leading infrastructure vendors to work -- not really what anyone would call "out of the box" software. And typically service providers have to pay the systems integration division of the NFVi vendor to do that work… talk about jobs for the boys.

So that's half a year of expensive integration work just to get the NFV plumbing to work.

But wait -- there's worse! The value of virtualized networks is supposed to lie in the magnificent variety of new services that run over them. But the fact is that NFVi vendors have been paying lip service to a vague concept of "openness" while simultaneously maintaining the proprietary software and hardware hooks that have kept them profitably in business for the last two decades. Which means that at the end of that six-month installation period, carriers are finding that the only services they can run over that infrastructure are the VNFs that are sold by -- correct! -- the same company that sold them the infrastructure.

This is:

1. Not good enough!

2. The exact opposite of what service providers were told to expect from NFV.

Making things worse, issues within the virtualization sector are having a corrosive effect on the commercial prospects for the wider telecom industry. Growing frustration on the part of CSPs with the shoddy state of the technology has prompted them to push their virtualization plans back -- or postpone them altogether. That's left mega-corporations like Nokia, Cisco, and Ericsson with a big fat hole in their bookings ledgers where those sales of virtualization technology and services were supposed to sit. And that, in turn, has sent an icy chill through the rest of the telecom ecosystem, derailing growth and sending the industry into what is starting to feel more like a death spiral than a market correction.

So who's to blame for this situation?

Let's start with the open source community. Its insanely complicated, quasi-anarchic, happy-clappy, "we don't like standards" approach to developing code works fine if you're crowd-coding an Internet browser (or a spot of freeware Donkey Kong), but its effect on the telecom market has been toxic. What we need to do is take the innovation and the ideas from open source and then superimpose telecom's unique brand of discipline and standards over them -- something which simply has not happened yet.

Let's take a moment to wag the giant foam finger of admonishment at enterprise vendors, also. They leapt -- like a salmon! -- on virtualization as an opportunity to compete with their telco vendor counterparts to start building carrier networks, arguing that what we're really talking about here is building virtualized cloud networks. "Nobody builds cloud networks like we do," they Trumped. The problem with that line is that enterprise cloud and telco cloud have totally different requirements, and it turns out that enterprise vendors are actually a bit shit at building telco nets (see Telefónica Ditches HPE as Virtualization Lead). (This should not, perhaps, come as a huge surprise. HPE has been a bit shit at doing anything other than being a bit shit for as long as most of us can remember. The idea that it could suddenly and overnight -- hey presto! -- become not shit at building the largest and most demanding telecom networks in the world was always questionable.)

Trade publishers (ahem), analyst firms (sorry) and so-called experts in general also should be called out for hyping the possibilities without paying more attention to the realities.

But the demographic that must take most of the blame for the current very real virtualization cataclysm is, of course, the telecom community as a whole -- for allowing ourselves to be wooed by the promise of virtualization and abandoning the first principles that have successfully guided us, as an industry, since the 19th century. How do we get back on track from here? As an industry, we need to stop the crazy train and get back to basics.

That process starts with defining realistic goals. I've heard a lot of hoo-hah over the last few years about how the end point for service providers is a DevOps environment like the one found in enterprise networks. This is, to use a technical term, complete bollocks! Excepting OTTs, the vast majority of service providers and CSPs have neither the culture nor the skill set to implement DevOps -- even if they wanted to. And they don't. One of the supposed benefits of a DevOps environment is that it allows constant changes to be made to network services. That's fine in an enterprise world, if you like that kind of thing (and enterprise IT departments seem to live for it) but on a telecom network it's just about the last thing CSPs want to deal with.

What service providers actually want is what they were promised when NFV first emerged: specifically, to be able to implement the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of "best in class" services and applications, developed by specialist third parties, in the sure and certain knowledge that these VNFs are absolutely guaranteed to run over the NFV infrastructure they have in place. Creating this free-market virtualization economy is not as hard as we've made it look. But it will require us, as an industry, to pick one API (one, yes? As in, fewer than two, FFS!) between the NFVi layer and the VNFs that run over it, and standardize on it.
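To make that concrete, here's a minimal sketch (in Python, purely illustrative: the class names, method names and lifecycle calls are my own assumptions, not an ETSI, NIA or any other published specification) of what a single NFVi-to-VNF contract, and a certification pass hanging off it, might look like:

```python
# Illustrative sketch only: the names and lifecycle calls below are my own
# assumptions, not an ETSI, NIA or any other published specification.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class VnfDescriptor:
    """Metadata a marketplace VNF would publish against the common API."""
    name: str
    vendor: str
    api_version: str                                # the single contract version
    resources: dict = field(default_factory=dict)   # e.g. vCPU, memory, storage


class NfviVnfContract(ABC):
    """The one API every NFVi platform would expose to every VNF."""

    @abstractmethod
    def instantiate(self, descriptor: VnfDescriptor) -> str:
        """Allocate resources and return a VNF instance ID."""

    @abstractmethod
    def scale(self, instance_id: str, replicas: int) -> None:
        """Grow or shrink the instance to the requested number of replicas."""

    @abstractmethod
    def heal(self, instance_id: str) -> None:
        """Restart or relocate a failed instance."""

    @abstractmethod
    def terminate(self, instance_id: str) -> None:
        """Release every resource held by the instance."""


def certify(platform: NfviVnfContract, descriptor: VnfDescriptor) -> bool:
    """Toy certification pass: exercise the full lifecycle once, in order."""
    instance = platform.instantiate(descriptor)
    platform.scale(instance, replicas=2)
    platform.heal(instance)
    platform.terminate(instance)
    return True


class FakeNfvi(NfviVnfContract):
    """Trivial in-memory stand-in so the certification pass can run end to end."""

    def __init__(self):
        self._instances = {}

    def instantiate(self, descriptor):
        instance_id = f"{descriptor.name}-1"
        self._instances[instance_id] = 1
        return instance_id

    def scale(self, instance_id, replicas):
        self._instances[instance_id] = replicas

    def heal(self, instance_id):
        pass  # nothing to repair in the in-memory fake

    def terminate(self, instance_id):
        del self._instances[instance_id]


if __name__ == "__main__":
    vnf = VnfDescriptor(name="vFirewall", vendor="AnyVendor", api_version="1.0")
    print("certified:", certify(FakeNfvi(), vnf))
```

The details don't matter; the point is that once every NFVi platform implements the same contract, an independent lab can run the same lifecycle test against all of them, and any marketplace VNF written to that contract is guaranteed a home.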

FYI: for the last six months, working alongside the not-for-profit New IP Agency (NIA), I've been reaching out behind the scenes of the telecom industry to gather the support required to standardize on just such an API specification and to launch an independent certification program based on it.

I'll be sharing more information about the NIA's plan in my column here in a couple of weeks' time but I can promise you that it will be very good news for the industry -- probably the best news we've had in this very challenging year, inshallah.

— Stephen Saunders, Founder and CEO, Light Reading

User Rank: Light Beer
4/20/2017 | 12:59:53 PM
History repeating?
Like the boy seeing the naked emperor, I can't shake the impression that SDN/NFV is, in the most general sense, another management interface standard. How many times have we tried and failed to truly standardize the management interface? We get most of the way there, but there are always some indispensable needs that are not standardized and that prevent true interoperability. We despair at the last high hurdles before the finish line. We start over from scratch, with unrealistic confidence that there will be no last high hurdles this time around.
Steve Saunders
User Rank: Blogger
4/20/2017 | 12:04:31 PM
Re: Diagnosis
"I actually disagree.  All the major CSPs are making money.  As far as I can tell, they are mostly concerned that somebody else (read Google) is making more money than they are and they are mad about it.  But as far as I can tell in the US that Verizon, AT&T, Comcast, Charter, Centurylink, Cox and Frontier are all quite profitable.  The little carriers will have problems with USF exits stage left and many CLECs are problematic (like Integra).  But there are plenty of small ISPs, WISPs and small carriers in general that are doing quite well."




I shall have to ask the LR editors about the profitability or otherwise of the big CSPs!
Martin Morgan
User Rank: Light Beer
4/20/2017 | 10:27:43 AM
Virtualisation and vendor lock-in at BSS level
Virtualisation is supposed to deliver a more open ecosystem that allows operators to use best-of-breed solutions and steer clear of vendor lock-in. At the OSS and BSS level many vendors are adopting virtualisation to enable this – while some are trying to maintain lock-in and supply everything to operators. On the BSS side there's more than one way to get away from vendor lock-in. What we've seen is that the use of adjunct systems, e.g. a real-time OCS implemented in front of a legacy billing system, works well. Most specialist vendors want open and easily interoperable systems – any industry advances against vendor lock-in and closed shops are to be welcomed.
Steve Saunders
User Rank: Blogger
4/20/2017 | 9:38:49 AM
Re: Diagnosis
"As for standards bodies, there is much greater engagement with open source groups today - every major operator I know is sending developers to groups like OpenDaylight, ONAP, TIP and more. "


The service providers I talk to are absolutely furious about the number and variety of open source groups that they have to engage with. I totally agree. It's jobs for the geeks.

A self-perpetuating geek oligarchy.

Reminds me of the high-speed LAN wars of the nineties. Bunch of crap. Pick one, guys, and then get a real job.

User Rank: Light Beer
4/20/2017 | 3:18:41 AM
What is a service?
Once you compare the likes of the OTTs and the SPs, and their ability to use or derive value from DevOps, a few mistakes are often seen:


1) OTTs target the whole world, not just possible customers that are tied to a specific access network. This means scaling matters, and consumer uptake comes from a much wider pool of possible subscribers. This is a matter of risk management when a new service is introduced.

2) OTTs generally do not charge the end customer for the service but rely on ad funding. This means the charging subsystems are very different and the scale of the complexity is very different -- a significant Opex and Capex difference.

3) As OTTs generally do not charge the end customer, expectations of customer support etc. are very different. This is significant in terms of Opex.

4) An OTT can in many cases build its full stack itself, as this is relatively self-contained. It is at least an order of magnitude less complex to build, for example, a video server compared to an HSS, an MME or an RNC. The amount of regression testing required on a telco node compared to an OTT node is very significant; even with automated CI/CD this process takes a long time to execute, and once you move into a multi-vendor scenario things get even more complex.

Does it even make sense to consider daily releases of a RAN or an EPC, or is this without any value? Should the focus be on the specific services instead?

5) Many people seem to have forgotten that NTT DoCoMo has to a large extent managed its own internal SW development on top of a set of HW provided by its family of vendors. If we want to evaluate the cost of internal SW development in an SP environment, I believe NTT is the most appropriate place to start benchmarking.

6) WAP was the 1990s leap of faith for service revenue. It failed in all markets where the operator did not choose to embrace revenue-share and co-development/risk-sharing models; i-Mode was the only one (that I'm aware of) that both offered a set of useful and attractive services and generated revenue.

7) Some of the industry groups are now looking at how we can target a zero-touch network management model that would significantly offset the Opex requirements. The work is just starting, but it is clear that the target is not to improve by just 10 or 15% but to go to a whole new model with zero-touch. This is similar to how SON was developed to eliminate certain very time-consuming tasks in RAN planning and basic optimization. The challenge is to see to what extent this needs to be, and can be, deployed in hybrid scenarios vs. only in greenfield deployments with all-new interfaces and capabilities.

So with this in mind, does it make sense for an SP to build services that it only offers to subscribers of its own access networks, or should it go head-to-head with OTT players and design and build global services that might have certain extra value-add on its own access networks? Or should SPs offer a strong revenue-share model, actually open up and provide further value to their partners in the ecosystem, and via this de-risk and co-create services?


User Rank: Light Sabre
4/20/2017 | 2:10:31 AM
Re: Diagnosis

My point is that the entire IT world doesn't expect its OS vendors and other players to conform to a bunch of interoperable standards at the level the carriers are trying to reach with NFV. What they have done is hire people and make the software that they have work for them. They patch over issues with software and other off-the-shelf packages.

My view for a long time has been that the right way for the service providers to move forward is to dump almost all the folks that work on this kind of stuff and simplify their networks. Eliminate the barriers inside and try to reduce the technologies involved by about 80%. If that means you don't do stuff, well then you don't do it. That kind of streamlining will add much more to the bottom line than anything else that they can do. They can't turn the network into something other than a commodity. That ship has sailed, and they (and the vendor community) have wasted billions of dollars trying to change that.


User Rank: Light Beer
4/20/2017 | 12:11:24 AM
The Cloud IS the Commodity
I have always thought the industry published a decent NFV white paper in October 2012 and then failed to follow its own plan. The white paper talked about leveraging the capabilities of the hyper-scale IT cloud platform operators, but then the industry decided to reinvent the cloud platform.

The industry lost track of the distinct separation that needs to be maintained between Tenant VNF / Service Orchestration Layer and Platform Resource Management / Orchestration Layer. As a result, Network Function vendors focused on solutions that encompassed both their version of a cloud platform and their version of Software. This was exacerbated by the absence of standards. The resulting VNF software was not truly "cloud aware" and not portable across even more mature "commodity cloud" platforms. This becomes a huge problem in a digital era when virtually all content delivery is a multi-cloud, multi-service provider endeavor.
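A rough sketch of the separation I mean (the layer and method names below are illustrative only, not any real platform's API): the tenant/service layer should only ever touch the platform layer through a narrow resource contract.

```python
# Rough sketch only: the layer and method names are illustrative, not any
# vendor's real API; the point is the boundary between the two layers.
from abc import ABC, abstractmethod


class PlatformResourceOrchestrator(ABC):
    """Platform layer: owns compute/network capacity, knows nothing about services."""

    @abstractmethod
    def allocate(self, vcpus: int, memory_gb: int) -> str:
        """Reserve capacity and return an opaque resource handle."""

    @abstractmethod
    def release(self, handle: str) -> None:
        """Return capacity to the shared pool."""


class ServiceOrchestrator:
    """Tenant layer: composes VNFs into services using only the platform contract."""

    def __init__(self, platform: PlatformResourceOrchestrator):
        self.platform = platform
        self.deployed = {}  # service name -> resource handle

    def deploy(self, service_name: str, vcpus: int, memory_gb: int) -> None:
        # The service layer never reaches below the platform contract, so the
        # same service definition stays portable across any conforming cloud.
        self.deployed[service_name] = self.platform.allocate(vcpus, memory_gb)

    def retire(self, service_name: str) -> None:
        self.platform.release(self.deployed.pop(service_name))
```

Keep that boundary intact and software written against the tenant layer stays portable across "commodity cloud" platforms; blur it, as the vendors did, and you get exactly the non-portable VNF software described above.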

Not all network functions are the same. They do not impose the same low-latency and networking requirements. The industry could have chosen to start with those network functions most easily re-architected to be cloud aware, so as to run optimally as SaaS on a multi-tenant "commodity cloud". The more difficult, low-latency, control plane VNFs could have been addressed in a second or third wave. It's not too late to make this course correction.

As for Cloud Platforms, the closed loop, AI enhanced, data analytics driven, software defined, resource graph/state management capabilities of a globally distributed cloud platform like Microsoft Azure significantly exceed the vision of ETSI MANO or Open MANO. The TM Forum has attempted to fill gaps with the ZOOM initiative and OSS Future Mode of Operations work but even that has difficulty keeping up with the accelerating advancement of hyper-scale cloud platforms and the latest software architecture and DevOps trends. 

The telecom NFV ecosystem's focus on commodity x86 hardware, and on combining it with open source software, got everyone off track. It was fully engineered and commercially stable cloud platforms that were becoming the actual commodity. The hyper-scale cloud developers and operators were far more advanced with VIM and NFVI than the telecom industry realized. They had already abstracted hardware lifecycle management from software lifecycle management to create apparently unlimited scalability. This may have been difficult to recognize in 2012, but it is becoming very clear in 2017.
User Rank: Light Beer
4/19/2017 | 9:10:57 PM
There's another way to look at NFV...
If you look at NFV as a decade-old strategic sales development effort by the Intel x86 marketing team, executed extraordinarily well through many points of action in the industry, the current state (which is certainly not business as usual in the telco industry) is not a disaster; it's just an intermediate state through which one would expect a change of this magnitude to progress.

The industry is simply sorting out what's off-the-shelf commodity (x86, NICs, and virtualization), what's purpose-built for telco (the VFs), and what is curated commodity (like what Red Hat does for Linux). The hardware is clear (at least for the moment), and it is in the software space that the telco goal of using COTS/commodity for CAPEX reasons meets the various vendors' goal of getting additional gross margin from purpose-building or at least curating.

May the marketplace reach a suitable outcome, in the true spirit of capitalism!
User Rank: Light Beer
4/19/2017 | 7:16:17 PM
Re: Diagnosis
seven: "It turns out there are hundreds of startups in SF.  So...how can they have lots of SW folks but the SPs can't find them?  Facebook, Apple, Amazon, Google, etc...ALL have software teams."

But the problem is that an internal development organization is a difficult beast to manage. You have to be built for that, like Google or Facebook, i.e., to eat your own cooking. And you have no leverage -- the historical advantage of the SP over the vendor (how many times have I heard "but vendor X says..." or "I guess I'll have to go the RFP route..." when I object to some dumb SP request?).

That's why an OSS system built in-house takes an act of God to modify to release a new feature.

And then there's the insane belief at SPs that all software should be free and only hardware costs money.

SPs don't know diddly squat about how to build software.  They wouldn't know what to do with the software engineers if they poked them in the collective SP eye.

User Rank: Light Sabre
4/19/2017 | 7:13:52 PM
Re: Diagnosis

I actually disagree. All the major CSPs are making money. As far as I can tell, they are mostly concerned that somebody else (read Google) is making more money than they are, and they are mad about it. But as far as I can tell, in the US, Verizon, AT&T, Comcast, Charter, CenturyLink, Cox and Frontier are all quite profitable. The little carriers will have problems as USF exits stage left, and many CLECs are problematic (like Integra). But there are plenty of small ISPs, WISPs and small carriers in general that are doing quite well.

So here is my list of things that need to change: (Null Set).

Vendors, on the other hand, are in a depression and have been since the collapse of the CLEC bubble in 2000. Just think about how much infrastructure across wireless and fiber is being installed. Now tell me why Bell Labs belongs to Nokia? The equipment vendors have failed to adapt to the changes post-bubble. They are led around by their "equipment" by the service providers and develop TONS of things that go nowhere. If you go into an SP, you will find dozens of people who are happy to talk about all the things that they are going to do.

Now, should service providers form a division that does some applications and such? Well, maybe. But they cannot evaluate it the same way they do network investments. Other than that, they will start to change when they are losing money. Until then, it will be like installing FTTH in Europe: I was on panels in 2003 where people were telling me that this was the year. It's 2017, and maybe next year will be the year.

