
Time for a Telecom Reboot

Steve Saunders
4/19/2017

May you live in interesting times.

That apocryphal Chinese curse provides a fitting label for the 2017 telecom market, as we find ourselves mired in an unprecedented mess -- one that's bad for business for everyone in the entire supply chain of next-gen communications.

The root cause? Virtualization! (In both its NFV and SDN flavors.) For years, the industry has been "bigging up" virtualization as the next great telecom revolution, a way to build cost-effective software-based networks to support the emerging digital global economy. But so far the only CSPs that have been brave -- or foolish -- enough to deploy virtualization have found it to be an expensive and time-consuming exercise in frustration, one that has yet to deliver on either cost savings or new revenues.

To fix this situation we first need to understand what's gone wrong: For 150 years, the telecom industry has owed its success to a clearly defined and proven process for standardizing new network technologies. Then along came virtualization, and for some reason we got it into our heads that it would be a great idea to just up and abandon our standards-based best practices altogether, instead punting development over the fence into a baffling array of open source groups and industry consortia (each loaded with a full boat of personal, corporate, and political agendas, incidentally).

This was a particularly inauspicious start -- and here we are, after four years of intensive work on NFV, really no closer to figuring this thing out than when we started, with no strong industry standard defining it. (ETSI's efforts don't count; they're akin to the "Code" in Pirates of the Caribbean… more of a guideline, really.)

Why is this a problem? Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability service providers are in the same place they were in the 1990s: locked in to buying overpriced proprietary solutions from incumbent equipment vendors.

Further, without the competitive kick in the pants created by a heterogeneous, interoperable environment, vendors have failed to deliver solutions that are fit for service providers' purpose. Today, it takes an average of six months for Tier 1 service providers just to get the NFVi code from the industry's leading infrastructure vendors to work -- not really what anyone would call "out of the box" software. And typically service providers have to pay the systems integration division of the NFVi vendor to do that work… talk about jobs for the boys.

So that's half a year of expensive integration work just to get the NFV plumbing to work.

But wait -- there's worse! The value of virtualized networks is supposed to lie in the magnificent variety of new services that run over them. But the fact is that NFVi vendors have been paying lip service to a vague concept of "openness" while simultaneously maintaining the proprietary software and hardware hooks that have kept them profitably in business for the last two decades. Which means that at the end of that six-month installation period, carriers are finding that the only services they can run over that infrastructure are the VNFs that are sold by -- correct! -- the same company that sold them the infrastructure.

This is:

1. Not good enough!

2. The exact opposite of what service providers were told to expect from NFV.

Making things worse, issues within the virtualization sector are having a corrosive effect on the commercial prospects for the wider telecom industry. Growing frustration on the part of CSPs with the shoddy state of the technology has prompted them to push their virtualization plans back -- or postpone them altogether. That's left mega-corporations like Nokia, Cisco, and Ericsson with a big fat hole in their bookings ledgers where those sales of virtualization technology and services were supposed to sit. And that, in turn, has sent an icy chill through the rest of the telecom ecosystem, derailing growth and sending the industry into what is starting to feel more like a death spiral than a market correction.

So who's to blame for this situation?

Let's start with the open source community. Its insanely complicated, quasi-anarchic, happy-clappy, "we don't like standards" approach to developing code works fine if you're crowd-coding an Internet browser (or a spot of freeware Donkey Kong), but its effect on the telecom market has been toxic. What we need to do is take the innovation and the ideas from open source and then superimpose telecom's unique brand of discipline and standards over them -- something which simply has not happened yet.

Let's take a moment to wag the giant foam finger of admonishment at enterprise vendors, also. They leapt -- like a salmon! -- on virtualization as an opportunity to compete with their telco vendor counterparts in building carrier networks, arguing that what we're really talking about here is building virtualized cloud networks. "Nobody builds cloud networks like we do," they Trumped. The problem with that line is that enterprise cloud and telco cloud have totally different requirements, and it turns out that enterprise vendors are actually a bit shit at building telco nets (see Telefónica Ditches HPE as Virtualization Lead). (This should not, perhaps, come as a huge surprise. HPE has been a bit shit at doing anything other than being a bit shit for as long as most of us can remember. The idea that it could suddenly and overnight -- hey presto! -- become not shit at building the largest and most demanding telecom networks in the world was always questionable.)

Trade publishers (ahem), analyst firms (sorry) and so-called experts in general also should be called out for hyping the possibilities without paying more attention to the realities.

But the demographic that must take most of the blame for the current very real virtualization cataclysm is, of course, the telecom community as a whole -- for allowing ourselves to be wooed by the promise of virtualization and abandoning the first principles that have successfully guided us, as an industry, since the 19th century. How do we get back on track from here? As an industry, we need to stop the crazy train and get back to basics.

That process starts with defining realistic goals. I've heard a lot of hoo-hah over the last few years about how the end point for service providers is a DevOps environment like the one found in enterprise networks. This is, to use a technical term, complete bollocks! Excepting OTTs, the vast majority of service providers and CSPs have neither the culture nor the skill set to implement DevOps -- even if they wanted to. And they don't. One of the supposed benefits of a DevOps environment is that it allows constant changes to be made to network services. That's fine in an enterprise world, if you like that kind of thing (and enterprise IT departments seem to live for it) but on a telecom network it's just about the last thing CSPs want to deal with.

What service providers actually want is what they were promised when NFV first emerged: specifically, to be able to implement the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of "best in class" services and applications, developed by specialist third parties, in the sure and certain knowledge that these VNFs are absolutely guaranteed to run over the NFV infrastructure they have in place. Creating this free market in virtualization technology is not as hard as we've made it look. But it will require us, as an industry, to pick one API (one, yes? As in, fewer than two, FFS!) between the NFVi layer and the VNFs that run over it, and standardize on it.
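To make the idea concrete, here's a minimal sketch of what that single NFVi-to-VNF contract might look like. Every name and method below is hypothetical -- it isn't drawn from ETSI or any other published specification:

```python
# Hypothetical sketch of a single standardized NFVi-to-VNF lifecycle API.
# Names and methods are illustrative only; they do not correspond to any
# published specification.
from abc import ABC, abstractmethod


class VnfLifecycleApi(ABC):
    """The one contract every NFVi platform would expose to every VNF.

    If all infrastructure vendors implemented exactly this interface,
    any certified VNF could run on any certified NFVi -- the
    interoperability guarantee this column is arguing for.
    """

    @abstractmethod
    def instantiate(self, vnf_package: str, flavour: str) -> str:
        """Deploy a VNF from a package; return an instance ID."""

    @abstractmethod
    def scale(self, instance_id: str, replicas: int) -> None:
        """Scale a running VNF instance up or down."""

    @abstractmethod
    def heal(self, instance_id: str) -> None:
        """Restart or repair a degraded instance."""

    @abstractmethod
    def terminate(self, instance_id: str) -> None:
        """Tear the instance down and release its resources."""
```

The method names don't matter; what matters is that there is exactly one such contract, so that certification labs can test against it and a marketplace can guarantee that any certified VNF runs over any certified infrastructure.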

FYI, for the last six months, working alongside the not-for-profit New IP Agency (NIA), I've been reaching out behind the scenes of the telecom industry to gather the support required to standardize on just such an API specification, and launch an independent certification program based on it.

I'll be sharing more information about the NIA's plan in my column here in a couple of weeks' time but I can promise you that it will be very good news for the industry -- probably the best news we've had in this very challenging year, inshallah.

— Stephen Saunders, Founder and CEO, Light Reading

Carol Wilson, User Rank: Blogger
4/19/2017 | 2:01:00 PM
Re: Diagnosis
I get what you are saying but the work that AT&T, Vodafone and others are doing is pushing in the direction of Facebook and Google. 

I disagree that telcos want open source primarily to reduce capex and what they are spending on equipment - that's a goal, to be sure, but what I hear over and over from them is the need to scale more rapidly to meet bandwidth demands and to introduce new services faster. Not a single major CTO with whom I've spoken thinks virtualization will reduce their capex budgets any time soon. 

 
brooks7, User Rank: Light Sabre
4/19/2017 | 1:41:45 PM
Re: Diagnosis

Carol,

I think you misunderstood my point about daily updates. Ask Verizon if they could introduce new packages from their vendors in 24 hours and put them live to their market. The time to test and approve things is months, not hours. Very different timeframes. The Scrum methodology produces two-week product release cycles for new features. Facebook and Google basically did what I said... picked their vendors (sometimes it was them) and just DID. They didn't wait for standards development. They did try to get what they were doing standardized, but only at the very bottom end of things (like component specs).

By the way, I don't think that this has anything to do with what the telcos want. What they want is to reduce capex. They want open source because they would like to eliminate the spending on equipment. That is NOT why the IT groups do virtualization. They do it for fast service creation and flexibility. Facebook (and Google and Amazon) see the load on their networks change daily as demand follows the sun around the Earth. It is either be flexible or double the amount of equipment in each data center. THAT is their CAPEX reduction. The AT&T equivalent would be having, say, four voice switches in the US and using the West Coast data center to cover overflow on the East Coast data center every day from 8-11 AM. Now scale that up and voila... now you are starting to talk. Of course, none of this works if you have to have special network arrangements or any oddball protocols. You want to use DNS publishing and load balancers to make this work. Imagine if your local phone call was actually handled in the early morning by a switch based in California. That is pretty much the equivalent of loading up google.com and thinking you know where the server handling that TCP connection is actually located.
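To make the follow-the-sun point concrete, here is a rough sketch of the routing decision. The data center names, the fixed EST offset, and the peak window are all invented for illustration:

```python
# Minimal sketch of follow-the-sun overflow routing.
# All names, hours, and offsets are made up for illustration.
from datetime import datetime, timezone, timedelta

EAST = "east-coast-dc"   # primary for East Coast callers
WEST = "west-coast-dc"   # soaks up the 8-11 AM Eastern overflow

def pick_datacenter(utc_now: datetime) -> str:
    """Return the data center that should take new East Coast sessions.

    Expects a timezone-aware UTC datetime. During the 8-11 AM Eastern
    peak, new sessions overflow West, where it is still early morning
    and capacity sits idle -- load following the sun.
    """
    # Fixed EST offset for simplicity; a real system would handle DST.
    eastern = utc_now.astimezone(timezone(timedelta(hours=-5)))
    if 8 <= eastern.hour < 11:
        return WEST
    return EAST

# Usage: pick_datacenter(datetime.now(timezone.utc))
# In practice the choice would be published as weighted DNS records or
# load-balancer pool weights, not computed per call.
```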

seven

 
Carol Wilson, User Rank: Blogger
4/19/2017 | 11:39:30 AM
Re: Diagnosis
Brooks7,

How do you view the way Facebook and Google are using virtualization? I see CSPs much more interested in emulating their processes, which do include daily updates.

I totally agree on the vendor issue - what I am hearing from operators is that the vendors are resisting the kind of pricing restructuring that is needed, because it forces them to completely redo their business plans.

It was mentioned many times at ONS a few weeks back - the folks being expected to enact the change have no incentive to do so. 
brooks7, User Rank: Light Sabre
4/19/2017 | 10:50:58 AM
Diagnosis

I think there is exactly one problem. You cannot take a technology -- in fact, an entire infrastructure -- that was built to be used in one way and then decide that the entire industry will change to do it another.

The virtualization deployment that we see in the IT world grew up organically and all the stuff that you talk about that is hard or expensive is already addressed.  The problem is that this model was completely rejected.  I have run a SaaS operation and it works nothing like a carrier.  I think people need to think Scrum and Agile.  10 groups have products out before the Telcos have an approved business plan.

This is not a world where standardization and interoperability matter in the least. Neither do bugs in Open Source. You start with those things and then your software team patches the Open Source and integrates things into a product. The fact that there is a standards body -- or two, or three -- means that this whole effort is a failure.

Then there is a second problem.  What is the gain for the telco vendors to make their products even lower price?  To take full advantage of a Virtual Environment, vendors would have to redo their products from scratch.  How is that supposed to work economically?

So if I back up to what worked so well on the IT front: essentially, a couple of services are built this way out of a common set of core blocks that are used by lots of folks (LAMP stack + VMware). On top of that, the web scale providers do their own thing to make it work in their specific situation. Each of these services is run and implemented in a way that is very different from what telcos are used to. I have personally updated the software that operated the service we ran more than once in a day (we had about 3.5M end users). Imagine that in Verizon or AT&T.
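For a flavor of the mechanics, here is a rough sketch of the kind of atomic-swap release that makes multiple same-day updates routine. The paths and the service name are invented for illustration:

```python
# Sketch of a zero-downtime release via an atomic symlink swap.
# Paths and the service name are made up for illustration.
import os
import subprocess

RELEASES = "/srv/app/releases"   # each release lands in its own directory
CURRENT = "/srv/app/current"     # symlink the serving tier actually runs from

def deploy(version: str) -> None:
    """Atomically repoint 'current' at a new release directory.

    The swap is a single rename: in-flight requests finish on the old
    code while new requests pick up the new one, which is what makes
    releasing more than once a day a non-event.
    """
    target = os.path.join(RELEASES, version)
    tmp_link = CURRENT + ".tmp"
    if os.path.lexists(tmp_link):          # clear leftovers from a failed run
        os.remove(tmp_link)
    os.symlink(target, tmp_link)           # build the new link off to the side
    os.replace(tmp_link, CURRENT)          # atomic rename over the old link (POSIX)
    subprocess.run(["systemctl", "reload", "app"], check=True)  # "app" is a made-up unit name
```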

seven

mendyk, User Rank: Light Sabre
4/19/2017 | 10:09:48 AM
Woeful
The telecom industry is at the heart of the 21st-century global economy. There's no Google, no Amazon, no nothin' without CSPs enabling all this digital transformation stuff. And yet right now this sector gives off a distinct Cleveland Browns vibe. Woe is us. We can't win. We're stuck in a two-bit role. Nobody likes us, and nobody is helping us. Enough already. Transformation is hard. It requires work, and commitment, and patience, and money. Those are all factors that CSPs can do something about. Whining accomplishes nothing positive.