Either we perform a complete 'factory reset' on the way the telecom industry creates and deploys virtualization, or we face the consequences.
May you live in interesting times.
That apocryphal Chinese curse provides a fitting label for the 2017 telecom market, as we find ourselves mired in an unprecedented mess -- one that's bad for business for everyone in the entire supply chain of next-gen communications.
The root cause? Virtualization! (In both its NFV and SDN flavors.) For years, the industry has been "bigging up" virtualization as the next great telecom revolution, a way to build cost-effective software-based networks to support the emerging digital global economy. But so far the only CSPs that have been brave -- or foolish -- enough to deploy virtualization have found it to be an expensive and time-consuming exercise in frustration, one that has yet to deliver on either cost savings or new revenues.
To fix this situation we first need to understand what's gone wrong: For 150 years, the telecom industry has owed its success to a clearly defined and proven process for standardizing new network technologies. Then along came virtualization, and for some reason we got it into our heads that it would be a great idea to just up and abandon our standards-based best practices altogether, instead punting development over the fence into a baffling array of open source groups and industry consortia (each loaded with a full boat of personal, corporate, and political agendas, incidentally).
It was a particularly inauspicious start -- and here we are, after four years of intensive work on NFV, really no closer to figuring this thing out than when we started, with no strong industry standard defining it. (ETSI's efforts don't count; they're akin to the "Code" in Pirates of the Caribbean… more of a guideline, really.)
Why is this a problem? Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability service providers are in the same place they were in the 1990s: locked in to buying overpriced proprietary solutions from incumbent equipment vendors.
Further, without the kick in the pants created by a competitive, heterogeneous, interoperable environment, vendors have failed to deliver solutions that are fit for service providers' purposes. Today, it takes an average of six months for Tier 1 service providers just to get the NFVi code from the industry's leading infrastructure vendors to work -- not what anyone would call "out of the box" software. And typically service providers have to pay the systems integration division of the NFVi vendor to do that work… talk about jobs for the boys.
So that's half a year of expensive integration work just to get the NFV plumbing to work.
But wait -- there's worse! The value of virtualized networks is supposed to lie in the magnificent variety of new services that run over them. But the fact is that NFVi vendors have been paying lip service to a vague concept of "openness" while simultaneously maintaining the proprietary software and hardware hooks that have kept them profitably in business for the last two decades. Which means that at the end of that six-month installation period, carriers are finding that the only services they can run over that infrastructure are the VNFs that are sold by -- correct! -- the same company that sold them the infrastructure.
This is:
1. Not good enough!
2. The exact opposite of what service providers were told to expect from NFV.
Making things worse, issues within the virtualization sector are having a corrosive effect on the commercial prospects for the wider telecom industry. Growing frustration on the part of CSPs with the shoddy state of the technology has prompted them to push their virtualization plans back -- or shelve them altogether. That's left mega-corporations like Nokia, Cisco, and Ericsson with a big fat hole in their bookings ledgers where those sales of virtualization technology and services were supposed to sit. And that, in turn, has sent an icy chill through the rest of the telecom ecosystem, derailing growth and sending the industry into what is starting to feel more like a death spiral than a market correction.
So who's to blame for this situation?
Let's start with the open source community. Its insanely complicated, quasi-anarchic, happy-clappy, "we don't like standards" approach to developing code works fine if you're crowd-coding an Internet browser (or a spot of freeware Donkey Kong), but its effect on the telecom market has been toxic. What we need to do is take the innovation and the ideas from open source and then superimpose telecom's unique brand of discipline and standards over them -- something which simply has not happened yet.
Let's take a moment to wag the giant foam finger of admonishment at enterprise vendors, too. They leapt -- like a salmon! -- on virtualization as an opportunity to muscle in on their telco vendor counterparts and start building carrier networks, arguing that what we're really talking about here is building virtualized cloud networks. "Nobody builds cloud networks like we do," they Trumped. The problem with that line is that enterprise cloud and telco cloud have totally different requirements, and it turns out that enterprise vendors are actually a bit shit at building telco nets (see Telefónica Ditches HPE as Virtualization Lead). (This should not, perhaps, come as a huge surprise. HPE has been a bit shit at doing anything other than being a bit shit for as long as most of us can remember. The idea that it could suddenly and overnight -- hey presto! -- become not shit at building the largest and most demanding telecom networks in the world was always questionable.)
Trade publishers (ahem), analyst firms (sorry) and so-called experts in general should also be called out for hyping the possibilities without paying more attention to the realities.
But the demographic that must take most of the blame for the current very real virtualization cataclysm is, of course, the telecom community as a whole -- for allowing ourselves to be wooed by the promise of virtualization and abandoning the first principles that have successfully guided us, as an industry, since the 19th century. How do we get back on track from here? We need to stop the crazy train and get back to basics.
That process starts with defining realistic goals. I've heard a lot of hoo-hah over the last few years about how the end point for service providers is a DevOps environment like the one found in enterprise networks. This is, to use a technical term, complete bollocks! Excepting OTTs, the vast majority of service providers and CSPs have neither the culture nor the skill set to implement DevOps -- even if they wanted to. And they don't. One of the supposed benefits of a DevOps environment is that it allows constant changes to be made to network services. That's fine in an enterprise world, if you like that kind of thing (and enterprise IT departments seem to live for it), but on a telecom network it's just about the last thing CSPs want to deal with.
What service providers actually want is what they were promised when NFV first emerged: specifically, to be able to implement the best and most popular (and most profitable) mix of services by picking and choosing from an online marketplace chock full of "best in class" services and applications, developed by specialist third parties, in the sure and certain knowledge that these VNFs are absolutely guaranteed to run over the NFV infrastructure they have in place. Creating this free market in virtualization technology is not as hard as we've made it look. But it will require us, as an industry, to pick one API (one, yes? As in, fewer than two, FFS!) between the NFVi layer and the VNFs that run over it, and standardize on it.
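What might that one API look like? Here's a rough sketch, in Go, of the shape such a contract could take. To be crystal clear: every name below is mine and entirely hypothetical -- this is not the NIA's specification, or anyone else's, just an illustration of how small the certifiable surface between the NFVi and a VNF could actually be.

```go
// Hypothetical sketch: a single, vendor-neutral lifecycle contract between
// the NFV infrastructure (NFVi) and the VNFs that run on it. If every NFVi
// vendor implemented this one interface, and every VNF was written against
// it, any certified VNF would run on any certified infrastructure.
package nfvi

import "context"

// VNFDescriptor captures what a VNF needs from the infrastructure.
// (Illustrative fields only; a real specification would be far richer.)
type VNFDescriptor struct {
	Name     string
	Image    string // e.g. a VM image or container reference
	VCPUs    int
	MemoryMB int
}

// Instance identifies a running VNF on the infrastructure.
type Instance struct {
	ID    string
	State string // e.g. "INSTANTIATED", "SCALING", "TERMINATED"
}

// Infrastructure is the one API every NFVi vendor would implement and
// every VNF vendor would code against -- the certification target.
type Infrastructure interface {
	Instantiate(ctx context.Context, d VNFDescriptor) (Instance, error)
	Scale(ctx context.Context, id string, replicas int) error
	Heal(ctx context.Context, id string) error
	Terminate(ctx context.Context, id string) error
}
```

The point of the sketch isn't the particular methods; it's that there is exactly one of it. One interface means one thing to certify against, which is what makes the marketplace model work.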
FYI, for the last six months, working alongside the not-for-profit New IP Agency (NIA), I've been reaching out behind the scenes of the telecom industry to gather the support required to standardize on just such an API specification, and to launch an independent certification program based on it.
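And to show why a single API makes independent certification tractable, here's a companion sketch -- again hypothetical, building on the made-up interface above, and not the NIA's actual test plan: one conformance suite that runs unchanged against any vendor's implementation.

```go
// Hypothetical sketch of an independent conformance check: the same test
// suite runs unchanged against any vendor's Infrastructure implementation,
// which is what makes third-party certification possible.
package nfvi

import (
	"context"
	"testing"
)

// CertifyLifecycle exercises the basic lifecycle contract. A vendor's
// implementation would pass certification only if these calls behave as
// specified, regardless of whose hardware or hypervisor sits underneath.
func CertifyLifecycle(t *testing.T, infra Infrastructure) {
	ctx := context.Background()
	d := VNFDescriptor{Name: "probe-vnf", Image: "cert/probe:1.0", VCPUs: 2, MemoryMB: 4096}

	inst, err := infra.Instantiate(ctx, d)
	if err != nil {
		t.Fatalf("instantiate failed: %v", err)
	}
	if err := infra.Scale(ctx, inst.ID, 3); err != nil {
		t.Errorf("scale failed: %v", err)
	}
	if err := infra.Terminate(ctx, inst.ID); err != nil {
		t.Errorf("terminate failed: %v", err)
	}
}
```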
I'll be sharing more information about the NIA's plan in my column here in a couple of weeks' time but I can promise you that it will be very good news for the industry -- probably the best news we've had in this very challenging year, inshallah.
— Stephen Saunders, Founder and CEO, Light Reading