
Why Isn't Virtualization Working?

Scott Sumner
4/14/2017

"No one can actually implement a virtualized heterogeneous network at commercial scale." True or false? (See Just Dirty.)

This is a good time for four-valued logic, because the answer is… both. In other words, it depends.

True: No one has implemented a commercial-scale, virtualized network using off-the-shelf, open source code, or even vendor-supplied software. Likely, no one ever will.

False: Despite that, many service providers that took this burden on themselves have actually succeeded.

There have been numerous examples of virtualized heterogeneous networks at commercial scale: SK Telecom (Nasdaq: SKM), China Mobile Ltd. (NYSE: CHL), NTT DoCoMo Inc. (NYSE: DCM), AT&T Inc. (NYSE: T) and Telefónica have all implemented virtualization over multivendor networks. They simply had to create their own management and orchestration (MANO) and software-defined networking (SDN) platforms. Piece of cake, right?

The point is that these operators weren't prepared to wait. They cobbled together components, code and concepts to create control and orchestration platforms that talked to legacy and virtualized infrastructure. For sure, creating your own network control and management systems is a lot of hard work. But service providers are used to that. They've been stitching together operations systems for decades.

This wasn't quite the same, though, because virtualized networks don't behave like the networks they will (eventually) render obsolete. In a way, virtualized networks behave like cats. They are little angels while you are in the room, but then shred your sofa when you turn your back. They purr nicely at first before tearing apart all your carefully laid plans when they start to grow.

In a virtualized network, the equivalent of turning your back is losing visibility. And this has been one of the key challenges for service providers: finding a way to monitor performance and user experience in these new networks. That has definitely not come easily. Operators discovered, often late in the process, that their tried and true methods simply didn't "measure up" in this new context.

That's because these networks are highly dynamic, compared with our static networks of old. Virtual network functions (VNFs) interact in ways that traditional hardware components don't. They make decisions together, often outside the view of orchestrators. The network becomes a living, breathing ecosystem, where minute changes or impairments can cascade across the network, making problems hard to find.


The service providers that succeeded at building virtualized networks stood out from their peers by understanding that they were entering an unknown space, and being prepared to see and do things differently, early on. And they recognized that standards bodies, open source initiatives and vendor ecosystems weren't going to help. A completely new approach to monitoring had to be conceived. It was the operators that made it happen.

The result was virtualized monitoring: a simple, scalable way to regain visibility. It tests the actual network from physical or virtual vantage points, without concern for the path the data will take along the way.

This is critical, as any form of tapping or localized testing can't follow the dynamic routing and network slices in a virtualized network. Instead, tests need to be conducted over every segment in the service chain, and then pieced together using analytics to form a complete picture.
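
To make that concrete, here is a minimal sketch of how per-segment test results might be stitched into an end-to-end view. The segment names and figures below are hypothetical illustrations, assuming virtual test agents that report latency and loss for each hop of the service chain; this is not any vendor's or operator's actual implementation.

# Minimal sketch: combine per-segment active-test results from a virtualized
# service chain into an end-to-end picture. Segment names and figures are
# hypothetical; a real deployment would feed these from virtual test agents.
from dataclasses import dataclass

@dataclass
class SegmentResult:
    name: str          # segment of the service chain, e.g. "vFW -> vRouter"
    latency_ms: float  # latency measured over this segment
    loss_pct: float    # packet loss observed over this segment (0-100)

def stitch(segments):
    """Piece together per-segment tests into one end-to-end view."""
    total_latency = sum(s.latency_ms for s in segments)
    delivered = 1.0
    for s in segments:
        # Losses compound multiplicatively across consecutive segments.
        delivered *= 1.0 - s.loss_pct / 100.0
    worst = max(segments, key=lambda s: s.loss_pct)
    return {
        "end_to_end_latency_ms": round(total_latency, 2),
        "end_to_end_loss_pct": round((1.0 - delivered) * 100.0, 3),
        "worst_segment": worst.name,
    }

if __name__ == "__main__":
    chain = [
        SegmentResult("access -> vFW", 2.1, 0.01),
        SegmentResult("vFW -> vRouter", 0.4, 0.00),
        SegmentResult("vRouter -> core", 3.8, 0.05),
    ]
    print(stitch(chain))

No single vantage point sees the whole chain, which is why the stitching step matters: each segment is measured where the traffic actually flows, and only the combination reveals end-to-end latency and loss.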

Virtualized monitoring solutions were developed by vendors actually working in service provider DevOps war rooms. They were aided by operators' integration skills, insights and appetite to virtualize. It is exactly this kind of collaboration that formed the foundation for the first virtualized heterogeneous networks.

Service providers going it alone take on the responsibility of making predictions about the future, shepherding vendors, standards committees and their own staff along the way. But they can also achieve transformative performance gains, and gain a critical advantage by delivering an exceptional user experience. Those waiting for an "easy button" to appear might just find customer churn is right around the corner.

Likewise, vendors will succeed by building relationships with innovative, fearless operators. They must be willing to move beyond simply "selling" to those companies and start to act like partners. That will further challenge the operators to innovate beyond their immediate needs, and take additional risks of their own.

— Scott Sumner, Director, Business Analytics, Accedian

afshin1.esmaeili@gmail.com
4/24/2017 | 1:29:03 PM
Maturity Required
First, long live virtualization. That established, let's not be religiously pro- or anti-virtualization. The telecom industry is a five-nines reliability industry and, if you were to be really mean, you could say the IT industry has emerged from nine-fives reliability (not exactly true, though). They are melting together to form the ICT industry (of course it will not be the kind of melt-up that Andromeda and the Milky Way are set to face!). The question is how many nines of reliability we want/need in ICT.
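
For reference, here is a quick sketch of what those "nines" translate to in annual downtime; this is standard availability arithmetic, added purely as illustration.

# What "N nines" of availability allows in downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines):
    availability = 1 - 10 ** (-nines)  # e.g. 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    print(f"{n} nines: ~{downtime_minutes_per_year(n):.1f} minutes of downtime per year")
# 3 nines: ~525.6, 4 nines: ~52.6, 5 nines: ~5.3 minutes per year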

IoT offers the greatest growth opportunities that service providers have seen since mobile telephony took off in the early '90s. Much of it calls for the kind of reliability required by mission-critical operations, be it self-driving cars or that young surgeon in a white uniform standing at the patient's side, tools (dagger) in hand, waiting to receive instructions from the "clinic cloud." Besides, the more than seven billion mobile users of today will not tolerate any less reliability than they experience right now, and frankly there is lots of room for improvement too.

Finally, if we want something as fancy as network slicing, of utmost importance in my view, to work reliably and smoothly, we need to adopt a realistic, experience-based approach to network evolution. History shows that many telecom projects introducing new technologies, services or network upgrades have had substantial hiccups, some even disastrous. That happened despite the fact that telecom's well-established processes were followed, including well-defined and standardized interfaces, multi-vendor verification, interoperability testing and so on. It will take many years before ICT finds its new form, and we need to patiently grow with it. Some growing pain is to be expected, but we should avoid disastrous network outages or major service unavailability in the meantime.

 
JA1994
4/14/2017 | 2:39:27 PM
Great
I absolutely agree with his point of view. It just can't work!
brooks7
4/14/2017 | 12:34:55 PM
Re: Early
Umm... I think there are several examples of the networks described: Google, Facebook, Microsoft and Amazon all come to mind straight away. So not only is virtualization working, it has been completely solved for almost 10 years. The basic problem is that traditional service providers have a systems integration challenge to operate the way those companies do. To date, they have not done so. They are trying to run a traditional process in an area of complete disruption, and that is not working. And it won't work, because by the time it gets ready it will be completely obsolete. So, if I were, say, AT&T, I would buy some VMware licenses, hire a bunch of coders/netops people and get started. No need to wait. No need to expect perfect interoperability. No need to standardize on common APIs. Pick your products and go. Make. Do. Stop waiting for the IT world to do things your way.

seven

 
danielcawrey
4/14/2017 | 11:32:49 AM
Early
Even if it is early days, I think we're going to see virtualization take over networks. The big difference on the networking side versus what has happened with servers is clear: There are a lot more moving parts that need migration. But I have confidence some vendors will put this all together. 