"No one can actually implement a virtualized heterogeneous network at commercial scale." True or false? (See Just Dirty.)
This is a good time for four-valued logic, because the answer is… both. In other words, it depends.
True: No one has implemented a commercial-scale, virtualized network using off-the-shelf, open source code, or even vendor-supplied software. Likely, no one ever will.
False: Despite that, many service providers that took this burden on themselves have actually succeeded.
There have been numerous examples of virtualized heterogeneous networks at commercial scale: SK Telecom (Nasdaq: SKM), China Mobile Ltd. (NYSE: CHL), NTT DoCoMo Inc. (NYSE: DCM), AT&T Inc. (NYSE: T) and Telefónica have all implemented virtualization over multivendor networks. They simply had to create their own management and orchestration (MANO) and software-defined networking (SDN) platforms. Piece of cake, right?
The point is that these operators weren't prepared to wait. They cobbled together components, code and concepts to create control and orchestration platforms that talked to legacy and virtualized infrastructure. For sure, creating your own network control and management systems is a lot of hard work. But service providers are used to that. They've been stitching together operations systems for decades.
This wasn't quite the same, though, because virtualized networks don't behave like the networks they will (eventually) render obsolete. In a way, virtualized networks behave like cats. They are little angels while you are in the room, but then shred your sofa when you turn your back. They purr nicely at first before tearing apart all your carefully laid plans when they start to grow.
In a virtualized network, the equivalent of turning your back is losing visibility. And this has been one of the key challenges for service providers: finding a way to monitor performance and user experience in these new networks. That has definitely not come easily. Operators discovered, often late in the process, that their tried and true methods simply didn't "measure up" in this new context.
That's because these networks are highly dynamic, compared with our static networks of old. Virtual network functions (VNFs) interact in ways that traditional hardware components don't. They make decisions together, often outside the view of orchestrators. The network becomes a living, breathing ecosystem, where minute changes or impairments can cascade across the network, making problems hard to find.
The service providers that succeeded at building virtualized networks stood out from their peers by understanding that they were entering an unknown space, and being prepared to see and do things differently, early on. And they recognized that standards bodies, open source initiatives and vendor ecosystems weren't going to help. A completely new approach to monitoring had to be conceived. It was the operators that made it happen.
The result was virtualized monitoring: a simple, scalable way to regain visibility. It tests the actual network from physical or virtual vantage points, without concern for the path the data will take along the way.
This is critical, as any form of tapping or localized testing can't follow the dynamic routing and network slices in a virtualized network. Instead, tests need to be conducted over every segment in the service chain, and then pieced together using analytics to form a complete picture.
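To make the idea concrete, here is a minimal sketch in Python of what "test every segment, then piece the results together" could look like. The segment names, vantage points and the measure_segment() stub are hypothetical, and a real deployment would run active probes (TWAMP-style, for example) from dedicated physical or virtual agents and feed the results into an analytics platform rather than this toy aggregation:

    # Minimal sketch: active tests over every segment of a service chain,
    # pieced together into one end-to-end picture. Segment names and the
    # measure_segment() stub are hypothetical placeholders.
    import random
    from dataclasses import dataclass

    @dataclass
    class SegmentResult:
        segment: str        # e.g. "vFirewall -> vRouter"
        latency_ms: float
        loss_pct: float

    def measure_segment(segment: str) -> SegmentResult:
        # Stand-in for an active test run between the two vantage
        # points that bound this segment.
        return SegmentResult(segment,
                             latency_ms=random.uniform(0.2, 2.0),
                             loss_pct=random.uniform(0.0, 0.1))

    def end_to_end_view(service_chain):
        # Combine per-segment results into one service-level picture.
        results = [measure_segment(s) for s in service_chain]
        return {
            "total_latency_ms": round(sum(r.latency_ms for r in results), 2),
            "worst_segment": max(results, key=lambda r: r.latency_ms).segment,
            "per_segment": results,
        }

    if __name__ == "__main__":
        chain = ["access -> vFirewall", "vFirewall -> vRouter", "vRouter -> core"]
        print(end_to_end_view(chain))

The point of the sketch is the structure, not the numbers: because the measurements are taken per segment from vantage points inside the chain, the aggregated view keeps working even when dynamic routing or network slicing changes the path end to end.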
Virtualized monitoring solutions were developed by vendors actually working in service provider DevOps war rooms. They were aided by operators' integration skills, insights and appetite to virtualize. It was exactly this kind of collaboration that formed the foundation for the first virtualized heterogeneous networks.
Service providers going it alone take on the responsibility of making predictions about the future, shepherding vendors, standards committees and their own staff along the way. But they can also achieve transformative performance gains, and gain a critical advantage by delivering an exceptional user experience. Those waiting for an "easy button" to appear might just find customer churn is right around the corner.
Likewise, vendors will succeed by building relationships with innovative, fearless operators. They must be willing to move beyond simply "selling" to those companies and start to act like partners. That will further challenge the operators to innovate beyond their needs, and take additional risks of their own.
— Scott Sumner, Director, Business Analytics, Accedian
IoT offers the greatest growth opportunity that service providers have seen since mobile telephony took off in the early '90s. Much of it calls for the kind of reliability required by mission-critical operations, be it self-driving cars or a young surgeon standing at the patient's side, instruments in hand, waiting to receive instructions from the "clinic cloud." Besides, the more than seven billion current mobile users will not tolerate any less reliability than they experience right now, and frankly there is plenty of room for improvement there too.
Finally, if we want something as fancy as network slicing, which is of utmost importance in my view, to work reliably and smoothly, we need to adopt a realistic, experience-based approach to network evolution. History shows that many telecom projects introducing new technologies, services or network upgrades have had substantial hiccups, some even disastrous. That happened despite the fact that the industry's well-established processes were followed, including well-defined and standardized interfaces, multivendor verification, interoperability testing and so on. It will take many years before ICT finds its new form, and we need to grow with it patiently. Some growing pains are to be expected, but we should avoid disastrous network outages or major service unavailability in the meantime.