At Light Reading's recent OSS in the Era of NFV/SDN event in London, I moderated a panel discussion on "Analytics, Machine Learning & AI in Next-Gen OSS/BSS" and wanted to share some key insights from the speakers:
- Dima Alkin, VP Service Assurance, Teoco Corp.
- Oliver Cantor, Business Network and Security Solutions, Verizon Communications Inc. (NYSE: VZ)
- Ignacio Mas, Senior Expert in Programmable Network Architecture, Ericsson AB (Nasdaq: ERIC)
- Mark Pendred, Control and Orchestration Lead, BT Media & Broadcast
- Jay Perrett, Founder and CTO, Aria Networks Ltd.
Machine learning and AI can help, but set the right expectations
Machines can solve problems, learn what's happening and act on those learnings -- but they have to be programmed to learn, and they have to be taught rules and desired outcomes (e.g., how to route traffic through a network). However, to make use of machine learning (ML), companies first have to overcome barriers such as organizational distrust and a general resistance to ML and AI. Dima Alkin from TEOCO explained that, in the case of service assurance, we are very forgiving of human error in root cause analysis and trouble-ticketing systems, yet we expect machines to be absolutely right every time -- an unrealistic expectation. The onus is still on us to give machines the right requirements and objectives: what are you trying to do?
Oliver Cantor from Verizon was cautious, saying the hardest part is applying machine learning to the multi-dimensional network layer; operators are already reasonably good at the customer side of things. He advocated getting the data in one place and starting with machine learning to understand patterns and what the data is telling us.
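That "pool the data, then let ML surface patterns" idea can be sketched very simply. The example below is illustrative, not anything shown at the panel: synthetic per-cell latency readings are pooled into one dataset, and a tiny 1-D k-means separates normal behavior from a degraded pattern.

```python
def kmeans_1d(values, k=2, iters=50):
    """Cluster 1-D values into k groups; returns (centroids, labels)."""
    # Seed centroids by sampling the sorted values at even intervals.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute centroids.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Latency samples (ms) from two cells, pooled in one place (numbers are made up).
cell_a = [12, 14, 11, 13, 12]   # healthy behavior
cell_b = [48, 52, 50, 47, 51]   # degraded pattern
centroids, labels = kmeans_1d(cell_a + cell_b)
print(sorted(round(c) for c in centroids))  # → [12, 50]
```

The point is not the algorithm -- any clustering or anomaly-detection method would do -- but that the pattern only becomes visible once the data sits in one place.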
Ignacio Mas from Ericsson, a network engineer at heart, discussed the issue of maturity of using machine learning and AI models in the network, stating that the models are already there and being used by operators to crunch data to predict customer behavior and prevent churn. With networks becoming programmable and software-based with SDN/NFV, operators should try these models out.
Dima Alkin also discussed why big data analytics projects have failed in the past. One issue was the amount of time and resources spent on reprocessing, normalizing and standardizing data to make it perfect and to make analytics work. He advocates consuming available data as is and moving as close as possible to live network environments to try out models, rather than wasting time in proof-of-concept trials.
AI -- top down, bottom up or both?
There was debate and disagreement on this topic -- whether AI algorithms in areas such as policy and closed loop control needed to be top down -- decided by an intent or information model -- or bottom up -- through optimizing control loops at the infrastructure layers.
Jay Perrett, founder and CTO of Aria Networks, was adamant that networks should be a commodity -- an intelligent Ethernet cable that decides how to get from A to B -- and that network optimization should be handled top down based on what the service needs and what's right for the business.
Ignacio Mas from Ericsson argued we need both -- a top-down and a bottom-up approach -- to create a network that supports services, makes the changes we want, and reacts both to customer needs and to what we need from the network.
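One way to picture combining the two approaches: a top-down "intent" (a target set by the service or business layer) parameterizes a bottom-up control loop that reacts to measurements at the infrastructure layer. The intent model and scaling rule below are illustrative assumptions, not anything the panel specified.

```python
# Top-down: what the service needs (an assumed, illustrative intent model).
INTENT = {"max_latency_ms": 20}

def control_loop(observed_latency_ms, units):
    """Bottom-up: one iteration of a closed loop reacting to a measurement."""
    if observed_latency_ms > INTENT["max_latency_ms"]:
        return units + 1   # scale out to meet the intent
    if observed_latency_ms < INTENT["max_latency_ms"] / 2 and units > 1:
        return units - 1   # scale in when comfortably under target
    return units           # within bounds: leave capacity alone

units = 2
for latency in [35, 28, 18, 8]:   # measurements improving as capacity grows
    units = control_loop(latency, units)
print(units)  # → 3
```

The loop itself is purely local and bottom-up; only the target it converges toward comes from above -- which is roughly the division of labor the "both" camp was arguing for.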
Perrett also argued that AI matters because it can scale, operate automatically (e.g., self-healing VNFs) and adapt to new requirements and changes in the model (e.g., to support new use cases). With an information model and a plug-and-play algorithm system (e.g., topology optimization, route optimization), you can probe and change the network so that when a new use case comes along, the system can adapt to the environment. This is important because you cannot anticipate which use cases will be important in, say, three years.
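A minimal sketch of that plug-and-play idea: optimization algorithms are registered by name against the same path model, so a new use case drops in a new algorithm without touching the orchestration code that calls it. The algorithm names and the per-node latency table are illustrative assumptions.

```python
from typing import Callable, Dict, List

Path = List[str]
ALGORITHMS: Dict[str, Callable[[List[Path]], Path]] = {}

def register(name: str):
    """Decorator that plugs an algorithm into the registry by name."""
    def wrap(fn):
        ALGORITHMS[name] = fn
        return fn
    return wrap

@register("min_hops")
def min_hops(candidates: List[Path]) -> Path:
    # Original use case: fewest hops wins.
    return min(candidates, key=len)

# Assumed per-node latency costs (ms), purely for illustration.
LATENCY_MS = {"A": 1, "B": 1, "X": 30, "Y": 2, "Z": 5}

@register("low_latency")
def low_latency(candidates: List[Path]) -> Path:
    # A later use case, plugged in without changing optimise() below.
    return min(candidates, key=lambda p: sum(LATENCY_MS[n] for n in p))

def optimise(use_case: str, candidates: List[Path]) -> Path:
    return ALGORITHMS[use_case](candidates)

paths = [["A", "X", "B"], ["A", "Y", "Z", "B"]]
print(optimise("min_hops", paths))     # → ['A', 'X', 'B']
print(optimise("low_latency", paths))  # → ['A', 'Y', 'Z', 'B']
```

The registry is the "plug" point: supporting a use case that didn't exist when the system was built means registering one new function against the shared model.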