Snakes on a Control Plane Scare Telefónica
PARIS -- AI Net -- When Google's DeepMind set its AlphaGo artificial intelligence (AI) system loose on the ancient game of Go, experts were perplexed by some of its unorthodox moves. It was like watching a soccer player kick the ball out of the stadium, said a Google Cloud executive who witnessed the project. Bizarre as it all seemed, no critical systems were at risk, and AlphaGo eventually triumphed over the world's best player. But what if AI systems in telecom networks opt for similarly unusual strategies?
It's an issue that is consuming the attention of researchers trying to extract telecom value from AI. And time is not on their side. The launch of 5G technology and services will introduce so much complexity into the telecom network that humans may struggle to cope. In a rush to build supporting systems, operators risk developing an AI that kicks a metaphorical ball out of the stadium and creates insurmountable problems.
Diego Lopez, a senior technology expert for Spain's Telefónica, emphasizes the need to make AI systems understandable to humans from the very beginning. "It is true that they may take decisions that look strange," he said during a panel session at AI Net, a conference taking place in Paris this week. "It is important to have a way in which AI can explain the whys -- to have a language and ontology and mechanisms for expressing intent."
Despite his apparent concern, Lopez thinks most of the ideas about the "singularity," the point at which an almighty AI will supposedly overtake human intelligence and run amok, are still extremely far-fetched. What telcos are likely to end up with in the next few years are "not-so-smart animals" that can be trained on relatively simple and repetitive tasks. Think of these creations as "AI dogs and bees" that perform useful activities, he told conference attendees in Paris. But unless telcos remain alert, there is a danger something more damaging takes shape. "Snakes are harmful and we have to be careful about them," said Lopez.
Major vendors similarly urge caution. Juniper Networks, which coined the "self-driving network" expression, and regularly draws parallels between the autonomous network and the driverless car, is one of several companies developing so-called "intent-based" systems that hide underlying complexity from network engineers. The basic premise is that an engineer issues an instruction and the network carries it out: The engineer does not have to get into the nuts and bolts. Intent is years away from realizing its full potential, says Kireeti Kompella, the chief technology officer of engineering for Juniper. But systems might eventually seize the initiative from humans.
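The intent-based premise can be sketched in a few lines: the operator states what outcome is wanted, and a translation layer renders it into per-device configuration. This is a hypothetical illustration only; the function, service names and config syntax below are invented for the example and do not reflect any vendor's actual intent API.

```python
# Illustrative sketch of intent-based networking: the engineer states a
# high-level intent, and a compiler turns it into device-level config.
# All names here are invented for illustration, not a real vendor API.

def compile_intent(intent: dict) -> list[str]:
    """Translate a high-level service intent into per-device config lines."""
    lines = []
    for device in intent["devices"]:
        lines.append(f"{device}: set service {intent['service']} "
                     f"bandwidth {intent['bandwidth_mbps']}mbps")
    return lines

# The engineer expresses the "what", not the nuts-and-bolts "how":
intent = {"service": "video-backhaul",
          "bandwidth_mbps": 200,
          "devices": ["edge-router-1", "edge-router-2"]}
for line in compile_intent(intent):
    print(line)
```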
"There is a creepy aspect to this," acknowledges Kompella in a discussion with Light Reading on the sidelines of the Paris event. "What if I don't ask you what you want but follow your behavior for a month and then prepare the network to your behavior?" It sounds like the network equivalent of Click, a comedy movie in which a remote-control device allows the protagonist to skip unpleasant chapters of his life. The system turns out to be self-learning and fast forwards uncontrollably through entire years based on certain triggers.
Rather like Lopez, Kompella calls for the development of a clear rules-based system for AI in telecom networks. A good starting point, he thinks, would be some basic principles akin to the three laws of robotics conceived by Isaac Asimov, the famous writer of science fiction novels: Essentially, a robot must not hurt a human; a robot must obey a human (unless that means hurting one); and a robot must protect its own existence (unless that means hurting or disobeying a human).
On a practical level, the rules should provide what Kompella describes as "guardrails" for the technology. Juniper took this approach when building a tool it calls the HealthBot, which monitors the status of network devices. "The way we did it was to start with a rules-based system," says Kompella. "We talked to engineers building it, support staff and service providers and came up with KPIs [key performance indicators]. Then we take the human knowledge and expertise and put it into a system that captures the rules. The next stage is replacing the rules with a machine learning system."
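The staged approach Kompella describes -- codify expert KPI thresholds as explicit rules first, then layer machine learning on top -- might look something like the sketch below. Everything here is an assumption for illustration: the rule names, thresholds and structure are invented and are not drawn from Juniper's actual HealthBot.

```python
# Hypothetical sketch of a rules-based KPI "guardrail", in the spirit of
# the staged approach described in the article: capture expert knowledge
# as explicit rules first, so a later ML stage can be checked against
# them. None of these names come from Juniper's actual HealthBot.

from dataclasses import dataclass

@dataclass
class KpiRule:
    name: str         # KPI the rule watches
    threshold: float  # value above which the device is flagged
    action: str       # what the operator (or automation) should do

RULES = [
    KpiRule("cpu_utilisation", 0.90, "alert: sustained high CPU"),
    KpiRule("interface_error_rate", 0.01, "alert: rising interface errors"),
    KpiRule("memory_utilisation", 0.85, "alert: memory pressure"),
]

def evaluate(device_kpis: dict[str, float]) -> list[str]:
    """Return the actions triggered by the current KPI snapshot."""
    return [r.action for r in RULES
            if device_kpis.get(r.name, 0.0) > r.threshold]

# A later ML stage could propose its own actions, but only those that
# also pass the rule check would be applied -- the "guardrail".
snapshot = {"cpu_utilisation": 0.95, "interface_error_rate": 0.001}
print(evaluate(snapshot))
```

The point of the design is that the rules remain human-readable: when the machine-learning stage is bolted on later, its decisions can still be audited against thresholds that engineers wrote down and understand.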
Telefónica's Lopez backs the same kind of structured approach and says it would need to be set up so that additional points can be added to the original rules as circumstances dictate. For the staff developing these systems, such rigor should help to cut the time spent on debugging software code in the future. It is also something lawyers will demand in the event of a problem, he says: With the right schema, they should be able to better identify what happened, why and who might be to blame.
But there may be some major barriers to the development of an industry standard in this area. In the field of machine learning, a subset of AI, the accumulation of data is of paramount importance. In drawing up a rules-based system, one of the initial steps might involve a thorough examination of the various datasets that machine learning technologies would use. Yet telcos may be unwilling to pool their data resources, says Daniel Bar-Lev of the MEF, an industry association.
"A PoC [proof of concept] we did with service providers had as a first exercise the gathering and sharing of data to do basic analysis, and they said tell us what you want and we'll check with legal," he said in Paris. "The distribution of data is a big issue." The alternative to a standardized industry approach, however, might be that operators go off and build their own bespoke machine learning systems, replicating effort and hindering progress.
If those hurdles can be overcome, a systemic approach to the definition and storage of rules might also address some anxiety about the loss of skills as engineers are taken out of the workforce. If intent-based systems abstract away the technical complexity, and the staff who previously got their hands dirty are either laid off or given new roles, then telcos could find themselves in a worrying position where an AI tool knows more about the network than any humans do.
It is for such reasons that telcos may be reluctant to lose technical employees, even if AI can replace them in daily activities. "The rules won't go away but they will get augmented with machine learning, and then all of that gets augmented with humans -- as long as you still have them," says Kompella.
For Colt, a telco focused on the enterprise and wholesale sector, the whole notion of "closed loop automation," whereby humans no longer feature in the process of monitoring and optimizing network performance, remains a relatively distant vision after a series of AI trials that started in late 2018. Valery Augais, a senior network architect involved with those trials, is adamant there will be "no full automation," despite all the current excitement that surrounds AI and intent-based networking.
Francois Jezequel of Orange Labs, the research part of French telecom incumbent Orange, similarly thinks people will continue to provide an important back-up in areas where AI takes hold. He draws a connection with chatbots, the conversational AI systems now used in many customer service activities. "When the machine doesn't answer properly a human takes control, and the network is the same," he said. "We have to think about solutions to fall back on."
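The fall-back pattern Jezequel describes can be expressed simply: if an automated answer falls below some confidence level, the query is escalated to a person. The function, threshold and messages below are illustrative assumptions, not any operator's actual system.

```python
# Minimal sketch of the human fall-back pattern: serve the automated
# reply only when confidence is high enough, otherwise escalate to a
# human agent. The threshold and names are illustrative assumptions.

CONFIDENCE_FLOOR = 0.7

def answer(query: str, model_reply: str, confidence: float) -> str:
    """Return the bot reply if confident enough, else escalate to a human."""
    if confidence >= CONFIDENCE_FLOOR:
        return model_reply
    return f"escalate to human agent: {query}"

print(answer("reset my router", "Hold the reset button for 10 seconds.", 0.92))
print(answer("complex billing dispute", "Sorry, I'm not sure.", 0.30))
```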
— Iain Morris, International Editor, Light Reading