'Real AI' & Racist Robots Preoccupy Telco Tech Heads

Operators need to tread carefully amid widespread suspicion and warnings that more advanced AI systems will trigger upheaval.

Iain Morris, International Editor

November 6, 2019

5 Min Read

LONDON -- Telco AI Summit -- Last year, when Susan Wegner, Deutsche Telekom's chief data officer, gave a short presentation about artificial intelligence to managers at the German operator, the immediate reaction was mistrust. "Some thought it was like the Terminator," she said, referring to the movie franchise in which AI runs amok and starts killing people.

No one expects the tools used in tomorrow's telecom networks to prove quite so destructive, but Wegner's experience reflects growing industry concern about the impact AI systems could have on the workforce, shareholders and society. Anxiety has grown with suggestions at this week's Telco AI Summit in London that next-generation AI systems will be far more disruptive than the machine-learning tools used today. If they are not careful, service providers could face a modern-day version of the Luddites, the nineteenth-century textile workers who wrecked equipment in protest at technological change.

One issue is a shift from so-called "white box" AI technologies, which generate more understandable results, to "black box" systems based on more complex neural networks. Natali Delic, a senior technology executive at A1 Telekom Austria Group, calls these neural networks "real AI" and warns of the cultural impact on her organization. "Right now, we are still in the white box domain with machine learning, and so it is explainable. But even that is a big change," she said during a presentation here in London. "There is a limit to how fast people can adapt. That becomes a challenge for the organization."

It is a pressing concern for A1 as it rolls out SARA, an AI platform developed in-house to support mobile network planning and maintenance. SARA has already stopped Vip mobile, A1's Serbian business, from overselling a mobile broadband service and running into congestion problems. In the future, it could prove critical by reducing the equipment bill for deploying 5G networks. But some employees see it as a threat to their livelihoods.

"Network engineers not part of the team that built this had issues on why they should use it," says Delic. "Some thought they could do the same in Excel. Some were afraid they would lose their position. Some were reluctant and didn't want to change."

Resistance looks set to rise as the workings of the platform become less transparent to employees. Thomas Hodi, a software engineer involved in SARA's development, says he could probably explain its outputs today, even if that took him an hour. But as more sophisticated neural networks are introduced into the mix, SARA will become less "explainable," he tells Light Reading on the sidelines of this week's event. The complexity of those neural networks will make SARA's decisions far harder to figure out.

For all the attractions of these super-intelligent systems, their imminent arrival is a prospective nightmare for A1 and other service providers. "We are talking about a huge potential impact on society and employees and shareholders," says Delic. "If the algorithms make bad decisions, we might even start losing money. That is what we need to think about."


Delic recommends a wave of organizational changes across the telco industry. The first is for service providers planning to use AI systems to recruit chief AI officers, separate from their chief digital officers. Ideally, that person should be someone with business and technical expertise who can define strategy and act as an intermediary between data science teams and business stakeholders, she says.

An AI ethics board is also needed, according to Delic. Its purpose would be to assess projects to prevent any bias and ensure there is some degree of transparency for stakeholders. Operators further need AI centers of excellence that supervise data governance and provide training for staff throughout the organization. Those centers could also take responsibility for resource allocation and AI investments.

Deutsche Telekom's Wegner is similarly concerned about bias in the algorithms that underpin AI platforms. Outside the telecom industry, there is already negative publicity about tools that predict whether someone might be involved in criminal activity, for instance. Research findings have shown that algorithms attach greater risk to people from certain ethnic backgrounds, says Wegner. "We have to be careful what data we are taking and where there might be bias."
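
As a rough illustration of the kind of check Wegner is calling for, the short sketch below compares how often a hypothetical risk model flags people in two groups; a wide gap between the groups is the sort of bias signal she warns about. The data, group labels and model outputs here are invented for this illustration and come from neither Deutsche Telekom nor any real system.

```python
# A minimal, hypothetical sketch: compare a model's positive-prediction
# rate across groups before trusting its output. All values are made up.
import pandas as pd

# One row per scored person: a sensitive attribute and the model's verdict.
scores = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "flagged": [1,   0,   0,   1,   1,   1,   0,   0],  # model output
})

# Rate at which each group is flagged as "high risk" by the model.
rates = scores.groupby("group")["flagged"].mean()
print(rates)

# A large gap between groups suggests the training data, or the model,
# encodes bias and needs closer scrutiny before deployment.
print("Demographic parity gap:", abs(rates["A"] - rates["B"]))
```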

Like A1's executives, she warns that deep neural networks will make it impossible to understand results. One possible solution is to use black box results to train white box technology and produce something explainable. A technique called LIME -- for "local interpretable model-agnostic explanations" -- could help in some cases by using a mixture of fake and original data in the training process. "It is more robust if you train it with fake data and original data," says Wegner.
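
To make the idea concrete, here is a minimal sketch of that surrogate approach in Python with scikit-learn: an opaque model is trained first, and a shallow decision tree is then fitted to its predictions on a blend of original and perturbed ("fake") rows, loosely echoing LIME's sampling step. The dataset, models and parameters are assumptions made for this illustration, not anything used by Deutsche Telekom or A1.

```python
# Sketch of the "use black box results to train white box technology" idea.
# Data, model choices and thresholds are invented for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for operational data (e.g. cell-level load features).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# 1. Train the opaque "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Mix original rows with perturbed ("fake") rows, then label everything
#    with the black box's own predictions.
X_fake = X + np.random.normal(scale=0.3, size=X.shape)
X_mix = np.vstack([X, X_fake])
y_mix = black_box.predict(X_mix)

# 3. Fit a shallow, human-readable decision tree on those predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_mix, y_mix)

# How faithfully does the white box mimic the black box on the real data?
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity on original data: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The printed fidelity score shows how closely the readable tree mimics the black box, which is why this remains "one possible solution" that helps in some cases rather than a complete answer.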

Fear of AI is sometimes irrational, she says. Any minor accident involving a self-driving car gets media attention, and yet more than a million people are killed by cars with human drivers every year. With their medical expertise, doctors are black boxes to most patients, but they generally attract little of the mistrust that surrounds AI. Irrational as current perceptions may be, making AI look as trustworthy as the average doctor could be a long process.


— Iain Morris, International Editor, Light Reading


About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the previous 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

