Vodafone CTO warns of Big Tech lock-in with generative AI

Scott Petty alerts the telecom industry to a generative AI 'arms race' with some vendors trying to gain full control over their customers.

Iain Morris

February 9, 2024

Scott Petty, Vodafone's CTO, at a press conference in 2019. (Source: Iain Morris/Light Reading)

"Better than humans" is not a sloganeering bit of self-promotion by artificial intelligence (AI) but Scott Petty's honest assessment of how it performed when asked to summarize call histories in customer services. Yet the Vodafone chief technology officer and his team are still approaching AI – and especially the generative flavor of it – with the caution of a zookeeper toward a slumbering tiger. And the potentially dangerous animal, in this case, is Big Tech.

Being trapped by a single public cloud and unable to escape is one of the main telco fears about the Internet giants. It is partly why Vodafone avoids overreliance on any one of them, dividing the effort between AWS, Google Cloud and Microsoft Azure, with which it recently struck a $1.5 billion deal.

Yet Petty was still complaining in August 2023 about the "significant engineering work to move workloads between different hyperscalers." Like other telcos, Vodafone has also resisted putting its 5G network in the public cloud. "Our view would be that's too risky and that you are almost outsourcing a core competency," he told Light Reading in October 2021.

But as Vodafone explores the use of generative AI, the same fears about "lock-in" have loomed. The large language models (LLMs) that underpin generative AI have been funded by technology giants and trained in their data centers at a cost estimated by Shankar Arumugavelu, the chief information officer of Verizon, to be "hundreds of millions" of dollars. Petty has ruled out any development from scratch of an LLM by Vodafone. That means relying on third-party LLMs.

"It is all about the data," he said at a press briefing in London this week organized by the TM Forum, an industry association. "If you want to use multiple LLMs and they were too tightly coupled to the hyperscaler that provides the LLM, you would have to replicate the data for every hyperscaler you wanted to use. We have to have a model where you choose to put the data where you best want to put it and have openness in the way that LLMs work."

Unwelcome moves

Early concerns have not stopped Vodafone from pushing ahead with generative AI. Its $1.5 billion deal with Microsoft largely entails moving customer relationship management and other IT applications into the Azure public cloud, and off the "tens of thousands" of on-premises x86 servers that currently host them. But Vodafone is also using Copilot, a Microsoft-developed chatbot, and running call records generated by TOBI, its own customer-facing chatbot, through Microsoft-backed OpenAI to summarize call histories and derive insights. "A generative AI summarization engine is super accurate," said Petty, who believes it outperforms people at this task.

Additionally, the operator has worked with Google to create what Petty calls a "data ocean," a 16-petabyte repository pooling cleaned-up information from multiple in-house sources. "We've been quite successful working with Microsoft and Google to have applications that use Microsoft's LLM but consume data from Google's GCP cloud," said Petty. "It's possible, but you need the right architecture and the right models to make that happen, and of course you want flexibility and commercial control."

Nevertheless, he spies some unwelcome moves within the generative AI community. "We are in a little bit of an arms race with generative AI at the moment," he told reporters and analysts gathered at the TM Forum's London office. "Some vendors would love to get full end-to-end lock-in."

While Petty did not provide examples, his remarks come as Big Tech players make multi-billion-dollar bets on specific generative AI companies. In October, Google was widely reported to have made a $2 billion investment in Anthropic, while Microsoft is believed to have pumped about $10 billion into OpenAI, perhaps the best known of the various generative AI startups.

Meanwhile, authorities in the UK, where Vodafone is headquartered, have launched an investigation into the cloud services market out of concern about anticompetitive behavior by the Internet giants. In its response to a "call for inputs" issued by Ofcom, the telecom regulator, national incumbent BT noted that "Microsoft has recently been accused of using Windows and Office to feed the growth of Azure" through discounts.

ODA smells good

The issue of generative AI lock-in is now high on the agenda of the TM Forum itself. Nik Willetts, its CEO, is positioning future versions of Open Digital Architecture (ODA), a standardized framework for telco IT systems, as a means of developing genuinely "AI-native" systems from the outset. Petty sees it as a potential answer to the lock-in problem. "You need an architecture like ODA to give you the flexibility that you need," he said.

"We really adopted ODA because it gave us a lot of flexibility between our traditional telco systems," Petty added. "As we built digital layers and applications on top of that, we could build a much more scalable architecture. It really helped us move to cloud services and gave us much more choice and interoperability."

Not all Petty's experiences of generative AI have been as positive as the call-summarization project. A separate effort that involved developing FAQs for customers on topics like changing batteries in iPhones or roaming tariffs in certain countries was an apparent disaster. "Most of the LLMs were rubbish," said Petty. "They were hallucinating and making stuff up because the data they were based on was totally inaccurate and had been written for human consumption."

It means Vodafone is still not ready to expose generative AI to customers. "We think generative AI is very good at being an assistant to a human and helping them do their job better, but we're not really convinced it's ready for direct interface with customers until we have accurate-enough data and the ability to guarantee how an LLM came to the answer that it gave," he said. "We've got more work to do as an industry to understand the software."

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
