Orange network on 'cusp of something massive' in AI, says CTO

Bruno Zerbib, the French operator's chief technology officer, is trying to work out how to build the best network for AI and make money from it.

Iain Morris, International Editor

October 3, 2024

Orange data center at Val-de-Reuil: Orange's data centers are not yet humming with Nvidia's chips. (Source: Orange)

Bruno Zerbib, Orange's chief technology officer, is not among the telco executives convinced artificial intelligence (AI) will have almost no impact on the network. "Today, AI is not much in terms of traffic yet," he told reporters and analysts at a press update in London this week. "But we know we are on the cusp of something that is going to be massive."

An art lover, he envisages himself in a Picasso museum, using his phone or even Meta glasses to request information from an AI-powered virtual assistant about the Spanish painter's work. In one possible scenario, this sends network traffic "through the roof," said Zerbib.

This would be the inevitable outcome if Zerbib video-streamed the Picasso experience and relied on "non-stop inferencing" back to the cloud. It might never happen. In the vision preferred by Apple, the AI processing happens mainly on the device. The iPhone 16, Apple's latest model, is already as powerful as the M1-equipped MacBooks Apple was selling four years ago, he reckons. In a different scenario, Zerbib merely uploads a photo of the painting. The iPhone is sent a textual response, which it converts to audio for his convenience. "Traffic is minimal," he said.

But he cannot be sure, and that is forcing him to think more about the future design of Orange's networks. "When we talk with those companies, we realize it is not a very efficient model for them, for us, for anyone, and so there is going to be a need for some edge computing, where you have something in the middle." Processing would happen not on the device or in a huge data center but in smaller facilities of the kind Orange uses for its network.

These edge facilities would feasibly be able to support some flavor of large language model (LLM), the technology underpinning generative AI. Today, Zerbib thinks a consumer device can probably handle an LLM of about 10 billion parameters. One hosted in a hyperscaler's data center might include a trillion, he said. At the edge, 100 billion is imaginable.
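
Those tiers map loosely onto memory budgets. The sketch below is a rough back-of-envelope illustration rather than an Orange figure: it assumes 16-bit weights (about two bytes per parameter), whereas on-device models are usually quantized to 4 or 8 bits.

```python
# Rough weight-memory estimate for the three tiers Zerbib describes.
# Assumption: 2 bytes per parameter (FP16/BF16); 4-bit quantization would
# shrink these figures by roughly 75%.
BYTES_PER_PARAM = 2

tiers = {"consumer device": 10e9, "edge facility": 100e9, "hyperscaler data center": 1e12}

for tier, params in tiers.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{tier}: {params / 1e9:,.0f}B parameters -> ~{gigabytes:,.0f} GB of weights")

# consumer device: 10B parameters -> ~20 GB of weights
# edge facility: 100B parameters -> ~200 GB of weights
# hyperscaler data center: 1,000B parameters -> ~2,000 GB of weights
```

On those rough numbers, a 10-billion-parameter model is a stretch even for a high-end phone without aggressive quantization, while a 100-billion-parameter model is within reach of a few GPU-equipped servers of the kind a small edge site could house.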

Chip choices

Yet the investments needed to cope with a traffic explosion, along with edge-based AI, seem unlikely to be small. "We will need to come up with a business model that makes sense and there is no scenario where we build LLMs for free," said Zerbib. Nor is he ready to make a huge investment in Nvidia's graphics processing units (GPUs), the expensive chips linked to AI.

Partly, that is because of some optimism about emerging competition and falling prices. "I am quite sure, looking at what is going on in the Valley, and with all the hyperscalers spending a huge amount of money, that we are going to have massive competition, and so it is going to bring prices down, and that is the reason why right now we don't want to spend money on GPUs," he said.

Future options for Orange could include GPUs from AMD, the tensor processing units (TPUs) built by Google and new custom silicon from AWS, according to Zerbib. His bigger concern is the lack of manufacturing capacity for chips. Production of the most advanced semiconductors is dominated by TSMC, a Taiwanese foundry, and much of its existing capacity is dedicated to Nvidia and Apple. Unless that changes, other vendors capable of designing GPUs may struggle to achieve scale.

Zerbib and Laurent Leboucher, Orange's group chief technology officer, also doubt Orange would ever build an LLM from scratch or even need one that is "telco-specific." So far, generic LLMs have been good enough for network applications such as summarizing the tickets and alerts that technical staff investigate. Meanwhile, the costs of training LLMs have escalated fast and become "unmanageable," said Zerbib. Instead, he is attracted to the concept of "fine-tuning" LLMs that already exist.
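
As an illustration of that fine-tuning route, adapter methods such as LoRA update only a small fraction of an existing model's weights, which is what keeps the cost far below training from scratch. The sketch below is a generic example using open-source tooling, not a description of Orange's actual stack; the base model is a placeholder and the training loop is omitted.

```python
# A minimal fine-tuning sketch using the open-source transformers and peft
# libraries. The model name is a placeholder; training data and the Trainer
# loop are omitted for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # hypothetical choice of an existing open model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA adds small trainable adapter matrices; the base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

In this approach, the compute bill is driven by the small trainable fraction rather than by the full parameter count of the underlying model.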

For this reason, Orange maintains close relationships with various model builders, including OpenAI, Google (with Gemini) and France's Mistral. Llama, the "open source" model developed by Meta, also holds interest for Zerbib. Building from scratch "is going to be expensive unless you settle for smaller models," he said. "But having a big generic AI that can support customers and having a ChatGPT application that's optimized for French customers is not viable economically."

A smarter approach to selling connectivity

Why, then, would Orange need to invest in capacity and edge computing? The rationale is partly that Orange might need to support AI services requiring highly reliable connections and much lower "latency," a measure in milliseconds of the time it takes a signal to complete its journey across the network. Today, Orange is "trying to figure out how we are going to deliver 100% availability with extremely low latency with incredibly demanding expectations in terms of uploading traffic from the device all the way to the cloud," said Zerbib.

The natural concern for any telco investor is that spending on the network will not be matched by an increase in revenues. This is, after all, what has largely happened with the rollout of 5G so far. But Zerbib thinks Orange can adopt a much smarter approach to the sale of connectivity. In the Picasso museum example, a customer wearing Meta glasses could pay for a two-hour service to correspond with an AI assistant over a video link and experience no disruption. On exiting the museum, this dynamically provisioned "slice" of the network would deactivate and normal service would resume.

If all goes to plan, software developers will be able to write code based on these improved network features through industry-standard application programming interfaces (APIs). This explains Orange's membership of a new joint venture (JV) involving 11 other big telcos and Ericsson, which holds a 50% stake. The goal of that JV is to act as a preferred marketplace for network APIs compliant with CAMARA, a standards initiative overseen by the Linux Foundation. API differences between telcos were previously a disincentive to app developers seeking a global audience. The hope is that standardization will be an enticement.
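
To make that concrete, the sketch below shows roughly what a developer request for the museum visitor's two-hour, low-latency session could look like. It is loosely modeled on CAMARA's published Quality on Demand API; the base URL, credentials, profile name and field details are illustrative assumptions rather than Orange's actual endpoints, which vary by operator and API version.

```python
# A hedged sketch of a developer-facing network API call, loosely modeled on
# CAMARA's Quality on Demand API. URL, token, profile and field names are
# illustrative assumptions, not a real operator's endpoint.
import requests

API_BASE = "https://api.example-operator.com/quality-on-demand/v0"  # placeholder
TOKEN = "..."  # obtained via the operator's OAuth flow (not shown)

session_request = {
    "device": {"phoneNumber": "+33600000000"},              # the museum visitor's device
    "applicationServer": {"ipv4Address": "203.0.113.10"},   # the AI assistant's backend
    "qosProfile": "QOS_L",   # an operator-defined low-latency profile
    "duration": 7200,        # seconds: the two-hour museum visit
}

resp = requests.post(
    f"{API_BASE}/sessions",
    json=session_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("QoD session created:", resp.json().get("sessionId"))
```

The appeal for developers is that the same request shape would work against any operator exposing the standardized API, which is the gap the JV is meant to close.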

"Really, the reason this whole JV is a big deal is because it is a catalyst to get started," said Zerbib. "Once developers have started to make that investment at scale, you don't go back to the old world." For this reason, he says he is not worried about the risk of fragmentation if Nokia, Ericsson's chief rival, decided to sponsor a rival JV. Indeed, his remarks imply he would find the competition welcome. "We don't want the JV to create a lock-up situation for customers."

Much could yet go wrong. Critics see limited value in network APIs and doubt customers will necessarily pay for higher levels of service and lower latency. Others are dubious that AI will be the long-awaited spur for edge computing, a market opportunity that has been discussed for many years in the telecom sector.

The worst-case scenario, though, is that AI forces telcos to invest heavily in their networks while making no difference to their sales. Zerbib and his peers clearly hope they can avoid any such outcome by monetizing APIs and dynamic slices of the network. If they fail, those outings to the local Picasso museum could mark a very blue period.


About the Author

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
