A new AI-RAN Alliance includes Nvidia and a host of other intriguing members, and it could have major ramifications for traditional telecom players.

Iain Morris, International Editor

February 26, 2024

Nvidia building in Santa Clara
(Source: Nvidia)

Nvidia, as everyone knows, makes very expensive chips that have recently flown off the shelves and landed in the data centers owned by the likes of AWS, Google Cloud and Microsoft Azure. These graphics processing units (GPUs) have been put to work there training the large language models (LLMs) that underpin generative AI technologies like ChatGPT. But Nvidia is also pitching them at other companies that don't own hyperscaler facilities and don't train LLMs. And telcos are high on the list.

That's partly because of their long interest in edge computing, which aims to bring IT resources into smaller telco facilities, much closer to end users. But telcos are also considering new silicon options for the radio access network (RAN). Nvidia reckons a superchip branded Grace Hopper can multitask, supporting new AI applications at the edge as well as RAN software.

To stimulate interest in and further development of this AI-cum-RAN story, it is now putting its muscle behind a new initiative dubbed the AI-RAN Alliance. The list of other companies described as "founding members" by Ronnie Vasishta, Nvidia's senior vice president of telecom, is intriguing. It includes Ericsson, Nokia and Samsung, the three big RAN kit makers that don't hail from China, along with Arm, the UK-based chip designer now trying to weaken the grip of Intel and x86 architecture in both the data center and virtual RAN (vRAN) markets. Its telco members are T-Mobile and SoftBank, which is already investing in Nvidia's GPUs for its Japanese network, while AWS is the sole hyperscaler on the list.

Robot surgeons in the RAN

It seems highly improbable, as Nvidia appears to recognize, that any telco would invest in Grace Hopper purely to support its RAN. The Grace part of the superchip is an Arm-based central processing unit (CPU) that may be suitable for less resource-hungry RAN functions (Layers 2 and 3 of the software stack). Indeed, it may hold attractions over x86 in a vRAN, given the much-discussed energy efficiency of Arm's tech.

But Hopper, the GPU part, looks like a power hog, and there seem to be far more economical options for Layer 1 (or L1), the most gluttonous RAN software. They include keeping all or most Layer 1 software on the CPU, an approach unsurprisingly backed by CPU giant Intel, and the use of other, more energy-efficient, custom silicon. Marvell and Qualcomm are among the most prominent developers of merchant silicon for the RAN.

Where Nvidia's appeal grows is as a facilitator of AI applications hosted in the same infrastructure as the RAN. By moving GPUs toward the edge, companies could reduce latency – a measure in milliseconds of the roundtrip journey time for a data signal on the network – and support new latency-sensitive AI applications. "Moving it closer to the point of use, so that the application resides in the telecom infrastructure, enables a user experience and user service platform in terms of servicing many of these apps," said Vasishta.

He breaks the business case down into three broad categories: AI and RAN; AI on RAN; and AI for RAN. The first, as the name implies, is about collocating workloads on the same infrastructure so that AI revenues help offset RAN costs. With "AI on RAN," Nvidia imagines the applications that edge infrastructure could support in the age of AI "inference," when live data is fed into pretrained LLMs. Think of real-time interaction with robots, autonomous warehouses and, yes, the often-mocked idea of robot surgeons ("surgery assistance" is the expression Nvidia prefers).

"AI for RAN," meanwhile, would put GPU-aided technologies to work on things like spectral efficiency, inter-cell coordination and futuristic RAN concepts including enhanced distributed MIMO. With spectrum in limited supply, AI could hold considerable value from an engineering perspective. "There are opportunities to incorporate AI and improve the efficiency of the stack," said Vasishta.

If much of this sounds far-fetched, some big telcos outside Japan are listening. "I think that platform is really interesting," said Mark Henry, the director of network strategy for the UK's BT, at a recent meeting with reporters. "It is probably a little hint of what a 6G basestation might look like." Nor did he sound put off by the expense. "The RAN is really expensive. We already have 18,000 basestations with loads of hardware acceleration on them today, but this is in no way disaggregated or abstracted, so we can't use it for anything else."

In a cloud RAN world, investments could, however, be driven by the same companies that have spent heavily on GPUs so far. "Hyperscalers have a very rich technology stack and can help telcos accelerate this transformation," said Vasishta, answering follow-up questions by email. He does, though, point out that telcos "sometimes prefer to own the entire technology stack" due to "sovereignty issues." Outside Japan, several operators, including Swisscom, Singtel and France's Iliad, have started investing in what Nvidia calls "AI factories."

But the hyperscaler identified as a founding member of the AI-RAN Alliance clearly spies opportunity. "Our approach in this space has been to build and manage infrastructure for telco network workloads," said Ishwar Parulkar, the chief technology officer for telecom and edge cloud at AWS, when asked about his company's involvement in the new group. "It's not just providing servers. It's providing all these other applications that will become even more important as we build networks."

Chip choices

One question is what this means for RAN kit vendors. Nokia, which has already run demos showing its RAN technology on an AWS platform, named Nvidia as a new partner just last week. But it's still unclear how that arrangement would work in practice. In Layers 2 and above, the same software written to work on Intel-based servers could be redeployed on Nvidia's Grace chips instead, said the Finnish kit vendor by email.

But this cannot happen at Layer 1. Nvidia comes with its own GPU software, branded Aerial. Nokia's code is tailored specifically for a custom chip provided by Marvell, hosted on a smartNIC (a network interface card). Despite saying in its release that Nvidia's GPU will be used for "AI applications and vRAN acceleration," Nokia denies there is any plan to replace Marvell. "[There] is no change to Layer 1 acceleration plans," said a spokesperson by email. "We will continue with the current L1 plans." Vasishta, meanwhile, says the GPU can also be used to accelerate RAN-related AI algorithms, relevant in areas such as beamforming and the Layer 2 scheduler.

The situation is perhaps even more complicated for Ericsson, another AI-RAN Alliance founding member. Its virtual RAN strategy today is built around x86 and heavier reliance on general-purpose processors. Publicly, it attaches great importance to "software portability" – having one set of code for multiple hardware platforms. Like Nokia, Ericsson could probably run software written for Layers 2 and above on Grace as well as Intel chips. But its current Layer 1 code would probably not fit an Nvidia GPU.

Amid speculation that Ericsson is in talks with Nvidia, the Swedish vendor's top mobile executive insisted there would be no abandonment of this portability principle. "Over time, that means if we are going to have software fluidity with respect to Nvidia now, we need to have portability of our software to satisfy various underlying infrastructures," said Fredrik Jejdling, the head of Ericsson's networks business group. "These are long developments and lead times to do everything, and that is the direction we need to go."

Nvidia's entry into the global RAN market, then, would have major ramifications for telcos, hyperscalers and equipment vendors. It could mark a retreat, a ceding of ground, by traditional players in telecom, while Internet companies and the world's most valuable chipmaker continue their advance. What remains hard to imagine is that any telco will make money from the AI applications being discussed. For many, finding that growth story is still the number-one concern.


About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
