Docomo picks Nvidia for 5G and AI, but its plan is a head scratcher

Docomo expects to save money by investing in the costliest and most power-hungry chips around. Just don't expect a rapid rollout.

Iain Morris

September 27, 2023

NTT Docomo sign (Source: NTT Docomo)

Japan's NTT Docomo has long dabbled in open radio access network (RAN) technology, mixing baseband and radio vendors like a telecom alchemist searching for network gold. But it's never had a widely deployed virtual RAN, where software sits on general-purpose equipment and can be operated in the cloud.

After hinting at its interest, it has now picked Nvidia – a company that has successfully turned chips into gold – as the key partner for a virtual RAN deployment. The goal is to exploit Nvidia's graphics processing units (GPUs) for both RAN functionality and artificial intelligence (AI). GPUs cost a small fortune and are a "scarce resource," according to Hesham Fahmy, the CIO of Canada's Telus. But the economics stack up for a telco with this combination of use cases, insists Nvidia. Docomo has evidently bought in.

Or has it? A press conference with reporters and analysts was remarkably thin on detail, and questions about the extent of the planned rollout went unanswered. Nor will Docomo be using the latest product Nvidia has been showing off for telcos – an all-in-one "superchip," branded Grace Hopper, that combines an Arm-based central processing unit (CPU) with Nvidia's data processing unit (DPU) and GPU technology.

Instead, it will use x86-based server equipment in conjunction with an Nvidia PCIe card, featuring the DPU and GPU technology. The idea, for now, is to run less computationally intensive RAN software (Layers 2 and 3 of the stack) on the x86 CPU and shift the demanding Layer 1 functions to the PCIe card, whose GPU could simultaneously prop up AI applications. Docomo boasts cost savings as a big attraction. Many others will be unconvinced.

A schooling in pooling

For starters, the figures shared during the press conference and in a presentation seem to be about open and virtual RAN more generally, rather than Nvidia in particular. In short, Docomo thinks it can cut total cost of ownership by 30% and power consumption by 50%, where the point of comparison appears to be a legacy network.

The savings stem partly from taking the baseband units normally installed at cell sites, consolidating them in small data facilities that each serve a larger number of sites, and using existing fiber networks to support the fronthaul links between facilities and sites. Yet this could be done without Nvidia or, indeed, any virtualization whatsoever, and Docomo has already pooled some of its baseband resources in this manner.

The appeal of RAN virtualization is that it would supposedly allow an operator to run software on the same cloud platform as other network and IT workloads, cutting out silos and reducing complexity and cost. But Nvidia's preference for offloading all Layer 1 software onto a separate "inline" accelerator – the GPU, in its case – has major detractors, including Intel, the main provider of those x86 CPUs.

"You can't have pieces of the network be cloud-native and pieces of the network not be cloud-native because then you end up with a mess for the operator, where they have to build two management systems, where they have to hire two sets of people who understand two different systems," said Sachin Katti, the general manager of the network and edge group at Intel, during a recent interview.

Ericsson, the world's biggest maker of 5G networks outside China, was even more specific in its criticisms this week. Matteo Fiorani, the head of Ericsson's cloud RAN product line, champions an alternative form of acceleration called "lookaside," where nearly all functions remain on the CPU, over the inline technique chosen by Nvidia and others. "If you go with the full Layer 1, then you lock yourself in because the Marvell card is very different from the Qualcomm card or an Nvidia GPU, so you can't really port the code," he said. "You have to redo it all the time, and that's not affordable."

Unlike some chip developers, Nvidia supplies the Layer 1 code as well as the silicon. Aerial, as the software is branded, cannot be deployed on another company's hardware, Ronnie Vasishta, Nvidia's senior vice president of telecom, freely admits. As for the software handling the Layer 2 and 3 functions on the x86 CPU, this would come from Fujitsu. Already, there are multiple suppliers in the mix where a traditional RAN would have one. That could also make cost savings hard to realize.

What seems most counterintuitive is the notion that Nvidia's chips could lead to energy savings. GPUs are widely regarded as power hogs, and the specs for the accelerator cards in question put the maximum power at between 230 and 350 watts. Qualcomm boasts a comparable inline accelerator card that can support high-bandwidth services with peak power consumption of less than 40 watts.

Of course, Qualcomm's card is not designed to handle AI as well. It is feasible that an operator making heavy use of edge-based AI applications alongside 5G RAN acceleration could see cost benefits. But what AI applications would require GPUs in this part of the network? Asked that question, Masaki Taniguchi, the head of Fujitsu's mobile systems business unit, said an operator using AI would be able to allocate resources to users, boosting throughput and lowering cost. But it is doubtful this – as opposed to investment in large language models – demands edge-based GPUs.

Brave steps and genuine disruption

Telcos are frequently criticized for being insufficiently brave, and Docomo may be laughing at more timid peers if its GPU charge does pay off. This time last year, no one would have expected the tech conversation a few months later to be dominated by generative AI. The telcos investing in the ability to support that on their own premises may see opportunities that others do not.

Undoubtedly, there are also some intriguing features of the Nvidia chip from a purely RAN perspective. "The Nvidia GPU is probably over-spec'd for a classic distributed RAN deployment," said Gabriel Brown, a principal analyst with Omdia (a Light Reading sister company) in comments shared via Teams. "Not a problem per se (what'll do a lot will do a little), but this solution might really shine in more centralized cloud RAN deployments. The power-saving comments seemed to refer to this C-RAN scenario where you can get some multiplexing benefit."

Nvidia has also talked about extending its platform into the Layer 2 area, where it could also help as an accelerator for certain functions. "When it comes to the Grace Hopper solution with Layer 2 acceleration, we're potentially talking about something genuinely disruptive," said Brown. "For example, this will really shine in C-RAN deployments where you want/need 'network MIMO' to schedule traffic to a user device from spatially separated radio sites. That's a way off in the standards and in implementation, but it indicates Nvidia is really thinking long term."

The key uncertainty today is how far this will go in the Docomo network. "We have already introduced a commercial 5G network," said Sadayuki Abeta, Docomo's global head of open RAN, during the press conference. "We will gradually expand our vRAN network. Our network is very much vendor-interoperable and so anywhere we can install vRAN and don't need to replace anything."

But it seems unlikely to be that easy. If the network is not virtualized today, it will not be using x86-based, commercial off-the-shelf servers, for one thing. Nor are its Layer 1 functions currently supported by Nvidia's software on Nvidia's GPUs. It is unclear if site radios would need replacing, too. All that would entail considerable expense. Docomo has more explaining to do.

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
