
Nvidia 'superchip' for SoftBank strikes at Intel, boosts Arm in open RAN

The hottest chipmaker in the world right now has a product for managing radio access network (RAN) and AI workloads, and one of Japan's biggest telcos has bitten.

Iain Morris

May 30, 2023


Even into her late seventies, Grace Hopper had a fearsome, not-to-be-trifled-with look. Nvidia's new "superchip," named after the legendary computer scientist and rear admiral, will appear just as intimidating to the chip company's various rivals, chief among them Intel, the dominant supplier of central processing units (CPUs) for data centers and related sectors. Grace Hopper is arguably the most serious threat Intel has faced so far.

Launched the week after Nvidia's share price climbed 24% in one day, amid excitement about what an AI boom could mean for Nvidia, Grace Hopper unites CPU and graphics processing unit (GPU) technology in a single product. GPUs, Nvidia's speciality, have turned out to be much better than Intel's x86-based CPUs at supporting the multitudinous faces of artificial intelligence (AI). Nvidia is now combining them with its own CPU (branded Grace minus the GPU), which uses the blueprints of Arm, the UK-headquartered company it had been trying to buy from Japanese owner SoftBank until regulators sank that deal in late 2021.

Throw in another goodie called BlueField, a so-called data processing unit (DPU), and Nvidia has a formidable CPU/GPU/DPU product to upset Intel. While the bigger opportunity lies in the broad data center and AI markets, Grace Hopper plus BlueField could also be used in virtual and open radio access network (RAN) technology, a small but growing sector Intel has so far dominated as the world's biggest supplier of general-purpose processors. In SoftBank, Nvidia already has a client that plans to use its latest chips to handle both AI and 5G workloads.

Figure 1: Nvidia CEO Jensen Huang is riding an AI and accelerated computing wave. (Source: Nvidia)

How would this work from a RAN perspective? In most of today's virtual RANs, Intel's CPUs directly handle a big chunk of software (all the functions categorized as Layers 2 and 3, or L2+). Because they are more computationally demanding, some or all of the baseband functions (Layer 1) are often handled by accelerators, extra silicon dedicated to that task.

Two acceleration techniques have surfaced: lookaside, which continues to rely on the CPU for all but a few functions; and inline, which offloads everything and typically puts the silicon on a separate card that can be slotted into a server. Intel backs lookaside, while Arm's licensees tend to favor inline.
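The difference between the two techniques can be pictured as a routing decision for each Layer 1 function. The sketch below is a toy model of that split; the function names and workload list are illustrative placeholders, not real RAN software.

```python
# Toy model of the two Layer 1 acceleration approaches described above.
# All names here are illustrative, not drawn from any real RAN stack.

L1_FUNCTIONS = ["fft", "channel_estimation", "ldpc_decode", "beamforming"]

def lookaside_split(functions, offloaded=("ldpc_decode",)):
    """Lookaside: the CPU keeps most of Layer 1 and hands only a few
    compute-heavy functions to the accelerator."""
    return {f: ("accelerator" if f in offloaded else "cpu")
            for f in functions}

def inline_split(functions):
    """Inline: the entire Layer 1 pipeline is offloaded, typically to a
    separate accelerator card slotted into the server."""
    return {f: "accelerator" for f in functions}
```

Under the lookaside model, for example, only the decoding step lands on the accelerator while everything else stays on the CPU; under inline, every Layer 1 function is offloaded.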

Grace would replace the x86 chip entirely at L2+, with Hopper (the GPU) used for inline acceleration at Layer 1, said Ronnie Vashista, Nvidia's senior vice president of telecom, answering questions via email. The BlueField DPU, meanwhile, would run timing synchronization for open fronthaul 7.2, an interface between baseband units and radios. Developed by the O-RAN Alliance, a specifications body, that interface is supposed to guarantee interoperability between different suppliers. Such interoperability was missing from older specifications, forcing operators to buy all their RAN products from the same vendor.

The net result, boasts Nvidia, is a product able to support downlink throughput of 36 Gbit/s, a level operators have struggled to achieve with industry-standard (read Intel-based) servers. Thanks to Arm's designs and Nvidia's software, the company also claims it is two-and-a-half times more power efficient than competing products.

Figure 2: A close-up of Nvidia's latest Grace Hopper-branded superchip. (Source: Nvidia)

Grace under pressure

None of this means Nvidia is guaranteed to eclipse Intel as the virtual RAN daddy. Just as Intel has failed to make headway in the smartphone sector that Arm dominates, so Arm's licensees have struggled to make data-center inroads for years. In 2022, they had a combined market share of less than 10%, according to Counterpoint Research, with Intel and AMD (which also uses x86 architecture) serving the rest.

While lauded for their energy efficiency, Arm's chips are typically less powerful per core than x86 processors and therefore tend to require more cores, the building blocks of a CPU, to perform the same tasks. Grace comes with 72 Arm cores, whereas server maker HPE is used to dealing with 32-core x86-based CPUs for the telco network. Arm, then, might entail higher upfront costs.

That said, Geetha Ram, HPE's head of telco compute, recently told Light Reading that 128-core CPUs sold by Ampere Computing, another Arm licensee, cost her about the same as 32-core CPUs from Intel and brought a 35% saving in power when used to support the user plane function in a telco network.
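Some back-of-the-envelope math shows why those figures matter. The calculation below uses a placeholder price and wattage (neither was disclosed) to work through the two quoted claims: roughly equal pricing for 128 Arm cores versus 32 x86 cores, and a 35% power saving.

```python
# Illustrative arithmetic for the comparison quoted above. The price
# and power values are placeholders, not real figures from HPE.

intel_cores, ampere_cores = 32, 128
unit_price = 10_000.0  # placeholder: "about the same" price for both chips

intel_cost_per_core = unit_price / intel_cores
ampere_cost_per_core = unit_price / ampere_cores

# Same money buys four times the cores, so per-core cost falls 4x.
cost_ratio = intel_cost_per_core / ampere_cost_per_core

# The quoted 35% power saving on the user plane function.
intel_power_kw = 1.0  # placeholder baseline
ampere_power_kw = intel_power_kw * (1 - 0.35)
```

If the pricing claim holds, the per-core cost argument against Arm largely evaporates, leaving the power saving as a straight gain.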

Figure 3: A good year: Nvidia's share price ($). (Source: Google Finance)

A much bigger obstacle is probably the lack of a supporting ecosystem for Arm. Intel has cultivated much closer relationships with server makers like Dell and HPE, as well as with cloud-computing specialists such as Red Hat, VMware and Wind River and their containers-as-a-service (CaaS) platforms.

On the RAN side, specifically, it has spent years developing its own reference architecture, called FlexRAN, for Layer 1 software. Ericsson, one of the world's biggest RAN software developers, now has a fully deployable RAN stack that can be used with x86 chips from either Intel or AMD.

Porting this to Arm-based hardware would probably not be doable without some reengineering. "The same logic would apply but you'd have to rewrite the code to suit the processor," said Joe Madden, the founder and president of analyst firm Mobile Experts, during a recent conversation with Light Reading. "I really believe it's economic forces that bring those barriers down. Once you're at a level where it's just nuances of different processors, then when the market gets big enough those nuances go away."

Aerial takes flight

Nvidia's answer to FlexRAN appears to be something it calls Aerial. It was previously described to Light Reading (by Nvidia) as a Layer 1 software platform that could be combined with third-party products addressing higher layers. RAN software specialists including Altran (owned by Capgemini), Mavenir and Radisys (owned by India's Reliance) were identified as Nvidia partners back in September 2021.

"We have a number of ISV [independent software vendor] partners that are developing L2+ on the Arm ecosystem and on Grace," said Vashista. "We are not disclosing for Grace specifically at this stage but we have shown already a Radisys L2+ stack with an Nvidia accelerated Layer 1 hosted on an Arm CPU at MWC-B [Mobile World Congress Barcelona]."

Nvidia also has support from server makers, said Vashista, without disclosing names. On the cloud side, meanwhile, it is currently collaborating with Red Hat and Wind River. "Our goal is to enable every CaaS player," said the Nvidia executive.

Japan's Fujitsu is another RAN partner, and Greg Manganello, its network services boss, provided a strong endorsement of Nvidia's AI-plus-RAN capability during an interview at MWC. "If you want high-performance edge, we are going to pitch Nvidia, and it can do the analytics and has enormous capacity," he said.

Figure 4: Japan's SoftBank will be an early customer of Nvidia's superchip. (Source: knowmadic media on Flickr CC 2.0)

This seems to be how SoftBank plans to use Nvidia. While relatively few details have been made available so far, the Japanese operator has said it wants to build numerous edge data centers that can host AI as well as wireless applications on a common server platform for cost and power efficiency.

This means the 5G part effectively comes free of charge, said Vashista. "The server is there for AI. It's the multi-use aspect and completely software-defined 5G RAN that is so unique, and it's combined with the performance of the RAN stack."

What's more, unlike the products being marketed by chip rivals Marvell and Qualcomm, Nvidia's Hopper GPU accelerator does not have to fit on a separate card because it is integrated with the Grace CPU. That addresses a concern of executives such as Tareq Amin, the boss of Japanese telco Rakuten Mobile, who views cards as an extra cost and complication and has ruled out buying them. For telcos that do not share his worries, Nvidia happens to have an accelerator card (the AX800) up its sleeve that can be used with an x86-based server.

Still, Nvidia's pitch implies the economics would be far less favorable if a customer had zero interest in AI and were using its products in a RAN-only scenario. How Nvidia measures up here on a like-for-like basis is unclear, but GPUs have been criticized before now as a power-hungry choice for Layer 1 acceleration.

When it comes to SoftBank, there is no information on whose RAN software will be combined with Nvidia chips during rollout, which vendor is contributing the radios that can synchronize with this software and whose cloud underpins it all. Nor has anything been said about the pace of deployment and what it means for SoftBank's existing RAN partners. Keep watching this space.

Related posts:

Nvidia crashes Intel's open RAN monopoly with x86 bypass
HPE and Ampere take aim at Intel with vision of Arm-based open RAN server
Intel's CPU decline may portend its RAN fate
Ericsson and Nokia go opposite ways on open RAN
Rakuten's Tareq Amin: RAN chips biz needs a massive shake-up

— Iain Morris, International Editor, Light Reading

About the Author(s)

Iain Morris

International Editor

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
