The virtual RAN benefits are as clear as mud, unless you're Intel
Why any telco would bother to virtualize its radio access network (RAN) was not immediately obvious to Nokia's Tommi Uitto. "I have always asked the same question and now I think I know the answer," said the head of the mobile networks business group, Nokia's biggest by sales, at the InterContinental Barcelona, where the Finnish vendor had just revealed a new logo of half letters and sharp angles before this year's Mobile World Congress.
Ever since RAN executives first began sniffing around the concept of virtualization, the industry seems to have been intent on exploring the how rather than the why. In a traditional RAN, computing hardware and software are tightly coupled in servers usually installed at every mobile site. Virtualization would end this cozy relationship, allowing RAN software to run on general-purpose processors (GPPs) rather than custom-made chips. For Intel, the dominant force in GPPs, the attractions are clear. For most other players, including the operators of these networks, the benefits are hard to see.
To start with, not a single sane person in the industry, including virtualization's cheerleaders, thinks a virtual RAN will offer a performance boost over a traditional one. Just about everyone thinks the opposite, in fact. Even Intel is introducing hardware "accelerators" to overcome the drawbacks of putting RAN software on GPPs. But using accelerators based on customized chips means abandoning virtualization, according to critics.
Supporters of the concept used to argue that a virtual RAN would be cheaper to build and operate. One idea seemed to be that GPPs, with their vast economies of scale, would cost less than customized RAN silicon. But that silicon also ships in huge quantities thanks to technology standards and years of consolidation.
Meanwhile, the GPP market looks even less competitive than the RAN industry does. Last year, Intel still held a 71% share of the market for central processing units (CPUs) in data centers, according to Counterpoint Research, making Ericsson's 39% share of the RAN market outside China appear small by comparison. In 2020, before more recent investment challenges, the Intel unit responsible for those CPUs reported an operating margin of 47%. At Ericsson's networks business that year, it was 18.6%.
The performance shortcomings of Intel's GPPs essentially mean they score low marks for power efficiency when compared with customized chips. In September last year, Japan's NTT Docomo and NEC said they had realized a 72% power saving in the 5G core by using customized silicon, based on the designs of UK-headquartered Arm, rather than Intel's x86 architecture. "When you see numbers like that, when sustainability and power efficiency have bubbled up to the top and with energy costs being what they are today, Arm suddenly becomes very attractive," said Panch Chandrasekaran, the head of Arm's 5G carrier infrastructure business.
So why attempt RAN virtualization at all? A big reason is to ensure that software in the central units (CUs) and distributed units (DUs) of the RAN can be hosted on the same underlying cloud platform as other network and IT workloads, according to Nokia's Uitto. This shared container-as-a-service (CaaS) layer would naturally cost less than maintaining multiple platforms. "If you can use the same CaaS layer for many workloads, including the CU and DU, it is possible to get savings compared with a situation where it is purpose-built radio," said Uitto.
After virtualization, operators could also realize savings by pooling their RAN resources and owning less equipment. A CU, supporting the higher, less latency-sensitive Layer 2 and 3 parts of the RAN software stack, would be able to serve multiple DUs in a rearchitected network that appeals to Uitto. "With a central site like a baseband hotel, there could be pooling gains in that the same server can support a large number of DUs," he said.
But accelerators risk spoiling a part of this story. Most of Intel's semiconductor rivals have opted for a technique called inline acceleration, which introduces network interface cards (NICs) featuring customized Arm-based silicon to handle the compute-intensive Layer 1 baseband software. Once software has been "hard-coded" for this silicon, those Layer 1 functions are no longer virtualized, according to Sachin Katti, the head of Intel's network and edge group.
Essentially, that means an operator would not be able to upgrade this software with the same Kubernetes tools used elsewhere (Kubernetes being the main open-source platform for managing containers). There could be other drawbacks, too. In a fully virtualized or cloud-native network, IT resources can be shared among different workloads. "You can use server resources for other applications when it is quiet, and there is a problem at the moment with overprovisioning," said Gabriel Brown, a principal analyst with Heavy Reading (a Light Reading sister company). NICs would make that difficult if not impossible.
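The resource-sharing point Brown makes can be sketched in a Kubernetes manifest. This is a minimal, hypothetical illustration only (the pod names, image references and resource figures are invented): a DU baseband workload pinned to guaranteed CPU sits on the same server as an ordinary IT workload that can burst into spare capacity when the cell is quiet.

```yaml
# Hypothetical pods sharing one server; names and figures are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: du-baseband
spec:
  containers:
    - name: layer1
      image: example.com/ran/du:1.0        # invented image reference
      resources:
        requests: {cpu: "8", memory: 16Gi}
        limits:   {cpu: "8", memory: 16Gi} # requests == limits: Guaranteed QoS for real-time work
---
apiVersion: v1
kind: Pod
metadata:
  name: analytics-batch
spec:
  containers:
    - name: worker
      image: example.com/it/analytics:1.0  # invented image reference
      resources:
        requests: {cpu: "1", memory: 2Gi}
        limits:   {cpu: "16", memory: 32Gi} # Burstable QoS: may use idle capacity off-peak
```

The trade-off critics describe follows from this picture: once Layer 1 is hard-coded onto an inline NIC, that processing capacity lives on the card rather than the CPU, so it cannot be reclaimed by other workloads in the way spare CPU cycles can.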
Inline's supporters reject this criticism. Some, including Uitto, doubt there will be much, if any, centralization of the DUs. "The DU is Layer 1 and real-time-sensitive, which you must do close to the radio," he said. If DUs remain at or close to radio sites because of these performance constraints, the computing resources they host are unlikely to be used for anything except baseband processing.
Uitto also says that hyperscalers have readily embraced NICs, despite Intel's opposition to them as a complicating factor in cloud-native settings. "Every day there are some workloads that require network interface cards," he said, reporting on his recent interactions with these companies. Thanks to a standard interface called PCIe (for peripheral component interconnect express), any compatible NIC can be slotted into any compatible server. "It is not going in the direction of general-purpose processors in the data center," said Uitto. "It is going in the custom direction, which was a bit of a surprise to me."
Brilliant, if you've got fiber
Yet many operators remain unpersuaded by the various arguments about acceleration. Today, introducing these chips means "you end up with a destination environment that costs more than your current solution," said Howard Watson, the chief security and networks officer for BT. Another concern for the UK telco executive is the feasibility of baseband resource pooling. "Architecturally, that makes some sense, but it only really works brilliantly in a market that has dark fiber."
The need to have fiber lines between the CUs, DUs and radio units (RUs) in a rearchitected virtual RAN explains why some operators remain apathetic. While resource pooling might cut expenditure on equipment and operations, telcos would first have to spend heavily on laying fiber and preparing data-center facilities. It could be many years before they are able to realize any savings.
In fiber-rich Japan, NTT Docomo has already pooled some of its baseband resources. It has also opened the interfaces between DUs and RUs so that it can buy each part separately and not in a package from one big vendor, as operators usually do. Yet Docomo's network is still not virtualized owing to previous concerns about performance and power consumption, according to Sadayuki Abeta, the operator's global head of open RAN solutions.
Docomo is now exploring the virtual RAN options and hopes to make its first moves this year. One company view continues to be that buying commercial off-the-shelf (COTS) servers in bulk for numerous purposes could work out more economically than shopping around for hardware dedicated to specific tasks. But the evidence is still not there, and Docomo is undecided about acceleration. Trials of inline acceleration have been carried out with AMD, Nvidia and Qualcomm, but an Intel-only solution is also in the mix, said Abeta.
Given the hesitancy in technologically adventurous Japan, it is no wonder that virtual RAN still accounts for just a low, single-digit percentage of the global RAN market. To accept inline as a deployment option, telco executives like BT's Watson need to be convinced there are no costly trade-offs of the sort highlighted by Intel's Katti. But few operators will be prepared to compromise on performance. Katti confidently predicts Intel's GPPs featuring its own form of integrated acceleration will beat custom silicon in future. The question then is how telcos align their push for supplier diversity with the prospect of a sector dominated by Intel.
- Good luck building a virtual, open RAN – there's no such thing
- Intel boasts open RAN monopoly as Nokia turns to others
- Ericsson networks boss shares open RAN hopes and fears
- Intel risks losing Arm wrestle as open RAN splits into rival camps
- Vodafone slams Intel and its chip rivals on standardization
— Iain Morris, International Editor, Light Reading