Religious divides can have lasting effects. The Great Schism of 1054 broke the Christian church in two, creating the Catholic and Orthodox denominations that still exist today. In the world of the radio access network (RAN), Sweden's Ericsson and Finland's Nokia are usually found worshipping at the same altar, regularly meeting up in the temple of the 3GPP to coordinate rites. But on one point of observance – the silicon used in new virtual and open RAN technology – they seem poles apart. Arcane as it might seem to the layperson, the schism could determine their fortunes.
Most networks today rely on application-specific integrated circuits (ASICs) to process signals (baseband or Layer 1, in industry lingo). As the name implies, these are customized chips with tightly integrated hardware and software features. But telecom operators are pushing for networks to be more open and virtualized. This would allow them to separate the software from the hardware and run it on the same general-purpose processors (GPPs) used for other network functions and IT resources.
The problem is that GPPs cope poorly with the computationally intensive needs of Layer 1. To address that, equipment vendors have proposed the use of hardware accelerators, additional silicon to relieve the central processing unit of its RAN burden. Where Ericsson and Nokia diverge is on the nature of these accelerators.
Figure 1: A silicon wafer used to make chips inside a fab owned by Taiwan's TSMC.
(Source: TSMC)
Two broad choices have emerged: lookaside (also called a selected function HW accelerator) and inline (also known as a full L1 accelerator). Lookaside offloads only select Layer 1 functions to the accelerator for processing, and processed data is returned to the CPU before it continues its journey to the network core. Unsurprisingly, given its heavy reliance on the CPU, lookaside is backed by Intel, the world's largest provider of GPPs.
With inline, by contrast, the CPU is barely needed for Layer 1 at all. Instead, functions are transferred to the accelerator, minimizing the need for an expensive CPU in the distributed unit, the physical part of the RAN where most Layer 1 processing takes place. Just as Intel backs lookaside, so inline is championed by a group of Intel's competitors whose silicon designs use the blueprints of UK-based Arm. They include Marvell Technology, Nvidia and Qualcomm.
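The difference is easiest to grasp as a data path. The Python sketch below is purely illustrative (the class and function names, and the choice of which Layer 1 functions get offloaded, are assumptions rather than any vendor's implementation), but it shows how lookaside bounces data between the CPU and the accelerator for selected functions, while inline keeps the whole pipeline on the accelerator before traffic heads toward the core.

```python
# Illustrative sketch only: the class and function names, and the split of
# Layer 1 tasks, are assumptions made for explanation, not vendor software.

class Processor:
    """Stand-in for either a general-purpose CPU or a hardware accelerator."""
    def __init__(self, name):
        self.name = name

    def run(self, function, data):
        # A real implementation would do signal processing; here we just tag
        # the data with where each step was executed.
        return data + [f"{function}@{self.name}"]


def lookaside_path(samples, cpu, accelerator):
    """Selected-function (lookaside) model: the CPU owns Layer 1 and offloads
    only the heaviest functions, with results handed back to the CPU each time."""
    data = cpu.run("channel_estimation", samples)
    data = accelerator.run("ldpc_decode", data)   # offloaded, then returned
    data = cpu.run("demapping", data)
    return cpu.run("forward_to_core", data)


def inline_path(samples, accelerator):
    """Full Layer 1 (inline) model: the accelerator card handles the whole
    pipeline, so the CPU barely touches the signal data."""
    for step in ("channel_estimation", "ldpc_decode", "demapping",
                 "forward_to_core"):
        samples = accelerator.run(step, samples)
    return samples


if __name__ == "__main__":
    cpu, card = Processor("cpu"), Processor("accelerator")
    print(lookaside_path([], cpu, card))  # steps alternate between cpu and accelerator
    print(inline_path([], card))          # every step stays on the accelerator
```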
Finnline
Nokia came out strongly in favor of inline technology around the time of this year's Mobile World Congress. "We have looked at the different alternatives and concluded that inline delivers better performance than lookaside," said Tommi Uitto, the head of Nokia's mobile networks business group, during an interview with Light Reading in Barcelona.
The reasons were explained in a white paper Nokia published at the time. In his accompanying blog, Uitto pointed out that inline accelerators tend to use the same Arm-based silicon found in traditional RAN technology, prized for its power efficiency. An endorsement of that comes from Geetha Ram, the head of telco compute for HPE, which has already examined the impact of using Arm-based GPPs made by Ampere Computing to run the user plane function (UPF) in the core.
Figure 2: Nokia's Tommi Uitto talks to customers at Mobile World Congress.
(Source: Nokia)
"One of our customers said we need a power plant to run the UPF on today's x86 architecture and can you see what can be done on Arm," Ram told Light Reading. A study she carried out showed Arm technology used on a like-for-like basis would result in power savings of 35% and lower operating costs.
Another big attraction for Nokia is that inline accelerators are supplied on separate network interface cards (NICs or SmartNICs). Provided there is compatibility with PCIe, an industry standard, these can be slotted into servers as and when needed, meaning Layer 1 capacity can be added independently of the CPUs. With lookaside, Nokia complains, this simply isn't possible. Adding capacity means increasing the number of CPUs, it says.
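As a toy illustration of that scaling argument (the capacity figures below are invented, not vendor data), Layer 1 capacity in the inline model is a function of how many accelerator cards are slotted in, while in the lookaside model it tracks the CPU count:

```python
# Toy model of Nokia's scaling argument. Capacity figures are invented for
# illustration only and do not come from either vendor.

def inline_layer1_capacity(accelerator_cards, cells_per_card=4):
    """Inline: Layer 1 capacity grows by slotting in more PCIe accelerator
    cards, independently of how many CPUs the server contains."""
    return accelerator_cards * cells_per_card

def lookaside_layer1_capacity(cpus, cells_per_cpu=4):
    """Lookaside: the CPU stays in the Layer 1 path, so adding capacity means
    adding (or upgrading) CPUs."""
    return cpus * cells_per_cpu

# Doubling Layer 1 capacity on an inline server means adding a card;
# on a lookaside server it means adding a CPU.
print(inline_layer1_capacity(accelerator_cards=2))  # CPU count unchanged
print(lookaside_layer1_capacity(cpus=2))            # requires an extra CPU
```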
Nixing the SmartNICs
But Ericsson insists lookaside is the superior and more economical choice. In a white paper it co-authored with Verizon last year, it is as disparaging about inline as Nokia is about lookaside. In the latest versions, lookaside can be integrated on the same motherboard as the CPU, obviating the need for a separate accelerator card. That's not doable with inline, and its PCIe cards are power hogs, argues Ericsson.
Support on that point comes from Tareq Amin, who runs the telco subsidiaries (Mobile and Symphony) of Japan's Rakuten. "I have a good relationship with Qualcomm, but where I diverge is that I'll never buy a PCIe card for acceleration," he told Light Reading during a recent interview. "It is extremely inefficient. If I can buy one CPU that does the job of this accelerator that sits somewhere else, it is cheaper."
Figure 3: Energy efficiency of lookaside versus inline, according to Ericsson (Source: Ericsson, Verizon)
(Note: The 1x baseline is based on measurements on a distributed unit workload supporting 600MHz x 4 layers on a single CPU with selected function HW accelerator)
Yet Nokia hits back. Putting the accelerator on the same board as the CPU would mean it is no longer a GPP but a "custom SoC [system-on-a-chip] for cloud RAN," said the Finnish equipment maker in its own white paper. This fake GPP would be costly and carry overheads for power consumption "in any other application use case."
HPE, which makes servers for open and virtual RAN deployment, wants to avoid taking sides in the dispute. Ram's view is that inline can make sense in the most demanding conditions even though lookaside is usually good enough. An inline PCIe card she examined was much pricier than she expected it to be, she told Light Reading.
But she rejects the claim that PCIe cards necessarily come with higher upfront costs, and says putting all the components on the same board is "almost going back to a traditional-RAN, proprietary type of architecture." Ram points out that most of the basic components are needed regardless of whether they show up on the motherboard or a PCIe card. She also believes costs are harder to bury or inflate when products are more disaggregated.
Missionary zeal
Both Ericsson and Intel have also championed lookaside as a more portable and cloud-native option. Last month, Ericsson claimed its Layer 1 software could be deployed on AMD as well as Intel chips without modification. Understood to be working closely with Marvell on its inline accelerators, Nokia has not boasted similar portability so far. But Ericsson is typically scathing about PCIe cards here, saying they "often require software specifically developed for their hardware or chipset, which eliminates the possibility to create a common cloud compute [infrastructure]."
Yet SmartNICs are commonly used in cloud computing today. Thanks to abstraction layers developed by hyperscalers as well as cloud specialists including Red Hat, VMware and Wind River, they can be managed with the same Kubernetes tools used elsewhere (Kubernetes being the main open-source technology for managing cloud-native software), according to various sources.
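As a hypothetical example of what that management can look like, the sketch below uses the official Kubernetes Python client to request an accelerator card as an extended resource on a pod. The resource name "example.com/l1-accelerator" and the container image are placeholders; a real deployment would depend on whichever device plugin and abstraction layer the card vendor or cloud platform supplies.

```python
# Hypothetical sketch of scheduling a Layer 1 workload onto a server with a
# SmartNIC accelerator using standard Kubernetes tooling. The resource name
# "example.com/l1-accelerator" and the container image are placeholders.
from kubernetes import client, config

def build_layer1_pod():
    container = client.V1Container(
        name="layer1-du",
        image="example.com/du-layer1:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            # Request one accelerator card the same way GPUs and other devices
            # are requested, so ordinary Kubernetes scheduling applies.
            limits={"example.com/l1-accelerator": "1",
                    "cpu": "8", "memory": "16Gi"},
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name="du-layer1"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig is available
    pod = build_layer1_pod()
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```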
Figure 4: A traditional Ericsson basestation on top of a building.
(Source: Ericsson)
The schism could pit Ericsson and Intel on one side against Nokia and the Arm camp on the other, while companies like HPE attempt to remain agnostic. That could matter. While virtual and open RAN technology accounts for only a small percentage of today's market, most analysts expect its share to grow dramatically in the next few years.
As it does, the technology decisions made by the world's operators could mean choosing between those rival camps. By co-authoring a pro-lookaside white paper with Ericsson, Verizon already seems to have picked a sect and publicized the decision, and Intel's head start on open RAN is likely to put lookaside in front today. Expect to witness a lot more inline preaching by Nokia and some religious attacks from the other side.
— Iain Morris, International Editor, Light Reading