New cost problems emerge for open RAN

Telco operations teams want to put two servers at each mobile site, with one CTO saying this would kill the business case, according to HPE.

Iain Morris, International Editor

October 3, 2023

6 Min Read
The integration of semiconductor components with the server motherboard would unnecessarily drive up costs, argues Hewlett Packard Enterprise. (Source: Michael Vi/Alamy Stock Photo)

Three is more competitive than four, insist desperate-to-merge Vodafone and Three in the UK's four-player mobile market. But when it comes to suppliers of network products, four is most certainly better than three, the same companies would argue. In a zero-growth market, sharing revenues among too many telcos limits returns, crimps investment and ultimately hurts customers, goes the rationale. Yet Ericsson, which turned loss-making last quarter, could make a similar case for its own business. Surely the equipment sector is better off with two or three strong vendors than a herd of weaklings?

It's an awkward question for merger-mad telcos that love open radio access network (RAN) technology, a push to bring more suppliers into the market by standardizing the interfaces between different network elements. In a purpose-built RAN, those interfaces are closed, forcing operators to buy an entire system from a single vendor, or so they complain. With standardized interfaces, operators could theoretically mix different suppliers at the same mobile site. Specialists would have a chance to compete.

One of the initial claims was that open RAN would generate cost savings for telcos. Competition would lower prices, argued fans. Yet critics pointed out that managing a larger number of suppliers in a multivendor network usually drives up costs. Meanwhile, former specialists have given up specializing. The expansion of companies like Mavenir and Rakuten Symphony across the supply chain might help telcos avoid introducing too many vendors. But it naturally dilutes their research-and-development focus and rather defeats one purpose of open RAN.

It's far from being the sole cost problem for the fledgling concept. Besides standardizing interfaces, most open RAN advocates also want a cloud or virtualized RAN. Instead of using customized equipment – so-called appliances – they would deploy network software on commercial off-the-shelf (COTS) servers, tapping into data center economies of scale. But the approach some operators have taken could turn out to be far more expensive than building a traditional RAN.

Double the hardware

That's the observation of Geetha Ram, who, as head of RAN compute for server maker HPE, knows a thing or two about COTS hardware. The cost problem stems from the desire of some telco operations teams to put two servers at a mobile site for baseband processing, where they would previously have had just one appliance. One CTO for a Tier 1 telco told Ram that having two servers per site would destroy the business case.
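
To see why, consider a back-of-envelope sketch. All the figures below are illustrative assumptions rather than HPE or operator pricing, but they show how quickly a second server at every site swells a national hardware bill.

```python
# Back-of-envelope sketch of the two-servers-per-site problem.
# All figures are illustrative assumptions, not vendor pricing.

SITES = 10_000            # hypothetical national footprint
SERVER_COST = 12_000      # assumed price of one telco-hardened server, USD
APPLIANCE_COST = 15_000   # assumed price of one baseband appliance, USD

appliance_bill = SITES * APPLIANCE_COST
single_server_bill = SITES * SERVER_COST
dual_server_bill = SITES * 2 * SERVER_COST

print(f"One appliance per site: ${appliance_bill:,}")      # $150,000,000
print(f"One server per site:    ${single_server_bill:,}")  # $120,000,000
print(f"Two servers per site:   ${dual_server_bill:,}")    # $240,000,000
```

On these assumptions, a single server per site undercuts the appliance, but doubling up for redundancy pushes the bill 60% above the appliance baseline, the sort of swing that turns a business case on its head.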

Ram blames cultural factors. Operations teams are not familiar with deploying IT equipment outside data center facilities, where neighboring servers offer a backup in the event of a technical fault. Concerned about outages if a site server goes down, they want second servers for redundancy. Their mistake, according to Ram, is to think the servers built for telco operations are the same as those HPE sells to companies such as Walmart for IT workloads.

"What they don't understand is that our servers are telco-hardened," Ram told Light Reading during a recent conversation at HPE's offices in central London. "The ProLiant telco version and non-telco version are two different things." HPE adheres to a standardized set of guidelines for building telecom equipment known as NEBS (for network equipment-building system). During field tests with Comcast, a cable operator using HPE equipment in its operations, a pod of 26 servers was dumped in a thermal chamber heated to 61 degrees Celsius and left there for three full days. All survived without damage, said Ram.

But this "hardening" of servers for telco operations means they are just as customized as appliances, according to another source who spoke on the condition of anonymity. Both HPE and close rival Dell are building highly specialized NEBS-compliant servers, he said. Hyperscalers such as Amazon are doing the same, often using their own central processing units (CPUs) as opposed to Intel's x86 chips. The economies of scale lauded by open RAN devotees simply aren't there.

A server also lacks the inbuilt redundancy of an appliance, the source said. The latter comes with a passive backplane connecting multiple cards. If one goes down, a neighbor can shoulder the work. In a server, by contrast, the motherboard and its CPU represent a single point of failure. That's not an issue in a data center where other servers are nearby. But at a mobile site, an operator determined to virtualize might be better off loading cloud-management software onto Ericsson or Nokia appliances, according to this source.
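
The redundancy argument comes down to simple availability arithmetic. The sketch below uses assumed availability figures purely for illustration, but it captures why a lone motherboard worries operations teams.

```python
# Availability arithmetic behind the redundancy concern.
# The 0.999 figures are assumptions for illustration, not measured data.

def parallel_availability(unit_availability: float, units: int) -> float:
    """Availability of a system that works as long as any one unit works."""
    return 1 - (1 - unit_availability) ** units

CARD = 0.999         # assumed availability of one baseband card
MOTHERBOARD = 0.999  # assumed availability of one server motherboard

appliance = parallel_availability(CARD, units=2)  # cards back each other up
server = MOTHERBOARD                              # single point of failure

print(f"Appliance with redundant cards: {appliance:.6f}")  # 0.999999
print(f"Single server:                  {server:.6f}")     # 0.999000
```

Six nines against three: roughly half a minute of downtime a year versus nearly nine hours. That gap is what drives operations teams to ask for the second server, and what wrecks the economics.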

Clash over cards

Cards can also be slotted into servers, however, thanks partly to a standardized interface called PCIe (for peripheral component interconnect express). And some open RAN vendors are championing the use of PCIe cards in the RAN. A technique dubbed "inline" acceleration would offload the demanding "Layer 1" software from the CPU to customized silicon hosted on one of those cards. If a telco needs extra capacity, it can add a card instead of buying a whole new server.
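
The economics of that argument are easy to model. In the sketch below, the prices and the cells-per-unit figure are invented for illustration; the point is simply that inline capacity grows in card-sized increments rather than server-sized ones.

```python
# Scaling capacity by card versus by server (all figures assumed).

SERVER_COST = 12_000   # assumed telco-hardened server, USD
CARD_COST = 3_000      # assumed inline accelerator PCIe card, USD
CELLS_PER_UNIT = 12    # assumed cells supported per server or per card

def upgrade_cost(extra_cells: int, unit_cost: int) -> int:
    """Cost of adding capacity in whole units (cards or servers)."""
    units_needed = -(-extra_cells // CELLS_PER_UNIT)  # ceiling division
    return units_needed * unit_cost

print(upgrade_cost(12, CARD_COST))    # 3000: one extra card
print(upgrade_cost(12, SERVER_COST))  # 12000: one extra server
```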

It's an approach Ram likes, and that puts her at odds with Intel, one of her biggest suppliers. The chipmaker has been scathing about inline accelerators. Not without some justification, it argues that customized silicon needs proprietary code on top of the cloud-management software used for the rest of the virtualized network. "Lookaside," its preferred technique for RAN acceleration, keeps most functions on the CPU.

With Sapphire Rapids, its latest family of server CPUs, Intel has worked to integrate the accelerator and the CPU on the same die. And it wants to go even further, according to Ram. Until now, Intel's Columbiaville-branded connectivity chips, which support fronthaul links between baseband servers and radios, have been provided on network interface cards (NICs). But Granite Rapids, the next generation of products, would put these NIC chips on the same motherboard as the CPU, said Ram.

It's a headache for the HPE executive, who said it would force her company to build a bigger range of servers to suit the various connectivity needs of different telcos. Intel's approach means an operator keen to support both FDD (frequency division duplex) and TDD (time division duplex) communications would also need two servers, said Ram. "The way you can do it with inline is to have an inline FDD card and an inline TDD card. Tell me how you do it with lookaside, because if a CPU is pinned to the accelerator and you have a CPU-accelerator combo, that is doing FDD or TDD. It can't do both."
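
A minimal model of that bill of materials, using hypothetical component names, makes Ram's point concrete.

```python
# Hardware needed at one site to support both FDD and TDD (hypothetical BOM).

def inline_site() -> list[str]:
    # One server hosts a mode-specific accelerator card for each duplex scheme.
    return ["server", "FDD inline card", "TDD inline card"]

def lookaside_site() -> list[str]:
    # Each integrated CPU-accelerator combo is pinned to one duplex scheme,
    # so covering both means deploying a second server.
    return ["server with FDD combo", "server with TDD combo"]

print("Inline:   ", inline_site())     # one server, two cards
print("Lookaside:", lookaside_site())  # two servers
```

If a card costs a fraction of a server, the inline site wins on this logic, though Intel would dispute the premise.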

Telcos could work around the connectivity issue by buying NICs that match their specific needs, but the chips integrated on the motherboard would then sit idle, an unnecessary expense. "I have another NIC with a PCIe card again, which means I am incurring the cost of this NIC that I am not using," said Ram.

The use of accelerators and PCIe cards continues to split opinion in the world of open and virtual RAN technology. The best example of that is the divergence between Ericsson and Nokia, the two biggest RAN vendors outside China, with Ericsson preferring integrated lookaside accelerators and Nokia swinging behind the inline technique and its PCIe cards. Each seems to have its pros and cons. But any evidence of cost savings related to these new architectures is still hard to find.

About the Author

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
