
New silicon may speed up open RAN and spoil Intel party

Inline accelerators could inject some much-needed competition into the open RAN market, if their backers can build support.

Iain Morris

September 21, 2021


If semiconductor systems were sportspeople, the x86 processor would probably be a decathlete. A versatile and experienced all-rounder, competitive in multiple disciplines, it rarely beats the specialists on performance and efficiency. Pitting it against purpose-built silicon would be like holding a 1500-meter race between Norway's Jakob Ingebrigtsen, this year's Olympic champion in that event, and Canada's Damian Warner, who collected gold in the decathlon. The difference is more than a minute in Ingebrigtsen's favor.

This is a problem for open and virtualized RAN. Popular among some of the world's largest operators, that concept is partly about combining general-purpose equipment with third-party software to produce alternatives to the big kit vendors and their customized gear. Unfortunately, the radio access network turns out to be a venue where these processors have struggled.

Figure 1: Norway's Jakob Ingebrigtsen, the best accelerator over 1,500 meters.

How badly? Back in early 2019, Peter Zhou, a senior executive at Huawei, reckoned a central processing unit (CPU) that came from Intel's stock of x86 chips used ten times as much power as Huawei's equipment in a 4G basestation. An opponent of open RAN, the Chinese company has a vested interest in slating the technology. But nobody disputes there has been a performance gap.

Whatever the size of that gap today, closing it is starting to look easier. A solution has involved using other chips as "accelerators," a muscular pair of legs for the ailing CPU. Until now, these accelerators have mainly belonged to a variety known as "lookaside." But another type, dubbed "inline," represents a more radical departure. These inline accelerators might not only boost performance. They could also make the open RAN market less dependent on Intel and the ecosystem it has built.

The inline-versus-lookaside trade-offs

In this context, the difference between lookaside and inline comes down to how much the CPU is relied on for processing that happens in Layer 1. Communications networks are conventionally broken down into layers, with Layer 1, or the physical layer, encompassing functions such as encoding, signaling, data transmission and reception. In open RAN, most of it takes place in a box called the distributed unit (DU), installed at or near the radio site. It is this part of the network that has been troublesome for the standard CPU.

With lookaside, the CPU would hand off some Layer 1 functions to the accelerator and then pick up the baton after those have been processed. It is like bringing in a faster athlete for some laps of the race. Inline is more like asking that athlete to go the whole distance.

Figure 2: How lookaside and inline compare (Source: Ericsson)

"You are essentially replacing that whole functionality for part of the solution with another piece of software and hardware that is doing the whole thing for you," says Simon Stanley, a principal consultant for Earlswood Marketing and analyst-at-large for Heavy Reading (a sister company to Light Reading). "All the processor is doing is managing that. It is getting all the data and controlling it, but it is not actually in the data flow at all."

A key attraction is lower latency, a measure of the delay that occurs when signals are sent over the network. With lookaside, the processor is still in the loop and the handover between it and the accelerator can be "a restricting factor in terms of performance," according to Stanley. "Sending data out and waiting for it to come back is more bandwidth and that can become a bottleneck. And that, again, slows you down."
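The two data paths can be sketched, very roughly, in a few lines of Python. Everything here is invented for illustration only — no real DU software or vendor API looks like this — but it captures the distinction Stanley describes: in lookaside the CPU sits in the data flow and shuttles each block to and from the accelerator, while in inline the CPU only manages and never touches the Layer 1 data at all.

```python
# Toy model of the two acceleration patterns. All function names are
# hypothetical stand-ins, not any vendor's actual interface.

def accelerator_fec(transport_block):
    """Stand-in for one offloaded Layer 1 function (e.g. channel decoding)."""
    return f"decoded({transport_block})"

def cpu_layer1_rest(decoded):
    """Stand-in for the remaining Layer 1 work the CPU keeps in lookaside."""
    return f"l1({decoded})"

def accelerator_full_layer1(transport_blocks):
    """Inline: the entire Layer 1 pipeline runs on the accelerator."""
    return [f"l1(decoded({tb}))" for tb in transport_blocks]

def lookaside_du(transport_blocks):
    """Lookaside: the CPU dispatches each block and picks the baton
    back up once the accelerator returns it."""
    cpu_data_touches = 0
    out = []
    for tb in transport_blocks:
        cpu_data_touches += 1            # CPU hands the block off...
        decoded = accelerator_fec(tb)
        cpu_data_touches += 1            # ...and resumes Layer 1 afterwards
        out.append(cpu_layer1_rest(decoded))
    return out, cpu_data_touches

def inline_du(transport_blocks):
    """Inline: the CPU controls the process but stays out of the
    Layer 1 data flow entirely."""
    return accelerator_full_layer1(transport_blocks), 0
```

Both paths produce the same Layer 1 output; what differs is how often the CPU sits in the data flow — which is exactly the handover Stanley flags as a latency and bandwidth bottleneck.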

Big operators certainly look enthusiastic about inline. When Stanley carried out a recent survey to gauge the different levels of interest, 50% of respondents from large companies said inline was the likeliest option for hardware acceleration, while just 25% preferred lookaside. "Inline accelerators are becoming catalysts to transform the legacy cloud computing platform paradigm by opening up the path to avoid using expensive data center CPUs like x86 for packet processing," said Alex Choi, Deutsche Telekom's senior vice president of strategy and technology innovation, in a LinkedIn exchange with Light Reading.

Likeliest option for hardware acceleration, by respondent company size

                              All respondents   >$5bn   <$5bn
Inline accelerators                       35%     50%     25%
Lookaside accelerators                    34%     25%     40%
Under evaluation/don't know               31%     25%     36%

(Source: Heavy Reading)

An inline accelerator would not end open RAN's need for x86 processors or Intel, the giant US chipmaker behind x86 technology. CPUs are also needed for Layers 2 and 3 – the data link and network layers – split between DUs and a small number of central units (CUs). Intel is dominant here and there are still few alternatives. Even Qualcomm, an Intel rival, acknowledges that accelerators would not help at those layers.

"No one claims you can do a better scheduler in Layer 2 with accelerators," says Gerardo Giaretta, Qualcomm's senior director of product management. His firm is one of several now pitching inline accelerators to the industry as part of a DU offer. "That means let's move the entire Layer 1 to the accelerator and leave the Layer 2 only to the CPU."

Even so, by making the CPU less critical, inline accelerators developed by numerous suppliers could be a threat to Intel. Partly, that is because they would reduce power needs and allow companies to use less costly CPUs. "We are not beholden to some company that starts with an I and claims to own the server market and is based in Santa Clara, and we don't need increasingly higher-priced silicon and cores because we are doing the whole thing inline," says Raj Singh, the executive vice president of the processors business group for Marvell Technology.

Stanley agrees with Singh's basic premise. "If you use inline, you probably need a less powerful processor and a less expensive server platform, which is not necessarily something Intel wants to promote," the analyst tells Light Reading.

But resisting inline may be just as hard for Intel as resisting open RAN is for big kit vendors like Ericsson and Nokia, and the chipmaker has certainly not shown public opposition to the concept. "As service providers design their DU solutions for open RAN, there are many different technology capabilities they consider, among them processor core efficiency, acceleration technologies and networking optimizations," said Intel in a coy statement when asked for its response to the suggestion inline is a threat.

"We've been delivering a leading solution – including Xeon [its processor brand], network acceleration, Ethernet – and fostering a global ecosystem to support today's requirements and enable the early transition to virtual RAN," it continued. "We're aligning our roadmap investments to support today's requirements, including microarchitectures, networking optimizations and acceleration technologies, which are needed to truly scale virtual RAN over the next decade and beyond."

Software power

Inline is not a shoo-in for all scenarios, either. In fact, some of its potential attractions could also be seen as weaknesses. While inline's numerous backers promise Layer 1 software alternatives, Intel is years ahead with FlexRAN, a software stack it has been developing since 2010. Added inline complexity means lookaside may be simpler to manage. "The processor is in control of everything that goes on," says Stanley. "It is essentially the same software and makes life very easy."

This probably explains why there was more interest among the smaller operators that Stanley surveyed in using lookaside accelerators. Some 40% said this would be their choice, versus just 25% who preferred inline. The choices seem to reflect the trade-offs operators would have to consider. Unless there are heavy traffic loads, inline would be less efficient. "As you up the bandwidth, you get to some point where you can't use lookaside because you are using so much processing power that it just becomes untenable," says Stanley.
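A crude way to see that trade-off is as a fixed-versus-variable cost model. The numbers below are invented purely to make the crossover visible — they are not measurements of any real platform — but the shape matches the argument: inline carries the overhead of a dedicated Layer 1 accelerator, so it is the less efficient choice at light loads, while lookaside's per-block CPU burden grows with traffic until, in Stanley's word, it becomes untenable.

```python
# Hypothetical cost model with invented coefficients (arbitrary units).
# Lookaside: no dedicated Layer 1 pipeline, but the CPU pays for every
# handoff. Inline: a fixed accelerator overhead, near-zero CPU per block.

LOOKASIDE_PER_BLOCK = 2.0   # per-block CPU cost of the handoff
INLINE_FIXED = 5000.0       # overhead of the dedicated Layer 1 accelerator
INLINE_PER_BLOCK = 0.1      # residual control-plane cost per block

def lookaside_cost(blocks_per_sec):
    return blocks_per_sec * LOOKASIDE_PER_BLOCK

def inline_cost(blocks_per_sec):
    return INLINE_FIXED + blocks_per_sec * INLINE_PER_BLOCK

def cheaper(blocks_per_sec):
    """Which approach costs less at a given traffic level?"""
    if lookaside_cost(blocks_per_sec) < inline_cost(blocks_per_sec):
        return "lookaside"
    return "inline"
```

With these made-up coefficients, lookaside wins at light traffic and inline wins once the load is heavy — mirroring why smaller operators in the survey leaned lookaside while the largest leaned inline.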

The difficulty for inline rivals to Intel is finding the scale economies that allow them to compete against x86 and FlexRAN. On the plus side, operators have no desire to be heavily reliant on Intel in this part of the network. Open RAN, after all, is supposedly about fostering network alternatives. But operators also want the competitively priced equipment and software that only large volumes can deliver. The two goals do not align neatly.

"Operators want diversity but they also want economies of scale and how do they balance that out?" says Gabriel Brown, a principal analyst with Heavy Reading. "How can they have both? That is an open question at the moment, but a really important one."

Want to know more about 5G? Check out our dedicated 5G content channel here on Light Reading.

Software is a real headscratcher. Persuading developers to write code for platforms that are less mature than Intel's may be tough. It would, says Stanley, entail some investment in time and engineering effort. Brown seems to agree. "Different developers like Mavenir, Altiostar, Ericsson, Nokia and Samsung have to write software that can be deployed on these hardware platforms," he says. "That's going to require customization. The question is how much in each case?"

Some of that software may at least be reusable. Both Marvell and Qualcomm provide Layer 1 software with their inline accelerators and say it can run alongside any third-party software for Layers 2 and 3. Nvidia, another silicon vendor backing inline accelerators, offers something similar with Aerial – a software development kit that includes its own Layer 1 software for integration with another company's Layer 2 goods. "We have already announced relationships with Altran, Radisys and Mavenir," says Ronnie Vashista, Nvidia's senior vice president of telecom.

Figure 3: Ericsson's Per Narvinger: "You are not going to be able to support all platforms from day one."

Ericsson is another one of its partners. Back in October 2019, the Swedish vendor said it would experiment with using Nvidia's graphics processing units (GPUs) as accelerators in a virtual RAN. But the virtual RAN product it touted in June uses lookaside accelerators supplied by Intel. "You are not going to be able to support all platforms from day one," said Per Narvinger, Ericsson's head of product area networks, when asked why Nvidia was not included.

Deutsche Telekom's Choi also fears that a lack of standardization will thwart progress. "The problems for these accelerators include potential chip vendor lock-in risks due to the vendor-specific API [application programming interface] and software development environment," he says. "For example, Nvidia has its own software development tool called CUDA." This tool forms a part of Aerial, according to an Nvidia blog.

The underlying concern for Choi – also the chief operating officer of the O-RAN Alliance, a group developing open RAN specifications – is that reliance on vendor-specific APIs will lead to fragmentation. "The industry needs to jointly solve this issue now to reach the common economies of scale," he says. "One possible method would be to use the O-RAN Alliance."

Do ASICs make you run faster?

Among the silicon rivals to Intel, another battle is being fought over the type of circuitry used. Nvidia, unsurprisingly, is pushing the GPUs that have made it so successful in other markets as inline accelerators. But Stanley sounds unconvinced. "They are not really designed to have a lot of data coming in and going out again through some other interface," he says.

Other options for accelerators include field programmable gate arrays (FPGAs), backed heavily by Xilinx, as well as application-specific integrated circuits (ASICs) of the kind used by Ericsson and Nokia in their traditional network products. The lookaside accelerators that Intel supplies to Ericsson are described as eASICs and seem to combine FPGAs and ASICs in one product.

Figure 4: The acceleration techniques that operators require (Source: Heavy Reading)

Operators do not appear to have any strong preferences at this stage. In Stanley's survey, 52% of respondents ticked FPGAs when asked which acceleration technologies they would need, while GPUs and ASICs each scored 45% (respondents could obviously select more than one). Nokia's bad experiences with Xilinx FPGAs in traditional 5G equipment have tarnished their reputation, however. Deemed power hogs by some experts, they could struggle as inline accelerators. "You are constrained by the performance of the FPGA so may not be able to build something with the same performance as Qualcomm or Nvidia or Marvell," says Stanley.

Both Marvell and Qualcomm are essentially building the same ASICs they already supply for traditional RAN products, which means they can piggyback on those investments. And Marvell already boasts a massive presence in the RAN. "If you look at who we are in wireless in merchant silicon, we are number one in the world today," says Singh. "That means 2.4 billion people are connected on networks that use our silicon."

Figure 5: Marvell through the years (Source: Marvell Technology)

Inline and lookaside, which sound more like the barked commands of some mad drill sergeant than technical descriptors, will certainly give the average service provider a lot to think about. Add them to more typical abbreviations like ASIC, CPU, FPGA and GPU, mix in the software platforms such as Aerial and FlexRAN, and the telco inspecting open RAN suddenly has a confusing jumble of technologies in the DU parade. And that's before it even starts to consider other parts of the network.

Yet the launch of new accelerators does promise to quell some of the criticism that open RAN is too Intel-based and inefficient, and it will excite operators who crave variety and demand support for busy 5G networks. No product, though, is guaranteed to become a high-volume, low-cost affair. And without that, any accelerator will find progress slow-going.


— Iain Morris, International Editor, Light Reading


About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
