The rollout of radio access networks powered by hyperscalers would have major ramifications for companies throughout the ecosystem.
Until now, Ericsson's cloud RAN offer has seemed to involve a lot of RAN and not much cloud, or at least not much cloud of the strictly public variety, where the big dogs remain AWS, Google and Microsoft. The radio access network (RAN) will never be swallowed by those animals, critics have maintained, because public cloud facilities are usually too centralized. Much of the RAN computing needs to be done at or near the mobile sites.
Yet Ericsson now says it can run its virtualized RAN software on Google. It is not so much a case of the RAN coming to the public cloud as the public cloud coming to the RAN. Through a service pitched as Google Distributed Cloud (GDC), the hyperscaler can now bring servers and racks into a telco facility like a small central office, located near the basestations it would support. Ericsson slaps its RAN software on top and the service should be ready to go.
It's a potential game changer for Ericsson and its customers. In a traditional RAN, the Swedish vendor combines hardware and software and sells the whole shebang to a telco. Clouds simply aren't a feature of the landscape. With Google, the hyperscaler would take charge of the hardware, giving Ericsson limited visibility of what that includes.
"They gave us some information, but we don't know a lot of what is inside, so it is a bit of a black box," said Matteo Fiorani, the head of Ericsson's cloud RAN product line. "We have passed on specific requirements of what we need to be able to run our application, and they have given us feedback on what they will provide us and told us what would be inside servers and racks, but not in detail."
One code base, any hardware
Some degree of opacity should not matter provided each party sticks to certain guardrails. On Ericsson's side, this means writing software compatible with a general-purpose processor, rather than code tailored to suit an application-specific integrated circuit (ASIC), the customized silicon it would traditionally use.
The only carveout is in Layer 1, the part of the software stack responsible for baseband processing. There, an especially demanding function called forward error correction would rely on a separate bit of silicon known as an accelerator to provide oomph. Ericsson is using BBDev, a standardized programming interface, to create an abstraction layer for this accelerator. Provided silicon vendors adopt that as well, Ericsson should be able to use the same virtualized RAN software regardless of who provides the hardware.
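Ericsson has not published its Layer 1 code, but BBDev exists in the open as the baseband device API of the DPDK project, so the shape of the abstraction can be sketched. The hypothetical C program below (an illustration under that assumption, not Ericsson's implementation) attaches to whatever FEC accelerator the platform exposes and readies it for LDPC decoding through generic calls, with no vendor named anywhere in the code.

```c
/*
 * Hypothetical sketch, not Ericsson's code: attaching Layer 1 software to a
 * forward-error-correction accelerator through the BBDev API as implemented
 * in DPDK, without knowing which vendor built the silicon. Assumes a DPDK
 * environment with at least one baseband device bound to a driver.
 */
#include <stdlib.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>
#include <rte_bbdev.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    if (rte_bbdev_count() == 0)
        rte_exit(EXIT_FAILURE, "No baseband accelerator found\n");

    /* Take the first device; the application never names a vendor. */
    uint16_t dev_id = 0;
    struct rte_bbdev_info info;
    rte_bbdev_info_get(dev_id, &info);
    printf("Using bbdev '%s' (driver %s)\n", info.dev_name, info.drv.driver_name);

    /* One queue for 5G LDPC decoding, built on the driver's own defaults. */
    struct rte_bbdev_queue_conf qconf = info.drv.default_queue_conf;
    qconf.op_type = RTE_BBDEV_OP_LDPC_DEC;
    if (rte_bbdev_setup_queues(dev_id, 1, rte_socket_id()) < 0 ||
        rte_bbdev_queue_configure(dev_id, 0, &qconf) < 0 ||
        rte_bbdev_start(dev_id) < 0)
        rte_exit(EXIT_FAILURE, "bbdev setup failed\n");

    /*
     * From here, decode jobs would be pushed with
     * rte_bbdev_enqueue_ldpc_dec_ops() and collected with
     * rte_bbdev_dequeue_ldpc_dec_ops(), whichever accelerator answered.
     */
    rte_bbdev_stop(dev_id);
    rte_eal_cleanup();
    return 0;
}
```

Swapping the accelerator, whether it sits beside an Intel, AMD or Arm CPU, changes which driver the platform loads rather than the application code, and that is the portability Ericsson is banking on.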
"Silicon vendors," in this case, naturally means the big players in the market for general-purpose server CPUs (central processing units). And that essentially means Intel and AMD, which collectively accounted for about 92.5% of the entire data-center market last year, according to Counterpoint Research.
The commonality is the x86 instruction set they use. Having already teamed up with Intel, Ericsson claimed early this year to have shown software compatibility with AMD. If much of the Google-supplied hardware remains a "black box," the use of x86 chips is a given – as indicated by Ericsson's reference to an "x86-based accelerator stack" in its press release about the Google tie-up.
The drawback would seem to be the lack of CPU competitors to Intel and AMD. But that situation may slowly be changing. Ericsson is now working to find alternatives via Arm, a UK-based designer of chip blueprints better known for its activities in the gadgets sector, Fiorani told Light Reading.
"We are still exploring if Arm has the ability to reach a certain capacity because while Arm is less power consuming it is also less powerful," he said. But he takes encouragement from the recent v9 architectural update from Arm and the features it includes. Among other things, a library called SVE2 would tick some of the same boxes as Intel's AVX512 instruction set. "That is basically vector processing and is very suitable for Layer 1 processing and, when they get that in, we think we can squeeze some good capacity out of an Arm system."
Accordingly, Ericsson is now collaborating with several Arm licensees on CPU development to see exactly what they can do. "The reason we are doing this is because we don't want to lock ourselves in and give only one option," said Fiorani. The big question for Ericsson is whether it will be able to use the same RAN software for both x86 and Arm-based systems.
"The answer is basically yes," said Fiorani. "It is our ambition to make the Layer 1 software portable to the extent possible between x86 and Arm. However, there will be some tweaks and optimizations that we will have to do to ensure the Layer 1 software works optimally on Arm." Trials in this area are currently underway.
Unaffordable inline
None of this means Ericsson is warming to the alternative "inline" form of hardware acceleration. Preferred by Nokia, Ericsson's big Nordic rival, inline shifts the entire Layer 1 software from the CPU to a more customized chip, usually installed on a separate card that can be slotted into a compatible server. Fiorani's main objection is that Layer 1 software would need tailoring for each vendor.
"We are not looking at the other type of architecture for Layer 1 because that is what we perceive to be a full lock-in," he said. "As long as you have vendors that follow the BBDev interface, it doesn't matter if it is Intel, AMD or Arm. For us it is the same code base. If you go with the full Layer 1, then you lock yourself in because the Marvell card is very different from the Qualcomm card or an Nvidia GPU [graphical processing unit], so you can't really port the code. You have to redo it all the time, and that's not affordable."
If the hyperscalers stump up the hardware, the BBDev approach could aid portability between different clouds, and Ericsson is also collaborating with AWS and Microsoft, said Fiorani. That said, its RAN partnership with Google is the most advanced, he acknowledged. And AWS does not appear to share Ericsson's antipathy toward inline acceleration. Earlier this year, it showcased a deployment in which servers built around its Arm-based Graviton chips would host Nokia's inline accelerator cards for Layer 1 functions.
Why a public rather than a private cloud? For telcos, the key attraction will probably be the array of other services and features that private clouds simply do not offer. Highlighted in the release are Google data and machine-learning platforms such as BigQuery and Vertex AI. "We are doing a lot of AI research on our side, more for RAN optimization, and Google has a lot of tools like Vertex AI for building AI applications, for optimizing how to use resources, how to deploy, how to lifecycle manage," said Fiorani.
This all sounds worrying for private-cloud platforms such as Red Hat, VMware and Wind River, not to mention Ericsson's own CNIS (Cloud Native Infrastructure Solution). Where those still have an advantage is in their maturity, according to Fiorani. "Those platforms have been hardened a bit more for telco applications and especially for site deployment," he said. "But we've seen Google making great progress in the past year in this space. There's no reason to not think they will get there soon." Many other stakeholders may not be happy to hear it.