June 4, 2021
The combination of 5G and open RAN presents a significant opportunity for carriers and enterprises to upgrade and enhance their wireless networks without being tied to legacy equipment and incumbent suppliers. The use of server-based white box hardware and virtualized software functions with standardized interfaces opens the market to a much wider ecosystem of suppliers. However, the performance and capacity of the network depend heavily on the acceleration solutions installed in those systems.
Open RAN solutions are split into the centralized unit (CU), distributed unit (DU) and radio unit (RU). The CU can be implemented on server platforms without hardware acceleration, though acceleration may still reduce the cost of the solution by offloading CPU cores. For many DU and RU deployments, however, L1 processing would consume a large number of CPU cores if implemented on virtualized server platforms without hardware acceleration. Offloading that processing to hardware accelerators can increase raw performance, reduce the number of processors required and significantly reduce power consumption.
The best of both worlds
Open RAN implementations that are designed to take advantage of x86 and ARM-based server platforms also need flexible hardware acceleration to implement L1 signal processing and other compute-intensive workloads. Open RAN hardware acceleration may be inline or look-aside and can be implemented using FPGAs, ASICs, GPUs, or a combination. Wireless network operators are looking at a mix of solutions for different open RAN deployments.
A key part of the L1 processing is the forward error correction (FEC) function, and this can be implemented using a look-aside hardware accelerator that receives data from the virtualized DU function and returns the processed results. For higher bandwidth open RAN applications, full L1 processing offload may be required. In this case, an inline hardware accelerator is likely to deliver the best results, receiving data from the virtualized DU function and passing on the processed data to the RU. These acceleration architectures are still evolving, and operators expect to implement a mix of inline and look-aside accelerators, delivering the best of both worlds.
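To make the look-aside model concrete, the sketch below shows how a virtualized DU might hand LDPC decoding (the FEC function) to an accelerator through DPDK's bbdev API, one common software interface for FEC offload. It is a minimal illustration rather than any vendor's implementation: the device and queue identifiers, pool sizes and per-slot operation handling are simplified assumptions, and EAL initialization, mbuf management and the LDPC parameters themselves are assumed to be handled elsewhere.

/* Minimal look-aside FEC offload sketch for a virtualized DU,
 * assuming a DPDK bbdev-capable accelerator. Device/queue IDs,
 * pool sizes and LDPC parameters are illustrative only. */
#include <rte_bbdev.h>
#include <rte_bbdev_op.h>
#include <rte_mempool.h>

#define DEV_ID   0   /* assumed accelerator device */
#define QUEUE_ID 0

static struct rte_mempool *op_pool;

/* One-time setup: configure one accelerator queue for LDPC decode
 * and create a pool of decode operations (EAL init assumed done). */
static int setup_accelerator(void)
{
    struct rte_bbdev_queue_conf qconf = {
        .socket = 0,
        .queue_size = 256,
        .op_type = RTE_BBDEV_OP_LDPC_DEC,
    };

    if (rte_bbdev_setup_queues(DEV_ID, 1, 0) != 0)
        return -1;
    if (rte_bbdev_queue_configure(DEV_ID, QUEUE_ID, &qconf) != 0)
        return -1;
    if (rte_bbdev_start(DEV_ID) != 0)
        return -1;

    op_pool = rte_bbdev_op_pool_create("ldpc_dec_pool",
                                       RTE_BBDEV_OP_LDPC_DEC,
                                       1024, 128, 0);
    return op_pool != NULL ? 0 : -1;
}

/* Per-slot processing: enqueue a burst of code blocks to the
 * accelerator, leave the CPU free for other L1/L2 work while the
 * hardware decodes, then poll for the completed results. */
static void process_slot(struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
{
    uint16_t enq = 0, deq = 0;

    /* ops[] is assumed to have been allocated from op_pool and filled
     * with this slot's soft bits and LDPC parameters elsewhere. */
    while (enq < nb_ops)
        enq += rte_bbdev_enqueue_ldpc_dec_ops(DEV_ID, QUEUE_ID,
                                              &ops[enq], nb_ops - enq);

    /* Decoding now runs on the accelerator, not on DU CPU cores. */

    while (deq < nb_ops)
        deq += rte_bbdev_dequeue_ldpc_dec_ops(DEV_ID, QUEUE_ID,
                                              &ops[deq], nb_ops - deq);
}

An inline accelerator removes even this enqueue/dequeue round trip from the DU software: the L1 pipeline runs on the accelerator itself, which passes the processed stream straight toward the RU over the fronthaul interface.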
Open RAN acceleration techniques
The new Heavy Reading "Accelerating Open RAN Platforms Operator Survey" published in May 2021 presents the results of an exclusive survey of individuals working for operators with mobile network businesses. In addition to information on the primary drivers for open RAN acceleration and the use of look-aside and inline accelerators, the survey also covered open RAN acceleration techniques, as shown in the figure below.
Figure 1: What open RAN acceleration techniques do you believe your organization's solution requires? (n=89)
(Source: Heavy Reading)
Hardware acceleration for open RAN can use a number of technologies, including FPGAs, ASICs and GPUs. Both FPGAs and GPUs are easily programmed using standard development tools, while ASICs are developed to implement specific acceleration functions and may also include general-purpose processor or FPGA blocks. Many operators will deploy acceleration solutions using a mix of these techniques, and the survey results bear this out: 52% of respondents say their organization's solution requires FPGAs, versus 45% each for ASICs and GPUs.
"The biggest thing that FPGAs offer is flexibility," said Raghu M. Rao, PhD, the director of Cable and Wireless Solutions at Xilinx, when he spoke on the Light Reading "Accelerating Open RAN Platforms Operator Survey" webinar (view the archive here). "One of the drivers for open RAN acceleration was to open up the front haul. This is a way in which this flexibility, implementation on FPGAs, opens up the front haul. Anybody can have access to the data that comes along at both ends, the DU or the RU. The type of blocks you want to offload, the type of modules you want to offload, whether it is inline or look aside, or a hybrid module, all of these are available with an FPGA implementation."
Hardware acceleration can significantly enhance any open RAN implementation, increasing raw performance, reducing the number of CPU cores required and cutting power consumption. Other open RAN functions that can benefit from hardware acceleration include the Packet Data Convergence Protocol (PDCP) layer and security (e.g., IPsec).
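The same look-aside pattern applies to those functions as well. As a purely illustrative sketch (an assumption, not drawn from the survey or any specific product), the fragment below hands a burst of PDCP or IPsec ciphering operations to a crypto accelerator through DPDK's cryptodev burst API; session creation, algorithm selection and packet preparation are omitted.

/* Hypothetical look-aside crypto offload for PDCP/IPsec ciphering,
 * assuming a DPDK cryptodev-capable accelerator. Device/queue-pair
 * setup and session handling are assumed done elsewhere. */
#include <rte_cryptodev.h>
#include <rte_crypto.h>

#define CDEV_ID 0   /* assumed crypto accelerator device */
#define QP_ID   0   /* assumed queue pair */

static uint16_t cipher_burst(struct rte_crypto_op **ops, uint16_t nb_ops)
{
    uint16_t enq, deq = 0;

    /* Hand the prepared crypto operations to the accelerator... */
    enq = rte_cryptodev_enqueue_burst(CDEV_ID, QP_ID, ops, nb_ops);

    /* ...and poll for the completed (ciphered/deciphered) packets,
     * leaving the host CPU cores free in the meantime. */
    while (deq < enq)
        deq += rte_cryptodev_dequeue_burst(CDEV_ID, QP_ID,
                                           &ops[deq], enq - deq);
    return deq;
}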
Heavy Reading's "Accelerating Open RAN Platforms Operator Survey" focuses on why operators are deploying open RAN and which platform architectures, hardware accelerators and software and integration solutions are viewed as most important for these deployments. You can download a PDF copy here.
This blog is sponsored by Xilinx Inc. (Nasdaq: XLNX)