Key findings from Heavy Reading's 'Open RAN Platforms and Architectures Operator Survey Report' show that service providers are planning to support AI inference and other applications in their open RAN/far edge installations. #sponsored
March 20, 2023
A key benefit of using general-purpose processors to implement open RAN/vRAN is that the same platforms can support AI inference and other far edge applications, such as cell site routers (CSRs) and content delivery and hosting. These edge platforms can host virtualized applications closer to the user, offering significant benefits in terms of lower latency and shared infrastructure.
To find out more about which applications service providers plan to support on shared far edge solutions and how they plan to deploy open RAN and vRAN platforms and architectures for 5G networks, Heavy Reading ran an exclusive survey of individuals working for operators with mobile network businesses. The results are presented in an analyst report, Open RAN Platforms and Architectures Operator Survey Report, that can be downloaded for free here.
Benefits of edge cloud integration
The survey presented options for five edge applications that can share server platforms with virtualized open RAN baseband implementations. Respondents were asked to indicate the top three applications they plan to support in their open RAN/far edge solution. AI was selected by 62% of respondents, CSR and content/media delivery/hosting by 54% each, firewall by 50%, and IPsec by 43%; just 1% selected "other."
Figure 1: In addition to the base RAN protocol stack to support wireless traffic, what other applications are you planning to support in your open RAN/far edge solution? n=113
Source: Heavy Reading
AI is becoming a critical application in many market areas, including wireless networks. The use of AI within wireless networks can significantly improve network performance and end-user quality of experience. AI training is usually hosted in data centers on high-power servers with GPU, FPGA or other acceleration hardware. Cloud-native AI inference, by contrast, can be easily moved to far edge server platforms, reducing latency and sharing resources with open RAN baseband functions.
A CSR aggregates mobile data traffic from the RAN and routes it back to the service provider's core network. CSRs implemented as cloud-native network functions (CNFs) are available and can share COTS server platforms with open RAN distributed unit (DU), firewall and IPsec functions. Video caching and other content delivery and hosting can also benefit from sharing the same edge server platforms, reducing latency and backhaul traffic.
Far edge operating environments
The use of CNFs allows open RAN and other applications, such as AI inference, to be scaled and shared across the network, including far edge server platforms. Some of these far edge servers may be located in cabinets with controlled temperatures; however, many will be located in cabinets without controlled temperatures or in outdoor locations. For these locations, server platforms need to support extended temperature operation and be designed for demanding environments.
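As a rough illustration of how such co-location is typically expressed in a cloud-native environment, the sketch below uses Kubernetes scheduling constraints to place an AI inference CNF on the same far edge nodes that host open RAN workloads. This is not from the report; all names, labels and images are hypothetical:

```yaml
# Hypothetical sketch: schedule an AI inference CNF onto far edge nodes
# that also host open RAN DU workloads. All labels and names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-inference-cnf          # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-inference-cnf
  template:
    metadata:
      labels:
        app: ai-inference-cnf
    spec:
      nodeSelector:
        site-type: far-edge       # hypothetical label marking far edge cell sites
      containers:
      - name: inference
        image: example.com/ai-inference:latest   # placeholder image
        resources:
          limits:
            cpu: "2"              # cap CPU to leave headroom for the DU
            memory: 4Gi           # cap memory on the shared server
```

In practice, operators would combine constraints like these with resource limits and priority classes so that latency-sensitive baseband functions always retain the capacity they need on the shared platform.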
When asked about operating environments (multiple choices were allowed), almost 70% of respondents indicated that they had an outdoor application, and 29% said they had operating environments below –5°C. 57% said they had indoor cabinets with controlled temperature, and 57% said they had indoor cabinets without controlled temperature. The full results are included in the analyst report.
Heavy Reading's Open RAN Platforms and Architectures Operator Survey Report focuses on why operators are deploying open RAN and which platform architectures, hardware accelerators and software and integration solutions are viewed as most important for these deployments. You can download a PDF copy here.
— Simon Stanley, Analyst-at-large, Heavy Reading
This blog is sponsored by Kontron.
Simon Stanley is Founder and Principal Consultant at Earlswood Marketing Ltd., an independent market analyst and consulting company based in the U.K. His work has included investment due diligence, market analysis for investors, and business/product strategy for semiconductor companies. Simon has written extensively for Heavy Reading and Light Reading. His reports and Webinars cover a variety of communications-related subjects, including LTE, Policy Management, SDN/NFV, IMS, ATCA, 100/400G optical components, multicore processors, switch chipsets, network processors, and optical transport. He has also run several Light Reading events covering Next Generation network components and ATCA.
Prior to founding Earlswood Marketing, Simon spent more than 15 years in product marketing and business management. He has held senior positions with Fujitsu, National Semiconductor, and U.K. startup ClearSpeed, covering networking, personal systems, and graphics in Europe, North America, and Japan. Simon has spent over 30 years in the electronics industry, including several years designing CPU-based systems, before moving into semiconductor marketing. In 1983, Stanley earned a Bachelor's in Electronic and Electrical Engineering from Brunel University, London.