Do cloud native 5G core networks need DPUs?
With global 5G connections crossing the 1 billion threshold in 2022 and each 5G user consuming twice as much data as a non-5G user, mobile network traffic is growing quickly. (An average 5G user will consume 14GB per month in 2023, according to Omdia's Cellular Data Traffic Forecast service, doubling to 28GB in 2027.)
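As a quick back-of-envelope check, a doubling from 14GB to 28GB per month between 2023 and 2027 implies an annual growth rate of roughly 19%, assuming steady compound growth (an assumption for illustration; Omdia's forecast may not be linear on a log scale):

```python
# Back-of-envelope: implied compound annual growth rate (CAGR)
# from the Omdia figures cited above. Straight compounding is
# assumed here purely for illustration.
gb_2023 = 14.0   # GB per 5G user per month, 2023 (Omdia)
gb_2027 = 28.0   # GB per 5G user per month, 2027 (Omdia)
years = 2027 - 2023

cagr = (gb_2027 / gb_2023) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~18.9% per year
```

A doubling every four years compounds quickly: at the same rate, per-user consumption would double again by 2031.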
Operators deploying new 5G core networks and making the transition to 5G standalone (SA) need to think about how to handle this traffic. To investigate this and other key infrastructure, technology and service questions, Heavy Reading has published its new 5G Core Networks Operator Survey. To download a free copy of the analyst report, click here.
Cloud infrastructure for the 5G core
5G core networks are being deployed on cloud infrastructure platforms, typically using COTS servers in a racked data center architecture. There is a long-running debate in the industry about how efficiently this infrastructure – originally designed for compute tasks – can handle 5G user-plane and low latency edge use cases.
One of the key discussions is the case for using hardware accelerators – a.k.a. SmartNICs or data processing units (DPUs) – to offload packet processing and make more efficient use of server resources versus seeking to optimize performance in software. At face value, hardware accelerators are attractive in the 5G core. However, there is also an argument that major efficiency gains are still possible in software – i.e., via well-designed cloud native network functions (CNFs) – and that this is the superior near-term approach because it preserves the agility and workload portability that help make the cloud so powerful.
What makes life complicated for operators as they design network cloud platforms to run 5G core (and other network functions) is that they must leverage the economies of scale associated with commercial cloud infrastructure and, at the same time, ensure they are not too far behind the leading-edge performance set by the hyperscalers and specialist cloud providers. Hardware acceleration is a prime example of this dilemma.
For certain cloud workloads (e.g., for AI/ML, media encoding, security/encryption and graphics processing), hardware acceleration has been developing over several years and is now widely used. In raw performance and efficiency terms, high capacity 5G core workloads can benefit from accelerators (aside: this is even more the case for vRAN workloads). Therefore, it makes sense to integrate this capability into the cloud platform. Yet, packet processing "offload" has far-reaching implications for the network cloud technology stack, the operating tools and the CNF software itself.
DPUs and SmartNICs for the 5G core
The Heavy Reading survey asked respondents if their organization expects to make extensive use of hardware acceleration using DPUs/SmartNICs in the 5G core. The results in the figure below show that respondents express broad, but not unequivocal, support for the concept.
The highest score at 40% is for the "yes, for specific scenarios (e.g., fixed wireless access)" response. This is a logical result in the sense that fixed access services generate much greater throughput per connection than mobile services (as a rule of thumb, 10x greater). As such, the case for hardware acceleration is stronger. Although it is still some years away, if wireless-wireline convergence occurs and a common user plane is deployed for fixed and mobile access, this case will strengthen further.
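The 10x rule of thumb makes the difference concrete: for a given user-plane function (UPF) capacity, fixed wireless access serves an order of magnitude fewer connections than mobile. The figures below are hypothetical and purely illustrative — only the 10x multiplier comes from the article:

```python
# Illustrative sketch of the 10x rule of thumb above.
# All absolute numbers here are hypothetical assumptions,
# not survey data or vendor specifications.
mobile_mbps_per_conn = 1.0     # assumed average busy-hour rate per mobile connection
fwa_multiplier = 10            # rule of thumb from the article: ~10x mobile
upf_capacity_gbps = 100        # hypothetical throughput of one UPF instance

fwa_mbps_per_conn = mobile_mbps_per_conn * fwa_multiplier

mobile_conns = upf_capacity_gbps * 1000 / mobile_mbps_per_conn
fwa_conns = upf_capacity_gbps * 1000 / fwa_mbps_per_conn

print(f"Mobile connections per UPF: {mobile_conns:,.0f}")  # 100,000
print(f"FWA connections per UPF:    {fwa_conns:,.0f}")     # 10,000
```

Under these assumptions, the same UPF instance that serves 100,000 mobile connections serves only 10,000 fixed wireless connections — which is why the per-bit economics of hardware acceleration look stronger for fixed access.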
The 38% score for "yes, extensive use – user-plane acceleration is essential in most cases" is interesting. On the one hand, it shows a majority of respondents do not have this view for the general 5G mobile use case. But it is a strong score that merits attention. Clearly, a large segment of the respondent base expects hardware accelerators will be broadly important to efficiently process 5G core user-plane traffic.
A difficult analysis
One of the challenges with this analysis is that telecom operators have long considered networking workloads to be a special case relative to compute workloads. Thus, they are perhaps more likely to select the hardware acceleration option – this is almost the natural response from telecom networking professionals.
Another factor is the anecdotal information from some vendors and operators that says current 5G traffic loads do not really need DPU accelerator technology. With good software design and smart deployment choices, 5G mobile user-plane traffic can be readily handled on standard server hardware, and it is better to keep things (relatively) simple for now.
In this view, if hardware acceleration is needed only in a few places or for a few services, operators should question whether now is the time to invest in this technology, how far ahead they should plan for it, or whether it is better to observe how traffic patterns evolve before committing.
The overall analysis, therefore, is that an appetite for user-plane hardware acceleration in the 5G core exists among operators, but it is not yet clear-cut. In this sense, this survey does not help resolve the debate between software and hardware user-plane optimization but merely continues it!
To download a free copy of the Heavy Reading 5G Core Networks Operator Survey analyst report, click here.
This blog is sponsored by AMD.
— Gabriel Brown, Senior Principal Analyst – Mobile Networks & 5G, Heavy Reading