Reflections on Barcelona: Decision Time for 4G
Consider that almost 10 years passed between the launch of the first digital cellular network in 1991 and the first big architectural disruption to hit the network, in the form of GPRS and CDMA 1X. Those changes caused enough pain at the time, but remember that they were parallel data networks, deliberately designed so that the voice network could be left undisturbed.
The next big discontinuity was W-CDMA. Let's also not forget the howls of anguish at the resources the first operators in Japan and Europe had to commit when grappling with the novelty of a CDMA-based radio interface; ATM in the transport layer; and 2G-to-3G interworking. But again, remember that amid all that chaos, the network architecture didn't change. 3G MSCs, GGSNs and SGSNs did much the same thing in much the same place as their 2G counterparts. And while ATM was introduced with W-CDMA, it was an overlay to the same TDM backhaul, which was left untouched.
MSC and SGSN pooling, the Bearer Independent Core Network and most recently the transition from TDM to packet backhaul have certainly proven to be non-trivial. And there are countless instances of other network upgrades that have caused engineers and operations people to dearly wish they had studied medicine, started waiting tables or done anything except be a telecom engineer.
But while the pain might feel familiar this time, it will be qualitatively different. And it will be different because this time, operators are looking to design and implement an all-IP architecture that demands an end-to-end view be taken right from the get-go. And that means targeting end-to-end resiliency, latency, security and QoS in the face of the unforgiving headwinds of variable RF conditions, mobility and battery-life challenges that never go away in mobile networks.
The analogy that springs to mind is of a house being refurbished. In the past, transformations of the mobile network were discrete, rather like redoing the bathroom or building an extension. The 4G transformation won't allow that to anything like the same extent. IP makes network boundaries and domains more porous, so that what you do in one domain necessarily impacts all other domains (not just adjacent ones) and the fortunes of every packet that traverses them. And it drives feature distribution, which in turn drives demand for new product types. This kind of transformation more closely resembles refurbishing your entire house while you're still living in it. The need for coordination and alignment between work undertaken in one "room" and another is so much greater.
Some of the upcoming domain-level decisions that will have to be taken are intractable enough in their own right. Take the S1 interface in LTE between the eNodeB and the core. Suppose an operator wants to use IPsec across the S1 for security – how easily can that operator also support 1588v2 as a synchronization solution across that same link? And what's the relationship between these deliberations and the decision on whether to deploy the X2 interface between eNodeBs? Come to that, what's the relationship between these decisions and the operator's choice of a particular Layer 2 or Layer 3 protocol for each of the access and aggregation layers of the backhaul?
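The IPsec/1588v2 tension can be made concrete. PTP's offset calculation assumes the forward and reverse one-way delays across the link are symmetric, so any asymmetric processing delay, such as that added by IPsec security gateways, biases the slave's clock-offset estimate by half the asymmetry. A minimal sketch of that effect, with all delay figures purely hypothetical:

```python
# Illustrative sketch: how asymmetric delay (e.g. from IPsec processing
# at a security gateway on the S1 link) biases a 1588v2 (PTP) offset
# estimate. All delay figures below are hypothetical.

def ptp_offset_estimate(t1, t2, t3, t4):
    """Standard PTP delay request-response offset formula; assumes the
    master->slave and slave->master path delays are equal."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

true_offset = 0.0  # true slave-minus-master clock offset (microseconds)

# Hypothetical one-way delays over the backhaul (microseconds).
forward_delay = 500.0 + 40.0   # 40 us of IPsec processing downstream
reverse_delay = 500.0 + 10.0   # 10 us of IPsec processing upstream

# Construct the four PTP timestamps from those delays.
t1 = 0.0                                 # Sync sent by master
t2 = t1 + forward_delay + true_offset    # Sync received by slave
t3 = t2 + 100.0                          # Delay_Req sent by slave
t4 = t3 - true_offset + reverse_delay    # Delay_Req received by master

est = ptp_offset_estimate(t1, t2, t3, t4)
bias = (forward_delay - reverse_delay) / 2.0

print(f"estimated offset: {est:+.1f} us")  # +15.0 us, purely from asymmetry
print(f"expected bias   : {bias:+.1f} us")
```

The point is not the particular numbers but the coupling: a security decision on the S1 directly degrades a synchronization decision on the same link, which is exactly the kind of cross-domain dependency the article describes.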
Admittedly, these are all transport domain issues. But then consider what the operator's strategy is for the adjacent Evolved Packet Core components, such as the S-GW, P-GW and MME, as well as the Policy and Charging Control, deep packet inspection and content caching functions. There are logically compelling reasons for changing the 20-year-old model here and pushing smaller devices out from the center of the network to the edge. And if the operator chooses that direction, then an aggregation node in the transport network that is picking up several S1s would be a prime physical location for also hosting these distributed core network elements.
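The economic logic behind distributing those core functions can be sketched in back-of-envelope form: every gigabit per second that a distributed cache or local-breakout gateway serves at the aggregation node is a gigabit that no longer has to be hauled to the center. The figures below are hypothetical, for illustration only:

```python
# Back-of-envelope sketch of why pushing EPC functions (caching, local
# breakout) toward aggregation nodes cuts core transport load.
# All traffic figures are hypothetical.

def core_transport_load(total_gbps, local_serve_fraction):
    """Traffic that must still be hauled to the central core, given the
    fraction served locally at a distributed aggregation node."""
    return total_gbps * (1.0 - local_serve_fraction)

total = 100.0  # Gbps of aggregate subscriber traffic (hypothetical)

centralized = core_transport_load(total, 0.0)   # everything hauled centrally
distributed = core_transport_load(total, 0.35)  # e.g. 35% served at the edge

print(f"centralized core load: {centralized:.0f} Gbps")
print(f"distributed core load: {distributed:.0f} Gbps")
```

The arithmetic is trivial by design; the hard part, as the article argues, is that the aggregation node chosen to host these functions must also be consistent with the operator's S1, X2 and backhaul-protocol decisions.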
Also consider the impact of an operator's small-cell strategy. Barcelona yielded further evidence that the small-cell market has been overhyped, and that most operators are unlikely to deploy in volume much before 2014. Nevertheless, there is increasingly widespread recognition that small cells will feature prominently in the future. Moreover, those that do have aggressive small-cell rollout assumptions need to consider that the case for IPsec is quite a bit stronger in a small-cell environment, and that this should also impact strategic thinking on IPsec at macro sites (since an operator should question whether it really wants different modes of backhaul security operation from one public cell site to the next).
Another adjacency exists between small cells and backhaul: Fiber and conventional microwave may not always be optimal for small cells, hence the potential of new generations of lower-cost wireless backhaul solutions that were being touted in Barcelona, most of which have yet to be rigorously tried and tested.
Many mobile operators also need to undertake greater distribution of their IP peering points. It's a no-brainer that with the growing volume of data traffic, the cost of hauling it halfway across the country is becoming unsustainable. Those that haven't yet done so need to select and deploy new peering points, again in alignment with other changes in the network. And to cap it all off, all these decisions need to optimize the cost of running what will become the legacy 2G and 3G networks, as well as the cost of the LTE network itself.
For most mobile operators, the really detailed work needed to arrive at tentative conclusions per 4G network domain has barely even started. They are even further away from arriving at robust conclusions that align well across domains. Unless the traffic growth somehow slows – or unless the average revenue per user somehow grows so fast that CFOs are persuaded to substantially increase headcount on the network side – navigating this new challenge will prove every bit as demanding as any that the industry has faced in its 20-year history, and potentially much more so.
— Patrick Donegan, Senior Analyst, Wireless, Heavy Reading