Clouding the Edge for LTE-A and Beyond
One of the areas of increasing discussion around LTE-Advanced (LTE-A), and especially the yet-to-be-defined 5G standard, is the tension between the "edge" and the "cloud."
Over the last few decades, the powerful trend in telecom has been to push intelligence out to the edge. David Isenberg wrote a very good essay on this -- oddly not as widely known or distributed as it deserves -- way back in 1997: The Rise of the Stupid Network.
We now have edge routers, we have gateways in our phones, and new smartphones have "intelligence" onboard in a way landline phones never did.
In wireless networking, a few years after Isenberg's essay, broadband was proving this logic, with TCP/IP pushing intelligence out to the edge. In 2G the smarts were quite centralized -- with a basestation controller (BSC) in the network -- but with 3G that focus shifted and the network started to flatten out a bit. (See Mobile Infrastructure 101.)
Bell Labs, meanwhile, had the idea of putting the router and protocol stack all the way into the basestation with the snappily named BaseStationRouter. That of course then became the 3G small cell, with the medium access control (MAC) and higher stack layers moving into the NodeB and Iuh replacing Iub, and then on to the "flat architecture" of LTE. (See Telco in Transition: The Move to 4G Mobility.)
So small cells represent the clearest case of intelligence at the edge -- some people call this the Distributed RAN (D-RAN). (See Know Your Small Cell: Home, Enterprise, or Public Access?)
The advantages are that networks become better: We put capacity exactly where we need it; the small cell is responsive and efficient; we can do things like offload and edge caching; and latency is reduced (which improves speed and QoE). It is a cost-effective and intelligent way to make the network better, and it has been the "obvious" paradigm for the last few years. (See Meet the Next 4G: LTE-Advanced.)
But over the last few years we have seen the reverse trend too.
In computing we have the cloud, with intelligence moving from the edge back into the center: The widespread use of Amazon Web Services or Google Cloud to host services, the rise of Chromebooks, and cloud-based services like Salesforce, Dropbox and Gmail.
This trend is also being felt in the wireless world, as we hear more and more about the cloud RAN (C-RAN). This is the opposite of small cells: Having a "dumb" remote radio head (RRH) at the edge, with all the digitized radio samples sent back over fiber -- aka "fronthaul" -- to a huge farm of servers that do all of the signal processing for the whole network. No basestation, and certainly no basestation router. (See What the [Bleep] Is Fronthaul? and C-RAN Blazes a Trail to True 4G.)
The simple advantages here come from economies of scale: One big server farm is cheaper and more efficient than the same processing power distributed -- electricity and cooling needs at the basestation are reduced, for example. A more subtle gain comes from pooling, which is sometimes called "peak/average" or "trunking" gain.
In a normal network, every basestation must be dimensioned to cope with the peak traffic it will support -- even though other basestations will be lightly loaded at that moment, only to hit their own peaks at other times. So the network needs worst-case dimensioning of every site, and on average there is a lot of wasted capacity. In contrast, the cloud RAN can have just the right amount of capacity for the network as a whole, and it "sloshes around" to exactly where it is needed.
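The pooling argument can be made concrete with a toy simulation (a sketch using made-up load figures, not real traffic data): if each cell's busy hour falls at a different time, the capacity needed for a shared pool is far less than the sum of the per-cell peaks.

```python
import random

random.seed(42)

N_CELLS = 20   # hypothetical number of cells in the cluster
N_SLOTS = 1000 # time slots across the day

# Each cell carries a modest baseline load, plus one burst at a
# random slot -- its own "busy hour", which rarely coincides
# with any other cell's busy hour.
loads = []
for _ in range(N_CELLS):
    cell = [random.uniform(0.1, 0.4) for _ in range(N_SLOTS)]
    cell[random.randrange(N_SLOTS)] += random.uniform(0.5, 0.6)
    loads.append(cell)

# D-RAN: every site is dimensioned for its own peak.
distributed_capacity = sum(max(cell) for cell in loads)

# C-RAN: the pool is dimensioned for the peak of the aggregate load.
aggregate = [sum(cell[t] for cell in loads) for t in range(N_SLOTS)]
pooled_capacity = max(aggregate)

print(f"Distributed capacity needed: {distributed_capacity:.1f}")
print(f"Pooled capacity needed:      {pooled_capacity:.1f}")
print(f"Trunking gain: {distributed_capacity / pooled_capacity:.2f}x")
```

Because the per-cell peaks are uncorrelated, the aggregate peak is much smaller than the sum of the individual peaks -- that ratio is the trunking gain.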
That is a benefit, but it has not seemed significant enough to persuade most carriers.
The problem has been connectivity: Those radio heads produce a huge amount of data, and carrying it almost certainly requires dark fiber. Most carriers simply do not have enough fiber, and even for those who do it is prohibitively expensive. So, for most operators, C-RAN has so far been economically interesting but not compelling -- not worth the cost. (See DoCoMo's 2020 Vision for 5G.)
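To see why, consider a rough back-of-envelope calculation of the fronthaul rate for a single sector. The figures below are typical CPRI-style assumptions (15-bit I/Q samples, 8b/10b line coding, one control word per 15 data words) rather than measurements from any specific deployment:

```python
# Back-of-envelope CPRI-style fronthaul rate for one 20MHz LTE sector.
sample_rate = 30.72e6   # samples/s for a 20MHz LTE carrier
sample_bits = 15        # bits per I and per Q sample (common in CPRI)
iq = 2                  # I and Q components per sample
antennas = 2            # 2x2 MIMO
cpri_control = 16 / 15  # one control word per 15 data words
line_coding = 10 / 8    # 8b/10b line coding overhead

rate_bps = (sample_rate * sample_bits * iq * antennas
            * cpri_control * line_coding)
print(f"Fronthaul rate: {rate_bps / 1e9:.2f} Gbit/s")  # ~2.46 Gbit/s

air_rate = 150e6  # peak LTE Category 4 downlink, for comparison
print(f"Expansion vs. air interface: {rate_bps / air_rate:.0f}x")  # ~16x
```

Roughly 2.5 Gbit/s per sector, versus a 150 Mbit/s air interface: the fronthaul carries an order of magnitude more traffic than the radio link it serves, which is why dark fiber is effectively mandatory.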
But an increasingly strong reason is now changing that calculation.
Most of the advances in signal processing that make LTE-A and 5G interesting rely on much tighter coordination between basestations. Whether they are called coordinated multipoint (CoMP), macro-diversity, 3D MIMO or beam-shaping, they all rely on fast, low-level communication between different sites. This is all but impossible with "intelligence at the edge" but relatively easy with a centralized approach. (See Sprint Promises 180Mbit/s 'Peaks' in 2015 for more on recent advances in multiple input, multiple output antennas.)
Hence the renewed focus on centralized solutions: Whereas before the economics were merely intriguing, these performance and spectral-efficiency gains may make the approach compelling.
There is a twist, though: Maybe a "halfway" solution would be optimal. This would put some signal processing in the radio to reduce the data rate needed on fronthaul -- allowing something easier and cheaper than dark fiber -- while still getting the pooling economies and the signal processing benefits. (See 60GHz: A Frequency to Watch and Mimosa's Backhaul Bubbles With Massive MIMO.)
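As an illustration of how much the split point matters, here are order-of-magnitude fronthaul figures for one 20MHz LTE 2x2 sector, loosely based on published functional-split studies -- the exact numbers are assumptions and vary with load, quantization and vendor choices:

```python
# Illustrative, order-of-magnitude fronthaul rates for one 20MHz LTE
# 2x2 sector at different processing split points. The more processing
# stays in the radio, the less data the fronthaul must carry.
splits_gbps = {
    "RF samples (CPRI-style, full C-RAN)":    2.5,
    "Frequency-domain samples (post-FFT)":    1.5,
    "MAC/PHY split (near the user data rate)": 0.2,
}

for split, rate in splits_gbps.items():
    print(f"{split}: ~{rate} Gbit/s")
```

Moving the split up the stack by even one layer can cut the fronthaul requirement enough to use cheaper transport -- microwave or millimeter-wave links, say -- instead of dark fiber.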
This tension between the edge and the cloud will be one of the more interesting architectural choices facing 5G, and it is something the 3rd Generation Partnership Project (3GPP) and the 5G Infrastructure Public Private Partnership (5G PPP) are looking at, as is the Small Cell Forum Ltd. (See 5G Will Give Operators Massive Headaches -- Bell Labs.)
But it might be an ironic twist if the architecture that becomes 5G goes back to the "some at the edge, some in the core" split we had with GSM or 3G, and we re-invent Abis and Iub for a new generation. [Ed note: Abis is the interface that links the BTS and the BSC in a GSM network; Iub links the Radio Network Controller (RNC) and the NodeB in a 3G UMTS network.]
-- Rupert Baines, Chief Marketing Officer, Real Wireless