Learning From Mistakes
- If you're not failing every now and again, it's a sign you're not doing anything very innovative. – Woody Allen
Or, as I like to look at it, you never learn anything by getting it right. Most great lessons are learned when you are digging yourself out of a hole. A friend is someone who throws you a rope to get out, but a best friend is the person in the hole with you.
Many years ago I worked on a project involving a software system called the Distributed Computing Environment (DCE), which came out of another project I worked on called Project Athena (but that is a story for another blog). DCE was a toolkit for developing client-server applications using remote procedure calls (RPC), location services, time synchronization, authentication services, and a distributed file system (DFS).
While DCE did not gain much acceptance in its initial incarnation, we did learn a lot about how to decompose programs into functional components and have servers in the network run those specific functions. By using the location services, we were able to have servers execute specific sub-programs or functions based on a variety of selection options (e.g. proximity, spare compute cycles, or other criteria). DCE was developed in response to dissatisfaction with mainframes and super-minis, which were costly, hard to manage, and difficult to scale in hardware, and which carried heavy power, cooling, and rack-space requirements.
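The selection idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not DCE's actual API: it assumes a simple in-memory registry of servers advertising a function, each tagged with latency and spare-capacity attributes, and a selector that picks one by criterion.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    distance_ms: float   # network proximity (round-trip latency)
    spare_cycles: float  # fraction of CPU currently idle

# Hypothetical registry: servers advertising a "render" function.
REGISTRY = {
    "render": [
        Server("edge-a", distance_ms=5.0, spare_cycles=0.10),
        Server("edge-b", distance_ms=20.0, spare_cycles=0.80),
        Server("core-1", distance_ms=50.0, spare_cycles=0.95),
    ]
}

def select_server(function: str, criterion: str = "proximity") -> Server:
    """Pick a server offering `function`, in the spirit of a location service."""
    candidates = REGISTRY[function]
    if criterion == "proximity":
        # Lowest round-trip latency wins.
        return min(candidates, key=lambda s: s.distance_ms)
    if criterion == "spare_cycles":
        # Most idle capacity wins.
        return max(candidates, key=lambda s: s.spare_cycles)
    raise ValueError(f"unknown criterion: {criterion}")

print(select_server("render").name)                  # nearest server
print(select_server("render", "spare_cycles").name)  # most idle server
```

The point is only that once functions are registered independently of any one box, the network can route work to whichever server best satisfies the chosen policy.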
We find ourselves at a similar, though not quite identical, place in today's cable architecture. We may continue to increase the density of edge devices, just as we did in the early 1990s, but is that really the right path to take? We can easily ride the curves of Moore, Koomey, and Dennard for some time to increase the density of access equipment. However, as we learned in the days of the mainframe and super-minis, if we fail to learn from our mistakes and prepare adequately for future changes, we are highly likely to make the same poor choices again.
As part of the thought exercise, let's consider our current edge devices, including cable modem termination systems (CMTSs) and optical line terminals (OLTs). Today's DOCSIS edge devices perform many functions, primarily L2 (switching) and L3 (routing). Some functions are what traditionally would be part of an edge router, others are more like an aggregation switch, and many are similar to a media converter.
As each set of functions is scaled, the size, power, and complexity of the edge device will require either denser and more capable silicon or a larger form factor. An OLT is traditionally an L2 device, with the L3 functions handled in a broadband remote access server (BRAS). But with the Converged Cable Access Platform (CCAP), we are considering a hybrid that manages both layers in a single device, as a CMTS does today.
So how do we begin thinking about separating the different layers? Which tools are available? What do the layers look like? And, more importantly, why would we do it?
This is where software-defined networking (SDN) and network functions virtualization (NFV) enter the discussion. To get things started, let's begin with a diagram of the higher-level functions performed within a DOCSIS edge device (e.g. a CMTS or CCAP).
Thanks to Harmonic Inc. for allowing me to use this diagram:
As you look at this image, you can see the top-level functions that a DOCSIS edge access device performs. As we begin to consider ways to use SDN and NFV, this functional breakdown will be our starting point for decomposing functions.
In my next blog, I will start discussing possible ways to break down the functions into layers and begin to think about separating those layers into a distributed access architecture.
— Jeff Finkelstein, Executive Director of Strategic Architecture, Cox Communications