When it comes to adoption of SDN, every network operator has its own approach, even though all are ultimately headed in the same direction. Level 3 Communications' CTO Jack Waters underscores this reality in discussing his view of SDN, which leans heavily on the advantages it brings to network provisioning.
In a recent conversation following a meeting in Chicago with Level 3 Communications Inc. (NYSE: LVLT) customers, Waters shared his views on SDN, the market in general and the upcoming challenge of integrating tw telecom inc. (Nasdaq: TWTC)'s network (if the acquisition announced in June goes through as planned).
He sees SDN as presenting two possibilities. The first is the separation of the control and data planes for the big iron within the network -- the switches that live at its heart. That's "interesting," says Waters, but more appealing to big data center operators in the near term. The second SDN use case -- transforming the provisioning process using Netconf/Yang, which the Open Networking Foundation embraced for configuring OpenFlow-enabled devices -- has much greater appeal. (See Netconf & Yang Go Mainstream.)
"That piece of it has more relevance for us," Waters says, because it enables a common data model that is abstracted from vendor-specific provisioning processes, allowing the network operator to provision in a common way across its network, based on service definitions that drive specific network configurations.
"That is really interesting because today, the provisioning process is not as abstracted, it's more specific to each vendor's equipment," he says. "If that part takes hold, it changes our business model."
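To make the abstraction concrete, here is a minimal sketch of the idea behind Yang-modeled provisioning: one vendor-neutral service definition is rendered into a standard Netconf payload that any compliant device should accept. The service-definition fields and the `interface_config` helper are hypothetical illustrations; only the `ietf-interfaces` model namespace comes from the IETF standard. In practice an orchestrator would push this payload over a Netconf session (for example with a library such as ncclient) rather than print it.

```python
import xml.etree.ElementTree as ET

# Namespace of the standard ietf-interfaces YANG model (RFC 7223).
IETF_IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def interface_config(service):
    """Render a vendor-neutral service definition (a hypothetical dict)
    into the Netconf <config> payload a Yang-aware device would accept."""
    config = ET.Element("config")
    interfaces = ET.SubElement(config, "interfaces", xmlns=IETF_IF_NS)
    iface = ET.SubElement(interfaces, "interface")
    ET.SubElement(iface, "name").text = service["port"]
    ET.SubElement(iface, "description").text = service["customer"]
    ET.SubElement(iface, "enabled").text = "true"
    return ET.tostring(config, encoding="unicode")

# The same abstract definition drives the same payload regardless of vendor.
payload = interface_config({"port": "GigabitEthernet0/0/1",
                            "customer": "acme-l2vpn"})
print(payload)
```

The point Waters is making is that the operator's provisioning logic deals only with the abstract service definition; the vendor-specific details collapse into the shared data model.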
Greater customer control
When NFV is added to that more abstracted provisioning process, the network operator is in a position to give its customers the ability to turn up specific functions -- such as firewalls, for instance -- in an automated, on-demand service model. At the very least, the network operator's own provisioning processes become much more automated and flexible, he says.
"Customers don't get enough control and they are asking for it," Waters says. "I don't think the economic model of how they pay for bandwidth or for these virtualized network functions has been worked out yet. But we know that is where we are headed."
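The on-demand model Waters describes can be sketched as a self-service catalog: the customer selects a virtualized function, and the operator's automation turns it up without a manual provisioning ticket. Everything below -- the catalog entries, the flat monthly pricing, the `CustomerPortal` class -- is a hypothetical illustration; as Waters notes, the real economic model is not yet worked out.

```python
# Hypothetical catalog of virtualized network functions with placeholder pricing.
CATALOG = {
    "firewall": {"monthly_usd": 50},
    "load-balancer": {"monthly_usd": 75},
}

class CustomerPortal:
    """Toy model of customer-facing, automated function turn-up."""

    def __init__(self):
        self.active = {}  # customer name -> set of running functions

    def enable(self, customer, function):
        """Activate a virtualized function on demand for a customer."""
        if function not in CATALOG:
            raise ValueError(f"unknown function: {function}")
        self.active.setdefault(customer, set()).add(function)
        return CATALOG[function]["monthly_usd"]

portal = CustomerPortal()
cost = portal.enable("acme", "firewall")
print(cost, portal.active["acme"])
```

The substance of the shift is in `enable`: turning up a firewall becomes an API call against the abstracted provisioning layer rather than a vendor-specific configuration exercise.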
The biggest challenge to getting there is winning hardware vendors' support for the software interfaces the more abstracted model requires. On that front, Waters admits some mild frustration with vendors that purport to offer a more open interface to their equipment but wind up adding proprietary extensions or other "enhancements" that become roadblocks to building that abstraction layer.
"We'd love to see more open interfaces -- at least for the stuff we want to use," he says. "Things tend to remain open as long as there is a pretty level playing field. But once there is a clear winner, things start to bend in that direction."
Vendors still tend to develop proprietary extensions, even to open and widely used standards such as BGP, Waters notes. "We try not to corner ourselves and have any proprietary extensions force us to have to buy from a specific vendor," he comments.
In general, he sees vendors' virtualization efforts as still in their very early stages and "not fully baked yet," but he is more intrigued by some of the pure software plays he is seeing than by hardware-software combinations.