
Plexxi's SDN Really Flattens the Data Center

Software-defined networking (SDN) startup Plexxi divulged details of its architecture Wednesday, describing how its optical-ring layout can make a data center better suited for cloud services.

Light Reading outlined Plexxi's details back in September. The startup goes beyond pure software; it's offering top-of-rack 10Gbit/s Ethernet switches and a controller architecture to make them all SDN-like.

How's it all work? Let us remind you of what we discovered in September:

(Photo caption: Aha! We Knew It! With this technology, you'd think they wouldn't need so much funding.)

More specifically, Plexxi's switches are connected in a fiber-optic ring. When two items in the data center need to be linked (what Plexxi calls an "affinity"), the network ring configures accordingly. Plexxi refers to this as moving the network to suit the workload, rather than the other way around.

"We wanted to go out and create an actual network that was fully definable by software," says Mat Mathews, Plexxi's vice president of product management.

The difference might sound semantic, but it means there's no external application, sitting atop all the software, that tells the network what to do.

It also means the network has no tiers and none of the leaf/spine architecture that data center people talk about. With vendors boasting about how flat they can make the network, Plexxi seems to have found a way to be the flattest of all.

"Other networks just use protocols to make the network look flat," Mathews says.

It's all run by a controller that's centralized but also includes a federated piece distributed to each switch. The setup is similar to the way OpenFlow gets deployed, but the inner workings are very different (and no, OpenFlow itself isn't supported yet). Plexxi uses algorithms and a global view of the network to decide how to configure the network.

In other words, rather than programming route tables, the controller looks at the needs of the workloads and calculates how the network ought to be getting used. Some of this can even happen automatically.
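To picture how a controller of that flavor might reason, here is a minimal, hypothetical sketch: it takes declared affinities between switches on a ring and decides which pairs should get a direct optical path. Everything in it (the Affinity structure, compute_paths, the ring size, the path budget) is invented for illustration; Plexxi hasn't published its algorithms, so treat this as a shape rather than its code.

```python
# Hypothetical sketch: a central controller turns declared workload
# affinities into direct optical paths on a switch ring. All names and
# numbers are invented for illustration; this is not Plexxi code.
from dataclasses import dataclass

RING_SIZE = 12  # assumed number of top-of-rack switches on the optical ring


@dataclass
class Affinity:
    src_switch: int  # switch hosting one workload
    dst_switch: int  # switch hosting the workload it talks to
    min_gbps: int    # capacity requested for this pairing


def ring_hops(a: int, b: int) -> int:
    """Hop count between two switches if traffic has to stay on the ring."""
    d = abs(a - b)
    return min(d, RING_SIZE - d)


def compute_paths(affinities, budget=4):
    """Pick which switch pairs get a direct optical path.

    The affinities that would otherwise burn the most ring capacity
    (bandwidth times hop count) are served first, up to an assumed
    budget of simultaneously lit point-to-point paths.
    """
    ranked = sorted(
        affinities,
        key=lambda a: a.min_gbps * ring_hops(a.src_switch, a.dst_switch),
        reverse=True,
    )
    return [(a.src_switch, a.dst_switch) for a in ranked[:budget]]


if __name__ == "__main__":
    demo = [Affinity(0, 6, 10), Affinity(1, 2, 40), Affinity(3, 9, 10)]
    print(compute_paths(demo))  # [(0, 6), (3, 9), (1, 2)]
```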

This is where it's going to get tricky for Plexxi: To get partners involved, the company has to bend them toward this way of thinking. Partners might be used to using APIs for issuing commands to pieces of software. Plexxi calls for "affinity APIs," where a tool tells the network what it cares about -- high bandwidth or ample storage, for instance -- leaving the network to hash out the specifics.
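Here, equally hypothetical, is what an affinity API call might look like from a partner tool's side. The endpoint, field names, and values are all invented; the point is only that the tool declares what it cares about (bandwidth, proximity) and leaves the routing specifics to the network.

```python
# Hypothetical sketch of an "affinity API" request from a partner tool.
# The endpoint and field names are invented; nothing here is Plexxi's
# published interface.
import json
import urllib.request

affinity_request = {
    "workload": "analytics-cluster-7",
    "peers": ["storage-pool-2"],
    "cares_about": {
        "bandwidth_gbps": 40,  # "I need high bandwidth to my storage"
        "max_hops": 1,         # keep these endpoints close together
    },
}


def declare_affinity(controller_url: str) -> None:
    """POST the declared intent to a (hypothetical) controller endpoint."""
    req = urllib.request.Request(
        controller_url + "/affinities",
        data=json.dumps(affinity_request).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)


# declare_affinity("http://controller.example.local:8080")
```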

The ring setup does create extra latency, because Plexxi can't avoid the speed of light. But its setup isn't targeting high-frequency traders that need to shave nanoseconds off a transmission.

The entire Plexxi collection is shipping, and the company has a couple of customers in production, Mathews says. One is trying to offer a premier cloud service where workloads are guaranteed to be a maximum number of hops apart on the network. Another is offering an elastic storage service based on big disk arrays; Plexxi's gear treats each customer's chunk of storage as a workload to be mated to the rest of the customer's virtual network.


— Craig Matsumoto, Managing Editor, Light Reading

Comments
mmathews
Saturday December 8, 2012 1:59:30 PM

Yes - realtime packet optimizations can be made by the controller without changing the physical (WDM) topology, similar to how OpenFlow controllers can be used to better utilize existing network capacity.

With The Lights Out
Friday December 7, 2012 12:36:34 PM

Here is a related, and quite informative, piece of conversation:

http://www.lightreading.com/messages.asp?piddl_msgthreadid=240884&piddl_msgid=351712#msg_351712

With packetized traffic, isn't it the case, however, that provisioning a network to match the longer-term average capacity demands between given access points will inevitably be suboptimal, since the actual realtime traffic loads are hardly ever at those longer-term average levels?

There is value, though, in a capability to reconfigure the W/TDM channels to provide minimal (or no) packet-switching hops between the access points with the greatest expected/average packet traffic loads. But in addition to such non-realtime, macro-level connectivity optimization, isn't there a need for realtime, micro-level bandwidth allocation optimization within the macro-level configurations?

mmathews
Friday December 7, 2012 10:46:14 AM

Our goal is not to make the network follow every change in traffic patterns, because (a) that is a very reactive approach, and (b) it can create more problems than it solves. If an application is known to be bursty, it can have a specifically defined policy that allocates it extra capacity or lightly used (less oversubscribed) paths, so you don't get into those issues. In terms of automatic reaction, however, the controller also looks at statistics on the network and can identify "top talkers" that need more capacity and allocate that capacity dynamically, move those flows to more lightly used links if that capacity is available, or shift them to a more direct path through the network if one exists. The explicitly defined policies still take precedence. So the controller starts with explicitly defined needs and fits those first, then leverages statistical knowledge of traffic flows over time to find cases where moving some heavy traffic onto more direct paths benefits those devices, and benefits all other traffic by lightening the roads everyone else is using.
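A minimal sketch of that ordering, with invented names and thresholds rather than anything from Plexxi's actual controller: explicit policies are fitted first, and only then does leftover capacity go to the heaviest observed flows.

```python
# Hypothetical sketch of the ordering described above: explicit policies
# first, then statistics-driven reallocation to "top talkers". Names and
# numbers are invented for illustration.

def allocate(policies, flow_stats, spare_capacity_gbps=100):
    """Return a list of (flow, gbps) allocations, policies first."""
    allocations = []

    # 1. Explicitly defined policies always take precedence.
    for flow, gbps in policies:
        allocations.append((flow, gbps))
        spare_capacity_gbps -= gbps

    # 2. Remaining capacity goes to the heaviest observed flows.
    top_talkers = sorted(flow_stats.items(), key=lambda kv: kv[1], reverse=True)
    for flow, observed_gbps in top_talkers:
        if spare_capacity_gbps <= 0:
            break
        grant = min(observed_gbps, spare_capacity_gbps)
        allocations.append((flow, grant))
        spare_capacity_gbps -= grant

    return allocations


print(allocate(
    policies=[("bursty-app", 20)],
    flow_stats={"backup-job": 30, "web-tier": 5},
))
# [('bursty-app', 20), ('backup-job', 30), ('web-tier', 5)]
```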

With The Lights Out
Thursday December 6, 2012 2:02:25 PM

"the controller looks at the needs of the workloads and calculates how the network ought to be getting used. Some of this can even happen automatically"

Any specifics on this, in particular the "automatically" part?

Can the (optical) network react automatically to packet traffic bursts between given network access points, and how fast? And how are competing bursts handled when they cannot all be accommodated because of a physical capacity constraint (such as the baud rate of a shared physical port)?

dwx
Thursday December 6, 2012 10:56:40 AM

I think the expense, and just the lack of off-the-shelf components to integrate into an L2 switch, is what kept vendors away. What is being done with the Plexxi device has been done with transport equipment for some time, except the other way around: integrating an L2 Ethernet switch and client ports into an optical platform. However, those devices are not top-of-rack devices, have poor Ethernet density, and typically are not deployed in a data center at all.

Not to hijack the Plexxi thread, because I do like what they have done, but some optical cross-connect vendors like Calient are beginning to go into the DC space for the same reason: to interconnect switches via a pure optical high-speed link to take care of on-demand east-west needs. Of course, they do not have the orchestration software Plexxi has.

Craig Matsumoto
Wednesday December 5, 2012 7:06:40 PM

Do you think it's just the expense that drives other switch makers away from fiber?

matmathews
Wednesday December 5, 2012 7:05:26 PM

Thanks! The ability to create point-to-point optical paths between switches allows us to create very specific topologies that meet the needs of the data center workloads. This is much more efficient than just creating one big flat ring because, for instance, we could provide very high capacity in one section to support, say, a Hadoop workload by taking that capacity from other areas that don't need it. Or we could create direct optical paths between far-flung switches to create hop-less, low-latency paths. These are just some examples of the capabilities of the optical domain.

Craig - also, just to clarify: we don't target low-latency apps that are restricted to a single rack. Instead, we focus on improving latency across the data center, especially for workloads that extend beyond the rack. So instead of worrying about shaving off nanoseconds per switch hop, we eliminate the hops entirely. The "speed of light" latency is actually quite nominal compared to incurring a switch hop, so there are great efficiencies to be gained by understanding low-latency requirements across the data center.

dwx
Wednesday December 5, 2012 4:50:58 PM

I've been waiting a while for someone to integrate DWDM technology into device interconnects beyond using it for multi-chassis setups. Juniper supports 128Gbps fabric interconnects on the EX4550 today but doesn't have the direct device-to-device wavelength capacity like this does. I'm not sure how beneficial that is versus just creating a big 240Gbps ring.
