Light Reading - Telecom News, Analysis, Events, and Research
Comments
mmathews
Saturday December 8, 2012 1:59:30 PM

Yes - realtime packet optimizations can be made by the controller without changing the physical (WDM) topology, similar to how OpenFlow (OF) controllers can be used to better utilize existing network capacity.

With The Lights Out
Friday December 7, 2012 12:36:34 PM

Here is a related, and quite informative, piece of conversation:

http://www.lightreading.com/messages.asp?piddl_msgthreadid=240884&piddl_msgid=351712#msg_351712

With packetized traffic, though, isn't it the case that provisioning a network to match the longer-term average capacity demands between given access points will inevitably be suboptimal, since actual realtime traffic loads are hardly ever at those longer-term average levels?

There is value, though, in a capability to reconfigure the W/TDM channels to provide minimal (or no) packet-switching hops between the access points with the greatest expected/average packet traffic loads. But in addition to such non-realtime, macro-level connectivity optimization, isn't there a need for realtime, micro-level bandwidth allocation optimization within the macro-level configurations?

mmathews
Friday December 7, 2012 10:46:14 AM

Our goal is not to make the network follow every change in traffic patterns, because (a) that is a very reactive approach, and (b) it can create more problems than it solves. If an application is known to be bursty, it can have a specifically defined policy that allocates it extra capacity or routes it over lightly used (less oversubscribed) paths, so you don't get into those issues.

In terms of reacting automatically, the controller also looks at statistics on the network and can identify "top talkers" that need more capacity and allocate that capacity dynamically, move those flows to more lightly used links if that capacity is available, or use a more direct path if one is available through the network. Explicitly defined policies still take precedence. So the controller starts with explicitly defined needs and fits those first, then leverages statistical knowledge of traffic flows over time to find cases where moving some heavy traffic to more direct paths benefits those devices, and benefits all other traffic by lightening the roads that everyone else is using.
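To make the ordering described above concrete, here is a minimal sketch of a controller pass that fits explicit policies first and then reallocates "top talkers" onto lighter links. All class, field, and threshold names here are hypothetical illustrations, not Plexxi's actual API:

```python
# Hypothetical sketch of the allocation order described above:
# explicitly defined policies are fitted first, then traffic
# statistics drive dynamic reallocation of heavy flows onto
# more lightly used links. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    capacity_bps: float
    used_bps: float = 0.0
    def free_bps(self): return self.capacity_bps - self.used_bps
    def utilization(self): return self.used_bps / self.capacity_bps
    def reserve(self, bps): self.used_bps += bps

@dataclass
class Flow:
    name: str
    measured_bps: float  # observed rate from network statistics

def allocate(flows, links, policies, top_talker_bps=1e9):
    """Return {flow name: link name}; explicit policies take precedence."""
    placed = {}
    # 1. Fit explicitly policed flows first, on their assigned links.
    for f in flows:
        if f.name in policies:
            link = policies[f.name]
            link.reserve(f.measured_bps)
            placed[f.name] = link.name
    # 2. Statistical pass: heaviest unplaced flows ("top talkers")
    #    get the most lightly used link that still has room;
    #    everything else lands on the least-loaded link.
    rest = sorted((f for f in flows if f.name not in placed),
                  key=lambda f: f.measured_bps, reverse=True)
    for f in rest:
        fits = [l for l in links if l.free_bps() >= f.measured_bps]
        pool = fits if (f.measured_bps >= top_talker_bps and fits) else links
        link = min(pool, key=Link.utilization)
        link.reserve(f.measured_bps)
        placed[f.name] = link.name
    return placed
```

The key property is the precedence: the statistical optimization only sees capacity left over after the explicit policies have been satisfied.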

With The Lights Out
Thursday December 6, 2012 2:02:25 PM

"the controller looks at the needs of the workloads and calculates how the network ought to be getting used. Some of this can even happen automatically"

Any specifics on this, in particular the "automatically" part?

Can the (optical) network react automatically to packet traffic bursts between given network access points, and how fast? How are possible competing bursts that cannot be accommodated by some physical capacity constraint (such as baud rate of a shared physical port) handled?

dwx
Thursday December 6, 2012 10:56:40 AM

I think the expense, and just the lack of off-the-shelf components to integrate into an L2 switch, is what kept vendors away. What is being done with the Plexxi device has been done with transport equipment for some time, except the other way around: integrating an L2 Ethernet switch and client ports into an optical platform. However, those devices are not top-of-rack type devices, have poor Ethernet density, and typically are not deployed in a data center at all.

Not to hijack the Plexxi thread, because I do like what they have done, but some optical cross-connect vendors like Calient are beginning to go into the DC space for the same reason: to interconnect switches via a pure optical high-speed link to take care of on-demand east-west needs. Of course, they do not have the orchestration software Plexxi has.

Craig Matsumoto
Wednesday December 5, 2012 7:06:40 PM

Do you think it's just the expense that drives other switch makers away from fiber?

matmathews
Wednesday December 5, 2012 7:05:26 PM

Thanks! The ability to create point-to-point optical paths between switches allows us to create very specific topologies that meet the needs of the data center workloads. This is much more efficient than just creating one big flat ring because, for instance, we could provide very high capacity in one section to support, say, a Hadoop workload, taking that capacity from other areas that don't need it. Or we could create direct optical paths between far-flung switches to create hop-less, low-latency paths. These are just some examples of the capabilities of the optical domain.

Craig - also, just to clarify: we don't target low-latency apps that are restricted to a single rack. Instead we focus on improving latency across the data center, especially for workloads that extend beyond the rack. So instead of worrying about shaving off nanoseconds per switch hop, we eliminate the hops entirely. The "speed of light" latency is actually quite small compared to incurring a switch hop, so there are great efficiencies to be gained by understanding low-latency requirements across the data center.
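A rough back-of-the-envelope check of the claim above (the specific figures here are illustrative assumptions, not measured numbers from any vendor): light in fiber travels at roughly 2×10^8 m/s, so a 100 m run across a data center costs about 0.5 µs of propagation, while one store-and-forward switch hop costs frame serialization plus processing, typically a few times more.

```python
# Back-of-the-envelope comparison of fiber propagation delay vs. a
# store-and-forward switch hop. All figures are illustrative.
FIBER_SPEED_M_PER_S = 2.0e8  # light in fiber, roughly 2/3 of c

def propagation_us(distance_m):
    """Propagation delay over a fiber run, in microseconds."""
    return distance_m / FIBER_SPEED_M_PER_S * 1e6

def store_and_forward_us(frame_bytes, link_bps, processing_us=1.0):
    """Serialization delay to receive the full frame, plus an
    assumed per-switch processing delay, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6 + processing_us

# 100 m of fiber: ~0.5 us of "speed of light" latency
print(f"propagation over 100 m: {propagation_us(100):.2f} us")
# one extra hop, 1500-byte frame at 10 Gb/s: ~2.2 us
print(f"one switch hop:         {store_and_forward_us(1500, 10e9):.2f} us")
```

Under these assumptions, eliminating one switch hop saves several times more latency than the propagation delay of a longer direct fiber run, which is the trade the comment is describing.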

dwx
Wednesday December 5, 2012 4:50:58 PM

I've been waiting a while for someone to integrate DWDM technology into device interconnects beyond using it for multi-chassis setups. Juniper supports 128Gbps fabric interconnects on the EX4550 today but doesn't have the direct device-to-device wavelength capacity this does. I'm not sure how beneficial that is versus just creating a big 240Gbps ring.
