As Sterling said, it's not an energy thing but really a speed-of-progress thing. I imagine a similar argument crops up over and over again in a lot of IEEE working groups (as Jodam alludes to...)
I've been told that Ron Johnson's is a fringe opinion, possibly even within Cisco. Fair enough; I did suspect that a little.
It's a tradeoff every time. The standards process does have its benefits, in getting everybody aligned on a technology. But I can understand there being some impatience with it.
I was moderator for this panel and can provide a little more color. Actually, among the panelists (Huawei, Infinera, Fujitsu), there was not much debate on client-side evolution - everyone appeared to agree that a 400G client side was the best route forward for the IEEE, and, with that in place, the ITU-T would be able to move forward on the line side - most likely a flexible line rate that would accommodate 400G in some way.
The debate came after the panel closed, when Cisco cornered Craig and me and raised the argument that the client side, like the line side, should also be a flexible standard - not a fixed 400G interface. Their point, as Craig has described, is that standards development is too slow and a flexible client rate would allow faster innovation. When they bought Lightwire, they made the same points about innovation speed as a driver for the deal.
The benefits to Cisco are clear: they want to be able to innovate faster than the market so they can maintain leadership in switching/routing. This was the first I'd heard of the flexible client option, and I haven't yet sorted out how feasible it is, especially in terms of interoperability. But it would eliminate the need to constantly reconvene to sort out the next client rate.
I see no reason why this needs to happen UNLESS there is something to be gained, such as power efficiency on HUGE fiber networks. That would be a function of how much data is being transmitted in any given second or minute. However, if poorly implemented it could lead to bottlenecks, trading speed away to save power (at best). IMO, they should only start doing this once major networks reach 1 Tbit/s throughputs and beyond. At that point the possibility of bottlenecks is quite small because the pipes are so wide; at 400G the pipe is too small to say there won't be any bottlenecks when throttling back to save power. In a 1 Tbit/s standard, the link could negotiate speeds from 100G up to 1 Tbit/s as needed, based on what the pipes on the far end can handle and the traffic requirements signaled ahead of the data (a rough sketch of that kind of negotiation is below). Still, there is something to be said for designing network equipment that uses LESS power and LESS processing muscle to get a bigger and bigger job done more efficiently. This is why you don't see 10GigE in residential consumer equipment yet... it's not efficient or cost-effective.
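To make that negotiation idea concrete, here is a minimal sketch of how a flexible-rate link might pick its speed. It's purely illustrative, assuming a simple 100G step granularity; the rate list, the negotiate_rate name, and the capability/load parameters are my own assumptions, not anything defined by the IEEE or ITU-T.

```python
# Illustrative sketch only - not an actual IEEE/ITU-T mechanism.
# Assumes rates are negotiated in 100G steps up to 1 Tbit/s.

SUPPORTED_RATES_GBPS = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def negotiate_rate(local_max_gbps: int, remote_max_gbps: int,
                   offered_load_gbps: float) -> int:
    """Pick the lowest supported rate that covers the offered load,
    capped by what both ends of the link can actually run."""
    cap = min(local_max_gbps, remote_max_gbps)
    candidates = [r for r in SUPPORTED_RATES_GBPS if r <= cap]
    if not candidates:
        raise ValueError("no common supported rate")
    # Step down to save power when traffic is light, but never below
    # what the current load needs (the bottleneck concern above).
    for rate in candidates:
        if rate >= offered_load_gbps:
            return rate
    return candidates[-1]  # load exceeds the link cap; run flat out

# Example: a 1 Tbit/s-capable port talking to a 400G peer carrying 250G of traffic
print(negotiate_rate(1000, 400, 250))  # -> 300
```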
aaahhh... it seems just like yesterday we had this same debate when 100G was being conceptualized... while all these arguments make sense in one form or another, one needs to consider the cost implications... remember, there is a reason why Ethernet has been successful, and I do believe cost has played something of a role in that...