Comments

Obaut -- very interesting, thanks. I've only skimmed the surface of RINA, but it does sound radical. I can see why you say the vendors wouldn't serve this up voluntarily.


Perhaps with the hardware/network-infrastructure-as-a-service business model, the equipment makers will have to get serious about capacity usage efficiency, instead of just supplying gear with higher nominal transmission bit rates?

Re IPv6 address length: there are future internet models that do not require a single global address space (search e.g. for Recursive Internet/Network Architecture, RINA/RNA), while still supporting flexible, user-group-controllable, unlimited global connectivity. These models certainly do not need 128-bit v6 + 32-bit v4 L3 + 48-bit L2 source and destination addresses (which, moreover, are presently often repeated many times due to NATing etc.) -- that is at least 52 bytes of plain address bits per packet, and many packets don't even carry that much payload data.
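A quick back-of-the-envelope check of that 52-byte figure, as a minimal Python sketch -- assuming one IPv6 + one IPv4 + one Ethernet MAC address in each direction, which is the stacking described above:

    # Per-packet address bits, assuming IPv6 + IPv4 + Ethernet MAC
    # carried for both source and destination.
    ipv6_bits, ipv4_bits, mac_bits = 128, 32, 48
    per_direction = ipv6_bits + ipv4_bits + mac_bits   # 208 bits
    both_directions = 2 * per_direction                # 416 bits
    print(both_directions / 8)                         # 52.0 bytes of plain address bits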

Progressive service providers and large user organizations should look into transitioning to these future internet architectures (no, present equipment vendors are not likely to deliver them voluntarily). Since the innovation is in the service models and architectures, many of the existing protocol and application implementations can be reused as-is under the new architectures, so no disruptive operational change is actually needed.


> Wouldn't service delivery/network architectural optimization deliver better application-payload-layer throughput, with better cost efficiency, even on today's 40Gbps WDM technology, than taking the present un-architected protocol patchwork to 100Gbps-and-beyond physical capacities?

Good point.  Is anyone doing serious work on that right now?

Maybe that's the next step after 400G -- buying terabit some time by more efficiently filling the pipes that are already there? Or would the gains really be worth it? I'm guessing issues like bigger packet headers for IPv6 (and OpenFlow) are immovable objects.


You can easily draw parallels with software development practices of building layers upon layers. Functionality that took 32 Kbytes on a C64 will take 32 Gbytes on Windows 7. It's progress.


Since internet packet traffic started dominating service provider traffic volumes, network capacity utilization has been very poor -- on the order of just 1 out of 10 timeslots carrying revenue-driving traffic. With the additional protocol overhead created by IPv6 (long headers) and, despite the vendor talk, the ever-increasing number of protocol layers (e.g. L5+ protocols over IPv4/6 over IPv6/4 (either or both often NATed) over MPLS over Ethernet over SDH/OTN), the ratio of revenue-generating payload bytes to non-revenue-driving byte timeslots is actually still shrinking. The coarse physical-layer capacity allocation granularity caused by equipment supporting only 'fat pipe' L1/L0 links makes the efficiency situation even worse.

What is the payoff in trying to further widen the 'fat pipes' that will be ever more poorly utilized (think of it: a 400Gbps pipe will in reality be carrying at most ~40Gbps of revenue-driving traffic on average), versus rationalizing the network and service delivery architectures to get the capacity utilization ratios to a reasonable level?

Wouldn't service delivery/network architectural optimization deliver better application-payload-layer throughput, with better cost efficiency, even on today's 40Gbps WDM technology, than taking the present un-architected protocol patchwork to 100Gbps-and-beyond physical capacities?
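To make that arithmetic concrete, here is a rough, hypothetical sketch (Python) of a small packet pushed through the layer stack named above, using nominal minimum header sizes -- the exact figures obviously depend on the encapsulations actually in use:

    # Hypothetical small packet through L5+ / IPv6 / IPv4 / MPLS / Ethernet.
    # Header sizes are nominal minimums; options, VLAN tags, tunnel
    # repetitions from NATing and SDH/OTN framing only add to this.
    payload  = 100      # assumed application payload, bytes
    l5_plus  = 20       # assumed transport/session overhead, e.g. a TCP header
    ipv6     = 40
    ipv4     = 20       # a v4-over-v6 or v6-over-v4 arrangement carries both
    mpls     = 4
    ethernet = 18       # 14-byte header + 4-byte FCS, ignoring preamble/IFG

    wire_bytes = payload + l5_plus + ipv6 + ipv4 + mpls + ethernet
    print(payload / wire_bytes)      # ~0.50 -- roughly half the bytes are overhead

    # And the utilization point: a 400Gbps pipe filled to ~10%
    pipe_gbps, utilization = 400, 0.10
    print(pipe_gbps * utilization)   # ~40Gbps of revenue-driving traffic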

Craig Matsumoto
Wednesday December 12, 2012 10:14:23 AM

So, at 60 wavelengths per fiber, that comes out to a channel spacing of, what ... 75GHz?  I've got a query in to Fujitsu to make sure I worked that out right.
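For what it's worth, a quick sanity check of my own, assuming the 60 channels occupy the full C-band (roughly 1530-1565nm) -- that assumption is mine, not Fujitsu's:

    # Channel spacing if 60 wavelengths share the C-band.
    c = 299_792_458                      # speed of light, m/s
    band_hz = c / 1530e-9 - c / 1565e-9  # ~4.4 THz of usable spectrum (assumed)
    print(band_hz / 60 / 1e9)            # ~73 GHz per channel -> a 75GHz grid fits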

Their press release talks about using 16QAM *and* DP-QPSK, but that doesn't seem right... I wonder if they mean they'll be able to run 400G and 100G on the same fibers.  I'm looking for clarification on that as well.
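The raw line-rate arithmetic may explain why both modulations show up -- the ~32Gbaud symbol rate and the two-carrier 400G arrangement below are my assumptions, not anything Fujitsu has confirmed:

    # Rough raw bit rates: QPSK carries 2 bits/symbol, 16QAM carries 4,
    # and dual polarization doubles both. Symbol rate is assumed.
    baud = 32e9
    dp_qpsk_bits  = 2 * 2                     # DP-QPSK: 4 bits/symbol
    dp_16qam_bits = 4 * 2                     # DP-16QAM: 8 bits/symbol
    print(baud * dp_qpsk_bits / 1e9)          # ~128 Gbps raw -> ~100G after FEC/framing
    print(2 * baud * dp_16qam_bits / 1e9)     # two carriers: ~512 Gbps raw -> ~400G net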


