The companies said Tuesday that they'll be using 16QAM modulation. The channel plan works out to 60 wavelengths per fiber, for a total potential bandwidth of 24Tbit/s per fiber.
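For readers who want to check that arithmetic, here's a minimal back-of-the-envelope sketch; the only inputs are the figures quoted above (60 wavelengths at 400Gbit/s each), nothing else is taken from the press release.

```python
# Quick sanity check on the capacity figure quoted above.
wavelengths_per_fiber = 60          # channel count per fiber
rate_per_wavelength_gbps = 400      # 400 Gbit/s per wavelength

total_tbps = wavelengths_per_fiber * rate_per_wavelength_gbps / 1000
print(f"Total potential capacity: {total_tbps:.0f} Tbit/s per fiber")  # -> 24
```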
In addition to speed, the companies will be focusing on the potential power savings of 400Gbit/s transmission. Their work is part of an Ultra-High Speed and Green Photonic Networks program under the Ministry of Internal Affairs and Communications (MIC).
The project will last "until 2014," according to the companies' press release.
Why this matters
It's looking certain that a 400Gbit/s generation will exist in optical transmission and in client-side Ethernet. While many in the industry have stressed the need to start researching 1Tbit/s immediately, projects such as this one would seem to indicate that carrier demand will support an intermediate step.
Obaut -- very interesting, thanks. I've only skimmed the very surface of RINA but it does sound radical. I can see why you say the vendors wouldn't serve this up voluntarily.
Perhaps with the hardware/network infrastructure-as-a-service business model, the equipment makers will have to get serious about capacity usage efficiency, instead of just supplying gear with higher nominal transmission bit rates?
Re IPv6 address length: There are future internet models that do not require one global address space (search, e.g., for Recursive Internet/Network Architecture, RINA/RNA), while still supporting flexible, user-group-controllable, unlimited global connectivity. These models certainly do not need 128-bit v6 + 32-bit v4 L3 + 48-bit L2 source and destination addresses (which, moreover, are presently often repeated many times due to NATing etc.). That adds up to at least 52 bytes of plain address bits for each packet, and many packets don't even carry that much payload data.
Progressive service providers and large user organizations should look into transitioning to these future internet architectures (no, present equipment vendors are not likely to deliver them voluntarily). Since the innovation is in the service models and architectures, many of the existing protocol and application implementations can be reused as-is, so no disruptive operational change is actually needed.
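To make the 52-byte figure in the comment above concrete, here is a minimal tally, assuming one Ethernet, one IPv4 and one IPv6 header per packet, each carrying a source and a destination address; the NAT-driven repetition the commenter mentions would only add to this.

```python
# Address bytes per packet when L2 + IPv4 + IPv6 addressing is stacked.
# Field sizes are the standard ones; the stacking itself is the commenter's
# scenario, not something measured here.
ETH_MAC_BYTES   = 6   # 48-bit Ethernet address
IPV4_ADDR_BYTES = 4   # 32-bit IPv4 address
IPV6_ADDR_BYTES = 16  # 128-bit IPv6 address

# source + destination at each layer
total_address_bytes = 2 * (ETH_MAC_BYTES + IPV4_ADDR_BYTES + IPV6_ADDR_BYTES)
print(total_address_bytes)  # -> 52 bytes of address fields alone
```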
> Wouldn't service delivery/network architectural optimization allow achieving better application payload layer throughput rates, with better cost efficiency, even using say current 40Gbps WDM technology than taking the present un-architected protocol patchwork to 100..Gbps physical capacities?
Good point. Is anyone doing serious work on that right now?
Maybe that's the next step after 400G -- buying some time before terabit by more efficiently filling the pipes that are already there? Or would the gains really be worth it? I'm guessing issues like bigger packet headers for IPv6 (and OpenFlow) are immovable objects.
You can easily draw parallels with software development practices of building layers upon layers. Functionality that took 32Kbyte on a C64 will take 32 Gbyte on Windows 7. It's progress.
Since internet packet traffic started dominating service provider network traffic volumes, network capacity utilization has been very poor -- on the order of just 1 out of 10 timeslots carrying revenue-driving traffic. With the additional protocol overhead created by IPv6 (long headers) and, despite the vendor speak, ever-increasing numbers of protocol layers (e.g. L5+ protocols over IPv4/6 over IPv6/4 (either or both often NATed) over MPLS over Ethernet over SDH/OTN), the ratio of revenue-generating payload bytes to non-revenue-driving byte timeslots in networks is actually still shrinking. The coarse physical-layer capacity allocation granularity caused by equipment supporting only 'fat pipe' L1/L0 links makes the efficiency situation even worse.
What is the payoff in trying to further widen the 'fat pipes' that will be ever more poorly utilized (think of it: a 400Gbps pipe will in reality, at best, be carrying ~40Gbps of revenue-driving traffic on average), versus rationalizing the network and service delivery architectures to get the capacity utilization ratios to a reasonable level?
Wouldn't service delivery/network architectural optimization allow achieving better application payload layer throughput rates, with better cost efficiency, even using say current 40Gbps WDM technology than taking the present un-architected protocol patchwork to 100..Gbps physical capacities?
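The comparison in the comment above can be sketched numerically. This is purely illustrative: the ~10% fill figure is the commenter's estimate, and the 80% fill and payload-share numbers for a rationalized 40G network are hypothetical assumptions, not data from the article.

```python
# Revenue-carrying throughput = nominal rate x fill ratio x payload share
# left after stacked protocol headers. All inputs below are assumptions.
def revenue_throughput_gbps(link_rate_gbps, fill_ratio, payload_share):
    return link_rate_gbps * fill_ratio * payload_share

fat_pipe  = revenue_throughput_gbps(400, 0.10, 0.90)  # 400G, ~10% utilized
slim_pipe = revenue_throughput_gbps(40, 0.80, 0.95)   # 40G, hypothetically 80%

print(f"400G pipe at 10% fill: ~{fat_pipe:.0f} Gbit/s of revenue traffic")
print(f" 40G pipe at 80% fill: ~{slim_pipe:.0f} Gbit/s of revenue traffic")
```

On those assumptions the 400G pipe still comes out ahead (~36Gbit/s vs ~30Gbit/s), but nowhere near the 10x its nominal rate suggests -- which is roughly the commenter's point.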
So, at 60 wavelengths per fiber, that comes out to a channel spacing of, what ... 75GHz? I've got a query in to Fujitsu to make sure I worked that out right.
Their press release talks about using 16QAM *and* DP-QPSK, but that doesn't seem right... I wonder if they mean they'll be able to run 400G and 100G on the same fibers. I'm looking for clarification on that as well.
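On the channel-spacing question above, a rough check: if the full C-band is used (roughly 4.4-4.8THz of usable spectrum, which is an assumption -- the press release doesn't state the band plan) and divided evenly across 60 wavelengths, the spacing does land in that neighborhood.

```python
# Implied channel spacing if 60 wavelengths share the assumed usable C-band.
usable_spectrum_ghz = 4500   # assumed C-band width in GHz (not confirmed)
channel_count = 60

spacing_ghz = usable_spectrum_ghz / channel_count
print(f"Implied channel spacing: {spacing_ghz:.0f} GHz")  # -> 75 GHz
```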