Optical components

Photonic Integration Starts Slowly at 100G

Photonic integration is going to be a necessity for some designs, according to a recent Heavy Reading report, but that trend hasn't yet taken hold at 100Gbit/s.

It's certainly needed at the client-side module level, but for the most part, systems makers probably won't have to make any grand photonic-integration moves until the 400Gbit/s generation.

"It's not like every company needs to do a large-scale, parallel-integration chip like Infinera Corp. (Nasdaq: INFN) just to succeed at 100Gbit/s," says Sterling Perrin, Heavy Reading's optical analyst.

There's some photonic integration involved in the Optical Internetworking Forum (OIF) definitions for systems, but it's low-level serial integration -- that is, the merging of neighboring functions.

Still, photonic integration is poised to become a necessary fact of optical design, according to the recent report, "Photonic Integration, Super Channels & the March to Terabit Networks." Perrin wrote the report along with Gazettabyte editor Roy Rubenstein.

It's a follow-up to Perrin's 2008 report on the topic, and Perrin admits he thought photonic integration -- the combining of multiple components and functions into a single device -- would be closer to ubiquity by now.

Coherent detection has been one factor in the delay. "The industry is using sophisticated electronic processing to enable 100Gbit/s transport using less sophisticated optics (specifically, optics built to handle 25Gbit/s data rates)," Perrin and Rubenstein write in the report.

Photonics also differ from electronic semiconductors. Chip integration tends to be a no-brainer, because it almost always brings benefits in size, power consumption and cost. But for optical components, a photonic integrated circuit (PIC) doesn't always bring those power savings. Even the cost savings of photonic integration are sometimes modest, because of the R&D costs that have to come first.

None of this means photonic integration should be dismissed. For one thing, there's the size factor.

The CFP optical module, a client-side standard for 100Gbit/s, is due for a shrink, and the smaller CFP2 and CFP4 modules are almost certain to take advantage of photonic integration.

"They've got these form-factor requirements that are very strict. You really do need to combine a lot of components to reduce the size of what's fitting into the module," Perrin tells Light Reading. "For CFP4, everybody we've talked to says you need photonic integration."

Photonic integration will get its starring role in optical transport as systems strive for interface speeds of 400Gbit/s and 1Tbit/s. It's looking like those interfaces will be built of superchannels, multiple wavelengths of light that are combined to carry one traffic stream. A superchannel requires multiple lasers, so parallel integration becomes a pretty obvious way to go.

That's certainly been the case so far. Every 1Tbit/s demo has used superchannels, and the 400Gbit/s designs shown off earlier this year used two 200Gbit/s channels. (See Ciena Pushes Ahead to 400G, AlcaLu Can Do 400G Too and Huawei Strives for Optical Respect.)
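
To make the superchannel arithmetic concrete, here is a small illustrative sketch (an editorial example, not something from the report): it counts how many optical carriers, and therefore how many transmit lasers, a superchannel needs for a given target rate. The 200Gbit/s-per-carrier case matches the 400G demos above; the other figures are assumptions.

    import math

    def carriers_needed(target_gbps, per_carrier_gbps):
        # Number of optical carriers (and hence transmit lasers) in a superchannel.
        return math.ceil(target_gbps / per_carrier_gbps)

    print(carriers_needed(400, 200))   # -> 2, as in the 400G demos cited above
    print(carriers_needed(1000, 100))  # -> 10 lasers for a 1Tbit/s superchannel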



— Craig Matsumoto, Managing Editor, Light Reading

Pete Baldwin 12/5/2012 | 5:24:13 PM
re: Photonic Integration Starts Slowly at 100G

The report notes that people have given up on finding a "Moore's Law for optics," which has been the battle cry for silicon photonics and photonic integration.


The reasons are obvious, and I wish I'd called people out on them earlier.  Namely, Moore's Law works because you're talking about one element (the transistor) that's repeated over and over on a die. Make the element smaller, all kinds of good things happen.


Optics involves an arbitrary number of different elements, and they're analog. Different case.

fiberslut 12/5/2012 | 5:24:12 PM
re: Photonic Integration Starts Slowly at 100G

i don't get why moving to 100G reduces the need for photonic integration?


so far photonic integration - at 10G - has helped consolidate a bunch of lasers, modulators, muxes, detectors and so on into a single device.


at 100G, don't you need twice as many lasers: one for the Tx and one for the coherent local oscillator?  don't you need 4 times as many modulators to do QPSK modulation instead of OOK modulation?  don't you need 4 times as many detectors (or is it 8 times as many)?  and with QPSK at 100G, don't all these need to work at 25Gb/s instead of 10Gb/s?


so can light/heavy reading explain why you need LESS integration?  and why these 25Gb/s components are cheaper than 10Gb/s ones?
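
For readers working through the arithmetic in this question, the rough tally below compares the optical building blocks in a generic 10G OOK interface with those in a generic single-carrier 100G coherent DP-QPSK design (one transmit laser plus one local oscillator, a nested IQ modulator per polarization, balanced detection). The counts are a textbook-style illustration, not figures from the Heavy Reading report.

    # Illustrative per-interface component counts for two assumed, generic designs.
    ook_10g = {
        "lasers": 1,             # one directly or externally modulated Tx laser
        "modulators": 1,         # a single MZI (or direct modulation)
        "photodiodes": 1,        # one PIN/APD receiver
        "symbol_rate_gbaud": 10,
    }

    dp_qpsk_100g = {
        "lasers": 2,              # Tx laser plus coherent local oscillator
        "modulators": 4,          # 2 polarizations x I and Q child MZIs
        "photodiodes": 8,         # 4 balanced pairs behind the 90-degree hybrids
        "symbol_rate_gbaud": 25,  # ~100G / (2 pol x 2 bits/symbol), before FEC overhead
    }

    for name, parts in (("10G OOK", ook_10g), ("100G DP-QPSK", dp_qpsk_100g)):
        print(name, parts)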


 

ninjaturtle 12/5/2012 | 5:24:11 PM
re: Photonic Integration Starts Slowly at 100G

The visionary approach INFN started in 2001, plus all the years of R&D spent since, will position them very nicely both with the current DTN-X and with future large-scale PIC systems. Today's customers have the assurance that the past 11 years INFN has put into developing a PIC-based system, rather than a PIC component, will enable them to grow their networks seamlessly, because at the end of the day carriers purchase systems, not components. From day one, INFN's system architecture has been designed around their proprietary PIC, supported by custom hardware, software and firmware uniquely designed to meet their specifications. A true game changer. There were many iterations of MP3 players. Then came the iPod. Also a game changer.

specsavers 12/5/2012 | 5:24:10 PM
re: Photonic Integration Starts Slowly at 100G

If you are considering integration, shouldn't the operational integration piece be considered too? Just wondered why this wasn't covered in any great detail here; it seems strange to leave it out.

Sterling Perrin 12/5/2012 | 5:24:10 PM
re: Photonic Integration Starts Slowly at 100G

To clarify the report findings: we DON'T conclude at all that LESS integration is needed at 100G, as fiberslut's post states! In fact, we state that line side photonic integration is specified in the OIF MSA (linear integration) and so has become a requirement. In reading this article, the summary looks consistent with the report statements to me but, obviously, people posting are getting a different read.


Let me have another go at the discussion with Craig that is part of this article: Back in 2008 there was a question of whether EVERY supplier would need parallel photonic integration to succeed at 100G. Fast forward to today and we see that Infinera remains in the large scale PIC camp (as was expected) but others have not been forced to follow. What has been required by everyone is coherent detection with DSPs. Therefore, looking at the industry as a whole, for 100G, coherent detection has been a bigger development than large scale PICs.


Looking into the future, it appears that super channels for 400G+ will be the insertion point for parallel PICs on the line side, as super channels are multi-laser designs. Here, it is likely that suppliers will need to move to parallel integration, we conclude. We can't say this for certain, however, because suppliers have not shared this kind of detail with us yet.


Hope this helps.


Sterling

^Eagle^ 12/5/2012 | 5:24:09 PM
re: Photonic Integration Starts Slowly at 100G

Note: the cost of the die is not the biggest factor in this argument.  If you roll up the BOM, the die cost becomes one of the smaller items.  Even high-end MZIs for 100Gig.  And lasers, and detectors.


The real cost comes in packaging.  You get by far the most bang for your buck by co-packaging.  Monolithic at die level does not really buy you much when you roll up the entire BOM and manufacturing costs.  yes, it buys you some, but not as much as you would think.


Packaging, burn in, test... this is where the real savings are.  


sailboat
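
To illustrate the "roll up the BOM" point, here is a toy cost roll-up. Every number is a made-up placeholder chosen only to show how packaging, burn-in and test can dwarf the die cost; none of it is real vendor data.

    # Hypothetical cost buckets for a gold-box optical subassembly (placeholder numbers).
    bom = {
        "die (laser/modulator/detector)": 120.0,
        "package, hermetics, fiber attach": 400.0,
        "burn-in and test": 250.0,
        "other materials and assembly": 180.0,
    }

    total = sum(bom.values())
    for item, cost in bom.items():
        print(f"{item:40s} {cost:7.0f}  ({cost / total:5.1%})")
    print(f"{'total':40s} {total:7.0f}")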

^Eagle^ 12/5/2012 | 5:24:09 PM
re: Photonic Integration Starts Slowly at 100G

I recall that early on IFN wanted to make PICs for everyone.  I saw some presentations to this effect. Only when the big Tier 1 OEMs turned them down did they pivot and make their own system.


it was a wise move on their part.  


sailboat

^Eagle^ 12/5/2012 | 5:24:09 PM
re: Photonic Integration Starts Slowly at 100G

wow, the same old argument pushed by IFN shills.


so, how about the truth.  Even mighty IFN does not do monolithic integration, but co-packages several building blocks.  Arrays of devices, yes, but not completely monolithic.  They use two different material systems inside the PIC, co-packaged because of the better properties of the different materials: actives in InP, passive mux/demux based on another material.  Even they recognize the differences in material properties.


The underlying message is that you get the most bang for your buck by removing multiple gold boxes and the fiber interconnects between them.  The big gain is in co-packaging and testing / burn-in.  Smaller gains are to be had from monolithic integration.  For instance, if you use InP as the mux, you have much higher insertion loss (physics) and then need more EDFAs.  And considering wafer size for InP, if you integrate too much, you have few devices per wafer and poor yields.


While we will never know for sure, it would be really interesting to see a true cost breakdown for IFN's PIC.  


The other dirty little truth: if IFN had not raised massive amounts of cash during the tail end of the boom in VC funding, they could not have made it happen in the financial climate of the last several years.  NO one would have given them over 300M in VC funding, and even more in debt funding.  They got lucky with timing.  And in the end, IFN is having some challenges in becoming truly profitable.  (I mean old-fashioned making money by positive margins each quarter, not by financial accounting tricks.)


I know of at least 5 other labs that could have done more than IFN if they had 300M.  And note: the first PIC was not from IFN.  The first PICs came more than a decade earlier, also in InP.


Now let us take the case of the modulator as mentioned in the post.  You have two material paths: InP and lithium niobate.  In both cases there is a trade-off in performance if you monolithically integrate all the functions for DP-QPSK.  Yes, you get fewer die if you monolithically integrate, BUT very high insertion loss, which means more heat to dissipate and a need for more EDFAs... which means more cost and more complexity.  OR, you use 4 individual MZIs and do the other functions in bulk optics.  You can actually get better performance this way... but then there is the cost of integrating the bulk optics (lenses, PBS, PBC, Faraday rotators, etc.), and the small increased reliability risk.  Note bulk optics have low IL and really low cost.
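
The insertion-loss trade-off described above can be framed in link-budget terms. The sketch below uses assumed, purely illustrative loss and power figures (they vary widely by design) to show how extra modulator loss eats the optical margin that an amplifier then has to restore.

    def needs_extra_amp(laser_power_dbm, modulator_loss_db, required_launch_dbm):
        # Rough check: does modulator insertion loss push the launch power below
        # what the link needs, forcing an extra booster EDFA? Placeholder figures only.
        launch_dbm = laser_power_dbm - modulator_loss_db
        return launch_dbm < required_launch_dbm, launch_dbm

    # Assumed numbers: a lossier monolithically integrated InP DP-QPSK modulator
    # versus four discrete LiNbO3 MZIs combined with low-loss bulk optics.
    print(needs_extra_amp(15.5, 13.0, 3.0))  # (True, 2.5)  -> extra amplification needed
    print(needs_extra_amp(15.5, 7.0, 3.0))   # (False, 8.5) -> margin to spare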


So the argument about monolithic integration is somewhat based on your own philosophy.  Kind of a Ford vs Chevy thing.


The real key for the present and foreseeable future is that IF you co-package the optical chain (lasers, modulators, detectors, drivers), you get a big savings in cost, space and power consumption.


This is where the real bang for the buck comes with integration.  


This may change in the future.


but who is willing to invest hundreds of millions to move the benchmark further?


Note: I know this post will get lots of flame from IFN shills.  I am prepared for it.  I am not an unknowledgeable person, having worked in InP for over a decade.


sailboat

fiberslut 12/5/2012 | 5:24:05 PM
re: Photonic Integration Starts Slowly at 100G

sailboat - your description of INFN's PICs as a hybrid component using multiple material types co-packaged in a module is DEAD WRONG.  you obviously haven't even bothered to read the many technical papers INFN's team has published on the topic, where they provide lots of details that explain what they do, and demonstrate that you do not know what you are talking about.  i didn't even bother to read the rest of your post - how can any of it be taken seriously when you have the basic premise so fundamentally wrong?

redface 12/5/2012 | 5:24:03 PM
re: Photonic Integration Starts Slowly at 100G

 "INFN got lucky in timing"?  I think the answer is actually "yes".  Although INFN got a lot of funding in 2002-2005, it got substantial funding in 2001 before the fiber bubble burst.  And after the bubble burst, the investors had to continue investing to prevent complete loss of earlier substantial investment.  In other words, if INFN started in 2005 instead of 2001,  there probably would not be an INFN today.  


It seems that INFN now has a head start on everyone else in photonic integration.  With transmission speeds going from 100Gb/s to 400Gb/s while individual light beams modulate at 25Gb/s, a massive number of identical elements in array form is required.  Photonic integration is becoming indispensable.  This game suits INFN (or other players who can do photonic integration).  This integration is not necessarily the kind of integration that INFN has been doing, though (integrating lasers, modulators, AWGs etc. in one chip), which is geared for point-to-point transmission.
