The next step in optical transmission: technology options, standards, and modulation issues
December 22, 2008
40- and 100-Gbit/s networking represents a major opportunity for both system and component vendors. The rapidly growing number of broadband subscribers and the increased use of 10-Gigabit Ethernet are forcing carriers and enterprise network managers to look for higher bandwidth solutions -- and components are a key part of the 40/100-Gbit/s ecosystem.
Certainly, there is a lot of current interest in deploying 40/100 Gbit/s in the metro -- over 60 percent of the respondents to an online poll held during the Light Reading Webinar on which this report is based saw the metro as the immediate application for these speeds, followed by regional/long-haul applications. About 10 percent of respondents had already implemented such interfaces, while about 80 percent plan to do so within the next one to two years.
“We are seeing that 40-Gbit/s interfaces and components are needed now, and 100 Gbit/s is being planned really as a logical extension to those 40-Gbit/s solutions, rather than as something separate,” says Simon Stanley, principal consultant at Earlswood Marketing. “For the short reach -- the data center and high-performance-computing applications -- there’s 4x10-Gigabit Ethernet, 40-Gbit/s Ethernet, or InfiniBand now; and IEEE 802.3ba is moving ahead with the combined 40- and 100-Gbit/s Ethernet. Within the box, 40GBase-KR4 is probably going to dominate, certainly for Ethernet-based solutions. For long-haul and metro systems, we see 40-Gbit/s interfaces shipping now, and those 300-pin modules are starting to replace the linecards in some systems.”
So this report reviews the technology and component aspects of the industry’s move towards 40- and 100-Gbit/s networking, and outlines some of the remaining challenges.
Here’s a hyperlinked contents list:
Page 2: Market Trends
Page 3: Technology Options
Page 4: 10-Gigabit Ethernet Lessons
Page 5: IEEE 802.3ba
Page 6: 40-Gbit/s Within the Box
Page 7: Long-Haul History
Page 8: 40-Gbit/s Modulation Issues
Page 9: 100-Gbit/s Modulation Issues
Webinar
This report is based on a Webinar, 40 & 100 Gbit/s Technology & Components, moderated by Simon Stanley, Principal Consultant, Earlswood Marketing Ltd., and sponsored by Emerson Network Power and Mintera Corp. An archive of the Webinar may be viewed free of charge by clicking here.
Stanley is also author of the recent Light Reading Insider, 40-Gbit/s Technologies: Lower Costs Will Drive New Demand, which also covers product and company information for:
Applied Micro Circuits Corp. (Nasdaq: AMCC)
Bay Microsystems Inc.
CoreOptics Inc.
Cortina Systems Inc.
Dune Networks
Enigma Semiconductor Inc.
EZchip Technologies Ltd. (Nasdaq: EZCH)
Finisar Corp. (Nasdaq: FNSR)
Integrated Device Technology Inc. (IDT) (Nasdaq: IDTI)
JDSU (Nasdaq: JDSU; Toronto: JDU)
Luxtera Inc.
Mellanox Technologies Ltd. (Nasdaq: MLNX)
Mintera Corp.
Opnext Inc. (Nasdaq: OPXT)
Optium Corp. (Nasdaq: OPTM)
StrataLight Communications
Xelerated Inc.
Click here for further details.
Related Webinar archives:
10-Gbit/s Ethernet Technology & Components
Making 40-Gig Transport a Reality
Migration From Sonet/SDH to High-Performance Carrier Ethernet Transport
Testing Carrier Ethernet
Ethernet Optical Networking
— Tim Hills is a freelance telecom writer and journalist. He's a regular author of Light Reading reports.
Next Page: Market Trends
The core network trend is obvious: growing demand for bandwidth, coming from all directions. Consumers, armed with higher-speed xDSL, fiber-to-the-home, cable, and mobile data access, can select from a growing number of bandwidth-intensive services, such as IPTV and YouTube. On the business side, Gigabit Ethernet connections to almost every PC are becoming the norm, and the number of 10-Gigabit Ethernet-connected servers is growing. The latter effect is accelerating following the introduction of the first 10GBase-T interface cards during 2008/9.
Unsurprisingly, bottlenecks are developing in both aggregation nodes and long-haul networks, and service providers are now starting to make strategic investments in higher capacity – for example, AT&T Inc. (NYSE: T) and Verizon Communications Inc. (NYSE: VZ) are already heavily investing in 40-Gbit/s links. And enterprises also need to invest in new capacity to handle the growing number of 10-Gbit/s-connected servers and service solutions, as noted in the LRTV report on silicon photonics.
And it is not only the external links where bandwidth increases are needed. As Doug Sandy, senior staff technologist and systems architect for embedded computing at Emerson Network Power, points out, as the overall network bandwidth grows, the in-shelf network-element bandwidth also has to grow to keep up.
“It lags a little bit behind, but, as these network bandwidths go up, we have seen an increase from 1-Gbit/s bandwidth per blade to multiple-gigabit bandwidths per blade, and moving on up to 10 Gbit/s,” he says. “Eventually we do see 40-Gbit/s in-shelf network bandwidths.”
According to Earlswood’s Stanley, the main applications for 40- and then 100-Gbit/s networks are in the long-haul and metro layers, and also in high-performance computing.
“For the long haul and metro, these high-speed links are required for connecting core routers together,” he says. “These links are currently mainly 10 Gbit/s, with some 40-Gbit/s build-out, and we are expecting 100 Gbit/s will be required within the next two to three years.”
Within the data-center and high-performance-computing sectors, these high-speed networks are required for the aggregation of 1- and 10-Gbit/s Ethernet links. 40 Gbit/s is needed now, and, again, Stanley sees 100 Gbit/s being needed within two to three years.
Next Page: Technology Options
40- and 100-Gbit/s systems need a wide range of components in their construction. The most obvious are the optical modules and the physical-layer devices used within those modules. However, physical-layer devices to drive the backplanes are also essential, and these devices are among the most challenging because of their requirement for high-speed analog circuits.
Also needed within a complete system are the media access control (Ethernet MAC, in this case) and high-speed packet-processing solutions, such as network processors or field-programmable gate arrays (FPGAs). There are already network processors from companies like Bay Microsystems, EZChip, and Xelerated that are capable of 40-Gbit/s aggregate bandwidth throughput, and all these companies are working on next-generation solutions for 100 Gbit/s and beyond.
The technology options for 40- and 100-Gbit/s networking fall largely under the three market areas already identified. For the long-haul market, the solutions are based either on DWDM (with direct detection or coherent detection) or on the International Telecommunication Union, Standardization Sector (ITU-T) Optical Transport Network (OTN). To date, all such high-speed systems have used direct detection, but this may change.
“There is definitely a move to look to coherent detection,” says Niall Robinson, VP for product marketing at Mintera. “It is a different kind of receiver architecture, and it promises quite a bit more in terms of performance, although there are some technology hurdles that have to be overcome.”
And, in other developments relevant to long haul, the Optical Internetworking Forum (OIF)'s Physical and Link Layer Working Group (PLL WG) launched a 100-Gbit/s long-haul project in May 2008 for a DWDM transmission implementation agreement (IA) focused on a specific modulation format and receiver approach. The aim is to agree on a Forward Error Correction (FEC) algorithm suitable for long-haul 100-Gbit/s applications, complementing the work underway defining 100-Gbit/s Ethernet in the Institute of Electrical and Electronics Engineers Inc. (IEEE) and the new 100-Gbit/s level of the Optical Transport Hierarchy (OTH) in the ITU-T.
It’s important to realize that no one is actually talking of providing, say, a single 100-Gbit/s electrical interface on the client side of long-haul systems. The idea that 4x25 Gbit/s would give the right balance of technology cost and system capability seems to be becoming established in the standards groups. Similarly, on the DWDM optical side, the structure will depend on the chosen modulation format. For example, if that is polarization-multiplexed QPSK (see page 8), this might be roughly a 4x28-Gbit/s electrical signal squeezed into a single wavelength (not four separate wavelengths of 28 Gbit/s).
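As a back-of-the-envelope sketch of that arithmetic (the roughly 12 percent framing-plus-FEC overhead assumed below is a typical figure, not one quoted in this report):

```python
# Back-of-the-envelope check of the "4x28 Gbit/s into one wavelength" figure.
# Assumptions (not from the report): ~12% combined OTN framing + FEC overhead,
# PM-QPSK carrying 2 bits/symbol on each of 2 polarizations.

payload_rate_gbps = 100.0          # 100-Gbit/s Ethernet MAC rate
overhead_factor = 1.12             # assumed OTN + FEC overhead (~12%)
line_rate_gbps = payload_rate_gbps * overhead_factor    # ~112 Gbit/s on the line

lanes = 4
per_lane_gbps = line_rate_gbps / lanes                  # ~28 Gbit/s electrical lanes

bits_per_symbol = 2 * 2            # QPSK (2 bits) x 2 polarizations
symbol_rate_gbaud = line_rate_gbps / bits_per_symbol    # ~28 Gbaud optical symbol rate

print(f"line rate       ~{line_rate_gbps:.0f} Gbit/s")
print(f"per-lane rate   ~{per_lane_gbps:.0f} Gbit/s (4 electrical lanes)")
print(f"PM-QPSK symbols ~{symbol_rate_gbaud:.0f} Gbaud on a single wavelength")
```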
For backplane or chip-to-chip connections, solutions are based on the 10-Gigabit Ethernet series of standards – the 10GBase-KR solution (also known as Backplane Ethernet or 802.3ap) and extensions to the 10-Gbit/s XAUI parallel interface for 40 and 100 Gbit/s.
According to Emerson’s Sandy, for a backplane interface to be viable, it is important that the number of lanes be kept to a minimum. So 40 Gbit/s using 10GBase-KR is a four-lane technology. “The chip-to-chip interconnects, the XAUI extensions, and the 40-Gbit/s extension are four lanes,” he says. “It is interesting to note that for 100 Gbit/s the approach has 10 lanes, which could be quite a routing challenge when doing any significant number of channels.”
For the data center and high-performance computing, the immediate options are InfiniBand or to use 4x10- or 10x10-Gbit/s Ethernet (using something like a QSFP+ module). Longer term, there will be the option of 40- and 100-Gbit/s Ethernet based on the work being undertaken by the IEEE 802.3ba Task Force.
Sandy also points out that there is some misunderstanding about the difference between 4x10 Gbit/s and true 40 Gbit/s as it affects data centers.
“With 4x10 Gbit/s you need to balance traffic on each 10-Gbit/s connection separately – and that is difficult. With a 40-Gbit/s connection you don’t need to balance across multiple channels, so you can get a much better efficiency out of a true 40 Gbit/s than you can out of 4x10 Gbit/s.”
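A toy sketch makes the point concrete. The flow sizes and the CRC-based hashing below are purely illustrative assumptions, standing in for the way a link-aggregation or ECMP scheme pins each flow to a single member link:

```python
# Toy illustration of the 4x10G vs true-40G balancing point above. With link
# aggregation, each flow is pinned to one 10G member (here via a CRC hash), so
# a single large flow -- or an unlucky hash -- can overload one member even
# though the total load fits easily in 40 Gbit/s. All flow rates are made up.
import zlib

flows_gbps = {"elephant-flow": 12.0}                       # one big flow, > 10 Gbit/s
flows_gbps.update({f"flow-{i}": 1.2 for i in range(15)})   # plus 15 small flows
offered = sum(flows_gbps.values())                         # 30 Gbit/s in total

members = [0.0] * 4                                        # four 10G member links
for name, rate in flows_gbps.items():
    members[zlib.crc32(name.encode()) % 4] += rate         # pin each flow to one member

print(f"offered load     : {offered:.1f} Gbit/s")
print(f"worst 10G member : {max(members):.1f} Gbit/s "
      f"-> {'overloaded' if max(members) > 10 else 'ok'}")
print(f"single 40G link  : {offered:.1f} Gbit/s "
      f"-> {'overloaded' if offered > 40 else 'ok'}")
```

A single flow larger than 10 Gbit/s can never be carried by one member of the aggregate, and uneven hashing can overload a member even when the total load fits comfortably in a true 40-Gbit/s pipe.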
Next Page: 10-Gigabit Ethernet Lessons
Experience garnered over the years with 10-Gbit/s transmission (Ethernet or otherwise) suggests some lessons for 40/100-Gbit/s technologies. According to Earlswood’s Stanley, these include:
Value from similar baud rates. All the principal 10-Gbit/s technologies – 10-Gigabit Ethernet, Sonet (OC-192/STM-64), and OTN (OTU2) – used similar baud rates, which is practically and economically very useful to the industry.
Limit the number of standards options. There were too many options for 10-Gigabit Ethernet – 10GBase-SR/LR alone accounts for most use.
Need for a backplane solution. The original 10-Gigabit Ethernet specifications did not include a backplane solution, which led to market fragmentation as different vendors took different approaches. Recently, the 802.3ap specification has added such a backplane to 10-Gigabit Ethernet, and this has both a four-lane solution and a single-lane serial solution.
Too many OTU2 mappings. As with the proliferation of 10-Gigabit Ethernet options, for OTN there have probably been too many OTU2 mappings for the different 10-Gbit/s Ethernet port options. This has created problems at the 40-Gbit/s rate because there were so many 10-Gbit/s Ethernet flavors that had to be mapped into a 40-Gbit/s solution. It can be avoided in the future by using the same Physical Coding Sublayer (PCS – one of the several sublayers into which the Ethernet Physical Layer, or PHY, is divided) across the port options.
InfiniBand cable is good in the data center. The use of InfiniBand cables for 10-Gbit/s Ethernet in the data center with the 10GBase-CX4 solution has proved very successful.
Need for a range of modules as the technology develops. A range of optical modules will be essential, but probably not as many as were developed for 10-Gbit/s Ethernet (see Figure 1) – it would be a mistake to develop too many. Perhaps a three-stage approach would be adequate for 40-Gbit/s Ethernet: initially 300-pin, followed by something like a Xenpak, and then by QSFP+.
Next Page: IEEE 802.3ba
The key standard for 40/100-Gbit/s Ethernet will be IEEE 802.3ba, currently under development after receiving Project Authorization Request approval in December 2007. 802.3ba aims to:
Support full-duplex operation only
Preserve the 802.3/Ethernet frame format utilizing the 802.3 MAC
Preserve the minimum and maximum frame sizes of the current 802.3 standard
Support a BER better than or equal to 10E-12 at the MAC/PLS service interface
Provide appropriate support for OTN
Support MAC data rates of 40 and 100 Gbit/s
There is also going to be a range of Physical Layer options, supporting distances from 1m over the backplane to 40km over singlemode fiber, as shown in Table 1.
Table 1: P802.3ba Physical Layer Support
Physical layer to support at least | 40 Gbit/s | 100 Gbit/s |
40km on SMF | No | Yes |
10km on SMF | Yes | Yes |
100m on OM3 MMF | Yes | Yes |
10m over copper | Yes | Yes |
1m over a backplane | Yes | Yes |
Figure 2 shows the current proposal in the standards work for the different lower layers, with five groups of Physical Layer options covering 100-Gbit/s fiber, 100-Gbit/s copper, 40-Gbit/s fiber, 40-Gbit/s copper, and 40 Gbit/s over the backplane.
Emerson’s Sandy points out that this PHY model leverages some strengths of the existing 802.3 model, and gives flexibility.
“What is really most interesting to me is the copper interfaces. There is a 100-Gbit/s and a 40-Gbit/s copper-cable interface – the CR10 and the CR4. These are incredibly complex PHYs with FEC in order to make reliable connections, because the required bit error rate is 10E-12,” he says. “Note also that the last number in the name – the 10 in CR10 or the 4 in CR4 – is the number of lanes required to make the connections. So CR10 is 10 lanes, KR4 is 4 lanes. There are different solutions even within copper on how you get there.”
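To make the naming convention concrete, a small illustrative helper (not part of any standard tooling) can pull the lane count and per-lane rate out of such PHY names:

```python
# Quick illustration of the naming convention described above: the trailing
# number in an 802.3ba PHY name (CR4, CR10, KR4, ...) is the lane count, so the
# per-lane rate falls out directly. This parser is purely illustrative.
import re

def lanes_and_rate(phy_name: str):
    """Return (lane_count, per-lane Gbit/s) for names like '100GBase-CR10'."""
    m = re.fullmatch(r"(\d+)GBase-[A-Z]+(\d*)", phy_name)
    if not m:
        raise ValueError(f"unrecognised PHY name: {phy_name}")
    total_gbps = int(m.group(1))
    lanes = int(m.group(2)) if m.group(2) else 1      # no trailing digit => serial PHY
    return lanes, total_gbps / lanes

for name in ("100GBase-CR10", "40GBase-CR4", "40GBase-KR4", "10GBase-KR"):
    lanes, rate = lanes_and_rate(name)
    print(f"{name:>14}: {lanes:2d} lane(s) x {rate:.0f} Gbit/s")
```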
The 802.3ba schedule is fairly tight – the first draft is due for completion in September 2008, and a completed standard is due in mid-2010.
“It is a timescale that doesn’t have much room for delays or issues to crop up – not that any are currently expected,” says Mintera’s Robinson. “But the timeline is critical for some of the electronic components that are being looked at for the 100-Gbit/s work – for example, the framing or FEC ASICs that will need to be developed. So this timeline is getting a lot of attention in the industry.”
Next Page: 40 Gbit/s Within the Box
To create a network element – a shelf – with internal 40-Gbit/s switching requires a number of components: the switching elements themselves, processors to do useful work on the 40-Gbit/s streams, and a backplane to transport the signals between the different blades in the shelf.
Switches & processors
It’s important to appreciate just how much throughput a typical 40-Gbit/s shelf would have to handle. Sixteen blade slots would be typical, giving an aggregate of 16x40-Gbit/s plus a couple of 100-Gbit/s uplinks – 840 Gbit/s in total. And, since many of these shelves will have redundant networks for high availability, the aggregate total that has to be provided for in terms of component density could effectively be doubled again to well over a terabit per second. Silicon with this kind of density is probably still many years away.
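The shelf arithmetic behind those figures is straightforward:

```python
# The shelf arithmetic behind the figures quoted above.
slots = 16
slot_rate_gbps = 40
uplinks = 2
uplink_rate_gbps = 100

aggregate = slots * slot_rate_gbps + uplinks * uplink_rate_gbps   # 840 Gbit/s
with_redundancy = 2 * aggregate                                   # ~1.7 Tbit/s

print(f"single fabric : {aggregate} Gbit/s")
print(f"redundant     : {with_redundancy} Gbit/s (well over a terabit per second)")
```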
On the processing side, dedicated network processors are emerging that can handle 40-Gbit/s streams, although other approaches are possible. General-purpose processors are currently completely inadequate for this task. A rule of thumb (known as Amdahl’s rule) on how much processing capacity is needed to support a given level of network performance states that a balanced system should have about 1 MHz of CPU for every 1 Mbit/s of I/O and 1 Mbyte of memory.
“Actually, from what I have seen, up to 8 MHz of CPU can be needed for network applications,” says Emerson’s Sandy. “General-purpose processors with that sort of horsepower are more than five years away with any sort of gradual Moore’s Law improvements to processor architectures. To get there needs architectural enhancements. Ones that might help are offloading the stacks plus some traffic management and routing, especially given that most of these processors are multicore.”
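As a rough sizing sketch (reading both rules of thumb as per-Mbit/s ratios, and assuming a hypothetical 2.5GHz general-purpose core), the numbers come out as follows:

```python
# Rough sizing based on the rules of thumb above. The per-Mbit/s reading of
# both figures and the 2.5 GHz core clock are assumptions for illustration.
stream_mbps = 40_000                 # one 40-Gbit/s stream, in Mbit/s

amdahl_mhz = 1 * stream_mbps         # classic rule: ~1 MHz of CPU per Mbit/s of I/O
observed_mhz = 8 * stream_mbps       # Sandy's observation: up to ~8 MHz per Mbit/s

core_ghz = 2.5                       # assumed general-purpose core clock
for label, mhz in (("Amdahl", amdahl_mhz), ("observed", observed_mhz)):
    cores = mhz / (core_ghz * 1000)
    print(f"{label:>8}: ~{mhz/1000:.0f} GHz aggregate, i.e. ~{cores:.0f} cores at {core_ghz} GHz")
```

Even on the conservative reading, a single 40-Gbit/s stream calls for tens of gigahertz of aggregate CPU, which is why offload and architectural enhancements, rather than raw clock speed, are seen as the way forward.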
An important design point is that blade/shelf power consumption is likely to be an issue when switching multiple lanes of 10 Gbit/s in order to get 40 Gbit/s with high port density. A conservative estimate is that at least a 4x improvement in power efficiency over that of 10GBase-KR is needed for reasonable port densities to be viable.
Backplanes
Currently, backplanes are in a transition to 10GBase-KR, which has a serial 10-Gbit/s interface, as defined in 802.3ap. This uses 10.3125-Gbaud signaling with 64b/66b encoding, and can provide a 10E-12 BER even with a closed eye pattern at the receiver. Standards work on closed-eye-pattern technology is now underway in the PCI Industrial Computer Manufacturers Group (PICMG).
“These are incredibly complex PHYs in order to receive and decode the backplane signals, and a great deal of attention must be paid to the channel characteristics of the backplane, as well as every characteristic of the entire channel,” says Sandy. “Backplanes that have been designed for XAUI or 1 Gbit/s that are deployed today probably won’t work for 10GBase-KR.”
The upcoming 40-Gbit/s backplane standard is the four-serial-lanes 40GBase-KR4, currently being defined in IEEE802.3ba. This is very much like a 4-lane implementation of 10GBase-KR, and will be backward compatible with 10GBase-KR, so backplanes with 4x10GBase-KR will support 40GBase-KR4. And 40GBase-KR4 will negotiate down to a single lane of 10 Gbit/s, which will make it possible to mix and match on a blade-by-blade basis in a shelf – an excellent feature.
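As a quick check, the 10.3125-Gbaud figure follows directly from the 64b/66b line coding, and 40GBase-KR4 is essentially four such lanes run side by side:

```python
# Where the 10.3125 figure comes from: 64b/66b coding adds 2 framing bits to
# every 64 payload bits, so the serial line rate is the payload rate x 66/64.
payload_gbps = 10.0
lane_rate_gbaud = payload_gbps * 66 / 64          # 10.3125 Gbaud per lane
print(f"10GBase-KR  : 1 lane  x {lane_rate_gbaud} Gbaud")

# 40GBase-KR4 runs four such lanes in parallel across the backplane.
print(f"40GBase-KR4 : 4 lanes x {lane_rate_gbaud} Gbaud "
      f"({4 * payload_gbps:.0f} Gbit/s payload)")
```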
Beyond 40 Gbit/s there is a standards black hole, as the IEEE currently has no plans for a 100-Gbit/s backplane standard. However, practically, 10- and 40-Gbit/s backplanes will probably prove adequate for some time to come.
Timelines
To pull these various developments together, Figure 3 shows a potential industry technology roadmap. The blue diamonds represent industry open-standards work that is occurring, or likely to occur, in PICMG, which provides the AdvancedTCA (ATCA) and MicroTCA open standards. Work (the left blue diamond) is currently underway to incorporate both 10-Gbit/s serial Ethernet and four-lane 10-Gbit/s Ethernet into the ATCA platform.
The red diamond represents an estimated date for the completion of the IEEE 802.3ba specification. A little later, it can be expected that its 40-Gbit/s technique will be incorporated within the ATCA platform as well (the right blue diamond).
In terms of in-shelf systems, current 1000Base-KX or other 1-Gbit/s approaches, and also XAUI, are likely to persist for some time, depending on where the particular network element is deployed in the network. Not all network elements are going to need more in-shelf bandwidth in the short term – as network bandwidths increase, a gradual drift upwards is more likely than a sudden jump for some elements.
“We can expect in-shelf networks to start deploying 10GBase-KR systems relatively soon, as 10GBase-KR components become more available and in increasing channel densities,” says Sandy. “Sometime after the IEEE specification is completed, essentially by 2011, we should start seeing some 40GBase-KR systems being deployed. And it is likely that the switching capacity is not going to support 16 slots immediately – that will increase over time as both the network element requirements and the switch aggregate throughput capacity increase.”
Nevertheless, challenges remain. Sandy sees these as being principally:
Development of high-port-density 40GBase-KR4 switches. The technology of these switches and the PHYs will dictate how quickly the industry can incorporate the port densities needed to support a full shelf of 40 Gbit/s.
Completion of a 40/100-Gbit/s specification by IEEE 802.3ba. The schedule is very aggressive, but critical.
Development of low-power 40GBase-KR4 MACs, PHYs, and switches. As the switching frequency and lane density increase, the power density needs to stay constant or decrease.
Development of network-accelerated server processors. These will be needed in addition to network processors.
Next Page: Long-Haul History
The long-haul 40-Gbit/s market has so far had a short, but torrid, technology history, illustrated in Figure 4.
Starting in 2006, 40 Gbit/s was typically deployed via the shelf-based subsystems shown in Figure 4, which were then very popular. Importantly, in these early 40-Gbit/s solutions, there was no traffic on the backplane – all the traffic would enter a particular linecard on the client interface, be processed, and exit that same card on the DWDM interface. As the industry moved on and the 40-Gbit/s market began to mature, it became apparent that the shelf-based subsystem was not going to be the long-term vehicle for network deployments, and that customers were looking for linecards that would be plugged into their own shelves. So the industry went through a period of exploring customized linecards or daughterboards that would plug into a customer’s existing solution.
Then, very quickly through 2007, the industry moved straight to 300-pin modules for 40 Gbit/s, and this is the current position. The second-generation DWDM solutions are based on Phase-Shift Keying (PSK) technology, which can support the needed 50GHz channel spacing. For OTN, ITU-T SG15 is working on mappings for 40- and 100-Gbit/s Ethernet into OTNs.
“The focus here is to enable customers to be able to deploy the latest technology in a standardized form factor that they are used to handling – with what they have been used to from 10 Gbit/s,” says Mintera’s Robinson. “We expect the 300-pin form factor to last through 2008 and into 2009, and then maybe in 2010 you are looking at the very first pluggables, depending on the component integration that happens in the industry.”
He stresses that all of these form factors have been deployed in the industry to date, and that there are still customers – those not designing their own linecards – who are looking for shelf-based subsystems. So providers of 40-Gbit/s, or even future 100-Gbit/s, solutions will have to offer this entire portfolio for some time to come.
Next Page: 40-Gbit/s Modulation Issues
A further complication in the 40-Gbit/s long-haul market is the large number of signal modulation formats, of which Table 2 lists the most important. The columns cover an alphabet soup of formats, and the rows show characteristics that are important to system performance.
Table 2: Summary of 40-Gbit/s Modulation Formats
Characteristic | ODB / PSBT | NRZ-DPSK | NRZ-ADPSK | RZ-ADPSK | RZ-DQPSK | PM-QPSK |
OSNR sensitivity @ 2 x 10E-3, dB (1) | 17.5 | 12.5 | 13 | 12.5 | 13.5 | 12.5 |
Nominal reach with EDFA, km (2) | 700 | 1600 | 1600 | 2200 | 1400 | 1700 |
Filter tolerant (supports 50GHz grid) | Yes | Impacts reach | Yes | Yes | Yes | Yes |
PMD tolerance without PMDC, ps | 2.5 | 3 | 3.5 | 3.5 | 6 | 10 (3) |
Sensitive to non-linearity | No | No | No | No | Yes | Yes |
Complexity / cost | Low | Low | Low | Low / medium | High (but improving) | High |
Optical Duobinary, sometimes known as Phase-Shaped Binary Transmission (ODB/PSBT), was the first 50GHz-capable modulation format that the industry developed. Its typical optical signal-to-noise ratio (OSNR) requirement is higher than that of any of the Phase-Shift Keying (PSK) modulation formats, which significantly limits its transmission reach.
The PSK modulation formats gain in terms of OSNR sensitivity, and all the PSK formats listed give excellent reaches. However, according to Robinson, basic classical Differential PSK (DPSK) has a problem supporting 50GHz transmission, with reach affected quite severely. So Mintera has developed Adaptive DPSK (ADPSK) alternatives – Non-Return-to-Zero ADPSK (NRZ-ADPSK) and Return-to-Zero ADPSK (RZ-ADPSK). These support the 50GHz grid without affecting the overall reach.
The rationale of ADPSK is shown in Figure 5. The horizontal axis of the graph is the bandwidth of the transmission line, starting from 63GHz, which is typical of a 100GHz-channel-spaced system. The vertical axis is the pre-FEC (forward error correction) OSNR needed for a BER of 10E-5 – a level that can be corrected by the FEC to be error-free post-FEC. The red line is a classic DPSK result: it has very good performance at 100GHz channel spacing, which is equivalent to the 63GHz 3dB bandwidth.
But, when 50GHz interleavers are introduced in increasing numbers (say, through the addition of more and more ROADMs), the transmission-line net bandwidth falls, and the OSNR required by the classic DPSK moves up the red line to the right – that is, classic DPSK suffers a performance penalty. With ADPSK, however, switching takes place between two modes, light and strong. Light is used for lightly filtered systems (such as ultra-long-haul, with few or no ADMs – the green curve), giving good performance for 100GHz spacing. For 50GHz spacing, the receiver is switched to strong mode (the blue curve), and can tolerate heavy optical filtering down to 28GHz. So the strong mode is appropriate for, say, regional transmission systems with many ROADMs.
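A toy sketch of that mode-selection logic might look like the following. The 45GHz crossover used here is a purely hypothetical threshold for illustration; the report says only that light mode suits lightly filtered, 100GHz-spaced links and strong mode suits heavily filtered 50GHz systems:

```python
# A toy sketch of the light/strong mode choice described above. The crossover
# bandwidth is hypothetical and purely illustrative, not a Mintera figure.
def adpsk_mode(net_channel_bandwidth_ghz: float) -> str:
    """Pick the receiver mode from the end-to-end (post-interleaver) bandwidth."""
    crossover_ghz = 45.0                     # assumed crossover point, for illustration
    return "light" if net_channel_bandwidth_ghz >= crossover_ghz else "strong"

for bw in (63.0, 50.0, 40.0, 28.0):          # GHz, from few filters down to heavy filtering
    print(f"net bandwidth {bw:4.0f} GHz -> {adpsk_mode(bw)} mode")
```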
There are a couple of further technology developments that the industry is looking at, shown in the final two columns of Table 2.
RZ Differential Quadrature PSK (RZ-DQPSK) has a better polarization mode dispersion (PMD) tolerance than a DPSK solution, but is sensitive to nonlinearities, and the complexity and cost are high, as several components tend to be doubled. This doubling is also seen in Polarization Multiplexed QPSK (PM-QPSK).
“PM-QPSK has the interesting capability of really driving down the line rate of the signal to effectively that of a 10-Gbit/s solution, but you are introducing quite a bit of complexity and cost in the overall components that you have to use,” says Robinson. “But certainly the DQPSK space this year is showing some new integrated components coming onto the market that promise that DQPSK could be a candidate for something like a 300-pin 5x7 module. So definitely DQPSK is improving from that point of view.”
Various forms of DQPSK will, however, be of crucial importance to 100-Gbit/s systems.
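The symbol-rate arithmetic behind Robinson’s remark about PM-QPSK bringing the line rate down is simple enough (the roughly 7 percent FEC and framing overhead assumed below is a typical figure, not one quoted in the report):

```python
# The arithmetic behind the remark that PM-QPSK brings the 40-Gbit/s line rate
# down to roughly that of a 10-Gbit/s signal. The ~7% overhead is an assumed,
# typical FEC/framing figure.
client_gbps = 40.0
line_gbps = client_gbps * 1.07               # assumed OTN/FEC overhead

formats = {
    "NRZ-DPSK": 1,        # 1 bit/symbol
    "RZ-DQPSK": 2,        # 2 bits/symbol
    "PM-QPSK": 4,         # 2 bits/symbol x 2 polarizations
}
for name, bits in formats.items():
    print(f"{name:>9}: ~{line_gbps / bits:.1f} Gbaud symbol rate")
```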
Robinson expects that 40-Gbit/s transponders will begin to reach the crucial price point of 2 to 2.5 times that of 10-Gbit/s transponders around late 2008 or into 2009. This will make 40-Gbit/s systems highly competitive with multiple 10-Gbit/s ones.
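Counting transponder cost alone, and ignoring other savings such as fewer wavelengths, the cost-per-bit arithmetic behind that price point works out as follows:

```python
# The cost-per-bit logic behind the 2x-2.5x price point quoted above. Prices
# are relative to a 10-Gbit/s transponder (= 1.0); breakeven against four 10G
# transponders is 4.0x, so 2x-2.5x leaves clear headroom per transported bit.
price_10g = 1.0
breakeven = 4 * price_10g                    # four 10G transponders per 40G wavelength
for price_40g in (2.0, 2.5):
    saving = 1 - price_40g / breakeven
    print(f"40G transponder at {price_40g:.1f}x a 10G one: "
          f"~{saving:.0%} lower cost per Gbit/s than 4 x 10G")
```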
Next Page: 100-Gbit/s Modulation Issues
For direct detection, many of the technologies developed for 40 Gbit/s will be exploitable at 100 Gbit/s, but from a DQPSK starting point. Figure 6 shows how potential ADQPSK and RZ-ADQPSK formats give an improved eye opening compared with that of classic DQPSK for a 110-Gbit/s signal sent through multiple filters.
(a) shows the significant degree of eye closure suffered by the classic DQPSK signal after it has passed through multiple filters. (b) shows the improvement that follows from developing a solution that uses a similar adaptive approach to that of 40-Gbit/s ADPSK, even though here it is an adaptive DQPSK solution. Finally, (c) shows the further improvement that results from extending (b) by using a return-to-zero version.
Coherent detection at 100 Gbit/s is a much tougher nut to crack, as the optical signal has to be converted back into the electrical domain for the coherent-detection processing – it can’t be done purely optically in any practical system (as yet) because of fundamental difficulties with optical heterodyning.
“One of the more significant technology hurdles for this is the ability to take the 100-Gbit/s signal and convert it into the digital domain so that you can then do that processing. You need a very fast analog/digital converter,” says Mintera’s Robinson.
The snag with analog/digital converters (ADCs) is that there is an inevitable tradeoff between the number of samples per second and the number of bits per sample – the greater the number of bits per sample (that is, the finer the sampling and so the more information that can be extracted from the signal), the fewer samples that can be taken per second. ADC technology does progress, and it even has its own version of Moore’s Law, called Walden’s Law. But this is pretty modest in comparison: For a given sampling rate, there is a 1.5-bit improvement in resolution every eight years.
Robinson points out that, assuming Walden’s Law continues to hold, extrapolating from current state-of-the-art, it would take the industry until beyond 2013 to reach 5-bit sampling at rates of around 50 to 60 gigasamples per second, which could be considered an ideal performance for 100-Gbit/s systems.
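That extrapolation can be reproduced roughly as follows; the assumed 2008 starting point of about 4 bits of resolution at 50 to 60 gigasamples per second is an illustrative guess, not a figure from the Webinar:

```python
# The extrapolation behind the 'beyond 2013' estimate. Walden's improvement
# rate (~1.5 bits per 8 years at a fixed sampling rate) is taken from the text;
# the 2008 starting resolution is an illustrative assumption.
bits_per_year = 1.5 / 8          # Walden's Law improvement rate
start_year, start_bits = 2008, 4.0
target_bits = 5.0                # 'ideal' resolution for 100-Gbit/s coherent detection

years_needed = (target_bits - start_bits) / bits_per_year
print(f"~{years_needed:.1f} years -> around {start_year + years_needed:.0f}")
```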
“This is a hurdle that the industry is looking at,” he says. “There are solutions being developed, mainly moving to a multichip solution. So you are seeing, for example, technologies that enable things like ‘systems on a chip’ or ‘system in a package’ to achieve the necessary ADC technology. But this is certainly one of the key pieces of the 100-Gbit/s coherent technology.”
— Tim Hills is a freelance telecom writer and journalist. He's a regular author of Light Reading reports.