
Jumbo Optics

5/29/2003

What do a Boeing 747 and Hewlett-Packard Co.’s (NYSE: HPQ) HP 12C calculator have to do with optical networking gear? More than you might think. Let me explain.

A few years ago I had the chance to cash in some of my frequent flyer miles and take the family to Australia (where we saw some interesting fauna, much of it pouched). On that trip, as we were flying over the Pacific on a United 747 I pulled out my trusty HP-12C calculator to show the kids how many miles we’d be traveling as we toured Australia. I had this sudden sense of déjà vu. Hadn’t I been sitting in more or less the same seat on more or less the same plane at more or less the same time of day when I pulled out that same HP-12C on my first trip to Australia back in 1988?

I had, in fact. But what had changed? The airplane’s engines were certainly quieter than I remembered. The video consoles were a new wrinkle. But, in general, the transit wasn’t any smoother or shorter than the 747 I took 15 years before. Despite frequent use, I hadn’t seen any need to get a new calculator, either. I wonder if the guts in the HP-12C calculators they sell today are the same as the guts in my old workhorse?

When I landed, I looked it up. The Boeing 747 has been flying for 35 years now. Boeing’s still makin’ ’em. I bought my HP-12C back in 1985. They started making these things over 20 years ago.

It’s a law of technological progress – if you know the name of it, write and tell me – that, at a certain economic eigentime, radical innovation stops and steady refinement begins. Costs plummet and use increases dramatically. Somehow, you hit just the right balance. A standard is set that lasts for a long, long time. It’s true for airplanes and calculators – and, I think, especially after viewing the offerings at OFC and subsequent meetings, it’s coming true for optical networking, as well.

Indeed, after years of false visions and uncertainty, the fog is finally lifting. The components on display at OFC were not the one-off, glued-together kludges of the past few years. Instead, they are clearly designed, for the most part, for high volume, high-yield manufacturing. Many are actually integrated – like filters, switches, and attenuators all on one substrate. Even if no potential customer has any money to actually buy these babies, they are there ready to be used. And the emerging, stable standard – our equivalent of the 747 – also looks more certain than ever (drum roll, please...): It’s 10 Gig.

The 10-Gbit/s long-haul transmission standard with which Nortel Networks Corp. (NYSE/Toronto: NT) outflanked Lucent Technologies Inc. (NYSE: LU) in the glory days of telecom expansion is about to settle in as the dull, apple-pie standard for almost everything else. We’ll use it in the enterprise and at every point in the metro. We’ll use it for networks and for storage systems. Heck, I wouldn’t be surprised to see it getting pushed into the urban access network in a few years’ time. It’ll probably never make it to the desktop or the home, but it’ll get used everywhere else.

Now, I know this is not a new idea. People were talking about the convergence of 10-Gig Sonet and Ethernet four years ago. But what struck me at OFC was that 10 Gig is here now and it’s real and it’s stunning! XFP transceivers at 10 Gig are cheap to manufacture and getting cheaper. They’re the size of your thumb and they work! The next generation of protocol conversion will come via a 0.13 micron CMOS chip that interfaces directly with that transceiver and breaks out individual packets. Today’s 10-Gig line card is about to become radically simpler. In the very near future it will consist only of a transceiver in a socket, a handful of chips, discrete components, and a monolithic backplane interface.

In other words, the 10-Gig line card will be every bit as easy to assemble and handle as, well, an optical Gigabit Ethernet card. The big news, though – and the reason 10 Gig will become the default standard for years to come – is that the 10-Gig line card will be relatively cheap. That’s because, in a rare alignment of the telecom planets, companies have designed components that will work equally well for both Ethernet (10-Gig E) and Sonet (OC192) applications.

Increased production volumes will significantly alter the standard Sonet formula for bandwidth increases: The current rule of thumb is that, for every 4x increase in bandwidth, you increase the cost of parts by 2 to 2.5, yielding a net cost reduction of roughly 50 percent per bit. And production growth will have equal impact on the Ethernet formula whereby part costs double per 10x of bandwidth increase, which yields a net 80 percent decrease in cost per bit. Together, the resulting economies of scale should make the cost per part at OC192 only a very little bit higher than at OC48 and not a heck of a lot more than Gigabit Ethernet. Can that really be true? If so, it’s a big, big deal for this industry.
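
If you want to check those rules of thumb yourself, here's a rough back-of-envelope sketch in Python. The factors are the ones quoted above, not measured prices, so treat the output as illustration only.

    def cost_per_bit_change(bandwidth_factor, cost_factor):
        """Fractional drop in cost per bit for one generation step."""
        return 1 - (cost_factor / bandwidth_factor)

    # Sonet rule of thumb: 4x the bandwidth for 2x to 2.5x the part cost
    for cost_factor in (2.0, 2.5):
        print(f"Sonet step at {cost_factor}x parts cost: "
              f"{cost_per_bit_change(4, cost_factor):.0%} cheaper per bit")

    # Ethernet rule of thumb: 10x the bandwidth for 2x the part cost
    print(f"Ethernet step: {cost_per_bit_change(10, 2.0):.0%} cheaper per bit")

Run it and you get the roughly 50 percent reduction (38 to 50 percent, depending on the cost factor) for Sonet and the 80 percent reduction for Ethernet quoted above.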

We’re already seeing signs in the marketplace that the 10-Gig cost revolution has begun. That’s how I interpret Cisco Systems Inc.’s (Nasdaq: CSCO) recent price cuts on their 10-Gig line cards (see Cisco Takes On 10 GigE Competition and Riverstone Fuels 10GigE Price War). Before, the cost of a 10-Gig line card was more than $50,000. It’s rapidly heading in the direction of $10,000. The costs of continuous future innovation, always built tacitly into the price of optical networking, grow increasingly irrelevant in an era of more innovation stability. The era of 747-style optical networking is upon us.

As an industry venture capitalist, of course, I’ve bet quite a few millions of dollars of limited partner money on 10-Gig componentry (and maybe I just had a few too many beers at OFC). But I find it hard to believe that, having made 10 Gig as monolithic and standard as 1 Gig, we won’t just start using it everywhere we can in the network.

What about the next step up to 40 Gig? Three years ago, lots of VC money and engineering testosterone said that step was inevitable. In today’s telecom depression, both of those drivers have withered. But there’s another reason 40 Gig looks unlikely, and it’s similar to the one that killed supersonic transport and made the Concorde little more than a niche player. The reasons such planes never leapfrogged the 747 came down to environment and cost: Too many sonic booms, too much danger to the ozone layer, too much fuel consumption to make mass ticketing affordable. So it is that the Concorde is getting retired while Boeing is still building 747s.

Like the sound barrier for airplanes, the barrier confronting proponents of 40 Gig is daunting: Dispersion introduces signal interference that grows at the rate of the square of the increase in bandwidth. Also, the cheaper CMOS manufacturing technology, which seems to be quite agile at 10 GHz, becomes woefully inadequate at 40. That’s not to say that 40 Gig won’t happen – just that it will be significantly more expensive per bit than 10 Gig for metro and enterprise applications and therefore find only niche applications. Have we, with 10 Gig, reached the apex of that curve? Has the virtuous cycle been broken? Has the industry changed forever? Can Rocky save Bullwinkle?
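
To put a rough number on that square-law barrier: dispersion-limited reach falls off as the square of the bit rate. The sketch below assumes, purely for illustration, about 60 km of uncompensated reach at 10 Gbit/s on standard single-mode fiber; that figure is a ballpark assumption, not a spec.

    def reach_km(bit_rate_gbps, ref_rate_gbps=10.0, ref_reach_km=60.0):
        """Dispersion-limited reach scales roughly as 1/(bit rate)^2."""
        return ref_reach_km * (ref_rate_gbps / bit_rate_gbps) ** 2

    for rate in (2.5, 10, 40):
        print(f"{rate:>4} Gbit/s: roughly {reach_km(rate):.0f} km before compensation is needed")

Quadrupling the bit rate cuts the uncompensated reach by a factor of 16, which is a big part of why 40 Gig costs so much more per bit.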

Finally, there’s the argument about how much capacity we really need. My recollection is that the human brain can absorb no more than about 50 Mbit/s, most of that through our eyeballs. So, if you pack 16 colors of 10-Gbit/s traffic per fiber into a 144-fiber cable, you get enough bandwidth for a half a million people’s eyeballs. A couple of dozen cables running around the city and you’re serving HDTV to all the eyeballs in Tokyo! The reason we don’t need 40 Gig is the same reason we don’t need airplanes the size of football fields to take us to Australia. Only so many of us want to go to Australia for $1,000 at any given time, mate.
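
Here's the eyeball arithmetic spelled out, using the column's own round numbers (an illustrative sketch, nothing more):

    wavelengths_per_fiber = 16      # "16 colors" of traffic
    bit_rate_mbps = 10000           # 10 Gbit/s per wavelength
    fibers_per_cable = 144
    per_person_mbps = 50            # the ~50 Mbit/s a pair of eyeballs can absorb

    cable_capacity_mbps = wavelengths_per_fiber * bit_rate_mbps * fibers_per_cable
    print(cable_capacity_mbps / per_person_mbps)    # 460800.0, about half a million people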

Now I know this idea is rather simplistic. Networks are big, hairy, complicated things. But my point was that wandering around the OFC floor I saw real, working 10-Gig parts. If 10 Gig is the standard for decades ahead, who cares about the other stuff? Radically lower optical networking costs will certainly help get the industry going again. Much more significantly, the new standard will offer certainty. One year ago at OFC 2002, buyers were presented with a veritable cornucopia of different approaches to optical problems – so much that they had to step back from the table to think about what they wanted to eat. Now they’ve got a main course to concentrate on. After establishment of the 747 standard, airlines never felt forced to hold back to wait for the next generation of technology. They ordered ’em in droves. Will the same apply to our industry?

We’re a long way from being healthy. But to paraphrase Churchill, we are now past the “end of the beginning,” which was a period of technological and economic chaos. I think we’re now entering a new phase, “the beginning of the end,” which will lead to more stability and even growth.

— Drew Lanza is a general partner at Morgenthaler, based in Menlo Park, Calif.

For extensive and up-to-date coverage of next week's Supercomm tradeshow, where the latest in 10-Gig kit will be on display, visit Light Reading's Supercomm Preview Site.

Peter Heywood
12/4/2012 | 11:58:53 PM
re: Jumbo Optics
Nice column!

I was chatting to Geoff Bennett (LRU director) the other day, who was pointing out that some IP protocols need some radical updating to take advantage of 10 Gbit/s Ethernet switches.

A few of the issues he raised:

1. Someone needs to address the fact that TCP isn't designed to work at these speeds. If you look at the Internet2 speed record that we reported on back in March....

http://www.lightreading.com/do...

... the record is 2 Gbit/s, and in order to do this, universities had to write an application that used UDP rather than TCP, according to Geoff.

2. The whole resiliency issue gets much more critical at 10 Gbit/s. Protocols like OSPF and Spanning Tree can take a long time to re-establish a route around a network failure -- and at 10 Gbit/s, that equates to a LOT of data dropped on the floor (a rough back-of-envelope calculation follows after this list).

3. It's pretty much impossible to reconfigure mesh networks very quickly because calculating the new route isn't something that can be done locally.
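
A rough back-of-envelope sketch of points 1 and 2; the reconvergence times and round-trip delay are assumptions for illustration, not measurements:

    link_bps = 10e9   # 10 Gbit/s

    # Point 2: traffic that can be black-holed while the network reconverges
    for event, seconds in (("OSPF reconvergence (~5 s assumed)", 5),
                           ("Spanning Tree (~30 s assumed)", 30)):
        print(f"{event}: up to {link_bps * seconds / 8 / 1e9:.0f} GB dropped on the floor")

    # Point 1: the TCP window a single flow needs to fill the pipe over an
    # assumed 100 ms round trip (the bandwidth-delay product)
    rtt_s = 0.100
    print(f"TCP window needed: {link_bps * rtt_s / 8 / 1e6:.0f} MB")

A 125 MB window per flow is far beyond what stock TCP configurations of the day are tuned for, which is consistent with the Internet2 team's fallback to UDP.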

dwdm2
12/4/2012 | 11:58:51 PM
re: Jumbo Optics
"By the by, what brings you to this message board on a Sunday? It's sunny here in California and I'm going back to my gardening chores."

Drew, a good epilogue. While I'm not in the mood to discuss the science/engineering this Sunday afternoon, let me take issue with you: it is not sunny over all of California! Those of us who did our lawns yesterday or the day before, here on the East Coast, are enjoying pretty good sunshine this afternoon. Speaking of which, I'm heading back out myself to take care of the pumpkin garden. If it grows well, we hope to save on buying pumpkins for Halloween... :)
Drew Lanza
12/4/2012 | 11:58:51 PM
re: Jumbo Optics
Peter:

Thanks for the kind words.

You make an interesting point. I think that what's most encouraging about this new breed of 10 Gbps parts is that, in order to take advantage of the combined economy of scale of 10 Gig Ethernet and OC-192 SONET, the parts that are being designed will handle both protocols (and, presumably, whatever comes along that "radically updates" TCP/IP).

Handling both protocols is not necessarily an easy task since the two protocols differ at a very low level on specifications for extinction ratio and jitter and lord knows what else.

We were debating this exact same topic over a couple of beers the other night. The characteristics of circuits, cells, and packets are all very, very different. Each is optimized for a certain type of traffic at which it excels. ATM cells were supposed to be "the ideal middle ground" between circuits and packets. But you have to wonder whether they got the best of both worlds, or the worst. Now there's a debate worthy of a pint or two! (And this debate ignores the virtues of analog/digital broadcast TV as a highly efficient protocol that soaks up a lot of traffic and revenue).

As I alluded to in the article, it may turn out that all of the access technologies cap out at 1Gbps and we only use 10Gbps to interconnect the aggregation devices. If that's so, then Ethernet maintains its reputation for universal connection of terminals while something less obvious goes on behind the scenes. I'd love to have Ethernet to my home. I don't care (at all) what it becomes once it leaves the wall of my garage. Whatever format it's in, somebody will sweat the details of reliability and quality of service which is all I really care about once it's on the outside network.

Packets were great and are great and routing protocols were great and are great but neither may be great in the future. One of our investments is in Caspian Networks and I think the team over there (under Larry Roberts) has done some really heretical things to try to 'fix' some of the problems related to packet traffic (not the ones that you bring up; there are a lot of issues out there).

They've come up with the notion that packet traffic isn't, well, packetized. Even today, packet traffic 'flows'. Their observation was that as soon as you start carrying web pages with graphics, or VoIP, or music or video over the Internet, any simple transaction between me and whatever server I'm talking to results in hundreds, or maybe thousands, of packets being sent (I just checked, by the by, and the main page of LightReading takes over 450 packets to paint on my screen!). And that's just for one transaction between me and one server. Caspian's simple observation was that if a packet just came from the server to me, there's a good chance that dozens (hundreds? thousands?) more are right behind and eager to get to me. They route the first packet. They switch the rest along the same path. Route once, switch many. It dramatically reduces the overhead associated with the routing software and creates something very similar to ATM QoS.
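
To make the idea concrete, here's a toy sketch of "route once, switch many." This is my own illustration of the concept, not Caspian's implementation; full_route_lookup() just stands in for a real longest-prefix-match and policy decision.

    flow_table = {}   # (src_ip, dst_ip) -> output port picked when the flow was first routed

    def full_route_lookup(dst_ip):
        """Stand-in for an expensive routing-table search and policy check."""
        return hash(dst_ip) % 16   # pretend port number

    def forward(src_ip, dst_ip, payload):
        key = (src_ip, dst_ip)
        if key not in flow_table:
            flow_table[key] = full_route_lookup(dst_ip)   # route the first packet
        return flow_table[key]                            # switch the hundreds that follow

The 450-odd packets behind one web page all hit the cached entry after the first one, which is where the savings in routing overhead come from.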

I didn't put that there as a shameless plug. Flame me if you will. But if you think about what they're doing, it makes a lot of common sense. Still, you'd be really surprised at how many people hear that and get themselves into an absolute tizzy. "Heresy!", they cry. Personally, I think this razor wire boundary between circuits and packets will continue to blur. ATM was a good first attempt at that, but I think it's too much on the circuit side of things and not enough on the packet side of things.

By the by, what brings you to this message board on a Sunday? It's sunny here in California and I'm going back to my gardening chores.

Drew
rjmcmahon
12/4/2012 | 11:58:50 PM
re: Jumbo Optics
They've come up with the notion that packet traffic isn't, well, packetized.

Sorry, but this is hogwash. Packets are packets. Now one can argue about the size of a packet, or a cache line, or a cell, or a byte, but they still are what they are. And they are not circuits.

Even today, packet traffic 'flows'. Their observation was that as soon as you start carrying web pages with graphics, or VoIP, or music or video over the Internet, any simple transaction between me and whatever server I'm talking to results in hundreds, or maybe thousands, of packets being sent (I just checked, by the by, and the main page of LightReading takes over 450 packets to paint on my screen!). And that's just for one transaction between me and one server.

This seems short sighted at best. Also, flows at 450 packets seems like bad anecdotal data.

Many servers, and many flows, becoming a requirement to support one transaction seems the most salient point. In other words, the suggestion that anyone in the middle can add transactional value seems a weak position. Distribution to the edges and ends seems the key to success.
rjmcmahon
12/4/2012 | 11:58:50 PM
re: Jumbo Optics
1. Someone needs to address the fact that TCP isn't designed to work at these speeds. If you look at the Internet2 speed record that we reported on back in March.

2. The whole resiliency issue gets much more critical at 10 Gbit/s.

3. It's pretty much impossible to reconfigure mesh networks very quickly because calculating the new route isn't something that can be done locally.


These assertions have elements of truth but also mislead, in my opinion. The performance of protocols has almost always lagged IO performance, at least from a technical measurement perspective. Using a protocol-limit argument to argue against IO improvements would be a mistake.

The better questions in my mind are: what is the function of protocol performance vs. IO performance over time, what is the best infrastructure that allows humans to improve on these limits, and what exactly are the limits as defined by physical laws? (Remember, Moore's Law is not a law of physics; it's more of a statement about human technical potential.) In other words, these protocol "limits" may be more self-imposed than most realize, and designing in a fixed assumption about them would be a big mistake.

Also, resiliency is seen as critical regardless of IO speeds. Customers will expect it from their primary communications infrastructure. We'll have to hit that ball squarely.

Finally, suggesting it is impossible to improve mesh networks and routing/switching times underestimates our capabilities. It's not cold fusion, but rather something that seems very likely to be achieved in our lifetimes. (Along similar lines, imagine trying to explain Google to someone 10-15 years ago. Few would believe in the possibility. Now that it's here, most take it for granted. Humans seem too bipolar sometimes ;-)
arch_1
12/4/2012 | 11:58:49 PM
re: Jumbo Optics
This article makes a high-level point that 10Gbps is "good enough" for most interfaces. I agree with the author. It's good enough. In fact, 1Gig-e is good enough for any interface (e.g., the individual PC or workstation) designed to support a single human. As of today a 1Gig-e NIC costs about $43.00 US, retail, quantity one.

There is a more important point, however: up until the 10Gbps generation, a single-channel bps increase translated directly into an increase in long-haul fiber utilization, and in efficiency within a router or computer. This does not appear to be true for the 10Gbps->40Gbps transition.

For long-haul DWDM, 40Gbps requires four times the spacing that is required by 10Gbps. By contrast, 10Gbps takes the same spacing as 2.5Gbps. Thus, an increase to 10Gbps increases the total bandwidth of a single fiber, but an increase to 40Gbps provides no increase. The increase to 40Gbps does reduce the number of lasers by a factor of four, but the physics require an increase in cost by more than a factor of four. This is a fundamentally different situation than we find on the semiconductor side, where a factor-of-4 speed increase does in fact cost only about 2.5 times as much.
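
Here's that channel-count arithmetic as a quick sketch. The 4 THz of usable band and the particular grid spacings are illustrative assumptions; the point is the ratio.

    band_ghz = 4000   # assume roughly 4 THz of usable amplifier band

    for rate_gbps, spacing_ghz in ((2.5, 50), (10, 50), (40, 200)):
        channels = band_ghz // spacing_ghz
        print(f"{rate_gbps:>4} Gbit/s x {channels} channels = "
              f"{rate_gbps * channels / 1000:.1f} Tbit/s per fiber")

Moving from 2.5 to 10 Gbit/s quadruples the fiber's total capacity; moving from 10 to 40 Gbit/s, on these assumptions, buys nothing.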

At the PCB and system level, we find another constraint: 10Gbps is at the upper end of the speed at which a signal can be propagated electrically for more than a very few inches. Therefore, all modern transceivers convert to a lower speed parallel signal of some sort. The current state of the art is 2.5Gbps. Faster electrical signals require exotic technologies that cost more per bps. Let's assume that current innovations will drive the "sweet spot" to 10Gbps. This may be possible, but driving the electrical sweet spot to 40Gbps is not practical in the foreseeable future. What this means is that a 40Gbps signal must be broken into 10Gbps (or lower) signals at the transceiver. If you are doing this anyway, any possible gain of operating at 40 Gbps on the semiconductor is lost in the parallel-serial conversion: given that your basic I/O is at 10Gbps, why not just handle packets at 10Gbps?

The historical answer to this question is that a single high-speed stream is more efficient than multiple low-speed streams: for example, a 128Kbps bonded ISDN is more efficient than two 64Kbps channels. However, almost all of the reasons for this efficiency become less relevant as the speeds increase, for two reasons. First, the speed of light is absolute, so jitter and delay through the switch become trivial (relative to transmission delays and last-mile delays) as speeds increase. Second, any particular stream becomes a trivial percentage of the bandwidth.

So, the author is right, but for the wrong (implied) reason. Moore's law is perfectly happy to drive the internal packet processing rate of an NP or a TM to 40Gbps, but since the long-haul and on-board signalling rate "sweet spots" are constrained to about 10Gbps, there is no reason to do so.

Another post (from the article author) discusses the "flow" concept, and gives an example. The example unintentionally demonstrates a basic fallacy in the "flow" argument as applied to high-speed connections. The post refers to a flow consisting of 450 packets. If the user has a 1Gbps connection to the internet and the host has a 10Gbps connection, then this "flow" can be injected into the internet by the host before the destination receives the first packet. At these speeds, it's more expensive to set up a "flow" than it is to simply forward the individual datagrams. Again, the reason is that the speed of light is constant, so what was reasonable for 56Kbps is no longer reasonable for 1Gbps.
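
A quick check of that timing claim, assuming MTU-sized packets and a rough coast-to-coast fiber delay:

    packets = 450
    packet_bytes = 1500        # assumed MTU-sized packets
    host_link_bps = 10e9       # the host's 10 Gbit/s connection

    inject_ms = packets * packet_bytes * 8 / host_link_bps * 1000
    print(f"Time to put all {packets} packets on the wire: {inject_ms:.2f} ms")   # about 0.54 ms
    print("One-way propagation across the US in fiber: roughly 20 ms")

On these assumptions the whole "flow" is in flight long before the first packet arrives, so per-flow setup buys little at these speeds.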
gea
12/4/2012 | 11:58:49 PM
re: Jumbo Optics
Drew Lanza wrote...

"Handling both protocols is not necessarily an easy task since the two protocols differ at a very low level on specifications for extinction ratio and jitter and lord knows what else."

Well, the difficulty will probably lie in forcing Ethernet to obey the more stringent SONET interface standard. The Ethernet jitter spec (at least at 1 Gb/s and below) is much less stringent, and is probably one of the things that allows Ethernet to be somewhat cheaper than SONET.

rtfm
12/4/2012 | 11:58:48 PM
re: Jumbo Optics
Copper.

GigE was mentioned as $43, but that is for copper.

What really helped Ethernet (at 10 -> 100 -> 1000) was the autosensing and backwards compatibility.

I think some people are working on Very Short Haul 10-gigE copper. That would also help its deployment a lot.

IMHO,

rtfm
arch_1
12/4/2012 | 11:58:48 PM
re: Jumbo Optics
Yes, $43 for a Gig-E NIC is for copper. Do not get distracted by this. The article is about 10GigE as the emerging de facto standard.

If 1GigE (copper) is now cheaper than an RS-232 serial interface, then we can easily conceive of office networks that have GigE tail circuits and 10GigE uplinks. This feeds into the concept that 10GigE is good enough. A set of (say) 16 PCs can feed into a local switch using copper GigE. The local switch can have a 10GigE uplink to the corporate firewall, which can uplink to the internet at Nx10GigE (CWDM) on a single fiber, to an ISP POP that connects to the core at Nx10GigE (DWDM).
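
For what it's worth, here is the uplink arithmetic implicit in that example (illustrative numbers only):

    pcs = 16
    pc_link_gbps = 1
    uplink_gbps = 10

    worst_case_gbps = pcs * pc_link_gbps
    print(f"Worst-case offered load from the PCs: {worst_case_gbps} Gbit/s")
    print(f"Oversubscription on the 10GigE uplink: {worst_case_gbps / uplink_gbps:.1f}:1")

A 1.6:1 ratio is a modest oversubscription for bursty office traffic, which supports the "good enough" point.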

There is no point in this chain where 40Gbps is more cost-effective than 4x10Gpbs.

This entire discussion presupposes a breakthrough in the last mile. Until we get GigE (or at least 100Mbps of some sort) in the last mile, there will be no justification for a change in technology elsewhere in the network. The existing technology is good enough, so there is no compelling reason to improve it. If you have a mythical new technology that is ten times more effective in the core today, the core customer has no use for it, since the traffic cannot expand faster than the last mile can accommodate it.
Drew Lanza
12/4/2012 | 11:58:47 PM
re: Jumbo Optics
rj:

Packets may be pretty much the same as they always were. Packet traffic is anything but.

In the old ARPANet days, we'd send short emails or maybe sit at a telnet session (I remember routinely crashing MIT's servers from here in California when I was an undergrad in the '70s). Back then, a packet could send a short email or a command line just fine.

Nowadays, the packet we're sending is seldom on its own. It is usually one of hundreds of packets (over 450 in the case of Light Reading's home page, most of them coming from a single source) that make up part of something that we think of as a single, atomic transaction.

And we're starting to force packet traffic to handle stuff that we think of as circuits, as in the case of a VoIP session where we suddenly start having to worry about things like isochronicity and jitter that weren't big deals in the old days.

The current routed packet network does not take advantage of the fact that most of the packets flowing between any two points in the network at any point in time are probably part of a much larger sequence of packets, closely spaced in time. Why not take that into account? Why not "route once and switch many"?

I would be willing to wager a large steak dinner that, if the packet network really does start to carry a lot of voice and video traffic over the next decade, by 2010 over 99.99% of all packets on the network will be part of a "flow" of packets. That is, almost all packets will be following milliseconds behind another packet that has the same source and destination IP address. If I'm right in that assumption, shouldn't we design network equipment to take advantage of that network behavior?

You say that packets aren't circuits. I think of circuits as being composed of frames. Same source, same destination, lots of 'em, neatly spaced in time. Set up the circuit. Send the frames. Tear down the circuit.

We've begun asking the routed single packet network to take on the chores of the switched circuit frame network for things like VoIP and streaming media. Why not insert the notion of flows and flow state into that and say that we've begun to turn the switched circuit frame network into the routed flow packet network? I think the analogy holds.

Nobody made up the idea of flows. It's just the way the Internet is today. And it will be more so over time as it inevitably starts to handle more and more legacy circuit traffic.

Tell me where I'm going wrong with this line of reasoning.

Drew