
Google Wants Variable-Rate Ethernet

The Institute of Electrical and Electronics Engineers Inc. (IEEE) isn't working on a variable-speed Ethernet standard yet, but Google is pushing for one.

It was a point of focus for Bikash Koley, Google's principal architect and manager of network architecture, during a panel session on the last day of OFC/NFOEC last week. What he wants is variable-speed Ethernet: instead of running a connection at 100 Gbit/s or 400 Gbit/s, the two standard choices, he'd like to pick arbitrary speeds. And he offered a bit of a carrot to the component and systems vendors in the audience: "The reason we keep forcing the industry to reduce cost of goods is because we cannot use them efficiently," Koley said.

The technology on the optical side is actually ready for what he's asking. Variable-speed transceivers and flexible-grid ROADMs exist. What's missing is on the packet side: a media access control (MAC) layer capable of dealing with a variable-bit-rate physical (PHY) layer. "We have an adaptive programmable PHY layer ready to go but we don't have the glue to the packet layer," he said.

Fans of a flexible-rate physical layer are still considered a fringe camp, just like -- well, like fans of the TV show Fringe, maybe. But this is the second time in six months that the issue has come up at a conference. The first time was in conversation at Light Reading's Ethernet Expo, where Ron Johnson, a director of product management at Cisco Systems Inc., brought up the idea. He was disappointed that the IEEE stuck with the usual single-rate format when it came to 400 Gbit/s. (See Why a 400G Standard Might Draw Complaints.)

Right now, if Google wanted, say, 135 Gbit/s between two data centers, it would have to create a 200 Gbit/s pipe, or perhaps a 150 Gbit/s one by link-aggregating fifteen 10 Gbit/s lines. Either way, Koley has two problems with that kind of over-provisioning.
First, Koley said, it's an inefficient use of what turns out to be the most expensive part of his network: the cost of connecting data centers across thousands of kilometers. Google is loath to leave capacity on those pipes unused.

Second, higher-speed links generally have to travel shorter distances. "It's not about just the capacity. If I have to go and put a regen [i.e., optical regeneration] every 600 km, that fundamentally is a no-go," he said.

It was an interesting perspective at a conference that was obsessed with shrinking the power, size and cost of optical modules. "Those [features] have actually very small incremental value. The real value you have in a network of our scale is in optimizing the resource you have," Koley said.

Granted, not everybody is Google. But Google has a way of forcing a conversation.

— Craig Matsumoto, Managing Editor, Light Reading

Review all our OFC/NFOEC coverage at http://www.lightreading.com/ofc-nfoec.
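Koley's 135 Gbit/s example can be put in rough numbers. The sketch below (Python; the rate table, function names and figures are illustrative, not any vendor's data) estimates how much of a pipe is stranded when a demand must be rounded up to the next standard Ethernet rate:

```python
# Sketch (hypothetical rates/figures): how much capacity is stranded when a
# demand must round up to a standard Ethernet rate, versus the arbitrary-rate
# link Koley describes.

STANDARD_RATES_G = [10, 40, 100, 200, 400]  # common fixed Ethernet rates

def provisioned_rate(demand_g):
    """Smallest standard rate (or bundle of the largest rate) covering demand."""
    for r in STANDARD_RATES_G:
        if r >= demand_g:
            return r
    # link-aggregate the largest rate if demand exceeds it
    top = STANDARD_RATES_G[-1]
    n = -(-demand_g // top)  # ceiling division
    return n * top

def stranded_fraction(demand_g):
    p = provisioned_rate(demand_g)
    return (p - demand_g) / p

# Koley's example: a 135 Gbit/s demand forced onto a 200 Gbit/s pipe
print(provisioned_rate(135))             # 200
print(round(stranded_fraction(135), 3))  # 0.325 -> about a third of the pipe idle
```

With a variable-rate PHY, `stranded_fraction` would be zero by construction; that, in a nutshell, is the efficiency argument.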
mvissers 4/5/2013 | 2:38:22 PM
re: Google Wants Variable-Rate Ethernet "to suffer from massive structural inefficiency, which will be worsening with the ever coarser L1/0 connection bandwidth granularity (100>400>1000Gbps), so long as the L1 bandwidth rates are non-adaptive."

You seem to make a mistake here... L1 connection bit rates are not dependent on the bit rate of the optical signal. I.e. ODUk/flex connection bit rates are independent of the bit rate of the optical signal. E.g. a 1000G 'OTUC10' signal will be able to carry 800 1.25G ODU0 connections, 400 2.5G ODU1 connections, etc. Also a variable number of ODUflex connections, each having a bit rate between 1G and 1000G can be carried.
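The container arithmetic in this comment can be sketched as follows. Nominal round-number rates only (real OTN signal rates differ slightly from these), and the helper functions are illustrative:

```python
# Sketch of the comment's arithmetic: a high-order OTN signal's payload is
# divided into 1.25G tributary slots, and fixed-rate ODUk or variable-rate
# ODUflex clients each occupy some number of those slots.
# Nominal rates only; real OTN rates differ slightly from these round numbers.

TRIB_SLOT_G = 1.25

def slots_available(line_rate_g):
    return int(line_rate_g / TRIB_SLOT_G)

def clients_carried(line_rate_g, client_rate_g):
    return slots_available(line_rate_g) // int(client_rate_g / TRIB_SLOT_G)

print(clients_carried(1000, 1.25))  # 800 ODU0 (1.25G) clients on a 1000G line
print(clients_carried(1000, 2.5))   # 400 ODU1 (2.5G) clients on a 1000G line
```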

"Yet, in our current networks the L1 connection bandwidth allocation is not adaptive to the the L2+ packet traffic load variations"

For a very good reason, L1 connection bandwidth is not adaptive to its momentary (i.e. microsecond-level) client traffic load. Being adaptive to microsecond-level load changes would demand the introduction of large buffers at intermediate switching nodes, or discarding traffic at those nodes. Neither is L1 compliant.

L1 connection bandwidth can be adaptive to busy-hour and application specific traffic demands. ODUflex is designed with such adaptivity in mind.

Busy-hour adaptation of L1 connection bandwidths has existed for more than 30 years and is controlled from the services network topology manager. 30 years ago it was the PSTN topology manager that adjusted the number of E1/DS1 circuits between voice switches; today it would be the NGN (IP) topology manager that controls the number or bandwidth of the ODUk/flex (variable-rate Ethernet) connections between Service Nodes. The control of such adaptivity is not yet automated in most networks. The work on Transport SDN in ONF's Optical Transport WG should change that and enable such control automation.

mark-r 4/4/2013 | 6:06:51 PM
re: Google Wants Variable-Rate Ethernet We need a rethink away from single and/or non-adaptive rate L1 channel(s) per an optical carrier.

The much bigger efficiency issue than variable vs. fixed-rate optical carriers is the current practice (e.g. in the case of all Ethernet variants) of using perma-rate L1 channels to carry L2+ traffic between packet-switching nodes/NAPs. (By perma-rate I mean an L1 connection baud rate that does not adapt to any packet traffic load variations.)

Adaptive-bandwidth L1 channels, on a shared L1/0 carrier, are key to bringing the packet networking efficiencies down to the physical layer. They allow running packet-switched network traffic of any burstiness (per each given source-destination node flow) collectively at up to 100% of the network I/O capacity utilization without congestion effects (QoS degradation such as delay or jitter increase/variation).

Adaptive-bandwidth L1 channels (of which there will be multiple per given static L1/0 carrier) enable maximum network utilization efficiency while providing direct, circuit-like premium QoS: the transport between a given pair of packet-switched network access points is on an actual L1 circuit, even if it passes through multiple intermediate packet-switched access points, and the bandwidth of that L1 channel is simply optimized dynamically, between 0 and the maximum carrier signal capacity, based on the data load variations from all the L1-connected sources to the given destination.

And they enable this maximized efficiency and performance without a need for variable rate optics or such. Plain digital logic (for bandwidth allocation among L1 channels on shared L1/0 carriers) will do the job over any standard L1/0 carrier.
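A minimal sketch of what such "plain digital logic" for bandwidth allocation might look like, assuming a simple proportional-share scheme (the function and scheme here are hypothetical illustrations, not any standard):

```python
# Hypothetical sketch of the idea above: periodically reallocate a shared
# carrier's tributary slots among L1 channels in proportion to each
# source-destination flow's measured offered load.

def allocate_slots(total_slots, loads):
    """Split total_slots among channels proportionally to their loads.

    loads: dict of channel -> offered load (any consistent unit).
    Returns dict of channel -> slot count; remainder slots go to the
    channels with the largest fractional share.
    """
    total_load = sum(loads.values()) or 1
    shares = {ch: total_slots * load / total_load for ch, load in loads.items()}
    alloc = {ch: int(s) for ch, s in shares.items()}
    leftover = total_slots - sum(alloc.values())
    # hand out remaining slots by largest fractional part
    for ch in sorted(shares, key=lambda c: shares[c] - alloc[c], reverse=True)[:leftover]:
        alloc[ch] += 1
    return alloc

# 80 slots on a shared 100G carrier, three bursty flows measured in Gbit/s
print(allocate_slots(80, {"A-B": 30, "A-C": 45, "A-D": 5}))
# {'A-B': 30, 'A-C': 45, 'A-D': 5}
```

Run every measurement interval, this keeps the carrier at full utilization while each flow still rides its own L1 circuit, which is the claimed combination of packet efficiency and circuit QoS.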
brookseven 4/3/2013 | 6:15:58 PM
re: Google Wants Variable-Rate Ethernet So, I think we have actually just agreed. I have a couple of final comments on the topic. If a vendor wanted to make such a MAC/PHY, there is nothing stopping them. In fact, it would be better now than after a standard, when prices would drop due to all the vendors in the market.

I think the challenge for this is that the number of links involved will have to be big enough to support development that is probably going to cost more than a non-modulated technology.

So, it may be a wonderful idea for a startup - not so sure for a standard.

IJD 4/3/2013 | 5:09:22 PM
re: Google Wants Variable-Rate Ethernet I don't expect you'd use the same hardware to do anywhere from 10Gb/s up, but rather to get more than the "worst-case" speed (e.g. 100Gb/s) out of the vast majority of links, which have significantly more SNR margin. Obviously any system which adapts to the channel needs communication links and protocols between the two ends to do this; it's the price you have to pay to push capacity beyond the worst-case minimum.

I'm well aware of the ADSL interoperability issues having designed an ADSL chipset. Many of these came from the sheer complexity of the system because of the single-pair line combined with POTS, the vast range of line characteristics and rates (and variation with time), the very high SNR/SINAD/PAR required to push very high bit loading per carrier, and the complex synchronisation and communication protocols that resulted.

Optical DMT/OFDM is in many ways much simpler; obviously, for interoperability, protocols would have to be defined and agreed, but these can be far less complex than for ADSL.

And in many cases (e.g. data centers) people don't really care since they own both ends of the link, they just want something that works and delivers the highest capacity and the best bang-for-the-buck.
brookseven 4/3/2013 | 2:10:04 PM
re: Google Wants Variable-Rate Ethernet IJD,

You are correct, but there is no need for ADSL to go far below the maximum line rate. And you still need ADSL chips on both ends. Since they need to be able to run the max rate, they will not be cheaper if they run at a lower rate.

Your point is my entire second point. And at that point, you need to understand your ADSL analogy. ADSL is TUNED for phone lines. That is what I mean by having issues with getting the optics tied to the MAC. Now NOTE: ADSL has LOTS and LOTS of interoperability issues. Getting stated performance out of ADSL took years, and oftentimes specific modems do not meet the standard with specific DSLAMs. These are known in the ADSL world as interoperability issues and were one of the driving forces behind the use of the UNH labs.

Now...I want you to think that you have a Cisco product on one end and a Huawei product on the other and this is supposed to perform to a standard as complicated as ADSL (note if you want to do this with ADSL think Westel and Huawei or Efficient and Alcatel).

IJD 4/3/2013 | 10:28:48 AM
re: Google Wants Variable-Rate Ethernet The MAC and everything sitting behind it would have to be able to deal with variable-rate channels in some smaller-grain chunks (1G? 10G?) until the point where statistical muxing kicks in, when non-guaranteed rate happens anyway. It's just a case of extending this principle all the way to the fiber to maximise the available bandwidth over a given hardware (MAC+PHY+fiber) channel -- where now you would always get (for example) a 100G link, you might get 100G or 120G or 150G depending on the quality of the link.

And if better MAC/PHY/optics become available over time they can just be added in with no change to the system, the bandwidth just goes up automatically -- up to the point where you hit the Shannon limit, obviously :-)

This does all mean a complete rethink away from the current fixed-rate (10G-40G-100G-400G-1T) optics mindset...
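The "same hardware, link-quality-dependent rate" idea above can be sketched as a Shannon-capacity estimate rounded down to a negotiable grain. All numbers, the back-off margin and the function names are illustrative assumptions, not real transceiver specs:

```python
# Illustrative sketch: pick the highest rate a link supports by estimating
# Shannon capacity from its SNR, then rounding down to a coarse grain the
# MAC could actually negotiate. Numbers are made up for illustration.

import math

def shannon_capacity_g(bandwidth_ghz, snr_db):
    """Shannon limit C = B * log2(1 + SNR) for one carrier, in Gbit/s."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr)

def negotiated_rate_g(bandwidth_ghz, snr_db, grain_g=10, margin=0.7):
    """Back off from the Shannon limit and round down to the grain size."""
    usable = shannon_capacity_g(bandwidth_ghz, snr_db) * margin
    return int(usable // grain_g) * grain_g

# same hardware, different link quality -> different negotiated rates
print(negotiated_rate_g(50, 10))  # shorter, cleaner link: 120
print(negotiated_rate_g(50, 6))   # longer, noisier link: 80
```

The two ends would agree on the grain-rounded rate via the negotiation protocol IJD mentions; better optics simply shift the SNR and the rate follows automatically.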
IJD 4/3/2013 | 8:57:38 AM
re: Google Wants Variable-Rate Ethernet You're still thinking in terms of optics that are fixed-rate. ADSL manages fine-rate granularity perfectly well; it squeezes just as much data down the wires as is possible over that particular channel, including adapting to channel changes. There's no reason the same approach can't be done with optics; in fact, it was demonstrated at OFC if you looked carefully enough...
mark-r 4/1/2013 | 5:13:02 PM
re: Google Wants Variable-Rate Ethernet Let me clarify that we should not design any Layer x techniques for a certain subset of Layer x+1 application use cases, but instead for all possible Layer x+1 usage scenarios. E.g., L3 (IP) is not designed for some specific L4+ usage patterns; rather, IP datagram preparation and transmission is done according to whatever way the L4 PDUs are provided to the IP layer.

Neither should L1 protocols be designed with some narrowing expectations for the types of L2+ traffic they are to serve (e.g. scheduled transfers of large blocks of data). Instead, the L1 needs to have the intelligence to sense the prevailing L2+ packet traffic load variations among a given set of access points being interconnected, and automatically and continuously re-optimize the capacity allocation among the L1 channels.

That way the L1 is finally made to operate according to packet networking principles, while providing circuit based transport with the associated benefits of minimal latency, jitter and no packet loss (other than due to bit errors).
steinarb 4/1/2013 | 9:13:03 AM
re: Google Wants Variable-Rate Ethernet Capacity on Ethernet pipes can be fully used, saving money.
By circuit switching Ethernet links and filling any spare capacity using statistical multiplexing you can:
1) Guarantee capacity and performance on links as for OTN and wavelengths.
2) Use any spare capacity for services requiring less stringent performance demands.
3) Isolate the services.

This technique is called Fusion networking or Integrated hybrid networks.
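A toy model of the hybrid idea above, assuming guaranteed circuit traffic is served first and best-effort traffic statistically multiplexes into whatever is left per interval (the model and names are hypothetical, not the Fusion networking specification):

```python
# Minimal sketch (hypothetical model) of integrated hybrid networking:
# guaranteed circuit traffic is served first; whatever capacity it leaves
# idle in a given interval is filled with statistically multiplexed
# best-effort traffic, keeping the services isolated from each other.

def serve_interval(link_capacity, circuit_demand, best_effort_demand):
    """Return (circuit_served, best_effort_served, idle) for one interval."""
    circuit_served = min(circuit_demand, link_capacity)   # guaranteed share
    spare = link_capacity - circuit_served
    best_effort_served = min(best_effort_demand, spare)   # fills the gaps
    return circuit_served, best_effort_served, spare - best_effort_served

# 100G link: 60G of guaranteed circuits, 70G of offered best-effort traffic
print(serve_interval(100, 60, 70))  # (60, 40, 0) -> link fully used
```

The circuit traffic never sees the best-effort load (isolation and guaranteed performance), while the best-effort class soaks up the spare capacity that a pure circuit network would waste.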
mvissers 3/31/2013 | 3:04:04 PM
re: Google Wants Variable-Rate Ethernet "OTN/ODUflex however does not add any new capabilities beyond SDH contiguous concatenation."

ODUflex is indeed a form of 'contiguous concatenation'. If you compare ODUflex with SDH contiguous concatenation, ODUflex provides the following new capabilities:
- the value of n can be any value (in SDH VC4-Xc, values of X are 1, 4, 16, 64, 256, i.e. a subset of the values in the range 1 to 256)
- ODUflex can be carried over any set of 1.25G Tributary Slots within a HO ODUk, contiguous or non-contiguous (SDH VC4-Xc demands a specific set of AU4 Time Slots; if one Time Slot is occupied by another HO VC signal, then VC4-Xc cannot be set up).

"Actually virtual concatenation would have been a bit smarter than the present ODUflex:"

ODUflex was very deliberately selected (instead of ODU0 VCAT) for the following reasons:
- network management of VCAT does not scale well; e.g. 10000 1GE signals over SDH VC4-7v introduce 70000 VC4 connections to manage instead of 10000, i.e. 60000 additional connections
- VCAT end points require deskew buffers, which may become large at 1G and higher rates; this prevented VCAT technology from being used in line cards on packet-switching nodes.
These issues are not present in case of ODUflex:
- number of ODUflex connections is 1-to-1 with required number of Ethernet connections
- ODUflex end points do not require deskew buffers.
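The management-scaling point above is simple arithmetic, sketched here with the comment's own figures (10000 1GE clients over 7-member VC4-7v groups):

```python
# Sketch of the scaling comparison in the comment above: VCAT multiplies the
# number of managed L1 connections by the group size, while ODUflex keeps it
# at one connection per client signal.

def vcat_connections(clients, members_per_group):
    return clients * members_per_group

def oduflex_connections(clients):
    return clients

print(vcat_connections(10000, 7))   # 70000 VC4 connections to manage
print(oduflex_connections(10000))   # 10000 ODUflex connections, 1-to-1
```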

"unlike SDH virtual concatenation, ODUflex Nx1.25Gbps will not go through unless each mux along the connection supports ODUflex for the desired bit rate, and there are contiguous blocks of N 1.25Gbps timeslots at each inter-mux section along the connection."

As indicated above, ODUflex connections do NOT require contiguous blocks of N 1.25G tributary slots.

10G OTU2, 40G OTU3 and 100G OTU4 line ports either do not support ODUflex, or support ODUflex up to the maximum rate of the line port. The latest generation of line ports does support ODUflex.

"But none of the conventional flexible bandwidth (VCAT/LCAS, ODUflex) L1/0 connection schemes really gives the user much net benefit."

This depends on the role that the variable-rate ODUflex-based Ethernet connection performs. If it carries an aggregate of service-layer signals, then there is no requirement to adjust the bit rate of the ODUflex connection on a millisecond basis.

Assume that a Data Centre (DC) has an SDN DC Controller and that this controller is connected to the SDN OTN Network Controller. The SDN DC Controller will be able to control the ODUflex connectivity within its OTN DC Virtual Network. The SDN DC Controller can
- set up a new p2p bidirectional ODUflex connection between two DCs, or a new p2mp unidirectional ODUflex connection between multiple DCs
- tear down a p2p or p2mp ODUflex connection
- add or remove leaves to/from a p2mp ODUflex connection
- increase or decrease the bandwidth of an ODUflex connection
- protect an ODUflex connection
- remove protection of an ODUflex connection.
The SDN DC Controller can perform those actions under control of data centre applications; e.g. to transfer 10 TB of data from DC #A to DC #B. To transport this data will take approximately:
- 266 minutes with a 5G ODUflex connection
- 133 minutes with a 10G ODUflex connection
- 66 minutes with a 20G ODUflex connection
- 33 minutes with a 40G ODUflex connection
- 22 minutes with a 60G ODUflex connection
- 11 minutes with a 120G ODUflex connection.

Once the 10 TB of data is transported, the ODUflex connection can be removed or downsized. Those multi-minute timescales will be well supported in a Transport SDN environment.
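The transfer-time figures above follow from straightforward arithmetic; a short sketch that reproduces them (decimal terabytes, protocol overhead ignored):

```python
# Sketch reproducing the comment's transfer-time figures: minutes to move
# 10 TB (decimal terabytes) over an ODUflex connection of a given rate,
# ignoring protocol overhead.

def transfer_minutes(data_tb, rate_gbps):
    bits = data_tb * 1e12 * 8          # TB -> bits
    return bits / (rate_gbps * 1e9) / 60

for rate in (5, 10, 20, 40, 60, 120):
    print(f"{rate:>3}G ODUflex: {int(transfer_minutes(10, rate))} min")
```

Printing 266, 133, 66, 33, 22 and 11 minutes respectively, matching the list above.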
