AT&T Submits DDC White Box Using Broadcom Chips to Open Compute Project

AT&T said it has taken a further step toward a white box network architecture by submitting its specifications for a Distributed Disaggregated Chassis (DDC) white box design to the Open Compute Project.

September 27, 2019

DALLAS -- AT&T submitted today to the Open Compute Project (OCP) its specifications for a Distributed Disaggregated Chassis (DDC) white box architecture. The DDC design, which we built around Broadcom’s Jericho2 family of merchant silicon chips, aims to define a standard set of configurable building blocks to construct service provider-class routers, ranging from single line card systems, a.k.a. "pizza boxes," to large, disaggregated chassis clusters.

AT&T plans to apply the Jericho2 DDC design to the provider edge (PE) and core routers that make up our global IP Common Backbone (CBB), the core network that carries all of our IP traffic. Additionally, the Jericho2 chips have been optimized for 400 gigabit-per-second (400G) interfaces, a key capability as AT&T updates its network to support 400G in the 5G era.

"The release of our DDC specifications to the OCP takes our white box strategy to the next level," said Chris Rice, SVP of Network Infrastructure and Cloud at AT&T. "We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace."

AT&T’s DDC white box design calls for three key building blocks (a code sketch of these building blocks follows the list):

  1. A line card system that supports 40 x 100G client ports, plus 13 x 400G fabric-facing ports.

  2. A line card system that supports 10 x 400G client ports, plus 13 x 400G fabric-facing ports.

  3. A fabric system that supports 48 x 400G ports. A smaller, 24 x 400G fabric system is also included.
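
To make the port counts and resulting capacities concrete, the following Python sketch models the building blocks as plain data records. The class names and structure are illustrative only and are not part of the OCP submission; the port figures come from the list above.

```python
# Illustrative sketch, not from the AT&T specification: the three DDC
# building blocks modeled as simple data records.
from dataclasses import dataclass

@dataclass
class LineCardSystem:
    client_ports: int        # number of client-facing ports
    client_speed_g: int      # client port speed in Gbps
    fabric_ports: int = 13   # 400G fabric-facing ports, per the spec

    @property
    def client_capacity_tbps(self) -> float:
        return self.client_ports * self.client_speed_g / 1000

@dataclass
class FabricSystem:
    fabric_ports: int        # 400G ports: 48, or 24 in the smaller variant

# Both line card variants expose 4 Tbps of client capacity:
lc_100g = LineCardSystem(client_ports=40, client_speed_g=100)
lc_400g = LineCardSystem(client_ports=10, client_speed_g=400)
large_fabric = FabricSystem(fabric_ports=48)
small_fabric = FabricSystem(fabric_ports=24)

print(lc_100g.client_capacity_tbps, lc_400g.client_capacity_tbps)  # 4.0 4.0
```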

Traditional high-capacity routers use a modular chassis design. In that design, the service provider purchases the empty chassis and plugs in vendor-specific common equipment cards that include power supplies, fans, fabric cards, and controllers. To grow the capacity of the router, the service provider adds line cards that provide the client interfaces. Those line cards mate to the fabric cards through an electrical backplane, and the fabric provides the connectivity between the ingress and egress line cards.

The same logical components exist in the DDC design, but the line cards and fabric cards are now implemented as stand-alone white boxes, each with its own power supplies, fans, and controllers, and the backplane connectivity is replaced with external cabling. This approach enables massive horizontal scale-out, since system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane. Cooling is significantly simplified because the components can be physically distributed if required. The strict manufacturing tolerances needed to build a modular chassis, and the possibility of bent pins on the backplane, are avoided entirely.
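
A rough way to picture the difference: a modular chassis caps growth at its slot count, while a DDC cluster grows by adding boxes and external cables. The Python sketch below is purely illustrative, with hypothetical class names and limits, and is not drawn from the AT&T specification.

```python
# Hypothetical sketch contrasting the two packaging models described above.

class ModularChassis:
    """Line cards plug into a fixed number of slots over an internal backplane."""

    def __init__(self, slots: int) -> None:
        self.slots = slots
        self.line_cards: list[str] = []

    def add_line_card(self, card: str) -> None:
        if len(self.line_cards) >= self.slots:
            # Growth stops here: capacity is capped by the chassis itself.
            raise RuntimeError("chassis is full")
        self.line_cards.append(card)


class DDCCluster:
    """Line card and fabric systems are stand-alone boxes joined by external 400G cables."""

    def __init__(self) -> None:
        self.line_card_systems: list[str] = []
        self.fabric_systems: list[str] = []

    def add_line_card_system(self, box: str) -> None:
        # No slot limit; growth is bounded by fabric ports and cabling, and the
        # boxes can sit in different racks since there is no shared backplane.
        self.line_card_systems.append(box)

    def add_fabric_system(self, box: str) -> None:
        self.fabric_systems.append(box)
```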

Four typical DDC configurations include (the capacity arithmetic is sketched after the list):

  1. A single line card system that supports 4 terabits per second (Tbps) of capacity.

  2. A small cluster that consists of 1+1 fabric systems (the second for added reliability) and up to 4 line card systems. This configuration supports 16 Tbps of capacity.

  3. A medium cluster that consists of 7 fabric systems and up to 24 line card systems. This configuration supports 96 Tbps of capacity.

  4. A large cluster that consists of 13 fabric systems and up to 48 line card systems. This configuration supports 192 Tbps of capacity.
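
The capacities above follow directly from the 4 Tbps of client capacity per line card system (40 x 100G or 10 x 400G). A short, illustrative Python sketch of that arithmetic:

```python
# Illustrative only: reproduce the capacity figures for the four configurations.
TBPS_PER_LINE_CARD_SYSTEM = 4  # 40 x 100 Gbps = 10 x 400 Gbps = 4 Tbps

configurations = {
    "single line card system": dict(line_card_systems=1, fabric_systems=0),
    "small cluster":           dict(line_card_systems=4, fabric_systems=2),   # 1+1 fabrics
    "medium cluster":          dict(line_card_systems=24, fabric_systems=7),
    "large cluster":           dict(line_card_systems=48, fabric_systems=13),
}

for name, cfg in configurations.items():
    capacity_tbps = cfg["line_card_systems"] * TBPS_PER_LINE_CARD_SYSTEM
    print(f"{name}: {capacity_tbps} Tbps")  # 4, 16, 96, 192 Tbps
```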

The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links. The design inherently supports redundancy in the event fabric links fail.
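The cell format and scheduling used by the Jericho2 and Ramon silicon are not described here, but the general load-spreading idea can be sketched as follows. Everything in this sketch, including the cell size and link names, is a hypothetical illustration rather than the actual protocol.

```python
# Hypothetical sketch of cell-based load spreading: a packet is segmented into
# cells and sprayed across whichever 400G fabric links are currently up.
from itertools import cycle

CELL_BYTES = 256  # illustrative cell size, not the real protocol value

def spray_cells(packet: bytes, fabric_links: dict[str, bool]) -> list[tuple[str, bytes]]:
    """Split a packet into fixed-size cells and distribute them round-robin
    over the fabric links that are still up."""
    healthy = [link for link, up in fabric_links.items() if up]
    if not healthy:
        raise RuntimeError("no fabric links available")
    cells = [packet[i:i + CELL_BYTES] for i in range(0, len(packet), CELL_BYTES)]
    return [(link, cell) for link, cell in zip(cycle(healthy), cells)]

# If one of the 13 fabric-facing links fails, traffic keeps flowing over the rest:
links = {f"fabric-{n}": True for n in range(13)}
links["fabric-5"] = False
assignments = spray_cells(b"\x00" * 4096, links)
```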

"We are excited to see AT&T's white box vision and leadership resulting in growing merchant silicon use across their next generation network, while influencing the entire industry," said Ram Velaga, SVP and GM of Switch Products at Broadcom. "AT&T's work toward the standardization of the Jericho2 based DDC is an important step in the creation of a thriving eco-system for cost effective and highly scalable routers."

"Our early lab testing of Jericho2 DDC white boxes has been extremely encouraging," said Michael Satterlee, vice president of Network Infrastructure and Services at AT&T. "We chose the Broadcom Jericho2 chip because it has the deep buffers, route scale, and port density service providers require. The Ramon fabric chip enables the flexible horizontal scale-out of the DDC design. We anticipate extensive applications in our network for this very modular hardware design."
