
A Guide to PBT/PBB-TE

PBT was famously kicked off by BT and Nortel in 2004 as an idea for a potentially simpler and cheaper alternative to MPLS for providing carrier connection-oriented Ethernet in the metro environment. Initially, it didn’t carry much weight, because everyone was pretty much enamored with Internet Protocol (IP)/MPLS and felt that MPLS-enabled devices and routers would do everything for everyone. However, BT’s subsequent backing of PBT for its next-generation network, the 21CN, and its push to standardize the technology through the IEEE PBB-TE project sparked a flurry of development and activity. Every major vendor wanted a piece of the 21CN business, and other carriers began to pick up on the PBT idea for its own potential merits.

Technology basics
The Light Reading report "PBT: New Kid on the Metro Block" provides an overview of the rationale and technology of PBT/PBB-TE in some detail, and readers new to the topic should refer to it. Several of the Vendor Summaries later in this report also give more detailed snapshots of PBT/PBB-TE characteristics. This page gives only a summary of some of the key aspects, and indicates how PBT/PBB-TE fits into the seemingly endless development of Ethernet.

The basic point is that PBT/PBB-TE is Ethernet, but in an extended form that also removes some usual features and limitations to produce a somewhat Sonet/SDH-like carrier transport mechanism, but without the latter’s specific TDM/ring technology for recovery and QOS. That is, it provides point-to-point connections set up over predetermined network paths by an external network management system (NMS), and thereby provides traffic engineering and deterministic behavior. Further, paths are protected on an end-to-end basis with a predefined backup path, with switchover triggered by OAM Continuity Check Messages (CCMs) sent every 3.3 milliseconds. If the primary path fails, the secondary can be switched in within the 50 millisecond Sonet/SDH norm -- below 20 milliseconds has been achieved -- and some QOS mechanisms are available. At the Carrier Ethernet World Congress in 2007, for example, ANDA Networks Inc., Hammerhead Systems Inc., Nortel, and World Wide Packets Inc. demonstrated 50ms failover using PBB-TE.
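
To make the protection behavior concrete, here is a minimal Python sketch (hypothetical names, not vendor code) of how an endpoint could turn missed CCMs into a switchover. It assumes the 3.3 millisecond CCM interval above and the common 802.1ag convention of declaring loss of continuity after 3.5 missed intervals, which puts detection at roughly 11.6 milliseconds -- comfortably inside the 50 millisecond budget.

```python
# Minimal sketch (hypothetical names, not vendor code) of CCM-driven protection
# switching. Assumes the 3.3 ms CCM interval cited above and the common 802.1ag
# rule of declaring loss of continuity after 3.5 missed intervals.

CCM_INTERVAL_MS = 3.3
LOSS_THRESHOLD_MS = 3.5 * CCM_INTERVAL_MS  # ~11.6 ms of silence => path is down


class ProtectedTrunk:
    """Tracks one working/protection path pair for a PBB-TE trunk."""

    def __init__(self):
        self.active = "working"
        self.last_ccm_ms = 0.0  # arrival time of the last CCM on the working path

    def ccm_received(self, now_ms):
        self.last_ccm_ms = now_ms

    def tick(self, now_ms):
        """Called periodically; switch to protection if CCMs have stopped."""
        if self.active == "working" and now_ms - self.last_ccm_ms > LOSS_THRESHOLD_MS:
            self.active = "protection"  # forwarding now follows the backup path
            print(f"t={now_ms:.1f} ms: loss of continuity, switched to protection")


if __name__ == "__main__":
    trunk = ProtectedTrunk()
    ccm_arrivals = [i * CCM_INTERVAL_MS for i in range(7)]  # CCMs stop after ~20 ms
    for step in range(400):                                 # simulate 40 ms in 0.1 ms steps
        now = step * 0.1
        if any(abs(now - a) < 0.05 for a in ccm_arrivals):
            trunk.ccm_received(now)
        trunk.tick(now)
    # Detection lands ~11.6 ms after the last CCM -- well inside the 50 ms budget.
```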

All this is contrary to normal bridged/switched Ethernet behavior, where Ethernet switches dynamically interact and explore the network by mechanisms such as Spanning Tree and Flooding to learn and establish paths to destination addresses. Such behavior, although effective at producing flexible and easy-to-build enterprise networks, lacks the deterministic and predetermined characteristics that carriers need if they are to use Ethernet as the network Layer 2 for handling large numbers of traffic flows to tight service-level agreements (SLAs).

PBT/PBB-TE disables Ethernet Spanning Tree and address learning, and manages switch forwarding tables externally from a central point. It also uses an extension to the Ethernet frame known as Provider Backbone Bridging (PBB), the most recent in a series of extensions of the original Ethernet 10 Mbit/s LAN frame that have propelled Ethernet toward becoming a carrier networking technology. As Table 1 shows, the Ethernet frame has been successively tagged, or wrapped, to support such key carrier-type requirements as virtualization, separation of customer and carrier domains, and greater hierarchical classification for scalability and security. The result is vaguely similar to the MPLS stacked-tagging idea, but applied in a very specific way just to Ethernet (although, other than PBB and PBT using MAC addresses, there is conceptually no reason why this same frame format could not be used to transport other protocols). MPLS, in contrast, is a perfectly general technique, applicable to any packet technology -- hence the name, Multiprotocol Label Switching.

Table 1: Simplified Schematic of Parts of the Developing Ethernet Frame
(1) Basic Ethernet: DA (destination MAC address), SA (source MAC address), PAYLOAD, etc.
(2) VLAN tagging (802.1Q): DA, SA, VID (VLAN identifier), PAYLOAD, etc.
(3) Provider Bridges (802.1ad) -- Q-in-Q: DA, SA, S-VID (Service VID), C-VID (Customer VID), PAYLOAD, etc.
(4) Provider Backbone Bridges (802.1ah) -- MAC-in-MAC: [B-DA (Backbone DA)], [B-SA (Backbone SA)], [B-VID (Backbone VID)], [I-SID (Service identifier)], DA, SA, S-VID (Service VID), C-VID (Customer VID), PAYLOAD, etc.
[ ] indicates the provider Ethernet encapsulation of the customer Ethernet subnetworks. Source: Light Reading, 2008
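
As an illustration of what column (4) adds, the following sketch (Python, illustrative only) packs the backbone fields around an existing customer frame. The TPID values 0x88A8 (B-TAG, the same tag format as the 802.1ad S-TAG) and 0x88E7 (I-TAG) are the standard EtherTypes, but the layout here is simplified: priority and drop-eligibility bits are left at zero, and the customer frame is just a placeholder.

```python
# Illustrative only: packing the backbone fields from Table 1, column (4),
# around a customer frame. 0x88A8 (B-TAG) and 0x88E7 (I-TAG) are the standard
# TPIDs; priority/drop-eligibility bits are left at zero.
import struct


def mac(addr):
    """Convert 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(octet, 16) for octet in addr.split(":"))


def pbb_encapsulate(customer_frame, b_da, b_sa, b_vid, i_sid):
    """Wrap a customer Ethernet frame in an 802.1ah (MAC-in-MAC) header."""
    b_tag = struct.pack("!HH", 0x88A8, b_vid & 0x0FFF)      # TPID + 12-bit B-VID
    i_tag = struct.pack("!HI", 0x88E7, i_sid & 0x00FFFFFF)  # TPID + 24-bit I-SID
    return mac(b_da) + mac(b_sa) + b_tag + i_tag + customer_frame


# A pretend customer frame; a real one would carry the S-VID/C-VID tags of Table 1.
customer = mac("00:11:22:33:44:55") + mac("66:77:88:99:aa:bb") + b"payload..."
backbone = pbb_encapsulate(customer, "02:00:00:00:00:01", "02:00:00:00:00:02",
                           b_vid=100, i_sid=5000)
print(len(backbone) - len(customer), "bytes of PBB overhead")  # 22
```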




It’s worth pointing out that, although Table 1 suggests that PBB introduces a lot of overhead, comparison of the full details with a PWE3 MPLS pseudowire frame (used for transporting Ethernet over MPLS) shows that the overhead is the same size.
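
A back-of-envelope tally (assumed typical field sizes, not a figure taken from the standards) shows why: the four backbone fields come to 22 bytes, and a basic PWE3 encapsulation -- outer Ethernet header plus tunnel and pseudowire labels -- also lands at 22 bytes, with optional extras such as a control word or an outer VLAN tag nudging the PWE3 figure upward.

```python
# Rough, assumed field sizes in bytes -- not quoted from the standards text.
pbb = {"B-DA": 6, "B-SA": 6, "B-TAG": 4, "I-TAG": 6}
pwe3 = {"outer DA/SA/EtherType": 14, "MPLS tunnel label": 4, "PW label": 4}
print(sum(pbb.values()), sum(pwe3.values()))  # 22 22 (add 4 if PWE3 uses a control word)
```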

Standards and initiatives
PBB was invented by Nortel in 2004, and has been undergoing standardization by the Institute of Electrical and Electronics Engineers Inc. (IEEE) since 2005 as 802.1ah -- aka MAC-in-MAC. As the MAC-in-MAC name suggests, it puts a second MAC wrapper around the original Ethernet frame, and this second MAC is used by the provider edge and transport bridges to convey the frame to the correct egress provider bridge. PBB offers what might be termed "normal" Ethernet capabilities, and can thus support point-to-point, point-to-multipoint, and multipoint-to-multipoint networks. A final standard is not expected to emerge before mid-to-late 2008.

PBB provides only the data plane of PBT/PBB-TE. The control plane is still a work in progress and is subject to some debate: it can be argued that there is actually no need for a specific PBB-TE control plane if PBB-TE is used only for point-to-point services configured by an external OSS, with PBB -- plus a protocol to replace Spanning Tree Protocol (STP) -- handling all the other service topology types. The situation is further obscured by the industry’s tendency to sometimes lump OAM capabilities under the term "control plane" as well.

"There is clearly a debate heating up regarding PBT/PBB-TE control plane -- centralized versus MPLS signaled control plane," says Ken Davison, Meriton's VP of marketing and business development. "From the perspective of PBT/PBB-TE as an infrastructure enabler for Ethernet access aggregation and backhaul, the bulk of the network topology tends to be quite static. The drivers for a control plane are quite different than those at the services layer. For example, traffic engineering at the transport layer often takes into account physical realities that are not easily visible in the logical network -- like using a shared-risk-group database to avoid routing redundant paths through the same fiber or even fiber duct. Given the slow dynamics of the network at this level and the relative closeness of the physical (fiber) plant, an off-board OSS-based management scheme with a consistent view of the infrastructure as a whole seems more appropriate than a signaled control plane. Why should node processors incorporate databases and distributed algorithms to take these kinds of things into account?"

PBB-TE OAM draws on two important standards. The IEEE’s Ethernet OAM specification (802.1ag) provides Connectivity Fault Management, covering proactive alarms for service faults and the detection, verification, and isolation of connectivity failures, while the ITU-T’s Y.1731 adds performance measurement and monitoring capabilities.
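
As a small example of what the Y.1731 additions buy, its delay-measurement exchange (DMM/DMR) lets an endpoint compute two-way frame delay from four timestamps while factoring out the far end’s processing time. The sketch below uses hypothetical timestamp values; real MEPs take their timestamps in hardware.

```python
# Sketch of the two-way frame delay calculation enabled by Y.1731's DMM/DMR
# exchange (timestamp values are hypothetical; real MEPs timestamp in hardware).

def two_way_frame_delay(t1, t2, t3, t4):
    """
    t1: initiator sends DMM   t2: peer receives DMM
    t3: peer sends DMR        t4: initiator receives DMR
    Subtracting (t3 - t2) removes the peer's processing time from the round trip.
    """
    return (t4 - t1) - (t3 - t2)


# Example: an 0.8 ms round trip minus 0.1 ms of peer processing = 0.7 ms delay.
print(f"{two_way_frame_delay(t1=0.0, t2=0.35, t3=0.45, t4=0.80):.2f} ms")  # 0.70 ms
```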

“With respect to 802.1ag and Y.1731, portions of these are implemented in PBT today, specifically those portions related to CCM and protection switching,” says Daniel Barry, Tpack A/S's director of marketing. "The discussion for PBB-TE is whether to implement the full set of Y.1731, the full set of 802.1ag (which is itself a subset of Y.1731), or an even smaller subset. Y.1731 is fully comprehensive with every monitoring feature you could wish for, both for connection-oriented and connectionless operation. However, many carriers agree that a full implementation would be overkill. Therefore the discussion is what the minimum necessary set of functions should be."

Pulling all this together is Provider Backbone Bridge Traffic Engineering (PBB-TE) itself, which is being standardized in the IEEE 802.1Qay project, and, as already indicated, is essentially a connection-oriented modification to PBB. A standard is unlikely to emerge before sometime in 2009. Figures 1 and 2 indicate in general terms the arrangement of the PBT/PBB-TE data plane and the OAM.





Inevitably, PBT has its own industry vendor initiative -- the Carrier Ethernet Ecosystem (CEE) -- founded by Nortel in June 2007 and still very much promoted by that company. By early 2008 there were over 20 vendor members. Formally, the CEE is not restricted to PBT technology, and is open to vendors producing “Ethernet solutions for use by service providers in a variety of carrier-class environments," but PBT/PBB is clearly a major interest. As with most initiatives of this type, a big aim is to develop and demonstrate practical interoperability, backed by interoperability testing -- obviously a crucial point for a new carrier technology like PBT. Other aims are also standard: to promote carrier Ethernet generally, and to aid the development of standards relating to it.

Next Page: Why PBT/PBB-TE?
xornix 12/5/2012 | 3:44:09 PM
re: A Guide to PBT/PBB-TE
Is ANY retail service inherently Ethernet? I think NOT, which means PBT/PBB-TE can only be used as a transport technology, without any relation to the service itself.

In tier 1 ISP networks, there is probably a case for PBT backhaul as MPLS may be a bit hard to scale and operate down to the access, but that's all I can see as:
- the access needs IP aggregation
- the core needs IP routing

If you are focused on business services, this is obviously a completely different story.

What is BT focused on?