Ethernet Over MPLS

Technology update: hard QOS, service interworking, and end-to-end management

April 28, 2006


Interest in the use of Ethernet in telecom networks has grown like crazy in the past year or two, driven by service providers' need to cut costs, boost bandwidth, simplify user interfaces, and create standards-based platforms for rolling out next-generation services. A few facts and figures illustrate this:

  • Carrier Ethernet switch/router sales tripled in the past year, going from $194 million in 2004 to $637 million in 2005, according to Heavy Reading. (See All Hail CESR!)

  • Light Reading’s Ethernet Expo: Europe 2006 conference and exhibition, held in London this week, was mobbed by more than 250 representatives from service providers wanting to get the latest info on technology and market trends. (See 21CN: It's an Ethernet Thing, Swisscom Eats Up Ethernet, and Colt CTO: Let's Get Simple.)

  • The number of Ethernet services being offered by operators is increasing rapidly. Light Reading’s Ethernet Services Directory has captured details of 469 services offered by 260 operators, but many more now exist.

There are numerous ways of offering Ethernet services, but the one that's been garnering the most attention recently has been Ethernet over Multiprotocol Label Switching (MPLS), partly because it supports mesh topology, any-to-any, virtual private networks (VPNs) and partly because MPLS has some nice attributes when it comes to traffic engineering and controlling quality of service (QOS).

All the same, Ethernet over MPLS isn't without its downsides – two of them being cost and complexity. That in turn creates some other issues, because it often isn't economical to extend MPLS all the way to the customer site. (See MPLS in Access Networks.)

Right now, this means service providers are faced with some tricky compromises – often relating to QOS and service management. They need to weigh up whether they can afford to wait for standards to evolve in these areas, bearing in mind demand for Ethernet services has taken a lot of service providers by surprise. The big carriers are worried that, without standards, operational costs could get out of control as they scale their networks to handle very large numbers of customers. (See Aggregation Aggravation.)

Another hot issue is standardizing service interworking – so operators can offer VPNs that link some sites using Ethernet and other sites using legacy technologies such as Frame Relay and ATM. This provides a way of migrating enterprise users to Ethernet on a step-by-step basis.

This report aims to identify key concerns of carriers in this field and provide a status report on how these issues are being addressed in standards bodies and in the products being developed by equipment vendors.

This report is based on a Webinar, Ethernet Over MPLS: Technology Update, moderated by Stan Hubbard, Senior Analyst, Heavy Reading, and sponsored by Hewlett-Packard Co. (NYSE: HPQ) and Tellabs Inc. (Nasdaq: TLAB; Frankfurt: BTLA). It may be viewed free of charge in our Webinar archives.

— Tim Hills is a freelance telecommunications writer and journalist. He's a regular author of Light Reading reports.

Most carriers that are used to legacy Frame Relay and ATM services are attracted by the flexibility and service innovation that Ethernet allows. At the same time, they want to retain as much as possible of the well-proven, revenue-generating virtues of those legacy services. This means that next-generation carrier Ethernet must provide:

  • End-to-end Ethernet services

  • Seamless service interworking

  • Guaranteed SLA per customer over any protocol (hard QOS and reliability)

  • Carrier-to-carrier interconnection

  • End-to-end management and rapid provisioning for on-demand services

Further, carrier Ethernet has to support a wide range of different access networks, and be compatible with network trends. For example, carriers may be using end-customer access interfaces such as Q-in-Q (IEEE 802.1ad – Provider Bridges), and, in metro networks, increasing numbers are using Ethernet over Sonet/SDH (a.k.a. X.86) or Ethernet over MPLS. In the core, Ethernet services are commonly provided by Ethernet pseudowires over MPLS.
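To make the Q-in-Q (IEEE 802.1ad) access encapsulation mentioned above concrete, the sketch below builds a double-tagged Ethernet frame at the byte level. This is an illustrative Python sketch of the 802.1ad frame layout, not production code; the helper names are our own.

```python
import struct

def vlan_tag(tpid: int, pcp: int, vid: int) -> bytes:
    """Build a 4-byte VLAN tag: 16-bit TPID, then 3-bit priority (PCP),
    1-bit DEI (0 here), and 12-bit VLAN ID packed into the 16-bit TCI."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

def qinq_frame(dst: bytes, src: bytes, s_vid: int, c_vid: int,
               ethertype: int, payload: bytes) -> bytes:
    """Double-tagged frame: the provider S-tag (TPID 0x88A8) is pushed
    outside the customer C-tag (TPID 0x8100), so the provider network
    switches on the S-VID without touching the customer's own VLANs."""
    return (dst + src
            + vlan_tag(0x88A8, pcp=0, vid=s_vid)   # provider (outer) tag
            + vlan_tag(0x8100, pcp=0, vid=c_vid)   # customer (inner) tag
            + struct.pack("!H", ethertype)
            + payload)
```

The key design point is that the provider edge only pushes and pops the outer tag, leaving the customer's VLAN space intact.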

Hard QOS has become a bit of a buzzword in Ethernet services. The recent Heavy Reading "Fall 2005 Global Survey of Service Provider Technology Deployment Plans" showed that many carrier professionals felt that the top priority for standards work was ensuring hard QOS. This is necessary for end-to-end SLAs and the support of mission-critical applications.

Says Andrew Malis, Chief Technologist, Advanced Data Products, Tellabs, and Chairman and President of the MFA Forum, “Carriers need hard QOS because very often enterprises are running real-time applications, such as VOIP and videoconferencing, over their metro and wide-area Ethernet services.”

Backbone convergence is another key requirement. Carriers provide their customers with a wide range of services, such as IP, ATM, Frame Relay, VOIP, TDM, and Ethernet, and each puts various requirements on the network. It is extremely expensive in both operational and capital terms for service providers to try to meet these requirements by using separate networks.

Examples of the varying requirements are:

  • TDM private line networks: Support for voice quality; ring-like protection and resiliency; network timing

  • IP networks: L3 redundancy; protocols such as OSPF, ISIS, BGP, MPLS, and VR; inter-provider peering

  • ATM/Frame Relay networks: Cell relay; SVC, SPVC, PNNI; interworking; inter-provider trunks; high QOS

  • VOIP/VOATM: Intra-LATA voice services; distributed voice switching; maintenance of current legacy voice quality

  • Ethernet services: Link aggregation and no link loops; point-to-point and Layer 2 Ethernet VPNs (Virtual Private LAN Service – VPLS)

Being able to satisfy all these successfully from a single network instead of multiple parallel networks would clearly be a big gain for carriers.

Carrier Ethernet for Convergence

There are two fundamentally different ways of creating a converged packet-oriented network:

  • Packet approach, which typically adds transport interfaces to packet-based equipment

  • Transport approach, which adds an additional shim MAC layer between the Sonet/SDH transport and packet services, as in various next-gen Sonet/SDH technologies, such as Generic Framing Procedure, Virtual Concatenation, and Link Capacity Adjustment Scheme

Carrier Ethernet, as defined by the MEF, uses the packet approach, and most carrier Ethernet equipment aggregates legacy interfaces and backhauls them by using Sonet/SDH or MPLS/Ethernet. Table 1 shows Malis’s view on how carrier Ethernet compares to enterprise Ethernet and to next-gen Sonet/SDH in terms of some of the carrier requirements. The basic point is that enterprise Ethernet, which was used in some early carrier situations, simply doesn’t cut the mustard as a regular carrier technology, while next-gen Sonet/SDH is good as a development of existing carrier-grade investment, but lacks some of the appealing enterprise-class attributes, such as simplicity and affordability. Carrier-class Ethernet, however, effectively marries the desirable attributes of enterprise Ethernet and next-gen Sonet/SDH.

Table 1: Carrier Ethernet Compared to Enterprise Ethernet & Next-Generation Sonet/SDH

[Table comparing Enterprise-Class Ethernet, Next-Generation Sonet/SDH, and Carrier-Class Ethernet against carrier requirements including Service Interworking and Guaranteed SLA; table cell values not reproduced.]
Malis argues that Ethernet over MPLS is most advantageous if a carrier is offering point-to-multipoint services, has a large number of network points of presence, or carries high volumes of Ethernet service traffic. Next-generation Sonet/SDH, in contrast, is most appropriate if the carrier is offering point-to-point or point-to-multipoint (hub-and-spoke) services, or has only a small number of network points of presence. These considerations suggest that Ethernet over MPLS is generally more cost-effective for “shared” and multipoint services, while Ethernet over Sonet/SDH is more appropriate for a more expensive, premium “private” service.

Much of the standardization work for multiprotocol interworking has been taking place in the MFA Forum, where the focus is on two different types of interworking between Ethernet and legacy services, such as Frame Relay and ATM. These are bridged service interworking and routed service interworking.

“It is extremely important when a service provider rolls out carrier Ethernet that they are able to interwork the Ethernet interfaces with their existing Frame Relay and ATM interfaces,” says Tellabs’ Malis. “This means that their end customers can have one wide-area network but with a mix of ports – some being Ethernet, some ATM, and some Frame Relay – and they are able to work together.”

Bridged interworking supports all protocols, IP and non-IP (such as IPX, SNA, and any other Ethernet-based protocols in the enterprise network). This allows carriers to bridge together native Ethernet ports with ports that are using Ethernet frames encapsulated over Frame Relay or over ATM. Among the applications supported are Transparent LAN Services and Virtual Private LAN Services (VPLS), and configurations can be point-to-point or multipoint-to-multipoint.

Routed service interworking interworks IP over Frame Relay or over ATM with IP over Ethernet. This is a point-to-point service, and is (obviously) only for the interworking of IP. However, it is a very scalable way of handling IP, as it gives any-to-any connectivity for IP packets, whatever their source within the enterprise network served by the provider. The target application is taking the existing Frame Relay and ATM networks that enterprises are using, and migrating those customers to Ethernet.

Figure 1 shows some of the currently standardized methods of interworking ATM, Frame Relay, TDM, IP, and Ethernet. For Ethernet the primary interfaces that are being interworked are 10/100-Mbit/s, VLAN interfaces using Q-in-Q or VPLS, and Ethernet over Sonet X.86, as well as Gigabit Ethernet and 10-Gigabit Ethernet.

Figures 2 and 3 show some of the advantages that a service provider can gain by using a converged backbone and interworking among the different protocols. Figure 2 should be familiar to many service providers, as it is pretty much what is deployed in networks today. It shows several different overlay networks being used to provide all the services shown on the left. So there is an IP network, which is typically overlaid on an ATM network, which in turn is typically overlaid on a Sonet/SDH transport network.

All these services are brought into access network equipment and aggregated for carriage over the appropriate backbone network. This is an extremely expensive proposition for a service provider because of all the different network equipment to supply and maintain, and there is also no opportunity for service interworking because of the separate backbone networks.

Figure 3 shows the same services being offered over a converged MPLS and IP core network, where ATM, Frame Relay, Ethernet, and TDM (for private line) services are encapsulated into MPLS-based pseudowires for carriage through the core network.

There is a first level of aggregation equipment that is native to the service being provided. Aggregated services are then passed to an IP edge router for carriage through the backbone network.
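The pseudowire encapsulation at the heart of this converged architecture can be sketched in a few lines. The following is an illustrative Python rendering of the two-level MPLS label stack used for Ethernet pseudowires (in the style of RFC 4448, with the optional control word omitted); the function names are our own, not any vendor's API.

```python
import struct

def mpls_label(label: int, exp: int, bos: int, ttl: int) -> bytes:
    """One 32-bit MPLS label-stack entry: 20-bit label, 3-bit EXP bits,
    1-bit bottom-of-stack flag, 8-bit TTL."""
    word = (label << 12) | (exp << 9) | (bos << 8) | ttl
    return struct.pack("!I", word)

def ethernet_pseudowire(tunnel_label: int, pw_label: int,
                        eth_frame: bytes) -> bytes:
    """Ethernet-over-MPLS encapsulation: the outer tunnel label steers
    the packet across the core; the inner pseudowire label (bottom of
    stack) identifies the emulated Ethernet circuit at the egress PE."""
    return (mpls_label(tunnel_label, exp=0, bos=0, ttl=64)
            + mpls_label(pw_label, exp=0, bos=1, ttl=64)
            + eth_frame)
```

Because only the inner label identifies the circuit, core routers switch on the tunnel label alone and never inspect the Ethernet payload.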

An interesting question is what to do about private-line services. There are basically two ways to provide such services at speeds from T1 up to Sonet/SDH rates, and which is preferable depends very much on the service provider's existing infrastructure. The first is to continue to use the existing Sonet/SDH/WDM transport network. This is likely to be best for service providers that already have a significant deployed base of Sonet/SDH equipment that cannot yet be retired. For service providers, such as ISPs, that do not have a significant deployment of Sonet/SDH transport equipment, TDM encapsulation into pseudowires for carriage through the IP/MPLS backbone network is the better approach.

As end customers increasingly run real-time applications such as VOIP and videoconferencing with metro and wide-area Ethernet services, it becomes increasingly important for service providers to offer SLAs that include what is known as hard QOS. This is the QOS that applications require to run real-time services. There are a number of different mechanisms that can be used within the network equipment – such as multiservice routers – to provide hard QOS.

These are essentially:

  • Providing a large number of queues for individualized SLAs on the basis of per customer, per application, per service, and per protocol

  • Giving each traffic flow its own flow classification, using Priority Queue, Policer and/or Shaper, and Congestion Manager

These techniques are derived largely from earlier experience with ATM, so ATM’s per-VC queuing reappears as IP/MPLS per-flow queuing, for example. Giving each network flow its own queue within switches/routers allows each application to have its own personalized queue for QOS purposes. Similarly, giving each flow its own flow classification on the basis of packet content inspection (for example, data protocol, source and destination addresses), allows it to be placed on the proper priority queue, where the flow can be policed according to the SLA. If necessary, the flow can be shaped to remove some of the inherent burstiness of Ethernet traffic, which is an important consideration in congestion management.
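The policing step described above is classically implemented with a token bucket. The sketch below is a minimal single-rate policer plus a per-flow classifier table, purely to illustrate the mechanism; real multiservice routers implement this in hardware with far more sophistication (two-rate three-color markers, hierarchical shapers, and so on).

```python
class TokenBucketPolicer:
    """Single-rate token-bucket policer: conforms traffic to a committed
    rate (bytes/s) with a burst allowance (bytes). Packets that exceed
    the bucket are non-conforming (dropped or demoted per the SLA)."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps        # refill rate, bytes per second
        self.burst = burst_bytes    # bucket depth, bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0             # timestamp of the previous packet

    def conforms(self, now: float, pkt_len: int) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False

def policer_for(flow_key, policers: dict, rate: float, burst: float):
    """Per-flow classification: each distinct flow key (e.g. a 5-tuple)
    gets its own policer, mirroring per-flow queuing."""
    return policers.setdefault(flow_key, TokenBucketPolicer(rate, burst))
```

A shaper uses the same bucket arithmetic but delays non-conforming packets instead of marking them, which is what smooths out Ethernet's burstiness.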

“The result of all this – policing, shaping, admission control, and congestion management – is that we really get the reliability and predictability that you can get originally only from circuit switching or from ATM,” says Tellabs’ Malis. “And you can do this in an IP/MPLS-based packet network.”

An important practical point for core transport is to aggregate flows (and traffic tunnels) with the same QOS requirements, as this is the only way that flow-based QOS can be made to scale up to the hundreds of millions of individual flows that can be passing through a large core router/switch at any given moment.

Another very important matter for service interworking is to be able to define the QOS characteristics in terms of the native traffic parameters to ensure consistency. For example, the following parameters are usually used to define QOS natively:

  • ATM: PCR (Peak Cell Rate), SCR (Sustained Cell Rate), MBS (Maximum Burst Size), and CDVT (Cell Delay Variation Tolerance)

  • MPLS LSP: PDR (Peak Data Rate), CDR (Committed Data Rate), and PBS (Peak Burst Size)

  • Frame Relay: CIR (Committed Information Rate) and Bc and Be (Committed/Excess Burst)

  • Ethernet/Ethernet over Sonet/VLAN: Maximum and minimum rate, and MBS (Maximum Burst Size)

The network must be able to convert among these parameters if it is to provide a true end-to-end service with the QOS that is required by the customer.
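A rough sketch of what such a conversion involves is shown below: each native parameter set is mapped onto a common (committed rate, peak rate, burst size) triple in bytes. This is a simplified illustration under stated assumptions (ATM rates converted on 48-byte cell payloads with AAL5 overhead ignored; the Frame Relay peak rate approximated as (Bc+Be)/Tc with Tc = Bc/CIR); a real interworking function must account for per-technology framing overheads.

```python
ATM_CELL_PAYLOAD = 48  # payload bytes per 53-byte ATM cell

def atm_to_common(pcr_cps: float, scr_cps: float, mbs_cells: int) -> dict:
    """ATM traffic descriptors (cells/s and cells) converted to the
    byte-denominated committed/peak rates and burst size that an MPLS
    LSP (CDR/PDR/PBS) would be provisioned with."""
    return {"peak_Bps": pcr_cps * ATM_CELL_PAYLOAD,
            "committed_Bps": scr_cps * ATM_CELL_PAYLOAD,
            "burst_B": mbs_cells * ATM_CELL_PAYLOAD}

def fr_to_common(cir_bps: float, bc_bits: float, be_bits: float) -> dict:
    """Frame Relay descriptors: CIR in bit/s, Bc/Be in bits per
    measurement interval Tc; the peak rate is approximated as
    (Bc + Be) / Tc."""
    tc = bc_bits / cir_bps                      # measurement interval, s
    return {"peak_Bps": (bc_bits + be_bits) / tc / 8,
            "committed_Bps": cir_bps / 8,
            "burst_B": bc_bits / 8}
```

Once everything is expressed in the same units, the network can police a Frame Relay ingress and an Ethernet egress against one consistent contract.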

Many current Ethernet services use point-to-point (P2P) or point-to-multipoint (P2MP) configurations using VLANs, as shown in Figure 4. P2MP is often referred to as a hub and spoke or star configuration. Although cost-effective for essentially circuit-oriented ATM and Frame Relay networks, P2MP has disadvantages. The hub represents a single point of failure, and designing around this adds cost and complexity. Spoke-to-spoke packets must travel via the hub, which increases latency, and is contrary to the increasing use of any-to-any applications, such as VOIP and instant messaging. Further, configuration can become somewhat complex, because a CPE router is required at the hub, and VLAN tags are used to identify separate spokes at the hub.

However, Ethernet services are continuing to develop and offer new options, especially multipoint-to-multipoint (MP2MP) architectures using VPLS, as shown in Figure 5.

“We are finding with Ethernet services that very often customers are migrating to a peer-to-peer or multipoint-to-multipoint kind of topology. And those services are very often provided by the service provider using VPLS,” says Tellabs’ Malis. “The advantage to the end customer is that there is not a single hub as a single point of failure – instead there is true direct any-to-any connectivity, which doesn’t add delay by going through a hub site, and you get added transparency.”

One reason that VPLS, which underlies the MP2MP architecture of Figure 5, is attractive for service providers is that it simplifies configuration – both for the service provider and for the end customer. This leads to lower costs, which can be reflected in customer pricing, so it can now be as cost-effective for the end customer to use a peer-to-peer network architecture as an older-style hub-and-spoke architecture.

Of course, there are potential drawbacks to an MP2MP architecture. The classic one is the N²-link effect: fully meshing N sites (which is what an MP2MP architecture effectively does) requires on the order of N² intersite links as N grows. This raises scalability issues for large enterprise networks with, say, hundreds of sites. That these links will be virtual (pseudowire connections) rather than physical when VPLS runs over an MPLS network makes no difference – the system would still have to create, maintain, and manage tens of thousands of such pseudowires, a challenge for a carrier that may have many such enterprise customers.

To overcome this issue, VPLS has been extended to what is known as Hierarchical VPLS (HVPLS). This divides a VPLS VPN into a hierarchical structure (typically of two or three levels) of meshed hubs fed by spokes over which multiple end customers are aggregated. The result is to reduce the number of network nodes that need to be fully meshed.
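The arithmetic behind the HVPLS saving is easy to check. The sketch below counts pseudowires for a flat full mesh versus a two-level hierarchy (the figures are ours, for illustration): 200 sites fully meshed need 19,900 pseudowires, whereas 10 meshed hubs each feeding 19 spoke nodes cover the same 200 nodes with only 235.

```python
def full_mesh_pws(n_sites: int) -> int:
    """A flat VPLS full mesh needs one pseudowire per site pair:
    N * (N - 1) / 2, i.e. O(N^2)."""
    return n_sites * (n_sites - 1) // 2

def hvpls_pws(n_hubs: int, spokes_per_hub: int) -> int:
    """Two-level HVPLS: only the hubs are fully meshed; each spoke node
    hangs off its hub with a single spoke pseudowire."""
    return full_mesh_pws(n_hubs) + n_hubs * spokes_per_hub
```

The quadratic term now applies only to the (small) hub count, which is what makes the architecture manageable at carrier scale.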

As more and more carriers deploy Ethernet and MPLS services, there seems to be growing agreement that more attention needs to be paid to the operational and support challenges presented by MPLS. In particular, this means looking at:

  • Managing across business processes

  • Managing the customer’s service end-to-end

  • Managing from top to bottom

Managing Across Business Processes

The kinds of services enabled by Ethernet and MPLS are much more complicated, and have wider ramifications, than traditional telecom services. For example, they allow

  • On-demand or customer self service, such as bandwidth on demand

  • Value-added IP-based services, such as triple-play voice, video, and data

  • Per-customer and per-VPN SLA management

Such capabilities necessarily have to work with several different parts of the carrier’s business processes.

“Consider a carrier basically allowing the end customer to go in and tune the service bandwidth on the fly,” says Paul To, Solution Manager, Hewlett-Packard. “The customer might hit a turbo button, and answer a series of questions – how long do you want the extra bandwidth for, and how much extra? for example. The carrier’s OSS has to deal with it, and would authorize the request and interact with the network to increase the bandwidth in real time.”
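The workflow To describes – authorize, provision in real time, revert on expiry – can be sketched as follows. This is a hypothetical illustration only: the entitlement check, `activate` hook, and `schedule` hook are assumed interfaces standing in for a real OSS's authorization, network-activation, and scheduling subsystems.

```python
from dataclasses import dataclass
import time

@dataclass
class TurboRequest:
    customer_id: str
    extra_mbps: int     # requested boost
    duration_s: int     # how long the boost should last

def handle_turbo(req: TurboRequest, entitlements: dict,
                 activate, schedule) -> bool:
    """Hypothetical 'turbo button' flow: check the request against the
    customer's entitlement, push the new rate to the network in real
    time, and schedule the automatic rollback when the boost expires."""
    if req.extra_mbps > entitlements.get(req.customer_id, 0):
        return False                                # not entitled: reject
    activate(req.customer_id, req.extra_mbps)       # real-time provisioning
    schedule(time.time() + req.duration_s,          # rollback at expiry
             lambda: activate(req.customer_id, 0))
    return True
```

The point of the sketch is the coupling: the same request touches authorization, network activation, and a timed business process, which is why the backend systems must work together in real time.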

Equally, the customer is going to need a detailed view of its management and service-level data to be able to use such self-service features. And the huge amount of bandwidth that is allowing new services such as VOIP and IPTV to be converged onto access links creates service management and provisioning issues, and tests the backend systems' ability to support multiple classes of provisioning for those services.

To do all these things is not easy. Figure 6 shows the different mix of technologies that are deployed to enable a typical Ethernet or MPLS service. From a management perspective, the top half of the Figure illustrates some of the basic functional blocks that are required to provide an integrated service management for the underlying network technologies.

Service fulfillment covers activation and provisioning of services – fulfilling the customer request for things such as bandwidth on demand. Service assurance covers managing the service and making sure that it is available – managing faults and alarms, SLAs, performance statistics, and so on. Finally, from a carrier perspective, it is essential to make sure that everything is billable, which is the main task of service usage, although such data will obviously be used by the carrier for business analysis as well.

The activation system needs to be aware of the underlying technology and to provide provisioning across all those technologies. Similarly, fault management needs to be able to manage and monitor alarms coming in from all the underlying networks, such as the Sonet/SDH, ATM, and MPLS/IP layers. All the different performance statistics must be gathered and processed to provide intelligible reports to the operator and to the end customer, and these performance characteristics must be translated into SLAs so that the carrier can manage QOS for the end customer. Billing data, too, needs to be collected from all the different underlying media.

“Each of those integration points presents unique challenges when we work with each of the different underlying technologies,” says To. “MPLS and Sonet have different mechanisms for fault and performance data representation, for example, and that introduces a level of complexity. Another challenge we have to solve is how the functional blocks need to interact with each other as a system.”

As an example, when a carrier activates a VPN it is important also to configure the fault and performance systems so that faults and performance are managed not just from a network perspective, but also in terms of how they affect the customer's service. So the systems must become customer- and service-aware, and all the backend systems must work together in real time if carriers are to offer services such as customer-controlled bandwidth on demand.

Managing End-to-End

Managing a customer’s service end-to-end is an obvious requirement, but it is a challenge because networks are almost never homogeneous – there are many legacy technologies, which have typically been mixed or layered. And even newer technologies such as MPLS VPNs can be (and are) implemented to a variety of different technology standards.

Typical issues with multiple technologies are how to provide an end-to-end service bill, how to collate the different technology domains into a single management view, and how to activate a service from one end to the other when crossing technology boundaries. Currently, addressing these issues usually involves trying to link and interwork islands of vendor- and technology-specific management systems.

In addition to being able to manage end-to-end across different network and VPN technologies, it is also necessary to manage end-to-end across different service domains.

“A big motive for IP and NGNs is value-added services,” says To. “When we talk about providing the pipe and the guaranteed QOS for the pipe, the pipe is really only the beginning. What most carriers want to do is generate additional revenue services on top of those pipes.”

This is the key point of triple-play services, illustrated in Figure 7. Here the management system has to be able to manage the service lifecycle of multiple services and to facilitate the interaction between the end customer and the network and services. This means a single point of provisioning and management across the different application services, such as voice, video, and content.

Managing Top to Bottom

The final big challenge is coping with the inherently multilayer structure of Ethernet over MPLS services. From top to bottom these typically include

  • EoMPLS VPN services

  • Access networks

  • IP/MPLS layer

  • Sonet/SDH transport network

  • Photonics layer

The issue is not only being able to manage each layer, but also being able to manage all the interactions between them – and to relate them to what the customer sees and to the agreed SLAs.

For example, if there is a Sonet failure, and a consequent OAM alarm, there will be a cascade effect, both horizontally and vertically, through the layers. A Sonet link failure on a PE router will cause reachability failure alarms from PE routers on the other end of the link, for example. Equally, a Sonet link failure will cause some LSP failures in the MPLS layer, generating in turn a related event in the routing plane and router layer, which will eventually translate to a customer event – the loss of a particular VPN.

“A key challenge is relating all these different alarms so that carriers can very quickly identify which customer and services are impacted, so that they can proactively address the impending customer-care problem,” says To. “But it also works the other way. When a customer calls in with a problem, they are not going to say link X is down – they will say they have a problem with a particular VPN. So carriers need to be able to drill down from the top and be able to identify the underlying piece of infrastructure that is responsible for that particular service fault – and do it quickly.”
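Both directions To describes – bottom-up impact analysis and top-down drill-down – amount to walking a layered dependency graph. The sketch below illustrates the idea on a toy topology; the resource names and the flat dependency map are our own invention, standing in for a real inventory/correlation system.

```python
# Toy layered dependency map: each resource lists what it directly
# depends on (VPN -> LSP -> Sonet link). Names are illustrative only.
DEPENDS_ON = {
    "vpn:acme":    ["lsp:pe1-pe2"],
    "vpn:globex":  ["lsp:pe1-pe3"],
    "lsp:pe1-pe2": ["sonet:linkA"],
    "lsp:pe1-pe3": ["sonet:linkB"],
}

def impacted_services(failed: str) -> set:
    """Bottom-up correlation: invert the dependency graph and walk
    upward from a failed low-layer resource to every customer-facing
    VPN above it."""
    rdeps = {}
    for svc, deps in DEPENDS_ON.items():
        for d in deps:
            rdeps.setdefault(d, []).append(svc)
    hit, stack = set(), [failed]
    while stack:
        node = stack.pop()
        for parent in rdeps.get(node, []):
            if parent not in hit:
                hit.add(parent)
                stack.append(parent)
    return {h for h in hit if h.startswith("vpn:")}

def root_candidates(service: str) -> set:
    """Top-down drill-down: from a customer-reported service, collect
    the underlying resources that could be the fault's root cause."""
    out, stack = set(), [service]
    while stack:
        node = stack.pop()
        for dep in DEPENDS_ON.get(node, []):
            out.add(dep)
            stack.append(dep)
    return out
```

With such a map, a single Sonet alarm resolves directly to the affected customers, and a customer's trouble ticket resolves directly to candidate root causes.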

The good news is that technologies are appearing to overcome these issues. For example, increasingly there are technologies available to provide real-time monitoring of events in the routing and MPLS layer, so it is possible to generate alarms without waiting for the next round of polling or SNMP traps.

Also there are synthetic test agents – technologies such as IP SLA or Cisco SAA – for on-demand generation of tests at the service level. For example, what is the latency between all the VPN points? Are all the VPN endpoints talking to each other? The management system can use these agents to quickly diagnose and validate that the VPN and service are functional.
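The mesh-validation question posed above can be sketched as a probe over every ordered pair of VPN endpoints. This is an illustrative harness only: `probe(a, b)` is an assumed measurement hook (for example, an IP SLA-style echo) that returns a round-trip time in milliseconds or raises on failure.

```python
import itertools

def mesh_test(endpoints, probe):
    """Probe every ordered pair of VPN endpoints and record the latency,
    or None on failure, so any-to-any connectivity can be validated
    on demand."""
    results = {}
    for a, b in itertools.permutations(endpoints, 2):
        try:
            results[(a, b)] = probe(a, b)
        except Exception:
            results[(a, b)] = None   # endpoint pair unreachable
    return results

def failures(results):
    """Return the endpoint pairs that failed the reachability check."""
    return [pair for pair, rtt in results.items() if rtt is None]
```

A management system would run such a sweep after provisioning (or on a customer complaint) and correlate any failed pairs with the underlying infrastructure.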

Finally, there is great scope for developing a single management entity that can digest all the diverse alarm and other information generated from multiple sources of network instrumentation, and correlate it across the different network layers and to the service performance as experienced by the customer.

“From an HP perspective, we have been working with the different technology providers and customers to try to address such problems proactively, and to tie together the different layers and functional areas,” says To. “I think the key is pre-integration of management functionality into the various technologies, rather than trying to pull everything together afterwards.”
