Some folks say they'll never forgo five-nines reliability and 50 millisecond restoration standards, but there are definite signs Web 2.0 approaches are taking hold.

May 21, 2015

Is 'Good Enough' Ever Enough for Telecom?

In a telecom world of 99.999% reliability, where network downtime is measured in minutes annually and service restoration time is expected within 50 milliseconds, can there ever be such a thing as networks that are merely "good enough?"
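For scale, that 99.999% figure works out to roughly 5.26 minutes of allowable downtime per year. A quick back-of-the-envelope check (plain Python; the function name is just for illustration):

```python
# Allowable annual downtime at a given availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime permitted per year at the given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(f"99.999% -> {downtime_minutes(0.99999):.2f} min/year")  # ~5.26
print(f"99.9%   -> {downtime_minutes(0.999):.2f} min/year")    # ~525.96
```

Each extra nine cuts the annual downtime budget by a factor of ten, which is why the difference between "good enough" and five nines is an engineering (and spending) question, not a rounding error.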

To long-time telecom folks, the quick and easy answer is "Of course not," but in today's competitive Internet realm, the more accurate answer might be, "Well, maybe, if..."

It's a topic that came up frequently at our one-day Carrier SDN event in Denver last week, as service providers from the telecom and cable sectors considered whether, in adopting networking technologies born in the data center and IT realms, they might one day accept "good enough" as a networking standard in return for more flexible, scalable networks that deliver new services faster.

Many speakers -- such as Verizon Communications Inc. (NYSE: VZ)'s Chris Emmons -- still stressed reliability, and the need for virtualization to be adopted in a way that doesn't threaten the reputation telecom has established for having rock-solid networks. Others, such as Cox Communications Inc. 's Jeff Finkelstein, went a bit out on a limb, admitting there is a point at which he'd say, "Good -- enough!" when it comes to determining how best to use financial resources, since those are never unlimited.

A lot of this thinking comes from the telecom sector's move to compete with -- and begin to emulate -- the Web 2.0 players and their network architectures, notes Sterling Perrin, Heavy Reading senior analyst and chair of the SDN event. The Googles, Facebooks and Yahoos of the world have built networks in a very different fashion from the regulated telecom sector but have been successful in growing those networks, delivering services over them and building big market caps, he notes.

"These guys [Web 2.0 players] have been very vocal at conferences about what they are doing," Perrin noted. By building networks from scratch, they have been able to set their own approaches to network architectures and have leaned more heavily toward networks that scale and enable fast turn-up -- and turn-down -- of new services and applications. "They are getting a strong reception from traditional telecom companies when they speak. But I don't know that anything's being done yet, certainly not at any large scale."

With the aggressive move of some players, such as AT&T Inc. (NYSE: T) with its Domain 2.0 initiative, to embrace SDN and NFV, things may be changing. Service providers aren't going to abandon reliability and performance measures, but they may start to think about them differently, he adds.

"These newer Internet-based players think more holistically about network performance, taking into account the overall reliability of a network or system, versus trying to build in protection at every layer or for every individual element," Perrin says. Telecom, with its concerns about single points of failure, redundant paths and backup/standby boxes, tends to spend more building in prescribed safeguards against any single piece of equipment failing.

"If you are offering a service that goes across three different layers of the network then each of those layers would have to individually meet performance requirements, which does become overkill and I think you can see operators trying to recognize that," he says. "That's a fairly recent thing that is being discussed by the traditional network operator guys. The Web 2.0 guys just came at things differently."
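Perrin's point can be seen with basic series/parallel availability arithmetic. A minimal sketch -- the layer counts and per-layer figures below are illustrative assumptions, not numbers from the article -- comparing redundancy built into every layer against a single end-to-end protection path:

```python
# Illustrative availability arithmetic (hypothetical numbers).

def series(avails):
    """Availability of layers stacked in series (all must be up)."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(avails):
    """Availability of redundant elements (at least one must be up)."""
    down = 1.0
    for a in avails:
        down *= (1.0 - a)
    return 1.0 - down

LAYER = 0.999  # a single unprotected layer: three nines

# Telecom habit: protect every layer individually, then stack all three.
per_layer = series([parallel([LAYER, LAYER])] * 3)

# Holistic habit: build the plain three-layer path, then protect the
# whole service once with a second, independent end-to-end path.
end_to_end = parallel([series([LAYER] * 3)] * 2)

print(f"protect each layer: {per_layer:.7f}")   # ~0.9999970
print(f"protect path once:  {end_to_end:.7f}")  # ~0.9999910
```

In this toy example both designs clear five nines end to end, so the extra protection machinery at every individual layer buys diminishing additional nines -- which is the "overkill" operators are starting to question.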

That was Finkelstein's point, to some extent: "Good -- enough!" means spending only what's necessary and not gold-plating the level of reliability.

Cable operators may be siding a bit more aggressively with the Web 2.0 camp than operators who come out of the traditional TDM voice world, Perrin notes. For the traditional folks, it's harder to stop doing things the way they've always been done.

Already, he says, telecom transport's traditional 50 millisecond restoration standard is proving less relevant for services that support wireless operations, because those are largely data-based. In the data world, dropped packets are simply re-sent.


But there are still certain telecom traditions that aren't being set aside, such as the need for diverse fiber routes, since those play a key role in ensuring back-up paths exist for services and network connections, Perrin points out.

What ultimately may change telecom thinking is a shift in consumer expectations, one that may already be happening. Consumers are being conditioned to think of data services differently from their voice services. "If the phone network is down for any period of time, I freak out," he says. "But if the Internet goes down, the average consumer reboots their computer or waits to see if it comes back."

Similarly, increased reliance on cellphones is reducing consumer expectations about voice quality, and even dropped calls. Increased reliance on data may impact how consumers view network reliability.

"People have been accustomed to a different experience over the last 15 years or so. There has been a mentality of accepting 'good enough' as long as the really important stuff is there," Perrin says.

Even local phone service isn't as routinely reliable as it once was: more people are connected via fiber or cable HFC networks that aren't line-powered, so they accept that their phone service goes out in a power outage or survives only as long as backup batteries last, typically a period of hours. And since many home phones use cordless systems that need wall power, the handsets themselves won't work either.

"Still, especially in the telco world, [lifeline voice] is still very important to a large piece of those organizations, so I think that is a back and forth dynamic that still goes on," he comments. "It is in a discussion phase but networks aren't being built that way yet."

What will become important is determining where five-nines reliability and 50 millisecond restoration will continue to be needed and "Where can you have different metrics and still meet the customers' needs 100%," Perrin concludes. "But until recently, that discussion wasn't even happening."

— Carol Wilson, Editor-at-Large, Light Reading
