Is 'Good Enough' Ever Enough for Telecom?

In a telecom world of 99.999% reliability, where network downtime is measured in minutes annually and service restoration is expected within 50 milliseconds, can there ever be such a thing as networks that are merely "good enough"?
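
For a sense of scale, the "five nines" figure maps directly onto that minutes-per-year downtime budget. The short calculation below is just the standard availability arithmetic, included here for illustration rather than taken from the article.

```python
# A quick sketch of the arithmetic behind "five nines" (illustrative, not from the article):
# an availability figure translates directly into an annual downtime budget.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def annual_downtime_minutes(availability: float) -> float:
    """Allowed downtime per year, in minutes, for a given availability fraction."""
    return SECONDS_PER_YEAR * (1.0 - availability) / 60.0

for label, availability in [("three nines (99.9%)", 0.999),
                            ("four nines (99.99%)", 0.9999),
                            ("five nines (99.999%)", 0.99999)]:
    print(f"{label}: ~{annual_downtime_minutes(availability):.1f} minutes of downtime per year")

# Five nines works out to roughly 5.3 minutes per year, which is why telecom
# downtime is "measured in minutes annually."
```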

To long-time telecom folks, the quick and easy answer is "Of course not," but in today's competitive Internet realm, the more accurate answer might be, "Well, maybe, if..."

It's a topic that came up frequently during our one-day Carrier SDN Networking event last week in Denver, as service providers from the telecom and cable sectors considered whether, in adopting networking technologies born in the data center and IT realms, they might one day accept "good enough" as a networking standard in return for more flexible, scalable networks that deliver new services faster.

Many speakers -- such as Verizon Communications Inc. (NYSE: VZ)'s Chris Emmons -- still stressed reliability, and the need for virtualization to be adopted in a way that doesn't threaten the reputation telecom has established for having rock-solid networks. Others, such as Cox Communications Inc.'s Jeff Finkelstein, went a bit out on a limb, admitting there is a point at which he'd say, "Good, enough" when it comes to determining how best to use financial resources, since those are never unlimited.

A lot of this thinking comes from the move of the telecom sector to compete with -- and begin to emulate -- the Web 2.0 network architectures, notes Sterling Perrin, Heavy Reading senior analyst and chair of the SDN event. The Googles, Facebooks and Yahoos of the world have built networks in a very different fashion from the regulated telecom sector but have been successful in growing those networks, delivering services over them and building big market caps, he notes.

"These guys [Web 2.0 players] have been very vocal at conferences about what they are doing," Perrin noted. By building networks from scratch, they have been able to set their own approaches to network architectures and have leaned more heavily toward networks that scale and enable fast turn-up -- and turn-down -- of new services and applications. "They are getting a strong reception from traditional telecom companies when they speak. But I don't know that anything's being done yet, certainly not at any large scale."

With the aggressive move of some players, such as AT&T Inc. (NYSE: T) with its Domain 2.0 initiative, to embrace SDN and NFV, things may be changing. Service providers aren't going to abandon reliability and performance measures, but they may start to think about them differently, he adds.

"These newer Internet-based players think more holistically about network performance, taking into account the overall reliability of a network or system, versus trying to build in protection at every layer or for every individual element," Perrin says. Telecom, with its concerns about single points of failure, redundant paths and backup/standby boxes, tends to spend more building in prescribed safeguards against any single piece of equipment failing.

"If you are offering a service that goes across three different layers of the network then each of those layers would have to individually meet performance requirements, which does become overkill and I think you can see operators trying to recognize that," he says. "That's a fairly recent thing that is being discussed by the traditional network operator guys. The Web 2.0 guys just came at things differently."

That was Finkelstein's point, to some extent, that "Good -- enough!" means spending only what's necessary and not gold-plating the level of reliability.

Cable operators may be moving a bit more aggressively toward the Web 2.0 side of the issue than those who come out of the traditional TDM voice world, Perrin notes. For the traditional folks, it's harder to stop doing things the way they've always been done.

Already, he says, telecom transport's traditional standard of 50-millisecond restoration is proving less relevant for services that support wireless operations, because those are largely data-based. In the data world, dropped packets are simply re-sent.

But there are still certain telecom traditions that aren't being set aside, such as the need for diverse fiber routes, since those play a key role in ensuring back-up paths exist for services and network connections, Perrin points out.

What ultimately may change telecom thinking is a shift in consumer expectations, one that may already be happening. Consumers are being conditioned to think of data services differently from their voice services. "If the phone network is down for any period of time, I freak out," he says. "But if the Internet goes down, the average consumer reboots their computer or waits to see if it comes back."

Similarly, increased reliance on cellphones is reducing consumer expectations about voice quality, and even dropped calls. Increased reliance on data may impact how consumers view network reliability.

"People have been accustomed to a different experience over the last 15 years or so. There has been a mentality of accepting 'good enough' as long as the really important stuff is there," Perrin says.

Even local phone service isn't as routinely reliable as it once was. More people are connected via fiber or cable HFC networks that aren't line-powered, so they accept that their phone service goes out in a power outage, or survives only as long as the backup batteries last, typically a matter of hours. And since many home phones are cordless, the handsets themselves won't work without power either.

"Still, especially in the telco world, [lifeline voice] is still very important to a large piece of those organizations, so I think that is a back and forth dynamic that still goes on," he comments. "It is in a discussion phase but networks aren't being built that way yet."

What will become important is where five-nines reliability and 50-millisecond restoration will continue to be needed and "Where can you have different metrics and still meet the customers' needs 100%," Perrin concludes. "But until recently, that discussion wasn't even happening."

— Carol Wilson, Editor-at-Large, Light Reading

Sterling Perrin 5/21/2015 | 11:47:31 AM
Re: how many nines?
Great summary of this issue, Carol!

Another nuance on reliability is that there are cases where operators adhere to legacy metrics that may not be relevant in today's or tomorrow's networks. Again, take the 50 millisecond rule which was developed around TDM voice. Some data applications may deliver the same reliability/experience whether the network is built to 50 milliseconds or hundreds of milliseconds recovery.

In those cases, there is no sacrifice in quality in building to slower recovery, but there can be big cost advantages in relaxing the standard. These are the types of things that traditional telcos need to be re-thinking.

I don't read a lot of business books, but I did read The Innovator's Dilemma many years ago. And one of the key points in that book is that legacy companies often way overshoot on delivering certain features, while missing the boat completely on the new features that become important to customers. I believe that is coming into play here.

The issue of whether or not customers will accept lower levels of quality and reliability is a separate discussion. 

Sterling

mendyk 5/21/2015 | 11:38:34 AM
Re: how many nines?
The irony, at least for mobile services, is that in a true emergency, data (i.e., texting) works a lot better than voice precisely because the service doesn't require immediate reliability. As for swamping the CO outside a natural disaster, here in Jersey we did that only for Springsteen.
cnwedit 5/21/2015 | 10:58:09 AM
Re: how many nines?
I think the bigger point is that service providers need to invest their money where reliability counts. Wireless operators could invest a ton of money in trying to make their networks better able to survive disasters, based on what makes sense for a given area. But when something unanticipated and catastrophic happens, chances are the volume of calls is still going to overwhelm them. Does it make sense to continue pouring massive amounts of money into capacity that sits idle until something horrible happens? I don't know that that's something they can afford.

Way back in 1984, when the Cubs made the MLB playoffs for the first time in almost 40 years, fans trying to get tickets brought the Chicago landline system down. People couldn't get dialtone for emergencies or anything else for a period of time. And Ameritech was crucified. But if they'd raised rates to build in network capacity that would sustain a volume of calls that comes around every 40 years, consumers would have been outraged.

As they deploy more IT-grade technology, telecom service providers have to figure out where they need to build in reliability and where they can afford for there to be brief outages or downtime or degraded performance. I think that's the overall message here. 

mendyk 5/21/2015 | 10:48:57 AM
how many nines?
For most services today, five 9s as a generic uptime metric is meaningless. That's true for almost all consumer-class services. But it's probably a mistake for network operators to lower their reliability standards only because most of their customers aren't going to notice any minor slippage. As you imply, "good enough" is now good enough for some if not most consumer-class services, especially on the mobile side. But remember the beatings that mobile operators take when a disaster hits and lots of customers suddenly can't make a call.