OFC/NFOEC Takes on Cloud Computing

Leading telecommunications companies discuss energy efficiency, video over the Web, and cloud computing at OFC/NFOEC 2011

March 3, 2011


WASHINGTON -- With a ubiquitous technology like the Internet, companies with a vested interest in communication technology must stay one step ahead of the demands of consumers and the challenges of the future. At the world's largest international conference on optical communications, researchers at leading telecommunications companies will discuss how they are working to prepare for the massive amounts of data expected to flood networks in the coming years. Outlined below are several challenges that representatives of three of these companies will be addressing at this year's Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC), taking place March 6 -- 10 at the Los Angeles Convention Center.

Sure, you may use various Google applications for mail or word processing, Facebook for staying in touch and Flickr for photo sharing. These are all examples of cloud computing, the ability to access computer processing and data storage as services on demand via the Internet. Does this mean cloud computing has entered the mainstream? Hardly, according to an October 2010 report released by market research firm Gartner, which gave cloud computing the dubious honor of sitting at the "Peak of Inflated Expectations" on its hype cycle. (See: http://www.gartner.com/it/page.jsp?id=1447613.) Telecommunication (telco) firms should be breathing a sigh of relief at this news, since they are grossly unprepared to offer cloud services today, at least according to Alcatel-Lucent Bell Labs researcher Dominique Verchere.

The problem, Verchere says, is that most telcos offer Internet access as more or less a static commodity. Maybe you own a small business and need to sign up for Internet access. It's likely the local telco will provide broadband access at a fixed price that probably guarantees an expected data rate and little else. Of course, Gmail, Facebook and other similar applications hardly demand heavy-duty processing or storage in the cloud. But what if your business does complex R&D that involves periodically running sophisticated mathematical models on off-site server farms? Can you negotiate for a higher level of service while these models are churning away, say, by asking for more network resources while the modeling and related collaborative working sessions are taking place?

Probably not, says Verchere, who will report at OFC/NFOEC that new hardware, software and protocols are needed to support a "fast reconfigurable network." Such a network must also provide certain guarantees of enhanced connectivity, a notion Verchere calls "carrier-grade" cloud service. Just as you know you'll get a dial tone when you pick up the phone to place a call, so too do you need to know you'll get the network access you need when you consign more complex computing tasks to the cloud.

In his paper, Verchere, based in Nozay, France, will discuss how a telco might be able to schedule such carrier-grade connectivity service. His recommendations, which include a slew of new scheduling algorithms and protocols, and a host of headaches around issues like confidentiality of data, are not for the faint of heart. Still, this is work that inevitably needs to be done, Verchere believes. Cloud computing, he asserts, involves more than a guarantee of certain computing and storage resources, which cloud vendors like Amazon already offer to customers ranging from Netflix to NASA. These cloud applications, he writes, also "require explicit reservation of network resources."

Talk OMW1, "Cloud Computing over Telecom Network," Dominique Verchere, Monday, March 7, 4 p.m.


More and more of daily life seems to be connected with the Web. After all, listen to nearly any company with a vested interest in the Internet and you're bound to hear some version of the following: traffic on the network is growing nearly without bounds and control of content distribution is being pushed to the edges and democratized. The reality, says AT&T researcher Alexandre Gerber, is more complicated and in some ways, quite the opposite.

Gerber would know. He concentrates on IP traffic analysis in the United States for AT&T Labs Research in New Jersey. Most of his raw data comes from the AT&T network itself, one of the largest and most diverse in the world. The company provides direct Internet access to homes, carries traffic for other Internet service providers, operates a cellular network, maintains a nationwide fiber optic network and offers private dedicated networks mostly for corporate customers. A big-picture look at the traffic on these networks reveals at least three conclusions, each of which Gerber will discuss in his talk at OFC/NFOEC.

First, the wired broadband penetration rate continues to rise, although this rate is slowing as connections get closer to complete saturation. According to the Organization for Economic Cooperation and Development, in 2000, the penetration rate of broadband services in the U.S. grew by 48 percent, whereas in 2009 growth was less than 4 percent. This makes sense as there are fewer candidates for new wired broadband connections.

Second, the application mix is tilting dramatically toward video over the Web. Annual traffic growth from video over HTTP (think here of watching a YouTube video in a Web browser) now exceeds 80 percent, more than two and a half times the overall organic growth rate for traffic in general. One question on Gerber's mind is what role cable TV will play in a future with more and more video coming in via the Web.

Third, though the Internet is doubtless enabling consumers to find and even contribute to all sorts of new channels for entertainment, news and communication, a big chunk of the bits generated by these activities is carried by a very small handful of so-called content distribution networks. These networks are operated by companies that are relatively unknown -- Akamai Technologies and Limelight Networks are among the dozen or so that are most prominent -- but that "will have a clear impact on future optical-layer traffic growth," writes Gerber in his conference paper, co-authored by AT&T colleague Robert Doverspike.

Talk OTuR1, "Traffic Types and Growth in Backbone Networks," Alexandre Gerber, Tuesday, March 8, 4:30 p.m.


Most recent news about Comcast has centered on its $30 billion acquisition of NBC Universal, approved Jan. 18 by the U.S. Federal Communications Commission and the U.S. Justice Department. That deal was no doubt made possible in part by Comcast's substantial revenue, which exceeded $37 billion in 2010. Though the company produces some original programming, the lion's share of this revenue comes from managing and operating its cable network, the largest in the world.

So when the company's technologists make public statements about its network, as Shamim Akhtar will do in an invited talk at this year's OFC/NFOEC, it's perhaps useful to remember that the relevance goes beyond the latest innovations in technologies like multiplexers.

Akhtar is Comcast's senior director of network technology development, which means he focuses on the development of technologies that are used to package, route and deliver data across the fiber network. Akhtar says his goal in his talk is to "get in front of the research community and help them understand the length and breadth of the network we run, and where the technology gaps are."

One particular issue he'll discuss is so-called robustness, a challenge for all network operators. This has to do with how much data traveling on the network is recoverable in the event of some sort of technical or physical snafu, like an errant backhoe operator digging where he is not supposed to. Everyone knows how maddening it can be when Internet access goes down. However, robustness is expensive and getting the right amount of it is hard given the explosion of traffic on the Comcast network, a balancing act he'll discuss in his talk.

In addition to addressing the demands of bandwidth-hungry customers, Akhtar will touch on issues relating to transitioning to greener, more ecologically friendly technologies. Specifically, he'll talk about Comcast's challenges regarding energy efficiency, space, and cooling. Photonic switches consume far less power than Internet routers, yet the switches are not yet sophisticated enough to provide the appropriate level of scaling flexibility. How do you serve up the latest multimedia content from ESPN and save the planet at the same time? Akhtar hopes that his efforts to describe how Comcast manages these and other tradeoffs may help researchers come up with solutions to address these timely issues.

The Optical Society (OSA)

