Light Reading Europe – Telecom News, Analysis, Events, and Research


How to Save Nokia Siemens's Optical Business

What would it take to make the optical division of Nokia Siemens Networks into a viable standalone company?

Marlin Equity Partners, which announced on Monday that it's buying the division, says it wants to be a consolidator in the optical sector. But sources tapped by Light Reading aren't sure that's the best idea. (See NSN to Sell Optical Business.)

Suggestions that came up included creating a broader play for software-defined networking (SDN) in the metro, or simply flipping the business to Juniper Networks Inc. (NYSE: JNPR).

At the core of some observers' concerns is the weak state of optical networking. "There's growth in the sector, but it has to find a way to grow profitably," says Larry Schwerin, CEO of components and subsystems vendor Capella Photonics Inc.

Marlin vs. the big fish
Not that consolidation is a bad idea. The top optical-networking systems vendor tends to have about 20 percent market share, with a multitude of single-digit competitors trailing the top three or four. NSN is one such. Infonetics Research Inc. pins NSN's optical revenues at about €400 million (US$522 million) for the year, good for 4 percent market share.

The consensus has been that consolidation is in order, and Schwerin, a former venture capitalist, has noted for a couple of years that private equity has been circling the sector. He has also been expecting a move toward vertical integration among optical companies. (See Can Vendors Build Their Optical Components?.)

If Marlin wants to combine NSN with other optical properties, it's already gotten a start. In October, the firm announced plans to acquire Sycamore Networks Inc. (Nasdaq: SCMR), adding $54 million in annual revenue, based on Sycamore's last four quarters.

That's still not exactly a powerhouse, and Schwerin isn't convinced that adding an optical components company would be that much help either. What could Marlin do, then?

Rather than pile NSN together with more optical companies, Marlin should combine it with metro packet technology, argues Tom Nolle, principal analyst with CIMI Corp.

Nolle's idea is that Marlin, or anybody targeting metro networks, for that matter, should be melding packet and optical technologies under the same control software. Yes, that brings SDN into the discussion.

"If you buy nothing but optical and you don't look at this other direction of how you're going to integrate it into a metro strategy, you're buying parts that don't add up to a whole. You're amassing failure," Nolle says. "My question for Marlin is: Are they looking ahead far enough?"

No metro vendor is adequately pursuing the path of integrating packet and optical aggregation, Nolle claims. And he's picking on the metro space because it's the best telecom sector, business-wise ("There's metro networking, and there's networking that doesn't have a hope of being financially viable," he says) and because he believes SDN would be relatively simple to implement in a metro aggregation setting.

The idea that comes closest to Nolle's vision is the Open Transport Switch (OTS), an idea being proposed by Infinera Corp. (Nasdaq: INFN) and other vendors. (See Optical Transport Gets an SDN Idea and Optical SDN Gets a Test Run.)

Marlin the flipper?
Another possibility would be for Marlin to flip its new optical company to Juniper Networks Inc. (NYSE: JNPR), according to Dana Cooperson, an analyst with Ovum Ltd.

She doesn't know if that's in Marlin's plans. It's just that NSN has been Juniper's optical partner for a long time, and by some reckoning, Juniper needs to consider owning some optical networking.

"Tying the packet and optical accounts together is something Cisco has been doing, something Alcatel-Lucent has been doing, and something Huawei has been doing," Cooperson says. "If Juniper wants to become a full-service vendor, they might want to do something like that."

She's got two questions to go along with that theory, though. The first is whether the sale to Marlin includes the optical portion of NSN's services, a substantial part of the business. (She guesses it does, but Marlin and NSN haven't specified that yet.)

The second is the state of developmental technology inside NSN. The company showed an R&D glimmer in October, claiming a fiber-optic speed record based on spatial multiplexing technology. But NSN, despite still having a worthy staff, has been lacking in other areas, such as OTN, she says.

"It's not clear how much real in-house technology they have, because they made themselves into -- not quite but almost -- an outsourced, buy-off-the-shelf play," Cooperson says.

— Craig Matsumoto, Managing Editor, Light Reading

Balet
Thursday December 13, 2012 2:40:59 AM

Pleasantly surprised to see that Larry is still speaking as Capella's CEO.

Does anybody know if Capella is still alive and funded? I thought they were done.

jcadler
Friday December 7, 2012 1:50:11 PM

Gross margins in the optical business lines are below 40% and falling.  Adding software on custom-built hardware won't get the overall margins back, either. 

OldPOTS
Thursday December 6, 2012 6:49:19 PM

 

Let me quote 'obaut'

Tuesday December 4, 2012 9:03:58 PM

“On how futuristic/strategic *software* defined networking is: Isn't it true that, when there's the alternative of clever architectural solution, best software is no software. Shouldn't we thus be looking toward +++*user/application/traffic*+++ defined networking?”

---------------------------------------------------------------------------

Only if network traffic is managed through the user/application/contributor defining each traffic transmission (e.g. email vs. video, via volume and latency/timeliness parameters) can the management dynamically anticipate and respond correctly and in a timely manner, rather than react after the fact. While this was attempted previously, the exponential growth in today's high-speed traffic mix is making those concepts more relevant.

Most people assume that network traffic is a near-continuous stream. That was true in the days of 9600 bps, but the ratio of packet size to network speed (and thus transmission time) has drastically changed. Watching outgoing traffic (see how below) from a PC sending 2k packets at 45/100 Mbps shows that the transmission of each packet lasts a minuscule fraction of a second, so the traffic becomes a stream of impulses with gaps. (Calculation left to the reader for their favorite speed.)
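That back-of-envelope calculation can be sketched in a few lines; the packet size and link rates here are just the figures mentioned above, used for illustration.

```python
# Serialization time of one 2 KB packet at a few link speeds:
# at modern rates each packet occupies the wire only briefly,
# so a single sender looks like impulses separated by gaps.
PACKET_BITS = 2 * 1024 * 8  # 2 KB packet in bits

for name, rate_bps in [("9600 bps", 9600),
                       ("45 Mbps", 45e6),
                       ("100 Mbps", 100e6)]:
    t_ms = PACKET_BITS / rate_bps * 1000
    print(f"{name:>9}: {t_ms:.3f} ms on the wire")
```

At 9600 bps the packet occupies the line for well over a second, so traffic really did look like a continuous stream; at 100 Mbps the same packet is gone in under a fifth of a millisecond.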

So as traffic leaves contributors as impulses and gaps, the streams are usually merged onto higher-speed lines as shorter-duration impulse transmissions. Managing this merger requires very fast scheduling of individual packet impulses, not allocating or fixing bandwidth. Such dynamic traffic requires the speed available only from very fast hardware switching fabrics. High-level network guidance/policy decisions can be passed down to the hardware/switching fabric using table- or state-driven methods.

But the question is: how fast can this high-level guidance/policy be provided so that decisions on allocating/scheduling bandwidth to dynamic impulse traffic are made in a timely and correct manner? With today's short transmission times and mixes, the traffic mix changes completely second by second; these are my observations of both tier 1 and enterprise network traffic. Can this be managed by software on network processors deciding sub-groups based on constant parameters, or by using traffic reports of what has already happened?

OP

 

PS: For visual confirmation of impulse traffic, use a PC with Windows 7:

http://addgadgets.com/network_meter/

 

And BW was once considered almost free!

In the last few years, much more granular allocation management has become necessary as bandwidth costs rise and demand and traffic mixes grow exponentially.

 

 

obaut
Wednesday December 5, 2012 6:27:15 PM

To clarify: where I said adaptive networking (down to layer 1) is needed is _within_ the individual service contract networks, e.g. within a WAN contract for an ASP or large enterprise, whose external parameters (SLA) can remain static for several months or years and can be centrally controlled. This is consistent with present commercial models; it just makes their implementation more efficient.

tnolle
Wednesday December 5, 2012 3:48:11 PM

That argument is one that I think would have to be made to the buyers themselves; the approach is not consistent with the positions that they've taken, at least in my conversations.

The operator view on services is that you don't discover them or have them adapt to your needs; you purchase them, and thus they are imposed by policy on infrastructure. That's what's at the core of the central control process. Central control lets you engineer network behavior to the services you've committed to provide.

The other side of the coin here is that any wide-area network strategy that aims for multiple service levels or grades, however they are defined and whatever their relationship to applications might be, has to contend with the current best-efforts model of the Internet and the limitations of the current peering process on QoS-delineated services.

obaut
Wednesday December 5, 2012 3:42:13 PM

So one way to look at it is that regular IP routing under-controls the network from the operators' point of view, by letting user traffic patterns direct the network behavior even in ways that do not accord with the network operator's intent. SDN would bring (operator/user/application-owner) controllability to IP-routed networks.

I find that too reactive, and that SDN over-controls matters that should be handled automatically at data plane, enabled by clever system architecture and intelligent data plane.

Many of the problems with traffic-defined (IP-routed) 'adaptive networking' actually arise from the technical limitations of the equipment in use today: for instance, traffic-load adaptivity does not extend below the lowest packet-switching layer. With present hardware, the physical network layers are non-adaptive to packet traffic load variations, causing (technically avoidable) bottlenecks and/or poor overall utilization, and often both at the same time.

This point is essential: the practice of mixing unrelated traffic streams in the same physical-layer capacity pools (rather than keeping them isolated in their own physical-layer sub-pools) is a reaction to the lack of adaptivity in physical-layer bandwidth allocation. With present hardware, it would be too inefficient to dedicate physical-layer capacity resources to each service contract, such as an enterprise WAN or CDN. Instead, in an effort to improve network capacity utilization, unrelated traffic streams (e.g. from different enterprise/ASP customers) are made to share the same physical resources. This practice (a reaction to present hardware being limited to non-adaptive physical-layer connections) leads to the everything-affects-everything problems of the present Internet, and the consequent (reactionary) need for control, e.g. via SDN.

A more proactive solution is a data plane that makes it economical to keep unrelated network applications (e.g. traffic from different network service contracts*) on their own physical-layer sub-pools. A way to make this economical is to extend packet-traffic-load-adaptive bandwidth allocation down to the physical layer: the same way that L3 IP packet load drives the creation of L2 forwarding-layer frames, the L2 data load should drive L1 bandwidth allocation (within each contract-specific L1 sub-pool) at the actual packet/byte load granularity.
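That load-driven allocation could be sketched as a periodic reallocation step: each contract's L1 sub-pool is redivided among its L2 flows in proportion to their measured offered load. This is a hypothetical illustration of the idea, not a description of any real product; the proportional-share rule and the names are assumptions.

```python
def allocate_l1(pool_gbps, offered_load_gbps):
    """Split one contract's L1 sub-pool among its L2 flows in
    proportion to each flow's measured offered load (illustrative rule)."""
    total = sum(offered_load_gbps.values())
    if total == 0:
        # Idle contract: park the pool evenly (arbitrary tie-break).
        share = pool_gbps / len(offered_load_gbps)
        return {f: share for f in offered_load_gbps}
    return {f: pool_gbps * load / total
            for f, load in offered_load_gbps.items()}

# One contract's 10 Gbps sub-pool, three L2 flows with unequal demand.
print(allocate_l1(10.0, {"flowA": 6.0, "flowB": 3.0, "flowC": 1.0}))
```

Because the reallocation stays inside one contract's sub-pool, flows from other contracts are unaffected, which is exactly the isolation property the comment argues for.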

This will simplify the upper layers of the networks and the OAM (security policing, contract administration etc) a whole lot.

*Besides single customer/user organization contract networks, dedicated physical-layer sub-pools can be organized for inter-exchange applications, to provide inter-connectivity, as explicitly desired/authorized, among different user groups. This model enables flexible, cost-efficient and high-performance Internet connectivity in a manner that is scalable and secure.

tnolle
Wednesday December 5, 2012 1:43:11 PM

I would argue that most IP architects would say that IP networks were "traffic-defined" in that they adapt to traffic and topology changes. The difference between what I perceive the goals of SDN to be and the notion of traffic-defined or adaptive networks lies in the question of permission. An SDN maps connections and resources based on software control, which means the network does what it's explicitly set to do. Adaptive networks do what they're presented with, which some will say isn't what they're supposed to do.
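That permission distinction shows up concretely in a flow table: an SDN-style switch forwards only what the controller has explicitly installed, and anything unmatched is dropped or punted back to the controller. This is a toy illustration with made-up fields, not a real OpenFlow API.

```python
flow_table = {}  # (src, dst) -> output port, installed by the controller

def controller_install(src, dst, port):
    # Explicit permission: the controller decides what may be forwarded.
    flow_table[(src, dst)] = port

def switch_forward(src, dst):
    # No matching entry means no forwarding: the network does only
    # what it was explicitly set to do, unlike an adaptive network
    # that carries whatever traffic it is presented with.
    return flow_table.get((src, dst), "drop/punt-to-controller")

controller_install("10.0.0.1", "10.0.0.2", "port3")
print(switch_forward("10.0.0.1", "10.0.0.2"))  # installed -> "port3"
print(switch_forward("10.0.0.9", "10.0.0.2"))  # unmatched -> punted
```

In an adaptive IP network the second packet would simply be routed; here it goes nowhere until software grants it a path.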

But I don't want you to believe I think SDN will displace everything.  The central-control concept of SDNs likely wouldn't scale to the Internet level.  SDN technology is really about "intranets", networks inside resource pools, networks inside enterprises, etc.

With respect to application-defined networks, it's my view that individual application control of network services isn't practical.  The Cloud-SDN marriage would mean that provisioning a cloud application would collaterally provision the network service, which is not applications defining networks but applications and networks being defined by a common (DevOps) process.

obaut
Wednesday December 5, 2012 1:32:35 PM

Yes, technologically the point is to minimize, and if possible eliminate, the need for middleware-type software via clever architectural solutions. And business-wise, the success of X-as-a-Service business models is largely due to the fact that they are "no software, no hardware" from the customer's point of view: just the results/functionality needed.

But regarding what is strategic in networking technologies and services: my point is that ***SDN actually is reactive***. Operationally, SDN is a reaction to the way the network control software was embedded in the dominating IP router vendors' boxes, locking the service providers to vendor-specific network technologies. And there are response time etc. performance reasons why SDN is not even a very good technology approach for solving that operational/commercial problem.

However, the much more important goal is to find a way to minimize or eliminate the need for network control plane software. That is the goal of what could be called application-defined networking: in this architecture, the user-application traffic patterns (within their longer-term contract/policy-defined boundaries*) directly drive the realtime network behavior at the 'bare metal' hardware level, eliminating the need for middleware, which adds non-value-adding complexity and prevents timely response to network data plane events, thereby leading to poor network efficiency and/or QoS.

*Naturally these matters shall remain in the domain of management plane and non-realtime control plane software.

Btw, can you give examples of "current adaptive networks (that) are traffic-defined"?

 

tnolle
Tuesday December 4, 2012 9:08:32 PM

I think that any of those concepts would have to be defined to be assessed.  In any event you could argue that current adaptive networks are traffic-defined.  Operators seem to believe that they want less adaptive behavior; that's one of the principles of SDN.

I don't think the best software is no software; minimalism?

obaut
Tuesday December 4, 2012 9:03:58 PM

On how futuristic/strategic *software* defined networking is: Isn't it true that, when there's the alternative of clever architectural solution, best software is no software. Shouldn't we thus be looking toward *user/application/traffic* defined networking?
