NGN Notes
What to watch for, according to the recent conference on next-gen networks: Edge, Wireless, Grid
October 30, 2002
I went to NGN2002 expecting the worst – yet another event better named No Good News than Next-Gen Networks – but after a full week of sessions that illustrated the continued vitality of technological development in the face of diminishing markets, I’d say there’s hope. The best minds in the business are hanging in there and figuring out how to fix this mess, survive this cataclysm, and continue to engage in the evolution of the telecom network.
In recent weeks I’d been feeling extremely negative on telecom, seeing no signs of a recovery, at least not till 2005, which is far enough in the future to render futile any attempted divination of the when and how of spending resurgence.
The wisest of the people we talk with have stopped asking when the recovery will occur and instead ask what are the catalysts for recovery – guideposts to watch for in the coming years that signal carrier spending is about to tick upwards, with a focus on next-gen gear. Regulatory shifts, elimination of carrier debt loads, flushing out of equipment inventories, and reaching a critical mass of broadband homes in the U.S. are the most popular catalysts named; but it’s worth arguing that carrier decisions on network architectures – MPLS core or more ATM? RPR, Sonet or native Ethernet metros? 802.11 or 3G? – represent an even more fundamental catalyst for next-gen networking recovery.
NGN, it turned out, was a place to help determine just how emerging technologies are maturing and what carriers are saying about them.
Encouragingly, plenty of carriers showed up. I saw AT&T Corp. (NYSE: T) badges, Bell Canada (NYSE/Toronto: BCE), a few European PTTs, BellSouth Corp. (NYSE: BLS), and a number of CLEC representatives (with travel budgets!). They asked lots of questions during the sessions and talked animatedly in the hallways and receptions. None of them said, “Hey, I’m upping my spending by a billion next year, this MPLS stuff is ready for prime time!” But the level of engagement on topics such as wireless LAN, IP QOS, Internet security, VPNs, and all things Ethernet was intense.
I missed most of the keynotes, but when I poked my head in, there wasn’t much being said that hasn’t been repeated all year (and inviting a security guru from the White House was a little too much evidence of our national transformation into a police state, so I skipped that as well). The opening keynote, “Public Networking, The Worst Year in History,” from the conference co-chairmen – David Passmore, Research Director of the Burton Group, and Dr. John M. McQuillan, President of McQuillan Ventures – was probably not the best way to keep attendees listening for five days, but amid the mephitic malaise they did note some important trends:
The shift in focus from the enterprise to the service provider was a big mistake in the past few years, as none of the vendors really knew how to build carrier-class products – and these new carriers were built on balsa-wood supports. In the meantime, the enterprise turned out to be not as mature as vendors thought, demanding new technologies around security, availability, and the “employee-acquired technologies” of PDAs, wireless LANs, and PC-based apps.
At least 75 percent of private companies will run out of money in 2002-2005.
Money, politics, or a monopoly trumps great service provider technology any day.
My colleague Rick Thompson was there with me; what follows are our notes on the various sessions, as well as what we heard in the hallways and “Birds of a Feather” sessions after hours.
Here’s a hyperlinked summary:
Optical Networking
On the Edge
Grid Networking
Wireless
VC Panel
Miscellany
— Scott Clavenna is Director of Research at Light Reading and President of PointEast Research LLC; Rick Thompson is Principal at PointEast Research.
Want to know more? The big cheeses of the optical networking industry will be discussing Next-Generation Networking at Lightspeed Europe. Check it out at Lightspeed Europe 02.
Optical Networking

Metro
In my session on Metro RPR vs. Next-Gen Sonet vs. Ethernet, I asked for a show of hands on which of the three solutions would prevail in five years’ time. Of a crowd of 150, native Ethernet and next-gen Sonet got about 60 votes each, RPR… three (3). (For more on these technologies, check out Next-Gen Sonet, Metro Ethernet, and Resilient Packet Ring Technology).
Lots of talk in the hallways about GFP (generic framing procedure). This is to be watched, as it represents the next step in the evolution of Sonet. It elegantly maps a broad range of protocols into virtually concatenated Sonet/SDH channels, without protocol translations, making possible what many ILECs and PTTs are asking for today: a unified metro edge that supports TDM, Ethernet, and storage over a common transmission platform.
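For the curious, here is a rough Python sketch of the framing part of GFP: a two-byte payload length indicator protected by a CRC-16 core-header check (cHEC), prepended to whatever client frame is being carried. The polynomial and the omission of payload headers, scrambling, and idle frames are my own simplifications based on a reading of G.7041, so treat it as illustrative rather than a reference implementation.

```python
# A minimal sketch (not a G.7041 implementation) of GFP frame delineation:
# a 2-byte payload length indicator (PLI) protected by a CRC-16 core-header
# error check (cHEC), prepended to the client frame.

def crc16(data: bytes, poly: int = 0x1021) -> int:
    """Bitwise CRC-16 (generator x^16 + x^12 + x^5 + 1) over `data`."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_frame(client_frame: bytes) -> bytes:
    """Wrap a client frame (say, an Ethernet MAC frame) in a GFP-style core header."""
    pli = len(client_frame).to_bytes(2, "big")    # payload length indicator
    chec = crc16(pli).to_bytes(2, "big")          # core-header error check
    return pli + chec + client_frame

if __name__ == "__main__":
    eth = bytes(64)                               # stand-in for a 64-byte Ethernet frame
    print(len(gfp_frame(eth)), "bytes after the GFP core header is added")   # 68
```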
Long-Haul Optical
This session was set up to struggle. They put it last in the conference, during what would typically be lunch – as if these guys didn’t have it rough enough already. Presenters from Nortel Networks Corp. (NYSE/Toronto: NT), PhotonEx Corp., Ciena Corp. (Nasdaq: CIEN), and Mintera Corp. were more alike than dissimilar on this occasion: They need spending to return, plain and simple. The debates over whether to deploy 40G vs. more 10G are somewhat irrelevant until carriers start expanding their cores.
When asked which component innovation they most looked forward to, almost all mentioned active dispersion compensation, particularly electronic solutions, since costs are expected to be much lower than for optical solutions. Others talked about ROADMs (reconfigurable optical add/drop multiplexers) and tunables (lasers, receivers); but the focus remained on expanding backbones with the lowest possible capex.
Optical Subsystems and Components
Plenty of talk about subsystems, from intelligent amplifiers, to tunable dispersion compensation solutions, to monitoring and tunable transponders. The interesting comment here came from Charles Corbalis, president of Calient Networks Inc. He described the history of switched services, from T1s to T3s, to OCxs, and proposed that the stage was set for the proliferation of wavelength services.
Why? Because these switched TDM services arose once carriers first proved them in the trunks of the network, then built private networks for large enterprise customers, then finally rolled them out as a common service. Over the past few years, DWDM has been deployed as a transport solution in the core; more recently, large enterprises have built private DWDM backbones; so logic would dictate that switched wave services are on their way.
Could be, but my only concern would be that these switched TDM services leveraged developments in silicon and digital networking, whereas DWDM services remain in the analog domain, making them more difficult to scale, monitor, and provision.
On the Edge

It’s clear that innovation and funding have followed the tide in from “core” to “edge” to “access.” Now that many of the access players are well underway, from a development and funding perspective anyway (next-gen Digital Loop Carrier, Ethernet over Copper, Ethernet over Fiber, Optical Access, Wireless Access, etc.), it seems as though the innovation tide is starting to move back out, slightly, toward a new definition of edge networking.
In the first go-round, we saw many developments at both the service layer and transport layer.
Service layer innovation was largely characterized by scaleable edge routers and IP service switches integrating multiple data service functions into complex, software-intensive, ASIC-intensive platforms. One or two of the players will find success, while the rest will struggle and ultimately shut down.
Transport layer innovation was largely characterized by God boxes trying to over-integrate data and TDM functionality into the same platform. Again, a few made it, but ended up becoming next-gen Sonet boxes. The remainder struggle to reposition, as the market remains cold and the technology becomes stale.
It’s likely that access technologies will continue to be purpose-built for specific applications, services, types of customers, geographies, etc. It’s also likely that carrier uncertainty will remain, relative to defining and delivering quality IP services and effectively managing data and TDM integration. This will force the new edge of the network to be manageably flexible in nature, supporting multiple access technologies and adapting to fluid service demands. Probably sounds a bit idealistic, but it seems as if steps in that direction are looking fairly realistic:
At the Service Layer, it looks as if new-generation network processors are enabling fairly simple hardware platforms with software feature programmability (see Network Processors). Historically, the benefit of programmability was offset by a hit in performance. Network processor advances now claim wire-speed performance for these functions, allowing such platforms to keep up as services are altered or newly defined. In many cases, these platforms may actually augment existing IP edge devices, routers, etc. – as opposed to competing directly with them. They could potentially offload or accelerate functions that have not been optimized in existing platforms, leaving heavy packet forwarding to existing gear.
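To make the features-in-software idea concrete, here is a toy Python sketch (purely illustrative; no vendor’s microcode or API is being quoted) of a match-action pipeline in which a new service feature is installed as a rule at runtime rather than spun into an ASIC.

```python
# A toy illustration (not any vendor's NPU microcode) of the idea that a
# programmable pipeline lets new service features arrive as software rules
# rather than respun ASICs. The rule and field names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Packet:
    src: str
    dst: str
    dscp: int = 0

Match = Callable[[Packet], bool]
Action = Callable[[Packet], None]

class Pipeline:
    def __init__(self) -> None:
        self.rules: list[tuple[Match, Action]] = []

    def program(self, match: Match, action: Action) -> None:
        """Install a new feature at runtime; this is the software-differentiation step."""
        self.rules.append((match, action))

    def process(self, pkt: Packet) -> Packet:
        """Apply every matching rule to the packet, in the order installed."""
        for match, action in self.rules:
            if match(pkt):
                action(pkt)
        return pkt

if __name__ == "__main__":
    pipe = Pipeline()
    # New service feature: remark one customer's traffic as expedited forwarding.
    pipe.program(lambda p: p.dst.startswith("10.1."), lambda p: setattr(p, "dscp", 46))
    print(pipe.process(Packet(src="192.0.2.1", dst="10.1.0.7")).dscp)   # 46
```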
At the Transport Layer, there has been a lot of talk about virtual concatenation for handling Ethernet services over Sonet. That may be a step in the right direction for transport players, but it may be GFP (generic framing procedure) that holds more interest for multiservice edge devices going forward. It’s true that GFP and virtual concatenation work in concert, but GFP really provides the client interface flexibility – whether frame-mapped or transparent-mapped – for third-generation edge or metro networking. GFP comes up most often in discussions of Ethernet or storage, but mapping and transporting other data services may also be in its future, as chipsets become available with more functionality, greater granularity, and so on.
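The appeal of virtual concatenation is really just arithmetic: right-sizing the Sonet pipe to the data client. A quick back-of-the-envelope calculation shows why; the 149.76 Mbit/s STS-3c payload figure is the usual approximation, and the client list is my own.

```python
# Back-of-the-envelope arithmetic for virtual concatenation: how many
# STS-3c members does each data client need, and how efficient is the fit?
# Payload rates are the usual approximate figures, used here for illustration.

import math

STS3C_PAYLOAD = 149.76   # Mbit/s carried by one STS-3c / VC-4 member (approx.)
CLIENTS = {"Gigabit Ethernet": 1000.0, "Fast Ethernet": 100.0, "ESCON": 200.0}

for name, rate in CLIENTS.items():
    members = math.ceil(rate / STS3C_PAYLOAD)    # size of the STS-3c-Xv group
    group = members * STS3C_PAYLOAD
    print(f"{name}: STS-3c-{members}v = {group:.0f} Mbit/s ({rate / group:.0%} efficient)")

# Gigabit Ethernet lands in STS-3c-7v at roughly 95 percent efficiency, versus
# roughly 40 percent if it were forced into a contiguous STS-48c pipe.
```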
What’s driving the innovation of third-generation edge networking?
Market uncertainty: It’s very unclear exactly what services will drive the market and how. Everyone realizes that Ethernet, storage, and VPN flavors will all have their places, but the exact format and service definitions are still TBD. To that end, programmability and adaptability at the edge will be critical in providing a fluid gateway from access to core.
Component advances: New silicon developments at both the service layer and the transport layer are homing in on pragmatic solutions for system vendors to deliver the required edge flexibility.
Low burn-rate requirements: New companies are playing under different rules than the edge router and God box startups of 1999/2000. The simplicity in hardware, with an emphasis on software, allows newer companies to build systems within a more manageable financial model.
Hardware simplicity: Related to the bullet above, the hardware platforms are more off-the-shelf, but also potentially more flexible. In some cases the hardware is 1RU, server-like platforms that enable programmability. In other cases, new GFP framer technology enables a reduction in the number of line cards required for development, while maintaining multiservice functionality.
Software-intensive differentiation and time to market: Rather than relying on ASIC-intensive hardware for differentiation, which also has very lengthy development cycles, newer companies may be able to leverage programmable, adaptable architectures to deliver new features via software. Probably too good to be true in the purest sense, but steps are certainly being made in that direction.
In closing on this topic, a quick reference to AT&T’s keynote presentation, “The Network Evolution: Journey to a Multi-Service Edge, Self-Operating Network,” delivered by Dr. Hossein Eslambolchi, CTO: The short story echoed many of the points above: Flexibility at the edge remains a requirement and a priority for AT&T going forward. It was a bit idealistic in its view of self-operating and self-provisioning for any service. Perhaps somewhere between the hoped-for inherently intelligent network and today’s inflexible infrastructure, a pragmatic mix of easily taught or programmable platforms will arise.
Grid Networking

There were several “Birds of a Feather” sessions held on Wednesday evening. These amounted to small, interactive tutorials on specific technologies. Some interesting ideas came from the grid computing session (see Watch for the Grid). It started with a basic discussion of grid computing, which in reality has many definitions, but basically deals with interconnecting multiple, distributed servers to create a collaborative, high-performance computing resource. Not being a supercomputer expert by any means, I tend to think of the networking infrastructure aspects and opportunities. Below are some notes on the basic layers of grid computing/networking, and some thoughts on integration.
Basic functional layers as defined in the session:
Application: Application software for high-end distributed applications including EDA (electronic design automation) design tools; next-gen, Web-enabled CRM (customer relationship management); computational flow dynamics; various life sciences apps… the list goes on.
Serviceware: Software for things like usage metrics, billing, etc.
Middleware: Essentially network operating system software, taking grid-specific scheduling, queuing, and security issues into account.
Fabric: The hardware infrastructure used to interconnect the multiple servers – switches, routers, load balancers, etc. Cheap, simple, dense GigE or 10-GigE is desirable for many of the existing research grids.
Today, different companies deliver solutions for each specific layer. The application layer will probably always be that way, but it seems there may be an opportunity to integrate the bottom three layers (serviceware, middleware, and fabric) into a specific grid-switch type of solution. The infrastructure solution could be a product for both grid application service providers and grid resource providers, the primary difference being that the former manages everything, including the application, while the latter simply provides the infrastructure from which multiple applications may be delivered.
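As a thumbnail of what the middleware layer actually does, here is a toy Python sketch of a scheduler placing work on the least-loaded server in the pool. The node names and the policy are invented for illustration and are not drawn from any of the vendors mentioned here.

```python
# A toy sketch of the middleware layer's basic job: schedule work units across
# distributed servers reachable over the fabric, here with a trivial
# least-loaded policy. Names and policy are illustrative only.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Server:
    load: float                      # heap key: outstanding work, arbitrary units
    name: str = field(compare=False)

class GridScheduler:
    def __init__(self, servers: list[str]) -> None:
        self.pool = [Server(0.0, name) for name in servers]
        heapq.heapify(self.pool)

    def submit(self, job: str, cost: float) -> str:
        """Place a job on the least-loaded server and report where it went."""
        server = heapq.heappop(self.pool)
        server.load += cost
        heapq.heappush(self.pool, server)
        return f"{job} -> {server.name} (load now {server.load})"

if __name__ == "__main__":
    sched = GridScheduler(["node-a", "node-b", "node-c"])
    for i, cost in enumerate([5.0, 3.0, 4.0, 2.0]):
        print(sched.submit(f"job-{i}", cost))
```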
Today’s layer-specific solutions (e.g., Avaki Corp. and Entropia Inc. – middleware layer; Force10 Networks Inc. – fabric layer) are more than adequate, as the market is in very early stages, from both the research angle and the Web services angle. As the market develops, the integrated solution can cut costs and better enable the proliferation of the types of providers mentioned above. Sun Microsystems Inc. (Nasdaq: SUNW) and IBM Corp. (NYSE: IBM) are aggressively pursuing grid computing, and Cisco Systems Inc. (Nasdaq: CSCO) is involved in ongoing investigation as well.
A few other tidbits on this topic include some of the following points:
There is a differentiation between computation grids (distributing the computational resources) and data grids (distributing the database resources). The computation grids are seeing more demand from research environments; the data grids are seeing more demand from commercial enterprises trying to capitalize on Web services. Because a company's database often consists of its intellectual property (e.g., product bill of materials, company-specific info, etc.), the concerns over security are extremely high.
One of the benefits in the fabric layer is the ability to support 10-GigE LAN and WAN PHY. This eliminates the need for back-to-back gear at the fabric layer. Interestingly, Ample Communications Inc., which recently announced a supplier relationship with Force10, has this capability.
According to an attendee from San Diego Supercomputer Center, there is a prominent PC supplier integrating grid security into the BIOS. This type of integration at the server could start to create an edge/core node relationship for grid-specific networking elements – another potential driver for inter-layer integration of grid networking infrastructure.
Wireless

It seemed as if nearly half of the sessions involved WiFi, 3G, wireless security, and so on. The recent lack of interest in traditional wireline telecom networking served to put wireless in the show spotlight – not a great surprise, I guess. There was, though, a significant amount of chatter around the fact that many of the wireless technologies are also over-invested (see Funding: Startup Roundup). With confusion building around 3G and 802.11x – as to whether the technologies are competitive, complementary, or both – there certainly seems to be some opportunity for future positioning.
It’s clear that the rapid deployment of 802.11 technologies has created opportunities for startups, challenges for users (and startups), and questions or concerns for many of the cellular players (vendors and service providers). Aside from the various tutorial sessions on cellular (2G, 2.5G, 3G technology choices, architectures) and WiFi (802.11b, 802.11a, 802.11g), there were a few good sessions on coexistence and/or competition of WiFi and cellular (some notes below). Also, there is no dispute that security is a primary area of concern for WiFi going forward, and there were a number of sessions that focused on that.
* * * * *
One of the sessions included panelists from Airvana Inc., a startup working on next-gen 3G infrastructure equipment based on 1xEV-DO (optimized for packet data, in contrast to 1xEV-DV optimized for data and voice), and YAS Broadband Ventures, a VC presenting from more of a WiFi perspective. In addition to the standard security concerns, some of the biggest service provider concerns about WiFi, from the cellular view, include high backhaul costs due to the sheer number of sites, site rental and maintenance, and long-term congestion control. Two other interesting points were:
Security will happen for 802.11, but it will drive up the cost of the access points. This is cause for concern, as the low cost of the access points has historically driven deployment.
Data has always been built on top of other networks (cable modems on top of TV networks, dialup modems on top of the voice network), so wireless data will build on top of wireless voice.
The view from YAS Broadband is that simple applications like email and voice will be adequately handled via current next-generation cellular technologies, but that video will not. Security can and will be managed via many of the same mechanisms that exist in cellular networks; although much of the same security can be applied to 802.11 microcells, the challenge will be in maintaining security and connection when roaming between cellular and WiFi networks.
This seems to be the one common theme from both camps. The fact is that both technologies will exist, but integration may hold the most promise for developing new solutions. Integration could mean integrating WiFi and 3G into the same device, which is possible, but has tradeoffs such as cost, size, and power consumption (classic integration issues). The more realistic type of integration is probably managing the roaming between the two different networks from various perspectives, including authentication, billing, and connection maintenance.
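To put a little flesh on what managing the roaming might mean, here is a hand-wavy Python sketch of the kind of policy a dual-mode device (or the network behind it) might apply: keep one authenticated, billable session and swap the radio attachment underneath it. The thresholds and field names are made up for illustration.

```python
# A hand-wavy sketch of a dual-mode roaming policy: hold one authenticated,
# billable session above the radios and pick the attachment point from current
# conditions. Thresholds and field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class RadioState:
    wifi_rssi_dbm: float      # measured WiFi signal strength
    wifi_secured: bool        # the hotspot supports the operator's authentication
    cellular_available: bool

def choose_attachment(state: RadioState) -> str:
    """Prefer a usable, secured WiFi microcell; otherwise stay on cellular."""
    if state.wifi_secured and state.wifi_rssi_dbm > -75:
        return "wifi"
    if state.cellular_available:
        return "cellular"
    return "none"

# The subscriber session (authentication and billing context) sits above the
# radio choice, so a handoff changes the attachment, not the session.
session = {"subscriber": "imsi-0001", "billing": "postpaid", "attachment": None}
session["attachment"] = choose_attachment(RadioState(-62.0, True, True))
print(session)    # attachment flips to 'wifi' while the session context persists
```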
At the end of the day, I would say the most agreed-upon conclusion was that 802.11 and 2.5/3G will coexist, with slight momentum behind the thought that if one were to overtake the other, it would be 802.11. A final thought: Even if the technology permits that to happen, it may be political forces that prevent it.
* * * * *
The other interesting wireless session included panelists from Aruba Networks Inc., a startup working on a next-gen wireless LAN switch; and MeshNetworks Inc., a startup working on ad hoc networking.
Aruba presented today’s enterprise network in three layers:
Core: Characterized by stability – multigigabit, non-blocking L2/3 switching
Distribution: Characterized by functionality – gigabit ports, L2-L7 switching, redundancy
Access: Characterized by price/port – 10/100 access ports, L2 switching
Aruba seems to be of the opinion that access points will continue to be driven down in price, but WLAN functionality requirements will continue to increase – in managing 802.11b/a/g scaleability, RF visibility, secure enterprise-wide roaming, and upgradeable security (changing encryption schemes). The model above seems to suggest that the core remains wired with high-performance enterprise gear, but that the distribution grows with highly functional WLAN switches, fed by cheaper and cheaper access points.
MeshNetworks presented from more of a metro/wide-area perspective, adding to the debate between 802.11 and cellular. The company is using ad hoc networking technologies to solve coverage problems that are more prevalent for emerging 802.11 than for today’s cellular: multi-hop mesh networking extends the coverage of 802.11. The ad hoc approach inherently suggests that the network gets stronger with the addition of each node (essentially, each node provides bandwidth for the overall network). So not only does it potentially extend range, but it also increases network capacity and scaleability as it reduces interference.
The multi-hop process can also potentially counteract increased access-point cost (as security is added). MeshNetworks expects a negligible 1ms delay per hop, with support for 30 to 40 hops (with 802.11b; 802.11a is unknown). A lot of these metrics are dictated by the routing algorithms. In many ways, this is similar to applying intelligent routing to circuit-switched networks (as with GMPLS), in that it is constraint-based. In optical networks, the constraints could include trunk size or switch granularity. In wireless ad hoc networks, the constraints could be, say, available capacity or power characteristics of intermediate nodes.
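Here is a small Python sketch of what constraint-based path selection in a mesh might look like: relay only through nodes with spare capacity, take the fewest hops, and estimate latency from the roughly 1ms-per-hop figure quoted in the session. The topology and numbers are invented for illustration.

```python
# A small sketch of constraint-based path selection in an ad hoc mesh: relay
# only through nodes that meet a constraint (spare capacity here), take the
# fewest hops, and estimate latency at roughly 1 ms per hop.

from collections import deque

LINKS = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"]}
SPARE_MBPS = {"A": 10, "B": 1, "C": 6, "D": 5, "E": 8}    # per-node spare capacity

def constrained_path(src: str, dst: str, min_spare: float = 2.0):
    """Breadth-first search that only relays through nodes meeting the constraint."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in LINKS[node]:
            # Endpoints are always allowed; intermediate relays need spare capacity.
            if nxt not in seen and (nxt == dst or SPARE_MBPS[nxt] >= min_spare):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    path = constrained_path("A", "E")
    hops = len(path) - 1
    # The congested relay B is skipped, so the path is A -> C -> D -> E.
    print(path, f"~{hops} ms end-to-end at ~1 ms per hop")
```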
One other issue is that, because all nodes may be used as intermediate access points, security could become more complex than simply securing the primary access points. This might further complicate the routing algorithms required.
* * * * *
Other random wireless musings:
Heard a lot of positive comments in various sessions regarding Flarion’s OFDM solution. (Flarion Technologies is No.1 in the Top 25 Startups of our wireless sister site, Unstrung – see Unstrung's Top 25 Startups.) The company faces challenges in changing operational paradigms, but initial success with Nextel Communications Inc. (Nasdaq: NXTL) is meaningful.
There were multiple mentions of mobile versus portable wireless networking. Many of the challenges surrounding existing WiFi are being exploited by cellular guys who are ghettoizing WiFi as more of a portable networking technology, in that it’s not suited for servicing the required applications while in motion or traversing among network domains – be it different WiFi administrative domains within one WiFi network; different WiFi networks; or handoffs between WiFi and cellular networks. It seems to me that secure, high-quality, portable solutions are fine for a majority of applications anyway. So if it takes time for mobility to catch up, does it really matter? The IEEE 802.16 MBWA (mobile broadband wireless access) working group is working on mobility issues.
Concerns about the 2.4GHz spectrum for 802.11b becoming a junk band will potentially be alleviated by 802.11a. 802.11a’s 5GHz spectrum carries baggage, however, in that it’s sometimes used by military radar in the U.S. and is licensed in Europe.
VC Panel

On Wednesday, October 16, there was a venture capitalist panel discussion titled “After the Party’s Over, Investing in Networking Today.” The panelists included representatives from ComVentures, Integral Capital Partners, and Morgenthaler. Below are some notes from this session, which centered around three primary topics.
The first topic was the state of previous VC investments. The general conclusion was that it will be another 12 to 18 months before the startup industry gets to a steady state of writedowns (i.e., normal startup death rate, as opposed to high startup death rate). On the flipside, they agreed that the VC death rate will increase and could probably be predicted by looking at current portfolios of those in the best/worst deals. They felt that the VC community, in the last 90 days alone, has really begun to accept and understand the new market reality.
The second topic was the state of current VC investments. One issue here was the fact that distinguishing winners from losers is tough. Whereas everyone was seemingly a winner in 1999/2000, only a select few will make it through.
The other issue discussed around this topic was that of hibernation and proactively shutting down companies. Most agreed that hibernation is not a viable approach: Unmotivated teams will not be able to bring back stale technology as needed. There were conflicting views on shutting down companies if there is a belief that the opportunity isn’t there. Some view this as the right approach, so long as it’s done gracefully. Others felt that it was the responsibility of the VC to stand behind the startup that is still passionate about the technology under development, coming from the perspective that this industry is about people at the end of the day. Sure, I guess you can look at it that way, but I’d probably vote for the shutdown philosophy myself.

The third topic was current hot areas for investment. Well, I was hoping for some great insight here but was disappointed. There certainly wasn’t a long list of ideas discussed, but two that were mentioned were wireless and Ethernet. Surprised? Probably not, but to that end, there were two interesting comments. The first concerned WiFi, where the VCs are finding it quite a challenge to uncover opportunities to put $10 million in and get $100 million or $200 million out – in contrast to cellular.
The second was the idea of the service layer being decoupled from the network layer. Although it wasn’t covered in great detail, the concept tends to support some of the discussion in the grid computing/networking section above: Service-specific networks that each take advantage of a simple underlying network layer provide a potential area of interest going forward.

Miscellany

We met with Seranoa Networks Inc., which hasn’t yet announced product details, though it has made some public reference to what it’s up to – a product focused on IP edge concentration. The short story is that it’s a fairly straightforward idea from an application and development perspective. In times when tons of great innovation is going on, it would probably be glossed over as boring, but in fact the value proposition and application seem very real, while the development risk seems very manageable (from both a technological and fiscal perspective).
I tend to like what they’re up to, as they can augment an installed base of router technology to deliver more cost-effective WAN ports, better utilizing the fabric capacity of the incumbent router. In addition, based on their technology, they could also augment newer generation, Ethernet-centric router players by providing legacy interfaces.
The benefits include: significant cost savings in leveraging a continuously growing installed base of routers; minimal impact on service provider operations; better support for specific aggregation features than existing routers; potential to work across entire router product families from incumbent vendors; and potential to work across product sets from multiple incumbent router vendors. As much as augmenting existing routers is an advantage, it is also a challenge going forward: Eating into potential line-card revenue from incumbents will certainly heat up competitive positioning.
As discussed, it’s not rocket science – in that the product and application are pretty straightforward – but that may be the ticket to finding some success in this market. In addition, it mimics what providers sometimes do today by front-ending routers with frame switches for more optimized port aggregation. Seranoa is currently in a number of service provider trials and will likely be announcing further company and product details later in the year.
* * * * *

We also got a chance to meet with Tenor Networks Inc. A lot of startups have tried to regroup and reposition as a result of the changing market conditions. Tenor, which started out as a core MPLS switchmaker, has gone quiet and refocused its energies on Ethernet and VPLS (virtual private LAN services), in addition to a few other supporting Layer 2 interworking technologies.
Although many efforts at repositioning end up in eventual shutdown, it seems as if the market may have shifted in a direction that Tenor could capitalize on. It’s still too early to tell, and there are a lot of questions around the applicability of early ASIC development to new market direction; but at first sight, it’s at least worth another look.
VPLS essentially provides multipoint Ethernet capabilities for what’s historically been known as TLS (transparent LAN services), which is point-to-point. Recognizing that Frame Relay circuit connections still define the edge of the network, Tenor is focusing on solutions that can take advantage of that installed base as FR-ATM hub sites become capacity constrained, while also being able to deliver Ethernet-based services.
In a way, it’s not unlike AT&T’s IP-enabled Frame Relay service (which is essentially FR spokes feeding into a routed VPN core). The end result for Tenor will probably be a less scaleable box, with more purpose-built functionality for the application mentioned above.
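For anyone who wants the multipoint distinction spelled out, here is a conceptual Python sketch of the bridging behavior that makes VPLS look like one big LAN: learn source MACs per instance and flood unknown destinations to every other site. This is generic bridge logic, not a description of Tenor’s (or anyone else’s) implementation.

```python
# A conceptual sketch of what makes VPLS multipoint: each provider edge keeps a
# per-instance MAC table, learns source addresses, and floods unknown
# destinations to every other site, so the service behaves like one bridged
# LAN. Generic bridging logic only; not a claim about any vendor's product.

class VplsInstance:
    def __init__(self, sites):
        self.sites = set(sites)    # attachment circuits / pseudowires to other sites
        self.mac_table = {}        # MAC address -> site where it was learned

    def forward(self, ingress_site, src_mac, dst_mac):
        """Return the set of sites this frame is delivered to."""
        self.mac_table[src_mac] = ingress_site        # learn the source
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}          # known unicast destination
        return self.sites - {ingress_site}            # flood unknown destinations

if __name__ == "__main__":
    vpls = VplsInstance(["boston", "chicago", "denver"])
    print(vpls.forward("boston", "aa:aa", "bb:bb"))   # unknown: flooded to chicago, denver
    print(vpls.forward("chicago", "bb:bb", "aa:aa"))  # aa:aa already learned: {'boston'}
```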
FYI: Another stealthy startup that has a lot of influence on the direction of VPLS is Timetra Networks (see Timetra Trumpets Its Edge Router). Incumbent router vendors are undoubtedly following VPLS developments as well, but there is a thought that Ethernet access needs to prevail before VPLS can really take off in a meaningful way.
* * * * *

I spent at least 45 minutes with Troy Dixler of Allegro Networks Inc. in the hotel bar, me drinking, he talking about what he calls the Great Internet Lie: If only it were enabled with QOS and security it would become a profitable service platform.
Troy believes millions have been wasted and will continue to be wasted on this notion. It is his belief that until carriers make the distinction between IP networks and the Internet, profits will be impossible.
The answer for our surviving carriers out there is to relegate the Internet to its original place in their world: a best-efforts network that you charge a little for access to, nothing more. The Internet can’t be improved upon and shouldn’t be, Troy argues. The hope for IP is in private networks. Each carrier needs to build one, own it end to end, and start providing a range of services that they can guarantee, bill for, and take responsibility for – much as they did years ago when they built out their fast-packet networks. This gets them past many of the issues around security, SLA support, and new classes of service.
Troy is on to something here, and I think much of what we heard at NGN supported his argument. So much of the money being spent on security and QOS for the Internet may ultimately be unnecessary if carriers instead devote their resources to building unique IP networks. Maybe the Internet is best left to the consumer, something AOL charges $25 per month for and we all use for sending our emails. Anything more advanced is not put onto the Internet in a titanium jacket, but instead travels over a secure private IP network that our carrier owns and operates end to end. If anything goes wrong, we know whom to call – and no one can pass the buck.