'SmartNICs' offload network processing from hosts, driving Azure cloud performance.

Mitch Wagner, Executive Editor, Light Reading

June 30, 2015

4 Min Read
Microsoft Gives Software Networking a Hardware Boost

While Microsoft relies on SDN to drive agility and cost savings for its Azure cloud service, it also uses customized hardware to give its network the performance it needs.

Microsoft Corp. (Nasdaq: MSFT) has developed its own SmartNIC to offload network processing from hosts, freeing them to dedicate their processing power to application workloads.

"With this, we can offload functionality into the device, saving CPU," says Mark Russinovich, Microsoft Azure CTO, speaking at the Open Networking Summit this month.

The SmartNIC improves performance to 30 Gbit/s on host networks using 40 Gbit/s NICs.
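A rough back-of-the-envelope calculation (ours, not from Russinovich's talk) shows why line-rate packet processing is expensive in software, and what offloading it buys. The packet size and clock speed below are assumptions for illustration:

```python
# Back-of-the-envelope: why 40 Gbit/s packet processing strains host CPUs.
LINK_GBPS = 40      # NIC line rate cited in the article
PKT_BYTES = 1500    # assumed MTU-sized packets
CPU_HZ = 3e9        # assumed 3 GHz core

pkts_per_sec = LINK_GBPS * 1e9 / 8 / PKT_BYTES
print(f"{pkts_per_sec / 1e6:.1f} Mpps")              # ~3.3 million packets/sec

# Cycle budget a single core has for each packet at line rate:
print(f"{CPU_HZ / pkts_per_sec:.0f} cycles/packet")  # ~900 cycles
```

Nine hundred cycles is little headroom once virtual networking rules, crypto and QoS are applied to every packet, which is exactly the work the SmartNIC takes off the host.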

The SmartNIC uses a Field-Programmable Gate Array (FPGA) for reconfigurable functions. Microsoft already uses FPGAs in Catapult, an FPGA-based reconfigurable fabric for large-scale data centers developed jointly by Microsoft Research and Bing. Using programmable hardware allows Microsoft to update equipment as it does software.

Microsoft programs the hardware using Generic Flow Tables (GFT), the match-action language it uses to program its SDN.
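Microsoft didn't detail GFT's format in the talk, but conceptually a flow table pairs matches on packet header fields with actions such as encapsulation or forwarding. Here is a minimal sketch of that match-action idea; the field and action names are hypothetical, not Microsoft's schema:

```python
# Conceptual match-action flow table, in the spirit of GFT (illustrative only).
from dataclasses import dataclass

@dataclass
class Rule:
    match: dict      # header fields to match, e.g. {"dst_ip": "10.0.0.5"}
    actions: list    # ordered actions, e.g. ["encap_vxlan:5001", "fwd:vf2"]

class FlowTable:
    def __init__(self):
        self.rules = []

    def add(self, match, actions):
        self.rules.append(Rule(match, actions))

    def lookup(self, packet):
        # First matching rule wins; a miss would typically punt the packet
        # to software so a rule can be computed and installed.
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return None

table = FlowTable()
table.add({"dst_ip": "10.0.0.5"}, ["encap_vxlan:5001", "fwd:vf2"])
print(table.lookup({"dst_ip": "10.0.0.5", "dst_port": 443}))
```

Because the table is just data, the same rules can be evaluated by a software vSwitch or compiled down to the FPGA, which is what makes the offload reprogrammable.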

The SmartNIC can also do crypto, QoS, storage acceleration and more, says Russinovich.

Isn't it a contradiction in terms -- using hardware for software-defined networking? Not necessarily, says Russinovich. "It's different from a specialized device. It's programmable hardware, not fixed-function," he says. "The reason we need FPGAs is our ability to evolve over time. We see this as specialized acceleration, something we need to get these kinds of performance levels."

Azure provides app services, such as identity management for web apps, as well as data services such as SQL database, and infrastructure including virtual machines, virtual networking and storage. It's the infrastructure for Microsoft's Office 365, Skype and Xbox services, with data centers located worldwide.

Azure had 100,000 compute instances in 2010, and today there are millions of virtual machines. Storage has grown from tens of petabytes to exabytes. Networking capacity has grown from tens of terabits to petabits.

Microsoft has 20 ExpressRoute locations worldwide for hybrid clouds -- private connections between Azure data centers and on-premises or colocated infrastructure. Azure has 1,600 peered networks at more than 85 Internet exchanges around the world "to allow customers to peer from their networks into our networks," Russinovich says.

More than 57% of the Fortune 500 use Azure; the service adds 90,000 new customers per month and handles more than five million requests per second.

To achieve scale, Microsoft had to adopt "hyperscale SDN," breaking away from proprietary appliances that combine the management, control and data planes, and separating those functions. Now the management plane exposes APIs, the control plane uses those APIs to create rules, and the rules are passed down to switches.
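As an illustration of that separation (a sketch of the pattern Russinovich describes, not Azure's code), the management plane exposes an API for intent, the control plane compiles intent into rules, and switches simply execute them:

```python
# Sketch of separated management, control and data planes (illustrative only).

class Switch:
    """Data plane: just executes the rules it is given."""
    def __init__(self):
        self.rules = []

    def install(self, rule):
        self.rules.append(rule)

class ControlPlane:
    """Turns intent from the management API into concrete rules."""
    def __init__(self, switches):
        self.switches = switches

    def apply(self, policy):
        rule = {"match": policy["match"], "action": policy["action"]}
        for sw in self.switches:
            sw.install(rule)

class ManagementPlane:
    """Exposes an API for high-level intent, e.g. 'allow port 80'."""
    def __init__(self, control):
        self.control = control

    def create_policy(self, policy):
        self.control.apply(policy)

switches = [Switch(), Switch()]
mgmt = ManagementPlane(ControlPlane(switches))
mgmt.create_policy({"match": {"dst_port": 80}, "action": "allow"})
print(switches[0].rules)   # [{'match': {'dst_port': 80}, 'action': 'allow'}]
```

Decoupling the layers this way is what lets each one scale and evolve independently, rather than being locked inside one appliance.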

"The key here is host SDN, where we push as much logic and processing as we can down to the host," Russinovich says.

Controllers must scale to more than 500,000 hosts in a region, with each region comprising multiple data centers. They must also scale down to small deployments.

Azure takes a partitioned and tiered approach to controllers. Regional controllers push state to clustered controllers. The clustered controllers are stateless caches that can fail and relearn from the regional controller.
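That tiering is what makes the caches safe to lose. A minimal sketch of the pattern (class and key names are hypothetical): the regional controller owns the authoritative state, and a cluster controller answers lookups from a cache it can rebuild at any time:

```python
# Illustrative sketch of a tiered-controller design (names hypothetical).

class RegionalController:
    """Authoritative source of truth for network state in a region."""
    def __init__(self):
        self.state = {}

    def push(self, key, value):
        self.state[key] = value

    def fetch(self, key):
        return self.state.get(key)

class ClusterController:
    """Stateless cache: can fail and relearn everything from the region."""
    def __init__(self, regional):
        self.regional = regional
        self.cache = {}

    def lookup(self, key):
        if key not in self.cache:       # cache miss, or post-crash relearn
            self.cache[key] = self.regional.fetch(key)
        return self.cache[key]

    def crash(self):
        self.cache = {}                 # losing the cache loses nothing durable

regional = RegionalController()
regional.push("vnet-42/10.0.0.5", "host-17")
cluster = ClusterController(regional)
print(cluster.lookup("vnet-42/10.0.0.5"))   # learned from the regional tier
cluster.crash()
print(cluster.lookup("vnet-42/10.0.0.5"))   # relearned after failure
```

Because the cluster tier holds no durable state, recovery after a failure is just a cache refill from the regional controller.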


Azure scales using Microsoft Service Fabric, a microservices platform that manages "all the grunt work of application lifecycle management for an application that is decomposed into microservices," Russinovich says. The fabric can run on Microsoft's public cloud, on private clouds and on non-Microsoft clouds.

Microsoft is "100% committed to open source," Russinovich says. The company uses Hadoop. Also, 20% of the virtual machines on Azure run Linux. And Microsoft has contributed core .Net code to open source, and it supports SDN and public standards.

But can we truly call it software-defined networking if it includes a custom hardware component? Does it matter? Microsoft has found a solution that works for it, and that other network operators might want to emulate.


— Mitch Wagner, West Coast Bureau Chief, Light Reading. Got a tip about SDN or NFV? Send it to [email protected].

About the Author(s)

Mitch Wagner

Executive Editor, Light Reading

San Diego-based Mitch Wagner is many things. As well as being "our guy" on the West Coast (of the US, not Scotland, or anywhere else with indifferent meteorological conditions), he's a husband (to his wife), dissatisfied Democrat, American (so he could be President some day), nonobservant Jew, and science fiction fan. Not necessarily in that order.

He's also one half of a special duo, along with Minnie, who is the co-habitor of the West Coast Bureau and Light Reading's primary chewer of sticks, though she is not the only one on the team who regularly munches on bark.

Wagner, whose previous positions include Editor-in-Chief at Internet Evolution and Executive Editor at InformationWeek, will be responsible for tracking and reporting on developments in Silicon Valley and other US West Coast hotspots of communications technology innovation.

Beats: Software-defined networking (SDN), network functions virtualization (NFV), IP networking, and colored foods (such as 'green rice').

