Comments
dwx
User Rank: Light Sabre
12/5/2014 | 12:48:57 PM
Re: Where to learn more
Well, the VM layer is built these days so that most CPU access is basically pass-through, and high-performance packet forwarding requires the NIC hardware to be accessible in the same way. The Intel DPDK makes use of SR-IOV, but there isn't an SR-IOV abstraction layer built into the hypervisors right now. Getting the most benefit requires the hypervisor to dynamically create the virtual functions on the device, which is more complicated than creating a basic software switch. ESX and KVM really only support it through a pass-through mode today, where you tie an entire NIC to a single VM. That is what you see with the Juniper, ALU, and Vyatta virtual routers.

There are companies like Netronome that make their FlowNIC card, which interacts with regular OVS to add hardware acceleration. It's basically an NPU on a PCI card. Some might call that cheating, because you aren't just using off-the-shelf components anymore; you're putting dedicated packet processing in your server. I think in the end it comes down to cost and what makes the most sense for your application.

A Xilinx FPGA solution would be similar. 
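As a rough illustration of the SR-IOV mechanics described above (not something from the comment itself), this is how virtual functions get created on a Linux/KVM host; the PCI address and VF count here are assumptions for the example:

```shell
# Sketch: exposing SR-IOV virtual functions on a Linux host.
# Assumes an SR-IOV-capable NIC at PCI address 0000:03:00.0 (illustrative).

# Check how many VFs the device supports.
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs

# Create 4 virtual functions; each can then be passed through to a VM.
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Each VF now shows up as its own PCI device.
lspci | grep -i "Virtual Function"
```

This is the dynamic VF creation step the comment says hypervisors don't yet abstract: today an operator (or management tooling) drives it by hand rather than the hypervisor doing it on demand.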

 
brooks7
User Rank: Light Sabre
12/5/2014 | 11:02:29 AM
Re: Where to learn more
dwx,

Even without sharing, the NICs and all the hardware are abstracted. That is rather the point of a VM. Yes, you lose efficiency. I have said here before that I don't think we will see maximum-performance applications done on standard hardware with VMs.

seven

 
catalyst
User Rank: Light Beer
12/5/2014 | 1:45:55 AM
FPGA-based NICs make a big difference to this system
Try using a server with Xilinx FPGA-based NIC cards. The programmability will not only improve all your numbers but also future-proof the platform. Needless to say, in a competitive world every element in the equation matters (Juniper and ALU have already announced 160G-throughput vRouters).

NewCatalyst
dwx
User Rank: Light Sabre
12/4/2014 | 4:05:03 PM
Re: Where to learn more
I don't think you are going to see sharing of those resources without a hardware layer to do it. Packet processing speed depends on direct access to the NIC hardware.

The hardware layer to do the virtualization is going to be on the NIC, where the NIC itself presents virtual NICs to the OS. It goes by different names, but there are NICs that do this today. QLogic can partition their 10GbE port into four virtual NICs. Cisco has a "virtual NIC" PCI card for UCS that presents up to 256 NICs to the underlying OS; it's one of the tenets of the UCS architecture.

Now Intel, who developed the DPDK for their 10GbE NIC hardware, doesn't seem to have this today.
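To make the "NIC presents virtual NICs to the OS" idea concrete: once a NIC is partitioned into virtual functions, each VF looks like an independent NIC and can be configured per-VF from the host. A sketch using standard iproute2 commands; the interface name, MAC, and VLAN are assumptions for illustration:

```shell
# Sketch: per-VF configuration on a partitioned NIC (names are illustrative).
ip link show eth0                               # the physical function
ip link set eth0 vf 0 mac 52:54:00:aa:bb:cc     # assign a MAC to VF 0
ip link set eth0 vf 0 vlan 100                  # tag VF 0's traffic with VLAN 100
```

Each VF can then be handed to a different VM while the physical port stays shared.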

brooks7
User Rank: Light Sabre
12/3/2014 | 7:04:03 PM
Re: Where to learn more
So, a question... the point of a VM is to share the underlying hardware as well as abstract it. Are we heading to abstraction-only first?

 

seven

 
cnwedit
User Rank: Light Beer
12/3/2014 | 4:28:47 PM
Re: Where to learn more
Thanks for that addition.
ANON1235145999186
User Rank: Light Beer
12/3/2014 | 4:25:52 PM
Where to learn more
Great article, Carol!

If you are interested, you can download the performance report on SDNCentral at: https://www.sdncentral.com/download-brocade-vyatta-5600-vrouter-performance-report/

