[libvirt-users] network-performance

I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:

Host: Windows Server 2008 R2 Hyper-V          Guest: CentOS 5.5 x86_64
Host: CentOS 5.5 x86_64 kvm running libvirt   Guest: CentOS 5.5 x86_64

The guests are essentially identical except that I'm running the Microsoft Linux Integration Components synthetic drivers on the Windows-hosted VM. The libvirt setup uses bridged networking. Running bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the libvirt-hosted guest get between 16% and 35% of the performance of the Hyper-V guest. Is this expected? Is there anything I can do to increase network performance of the KVM guest?

--
Orion Poplawski
Technical Manager                     303-415-9701 x222
NWRA/CoRA Division                    FAX: 303-415-9702
3380 Mitchell Lane                    orion@cora.nwra.com
Boulder, CO 80301                     http://www.cora.nwra.com

On 02/02/2011, at 4:39 AM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V Guest: CentOS 5.5 x86_64
Host: CentOS 5.5 x86_64 kvm running libvirt Guest: CentOS 5.5 x86_64
The guests are essentially identical except that I'm running the Microsoft Linux Integration Components synthetic drivers on the Windows-hosted VM. The libvirt setup uses bridged networking. Running bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the libvirt-hosted guest get between 16% and 35% of the performance of the Hyper-V guest. Is this expected? Is there anything I can do to increase network performance of the KVM guest?
Hi Orion,

Just as an initial question, is the CentOS 5.5 guest using the VirtIO network drivers? It's been ages since I used CentOS 5.x, so I don't remember if they're the default or not. That's the initial "big performance boost" thing that's needed over the emulation-type drivers.

To check, take a look at the XML definition for the guest, and look at the networking interface. There should be an element there called "model", and it will contain the type of network card being emulated. If it's anything other than "virtio" then an emulated network driver interface is being used (not real fast).

I'd give you a direct URL for the XML to reference, but ironically the libvirt.org server appears to be offline at the moment (doesn't happen very often, thankfully!). Heh.

Hope that helps.

Regards and best wishes,

Justin Clift
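For reference, the guest's XML can be dumped with virsh dumpxml <guest-name>. An interface stanza that is still using an emulated NIC looks roughly like this (the bridge name, MAC address, and rtl8139 model below are illustrative placeholders, not taken from Orion's setup):

  <interface type='bridge'>
    <mac address='52:54:00:aa:bb:cc'/>   <!-- placeholder MAC -->
    <source bridge='br0'/>               <!-- assumed bridge name -->
    <model type='rtl8139'/>              <!-- emulated NIC: works everywhere, but slow -->
  </interface>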

On 02/01/2011 11:42 AM, Justin Clift wrote:
On 02/02/2011, at 4:39 AM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V Guest: CentOS 5.5 x86_64
Host: CentOS 5.5 x86_64 kvm running libvirt Guest: CentOS 5.5 x86_64
The guests are essentially identical except that I'm running the Microsoft Linux Integration Components synthetic drivers on the Windows-hosted VM. The libvirt setup uses bridged networking. Running bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the libvirt-hosted guest get between 16% and 35% of the performance of the Hyper-V guest. Is this expected? Is there anything I can do to increase network performance of the KVM guest?
Hi Orion,
Just as an initial question, is the CentOS 5.5 guest using the VirtIO network drivers? It's been ages since I used CentOS 5.x, so I don't remember if they're the default or not.
That's the initial "big performance boost" thing that's needed over the emulation type drivers.
To check, take a look at the XML definition for the guest, and look at the networking interface. There should be an element there called "model", and it will contain the type of network card being emulated. If it's anything other than "virtio" then an emulated network driver interface is being used (not real fast).
I'd give you a direct URL for the XML to reference, but ironically the libvirt.org server appears to be offline at the moment (doesn't happen very often thankfully!). Heh.
Hope that helps.
Regards and best wishes,
Justin Clift
Thanks a lot. For some reason that particular machine was missing the crucial:

  <model type='virtio'/>

in the interface section. Performance is much better now.

--
Orion Poplawski
Technical Manager                     303-415-9701 x222
NWRA/CoRA Division                    FAX: 303-415-9702
3380 Mitchell Lane                    orion@cora.nwra.com
Boulder, CO 80301                     http://www.cora.nwra.com
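In other words, the working definition ends up looking roughly like the sketch below (same placeholder bridge name and MAC as above). Changes made with virsh edit <guest-name> take effect the next time the guest is started:

  <interface type='bridge'>
    <mac address='52:54:00:aa:bb:cc'/>   <!-- placeholder MAC -->
    <source bridge='br0'/>               <!-- assumed bridge name -->
    <model type='virtio'/>               <!-- paravirtualized virtio NIC -->
  </interface>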

On 2/1/2011 12:39 PM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V Guest: CentOS 5.5 x86_64
Host: CentOS 5.5 x86_64 kvm running libvirt Guest: CentOS 5.5 x86_64
The guests are essentially identical except that I'm running the Microsoft Linux Integration Components synthetic drivers on the Windows-hosted VM. The libvirt setup uses bridged networking. Running bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the libvirt-hosted guest get between 16% and 35% of the performance of the Hyper-V guest. Is this expected? Is there anything I can do to increase network performance of the KVM guest?
First thing is to stop unfairly comparing things that don't even claim to do the same job. hyper-v is a hypervisor, while kvm is not; xen is. It would be closer, but still unfair, to compare qemu or virtualbox for windows to kvm.

You didn't say what kind of networking is being used with hyper-v, but it's an understood fact that bridging in linux is easy to use and less efficient than routing or vlan or macvlan.

So I guess the answer is use xen and something other than bridging.

--
bkw

On Tue, Feb 01, 2011 at 02:33:28PM -0500, Brian K. White wrote:
On 2/1/2011 12:39 PM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V
Host: CentOS 5.5 x86_64 kvm running libvirt
First thing is to stop unfairly comparing things that don't even claim to do the same job. hyper-v is a hypervisor, while kvm is not, xen is.
Hi Brian,

I don't want to sound picky, but I did a quick search in the KVM documentation and I couldn't find what category KVM is in. I really thought it was playing in the same league as Xen.

That's from the KVM FAQ:

http://www.linux-kvm.org/page/FAQ#What_is_the_difference_between_KVM_and_Xen...

  Xen is an external hypervisor ... On the other hand, KVM is part of
  Linux and uses the regular Linux scheduler and memory ...

I just found this Linux Journal article:

http://www.linuxjournal.com/article/9764

  KVM is a unique hypervisor. ...

On 02/02/2011, at 7:44 PM, Francesc Guasch wrote:
On Tue, Feb 01, 2011 at 02:33:28PM -0500, Brian K. White wrote:
On 2/1/2011 12:39 PM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V
Host: CentOS 5.5 x86_64 kvm running libvirt
First thing is to stop unfairly comparing things that don't even claim to do the same job. hyper-v is a hypervisor, while kvm is not, xen is.
Hi Brian, I don't want to sound picky, but I did a quick search in the KVM documentation and I couldn't find what category KVM is in. I really thought it was playing in the same league as Xen.
That's from the KVM faq:
http://www.linux-kvm.org/page/FAQ#What_is_the_difference_between_KVM_and_Xen...
Xen is an external hypervisor ... On the other hand, KVM is part of Linux and uses the regular Linux scheduler and memory ...
I just found this Linux Journal article:
http://www.linuxjournal.com/article/9764
KVM is a unique hypervisor. ...
Heh, Brian's email was worded a bit strongly, so may have thrown you onto the wrong track. ;)

With RHEL *6* (and therefore CentOS 6 when it's released), KVM is production quality. With the CentOS 5 series though, I'm personally just "not sure". Maybe Xen is the right choice for that technology series.

If you want the most up-to-date feature set and best performance in a RHEL/CentOS type of distribution, then you probably want to get a hold of RHEL 6 or wait for CentOS 6 to be released.

As I work for Red Hat, I can find the right sales-y type of contact for you if you want, and they'll get you access to the RHEL 6 evaluation download. (I think it's like a 30-day trial version or something, but I'm not sure... haven't kept up with that side of things.)

Anyway, hope some of that's useful, and happy to find someone for you if needed. :)

Regards and best wishes,

Justin Clift

On Tue, Feb 01, 2011 at 02:33:28PM -0500, Brian K. White wrote:
On 2/1/2011 12:39 PM, Orion Poplawski wrote:
I'm just starting to take a look at guest networking performance and am a little disappointed. I'm comparing two setups:
Host: Windows Server 2008 R2 Hyper-V Guest: CentOS 5.5 x86_64
Host: CentOS 5.5 x86_64 kvm running libvirt Guest: CentOS 5.5 x86_64
The guests are essentially identical except that I'm running the Microsoft Linux Integration Components synthetic drivers on the Windows-hosted VM. The libvirt setup uses bridged networking. Running bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the libvirt-hosted guest get between 16% and 35% of the performance of the Hyper-V guest. Is this expected? Is there anything I can do to increase network performance of the KVM guest?
First thing is to stop unfairly comparing things that don't even claim to do the same job. hyper-v is a hypervisor, while kvm is not, xen is. It would be closer but still unfair, to compare qemu or virtualbox for windows to kvm.
This distinction is completely irrelevant FUD. It is perfectly valid to compare Hyper-V, Xen, KVM, VMWare and VirtualBox all together, regardless of the fact that they have different architectures. They are all hypervisors, simply different types of hypervisor, and none of their architectures is inherently "best", merely different.

For the only industry-standard virtualization benchmark (SPECvirt), KVM has the leading figures, beating VMWare. This demonstrates the KVM architecture is more than a match for the classical hypervisor model of VMWare/Xen.
You didn't say what kind of networking is being used with hyper-v, but it's an understood fact that bridging in linux is easy to use and less efficient than routing or vlan or macvlan.
So I guess the answer is use xen and something other than bridging.
Both Xen and KVM use the Linux host for network connectivity, so anything you can do with Xen networking you can do with KVM. KVM in fact has more options, because it can use macvtap, which isn't available to Xen yet.

Regards,
Daniel
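As a concrete illustration, a macvtap connection is expressed in the libvirt domain XML as a "direct" interface; a minimal sketch, assuming eth0 is the host NIC to attach guests to:

  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>   <!-- macvtap in bridge mode; eth0 is an assumed host NIC -->
    <model type='virtio'/>               <!-- keep virtio for the guest-visible NIC -->
  </interface>

One caveat worth knowing: with macvtap in this mode, guests can reach the external network and each other, but not the host itself.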
participants (5)
- Brian K. White
- Daniel P. Berrange
- Francesc Guasch
- Justin Clift
- Orion Poplawski