On Thu, Jan 21, 2010 at 11:29:07AM +0100, Didier Moens wrote:
(first post)
Dear all,
I have been wrestling with this issue for the past few days; googling
around doesn't seem to yield anything useful, hence this cry for help.
Setup:
- I am running several RHEL5.4 KVM virtio guest instances on a Dell PE
R805 RHEL5.4 host. Host and guests are fully updated; I am using iperf
to test the available bandwidth from 3 different locations (clients) on
the network to both the host and the guests.
- To increase both bandwidth and fail-over, three 1 Gb/s network interfaces
(BCM5708, bnx2 driver) on the host are bonded (802.3ad) into a 3 Gb/s
bond0, which is bridged (rough config sketch below). As all guest
interfaces are connected to the bridge, I would expect the total available
bandwidth to all guests to be in the range of 2-2.5 Gb/s.
- Testing with one external client connection to the bare-metal host
yields approx. 940 Mb/s;
- Testing with 3 simultaneous connections to the host yields 2.5 Gb/s,
which confirms a successful bonding setup.
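For reference, the bonding/bridge configuration is roughly of the following
shape, and the iperf runs are plain TCP tests (interface names and the IP
address are placeholders, not the literal files from this box):

  # /etc/sysconfig/network-scripts/ifcfg-eth0   (likewise eth1, eth2)
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=none
  MASTER=bond0
  SLAVE=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  ONBOOT=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  BRIDGE=br0

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=10.0.0.10          # placeholder address
  NETMASK=255.255.255.0

  # iperf: server on the machine under test, client on each external box
  iperf -s                  # on the host or guest being measured
  iperf -c 10.0.0.10 -t 60  # on each client, 60-second TCP test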
Problem:
Unfortunately, available bandwidth to the guests proves to be problematic:
1. One client to one guest: 250-600 Mb/s;
2a. One client to 3 guests: 300-350 Mb/s to each guest, total not
exceeding 980 Mb/s;
2b. Three clients to 3 guests: 300-350 Mb/s to each guest;
2c. Three clients to host and 2 guests: 940 Mb/s (host) + 500 Mb/s to
each guest.
Conclusions:
1. I am experiencing a performance hit of roughly 40% (600 Mb/s at best,
vs. 940 Mb/s to the bare-metal host) on each individual virtio guest
connection;
2. Total simultaneous bandwidth to all guests seems to be capped at 1
Gb/s; quite problematic, as this renders my server consolidation almost
useless.
I could bridge each host network interface separately and assign guest
interfaces to a specific bridge by hand (rough sketch below), but that
would defeat the whole point of the load balancing and failover provided
by the host bonding.
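(Roughly, that fallback would look like this; names are illustrative only:)

  # one bridge per physical NIC instead of the single bonded bridge:
  #   ifcfg-eth0: BRIDGE=br0   ifcfg-eth1: BRIDGE=br1   ifcfg-eth2: BRIDGE=br2
  #   ifcfg-br0 / ifcfg-br1 / ifcfg-br2: TYPE=Bridge
  # and each guest pinned to one bridge by hand in its libvirt XML, e.g.:
  #   <interface type='bridge'><source bridge='br1'/><model type='virtio'/></interface>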
Any ideas, anyone, or am I looking in the wrong direction (clueless
setup, flawed testing methodology, ...)?
You don't mention whether you've configured jumbo frames on the NICs? If you
have gigabit networking, you typically need to increase the MTU on all NICs,
including the bridge itself, in order to get the most bandwidth out of your
network. IIUC, this is particularly important for guest networking, since
context switches for small packets have a bad effect on throughput.
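For example, something along these lines (a rough sketch only; 'eth0',
'bond0' and 'br0' are placeholders, your switch ports must also allow jumbo
frames, and the guests' interfaces need the larger MTU as well):

  # at runtime, on each slave NIC, the bond and the bridge:
  ip link set dev eth0 mtu 9000
  ip link set dev bond0 mtu 9000
  ip link set dev br0 mtu 9000
  # to make it persistent, add to the matching ifcfg-* files:
  #   MTU=9000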
Also make sure that the sysctls
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
are all set to '0', since iptables adds overhead to the networking stack;
this is particularly bad with bridging because it results in extra data
copies and prevents full use of TX offload.
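For example (a sketch; the bridge module must be loaded for these keys to
exist):

  # set at runtime:
  sysctl -w net.bridge.bridge-nf-call-arptables=0
  sysctl -w net.bridge.bridge-nf-call-iptables=0
  sysctl -w net.bridge.bridge-nf-call-ip6tables=0
  # add the same three settings to /etc/sysctl.conf so they survive a
  # reboot; 'sysctl -p' reloads that file.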
That all said, I'm not too familiar with best-practice recommendations for
KVM virtio networking, so you might like to post your question to the
kvm-devel mailing list too, where more of the experts are likely to see it.
Regards,
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|