Dear all,
I have been wrestling with this issue for the past few days; googling
around doesn't seem to yield anything useful, hence this cry for help.
Setup:
- I am running several RHEL5.4 KVM virtio guest instances on a Dell PE
R805 RHEL5.4 host. Host and guests are fully updated; I am using iperf
to test available bandwidth from 3 different locations (clients) in the
network to both the host and the guests (the exact iperf invocations are
sketched below, after the test results).
- To increase both bandwidth and fail-over, three 1 Gb network interfaces
(BCM5708, bnx2 driver) on the host are bonded (802.3ad) into a 3 Gb/s
bond0, which in turn is bridged. As all guest interfaces are connected to
that bridge (see the config sketch at the end of this list), I would
expect total available bandwidth to all guests to be in the range of
2-2.5 Gb/s.
- Testing with one external client connection to the bare-metal host
yields approx. 940 Mb/s;
- Testing with 3 simultaneous connections to the host yields 2.5 Gb/s,
which confirms a successful bonding setup.
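For reference, the bonding/bridging setup is roughly the following (file
contents abbreviated; interface names, IP and netmask are just example
values):

  # /etc/modprobe.conf
  alias bond0 bonding

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (same for eth1, eth2)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=802.3ad miimon=100"
  BRIDGE=br0
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=<host IP>
  NETMASK=<netmask>
  ONBOOT=yes

Each guest's virtio interface is attached to br0.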
Problem:
Unfortunately, available bandwidth to the guests proves to be problematic:
1. One client to one guest: 250-600 Mb/s;
2a. One client to 3 guests: 300-350 Mb/s to each guest, total not
exceeding 980 Mb/s;
2b. Three clients to 3 guests: 300-350 Mb/s to each guest;
2c. Three clients to the host and 2 guests: 940 Mb/s (host) + 500 Mb/s to
each guest.
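For completeness, all of the numbers above come from plain TCP iperf runs
along these lines (IP, duration and reporting interval are just the
example values I use):

  # on the host or guest under test
  iperf -s

  # on each client
  iperf -c <IP of host or guest> -t 30 -i 5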
Conclusions:
1. I am taking a performance hit of roughly 40% on each individual
virtio guest connection (at best 600 Mb/s, versus 940 Mb/s to the host);
2. Total simultaneous bandwidth to all guests seems to be capped at 1
Gb/s, which is quite problematic, as it renders my server consolidation
almost useless.
I could bridge each host network interface separately and assign guest
interfaces to those bridges by hand (a rough sketch of that fallback
follows below), but that would defeat the whole point of the load
balancing and failover provided by the host bonding.
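Roughly, that fallback would look like this (one bridge per physical NIC;
names are again just examples):

  # /etc/sysconfig/network-scripts/ifcfg-eth1
  DEVICE=eth1
  BRIDGE=br1
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-br1
  DEVICE=br1
  TYPE=Bridge
  BOOTPROTO=none
  ONBOOT=yes

with each guest's interface pinned to one of br0/br1/br2, e.g. in the
libvirt XML (if using libvirt):

  <interface type='bridge'>
    <source bridge='br1'/>
    <model type='virtio'/>
  </interface>

i.e. static assignment of guests to physical links, with no failover.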
Any ideas, anyone, or am I looking in the wrong direction (clueless
setup, flawed testing methodology, ...)?
Thanks in advance for any help,
Didier
--
Didier Moens , IT services
Department for Molecular Biomedical Research (DMBR)
VIB - Ghent University