[libvirt-users] How can a bridge be optimized?

Hi,

Netperf tells me that a KVM VM using a bridged connection to a system across the Gigabit LAN sees on average about 1/10th of the bandwidth the host sees. This is on very lightly loaded hosts, although with multiple low-use VMs. I've tried adjusting the CPU throttling after seeing that mentioned as a possible factor in slowing down a bridge, but it has had little if any effect here.

Is there a good guide somewhere to what might make a difference with this? I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?

Thanks,
Whit

On Fri, Jun 01, 2012 at 05:15:35PM -0400, Whit Blauvelt wrote:
Netperf tells me that a KVM VM using a bridged connection to a system across the Gigabit LAN sees on average about 1/10th of the bandwidth the host sees.
What NIC are you using? For example with the Intel 10 GigE cards (ixgbe) it is essential to turn off LRO using ethtool when using bridges, otherwise the performance sucks. And, surely you're using the paravirtualized virtio interface in your guest?
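For example, something along these lines (just a sketch; eth0 stands in for whatever physical interface is actually enslaved to your bridge):

    brctl show                   # confirm which physical interface the bridge uses
    ethtool -k eth0              # check current offload settings; look at large-receive-offload
    ethtool -K eth0 lro off      # disable LRO on the bridged NIC, then re-test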
I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?
Look for SR-IOV.

On Jun 1, 2012 2:17 PM, "Whit Blauvelt" <whit.virt@transpect.com> wrote:
I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?
Linux Bridge < macvtap < SR-IOV

----- Original Message -----
From: "Dax Kelson" <dkelson@gurulabs.com> To: "Whit Blauvelt" <whit.virt@transpect.com> Cc: libvirt-users@redhat.com Sent: Saturday, June 2, 2012 11:09:17 PM Subject: Re: [libvirt-users] How can a bridge be optimized?
On Jun 1, 2012 2:17 PM, "Whit Blauvelt" < whit.virt@transpect.com > wrote:
I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?
Linux Bridge < macvtap < SR-IOV
or openvswitch

On Sat, Jun 02, 2012 at 11:43:34PM -0400, Andrew Cathrow wrote:
On Jun 1, 2012 2:17 PM, "Whit Blauvelt" < whit.virt@transpect.com > wrote:
I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?
Linux Bridge < macvtap < SR-IOV
or openvswitch
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.

Openvswitch looks more widely usable and promising, but also lacking user-level documentation. Openvswitch.org's "Documentation" section has three brief notes, all presuming you've already got it in use.

Okay, so falling back to macvtap, a long post from over a year ago at http://ubuntuforums.org/showthread.php?t=1687750 says that the host cannot communicate directly with the guests through the bridge when using it. Is this correct? That would rule it out for my use.

So now I fall back to the question of whether I should enable virtio where I've currently got (working, but slowish) bridges defined. (I've got virtio in use for memballoon, but not for the bridges.) I look at http://wiki.libvirt.org/page/Virtio and while helpful it underspecifies:

    In the <interface> section, add a virtio model, like this:

    <interface type='network'>
      ...
      <model type='virtio' />
    </interface>

That's all it says. So ... I've got entries that currently look about like:

    <interface type='bridge'>
      <mac address='52:54:00:25:0a:a2'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

What if anything should carry across to the ellipsis in the doc? The doc at http://www.linux-kvm.org/page/Tuning_KVM is similarly terse, and puts it in terms of the qemu command line rather than libvirt XML, so it's not completely helpful short of studying how libvirt translates the XML to that line.

Ah, here's a more useful doc at http://wiki.libvirt.org/page/Networking#Guest_configuration_2:

    Guest configuration

    In order to let your virtual machines use this bridge, their configuration should include the interface definition as described in Bridge to LAN. In essence you are specifying the bridge name to connect to. Assuming a shared physical device where the bridge is called "br0", the following guest XML would be used:

    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='00:16:3e:1a:b3:4a'/>
      <model type='virtio'/>   # try this if you experience problems with VLANs
    </interface>

So the <model type='virtio'/> can just go under <interface type='bridge'>; that doesn't need to change to <interface type='network'>. Then should the current line like:

    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

stay or go?

Do others share my perception that it's a shame there's so little interest in writing good docs in this whole area?

Regards,
Whit

On 06/11/2012 06:34 PM, Whit Blauvelt wrote:
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.
I think SR-IOV is more for high-end 10 GigE networks; with 1 GigE NICs and fairly modern hardware you should get decent performance (almost wire-speed) with just a simple virtio + bridge setup.

Also, make sure you have the vhost_net module loaded (or the kernel compiled with CONFIG_VHOST_NET=y); recent (0.9.0 and newer, I believe?) libvirt versions should detect it and enable vhost=on with QEMU/KVM automatically. That should give you some further performance boost.

I agree that there is some room for improvement in libvirt documentation regarding the network configuration.
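A quick way to check that last point (a sketch; the exact qemu process name varies by distro):

    lsmod | grep vhost_net                    # is the module loaded on the host?
    modprobe vhost_net                        # load it if it isn't built into the kernel
    ps -ef | grep qemu | grep -o 'vhost=on'   # a new enough libvirt adds vhost=on to the guest's netdev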

On Mon, Jun 11, 2012 at 07:22:48PM +0300, Henrik Ahlgren wrote:
On 06/11/2012 06:34 PM, Whit Blauvelt wrote:
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.
I think SR-IOV is more for high-end 10 GigE networks; with 1 GigE NICs and fairly modern hardware you should get decent performance (almost wire-speed) with just simple virtio + bridge setup.
Also, make sure you have the vhost_net module loaded (or kernel compiled with CONFIG_VHOST_NET=y), recent (0.9.0 and newer I believe?) libvirt versions should detect it and enable vhost=on with QEMU/KVM automatically. That should give you some further performance boost.
I agree that there is some room for improvement in libvirt documentation regarding the network configuration.
Thanks. Not sure if I have the invocation or prerequisites right. If I do

    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>

I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be? What are the minimum versions of stuff for virtio to work for this? Virtio's getting passed through for memballoon (from XML in the form "<memballoon model='virtio'>") so it's there.

If I take out that "address type" line, throughput drops to around 100Mb. I do have vhost_net loaded now on the host (didn't before). Another VM on the same host with no virtio line tests even slower.

So far in our instance slow hasn't been bad. The VMs haven't needed more for their purposes. But now we're wanting to run a few where IO requirements are higher, thus my interest in tuning this.

Whit

On 06/11/2012 09:28 PM, Whit Blauvelt wrote:
On Mon, Jun 11, 2012 at 07:22:48PM +0300, Henrik Ahlgren wrote:
On 06/11/2012 06:34 PM, Whit Blauvelt wrote:
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.
I think SR-IOV is more for high-end 10 GigE networks; with 1 GigE NICs and fairly modern hardware you should get decent performance (almost wire-speed) with just simple virtio + bridge setup.
Also, make sure you have the vhost_net module loaded (or kernel compiled with CONFIG_VHOST_NET=y), recent (0.9.0 and newer I believe?) libvirt versions should detect it and enable vhost=on with QEMU/KVM automatically. That should give you some further performance boost.
I agree that there is some room for improvement in libvirt documentation regarding the network configuration.
Thanks. Not sure if I have the invocation or prerequisites right. If I do
    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>
I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be? What are the minimum versions of stuff for virtio to work for this? Virtio's getting passed through for memballoon (from XML in the form "<memballoon model='virtio'>") so it's there.
If I take out that "address type" line, throughput drops to around 100Mb. I do have vhost_net loaded now on the host (didn't before). Another VM on the same host with no virtio line tests even slower.
So far in our instance slow hasn't been bad. The VMs haven't needed more for their purposes. But now we're wanting to run a few where IO requirements are higher, thus my interest in tuning this.
How exactly do you test the bandwidth (which tools), and do you only test the connection to an external machine, or have you also tested the bandwidth between guest and host, and between this guest and another one on the same host?

Regards,
Dennis

On Tue, Jun 12, 2012 at 12:55:48AM +0200, Dennis Jacobfeuerborn wrote:
How exactly do you test the bandwidth (tools) and do you only test the connection to an external machine or have you also tested the bandwidth between guest and host and this guest and another one on the same host?
Netperf. The test is to another server on the LAN. From the KVM host itself to the server, from the same bridged interface, it tests at just under wire speed (with server-grade GigE Intel NICs on both). But from the VMs, not.

We don't care about speed between guest and host or between guests. All of the production use of the VMs where speed matters is in connections to servers and workstations elsewhere on the LAN. That's not to say I can't do those tests if they are of diagnostic use. Just that the only goal that currently matters is connection speed from the VMs to other systems across the LAN. The VM host is a recent 16-core machine, lightly loaded (8 VMs at present, none doing much).

I've seen mention of tunings for specific NICs. Is there a table that correlates NIC models with suggested tuning somewhere?

Whit
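For reference, the runs look roughly like this (a sketch; "lan-server" is a placeholder for the machine across the LAN running the netperf server):

    netserver                                    # on the remote LAN machine
    netperf -H lan-server -t TCP_STREAM -l 30    # from the host, then repeated from inside each guest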

On Mon, Jun 11, 2012 at 03:28:59PM -0400, Whit Blauvelt wrote:
I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be?
That clearly indicates you are not running with a virtio NIC. You should see "-device virtio-net-pci" in the process list. What versions of libvirt and kvm/qemu are you running? What is the guest's operating system? Does it also report rtl8139 (lshw, ethtool -i eth0)?
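To double-check, something like this (a sketch of the kind of inspection meant here):

    ps -ef | grep [q]emu        # on the host: look for "-device virtio-net-pci" vs "-device rtl8139"
    ethtool -i eth0             # in the guest: "driver: virtio_net" vs "driver: 8139cp"
    lspci | grep -i ethernet    # in the guest: "Virtio network device" vs "RTL-8139"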

On Tue, Jun 12, 2012 at 03:37:00PM +0300, Henrik Ahlgren wrote:
On Mon, Jun 11, 2012 at 03:28:59PM -0400, Whit Blauvelt wrote:
I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be?
That clearly indicates you are not running with a virtio NIC. You should see "-device virtio-net-pci" in the process list.
What versions of libvirt and kvm/qemu are you running? What is the guest's operating system? Does it also report rtl8139 (lshw, ethtool -i eth0)?
The host is Ubuntu 10.10 with the 2.6.35-28-server kernel. Libvirt is 0.8.3-1ubuntu19.4, qemu-kvm is 0.12.5+noroms-0ubuntu7.11. Not sure how that relates to "Get kvm version >= 60" (from http://www.linux-kvm.org/page/Virtio) but it's at least a "Kernel >= 2.6.25".

The process list shows "-device virtio-balloon-pci", so virtio is at least partially there. But also "-device rtl8139", so this part of the XML appears not to be taking:

    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>

Even though, oddly, it's consistently 50% faster with the virtio line than without.

The guest is running Ubuntu 11.10 with the 3.0.0-20-virtual kernel. The guest reports:

    # ethtool -i eth0
    driver: 8139cp

The physical NIC is an Intel 82576 using the igb driver.

Thanks,
Whit

On Tue, Jun 12, 2012 at 10:24:24AM -0400, Whit Blauvelt wrote:
The host is Ubuntu 10.10 with the 2.6.35-28-server kernel. Libvirt is 0.8.3-1ubuntu19.4, qemu-kvm is 0.12.5+noroms-0ubuntu7.11. Not sure how that relates to "Get kvm version >= 60" (from http://www.linux-kvm.org/page/Virtio) but it's at least a "Kernel >= 2.6.25".
Reading more carefully, the kernel requirement is only for the _guest_.
The guest is running Ubuntu 11.10 with the 3.0.0-20-virtual kernel.
That linux-kvm.org page also says, "At the moment the kernel modules are automatically loaded in the guest but the interface should be started manually (dhclient/ifconfig)". I'm certainly hoping that's been fixed by now. It doesn't require a manual ifconfig, right? According to its .config, CONFIG_VIRTIO_NET=y for the guest.

Whit

On Tue, Jun 12, 2012 at 10:24:24AM -0400, Whit Blauvelt wrote:
The host is Ubuntu 10.10 with the 2.6.35-28-server kernel. Libvirt is 0.8.3-1ubuntu19.4, qemu-kvm is 0.12.5+noroms-0ubuntu7.11. Not sure how that relates to "Get kvm version >= 60" (from http://www.linux-kvm.org/page/Virtio) but it's at least a "Kernel >= 2.6.25".
The relationship between KVM and QEMU is all too confusing, but nowadays you should only need to care about the qemu-kvm version (?). 0.12 is quite old and Ubuntu 10.10/maverick no longer receives (security) updates, so you should consider upgrading if possible. If upgrading the whole distro is infeasible, you might be able to install qemu-kvm and libvirt from a newer Ubuntu release without bringing in too many dependencies.
    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>
I like to use network type interfaces and "logical" network names, but this should work fine. You might want to experiment with different offload settings (ethtool -k) to see if they make any difference with the igb card. Also, have you tried measuring the performance between the host and a guest?
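Something like this, for example (a sketch; eth0 stands in for the igb interface backing br0, and which offloads are worth toggling depends on the driver):

    ethtool -k eth0                            # list current offload settings
    ethtool -K eth0 tso off gso off gro off    # try a netperf run with offloads disabled
    ethtool -K eth0 tso on gso on gro on       # restore afterwards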

On Tue, Jun 12, 2012 at 06:14:00PM +0300, Henrik Ahlgren wrote:
The relationship between KVM and QEMU is all too confusing, but nowadays you should only need to care about the qemu-kvm version (?). 0.12 is quite old and Ubuntu 10.10/maverick no longer receives (security) updates, so you should consider upgrading if possible. If upgrading the whole distro is infeasible, you might be able to install qemu-kvm and libvirt from a newer Ubuntu release without bringing in too many dependencies.
Anyone know if I can expect qemu-kvm-1.0.1 to compile and install cleanly from source? It would be awkward to have to take the system down for a thorough upgrade, and it's not in a position where security updates are crucial.
    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>
I like to use network type interfaces and "logical" network names, but this should work fine.
You might want to experiment with different offload settings (ethtool -k) to see if they make any difference with the igb card. Also, have you tried measuring the performance between the host and a guest?
From host to guest, a bit above 260 Mb/s. From guest to host, a bit below 80 Mb. From guest across LAN to another system, 150 Mb +- 10. From across LAN to guest, 140-180 Mb. From across LAN to host, a consistent 947 Mb.
That's with initial offload settings of:

    Offload parameters for eth0:
    rx-checksumming: on
    tx-checksumming: on
    scatter-gather: on
    tcp-segmentation-offload: on
    udp-fragmentation-offload: off
    generic-segmentation-offload: on
    generic-receive-offload: on
    large-receive-offload: off
    ntuple-filters: off
    receive-hashing: off

With everything switched off, no change in VM I/O from any perspective.

Whit

On 06/11/2012 03:28 PM, Whit Blauvelt wrote:
On Mon, Jun 11, 2012 at 07:22:48PM +0300, Henrik Ahlgren wrote:
On 06/11/2012 06:34 PM, Whit Blauvelt wrote:
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.
I think SR-IOV is more for high-end 10 GigE networks; with 1 GigE NICs and fairly modern hardware you should get decent performance (almost wire-speed) with just a simple virtio + bridge setup.
Also, make sure you have the vhost_net module loaded (or kernel compiled with CONFIG_VHOST_NET=y), recent (0.9.0 and newer I believe?) libvirt versions should detect it and enable vhost=on with QEMU/KVM automatically. That should give you some further performance boost.
I agree that there is some room for improvement in libvirt documentation regarding the network configuration.
Thanks. Not sure if I have the invocation or prerequisites right. If I do
    <interface type='bridge'>
      <mac address='00:16:36:89:65:2e'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      <model type='virtio'/>
    </interface>
I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be?
No. If you say <model type='virtio'/>, then the qemu command line will have "-device virtio-net". If it doesn't, then your config change hasn't taken effect. After making the change, did you completely shut down the guest, then restart it? (A reboot operation from within the guest isn't sufficient to bring in the config changes, as it doesn't terminate and restart the qemu process.)
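That is, something along these lines (a sketch; "myguest" is a placeholder for the actual domain name):

    virsh edit myguest        # add <model type='virtio'/> inside the <interface> element
    virsh shutdown myguest    # a full shutdown, not a reboot from inside the guest
    virsh start myguest
    ps -ef | grep [q]emu      # should now show -device virtio-net-pci for the interface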
What are the minimum versions of stuff for virtio to work for this? Virtio's getting passed through for memballoon (from XML in the form "<memballoon model='virtio'>") so it's there.
Those are two separate and unrelated drivers; they just coincidentally both have "virtio" in their names.
If I take out that "address type" line, throughput drops to around 100Mb.
This piece doesn't make sense. If you remove the <address> line, a new one will be automatically added, most likely at the same slot as the old one. The virtual NIC has to be attached *somewhere* on the guest's PCI bus. I'm guessing it was some other factor that changed the benchmark results.

On Wed, Jun 13, 2012 at 11:59:45AM -0400, Laine Stump wrote:
I get around 150Mb (on a Gb interface). But then it's still showing as "-device rtl8139" on the host. Should that be?
No. If you say <model type='virtio'/>, then the qemu command line will have "-device virtio-net". If it doesn't, then your config change hasn't taken effect. After making the change, did you completely shut down the guest, then restart it? (A reboot operation from within the guest isn't sufficient to bring in the config changes, as it doesn't terminate and restart the qemu process.)
Yeah, I did. Probably my libvirt and/or qemu-kvm is just too old. I didn't restart the host, but I did totally shut down the guest several times. I'll be bringing libvirt and qemu-kvm up to current releases presently. If that doesn't fix it, then there's a deeper mystery here.

Whit

On 06/11/2012 11:34 AM, Whit Blauvelt wrote:
On Sat, Jun 02, 2012 at 11:43:34PM -0400, Andrew Cathrow wrote:
On Jun 1, 2012 2:17 PM, "Whit Blauvelt" < whit.virt@transpect.com > wrote:
I recall the kernel recently adding a new alternative for handling VM interfaces, but can't recall what it was called. Would that be a marked improvement? Is it supported by libvirt?
Linux Bridge < macvtap < SR-IOV
or openvswitch
Looking into background info on these, it looks like SR-IOV capability is specific to certain NICs and not documented for KVM/libvirt in any obvious place. Maybe I just didn't find it.
In a later message you indicate that your host has an Intel 82576 NIC. That *is* one of the few that has SR-IOV capabilities - each of its ports has up to 7 Virtual Functions (VFs) which can each be directly assigned to a guest.

The problem with this in the version of libvirt you use (0.8.3, which is very dated) is that you can only attach the device using <hostdev>, which doesn't support setting the MAC address of the device before assigning it to the guest; since SR-IOV VFs are each given new random MAC addresses on each reboot of the host, this means your guest will not see a stable MAC address, and will thus believe that a new network device has been added after each host reboot.

You can solve this with a short script to set the VFs' MAC addresses at host boot time, or if you can upgrade to libvirt-0.9.11 or later, you can use <interface type='hostdev'> to attach SR-IOV VFs to your guests:

http://wiki.libvirt.org/page/Networking#PCI_Passthrough_of_host_network_devi...

(It also points to a page that explains how to use <hostdev> to attach an SR-IOV (or non-SR-IOV) NIC to a guest.)
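For reference, such a definition would look roughly like the following (a sketch only; the VF's PCI address and the MAC are placeholders that have to match a real VF on your host):

    <!-- sketch: PCI address and MAC below are placeholders for a real VF on this host -->
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
      </source>
      <mac address='52:54:00:6d:90:02'/>
    </interface>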
Openvswitch looks more widely useable and promising, but also lacking user-level documentation. Openvswitch.org's "Documentation" section has three brief notes, all presuming you've already got it in use.
openvswitch itself is a fairly new project, and its support in libvirt is even newer. At this point, the libvirt support requires that an openvswitch bridge is already configured and running on the host. (I'm sure openvswitch would be happy to receive updates/additions to their documentation. :-)

Of course, since you're using libvirt-0.8.3, you wouldn't be able to easily use openvswitch with your guests anyway (and since you are using kernel-2.6.35 you wouldn't have openvswitch support in the kernel either, unless you add it yourself).
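For completeness, with a new enough libvirt, connecting a guest to an already-running openvswitch bridge looks roughly like this (a sketch; "ovsbr0" is a placeholder bridge name):

    <!-- sketch: requires libvirt with openvswitch support and an existing OVS bridge named ovsbr0 -->
    <interface type='bridge'>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
    </interface>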
Okay, so falling back to macvtap, a long post from over a year ago at
http://ubuntuforums.org/showthread.php?t=1687750
says that the host cannot communicate directly with the guests through the bridge when using it. Is this correct? That would rule it out for my use.
Yes, that is true. There is a way around it, though - add a 2nd interface to each guest that is connected to an isolated bridge:

http://wiki.libvirt.org/page/Guest_can_reach_outside_network%2C_but_can%27t_...

I'm not really sure how much performance you would gain from switching to macvtap, though. (I have heard of someone who improved lxc guest network performance from 4Gb/sec to 9Gb/sec by using macvtap to connect to a vlan interface - which I previously didn't even know was supported! - but that situation is very different from yours, and frankly the low numbers you give make it sound like you have some other basic problem that won't be fixed by switching to a different connection type.)

(Another thing to think about with macvtap is that it was only added in kernel-2.6.34, so the version in your kernel probably still has bugs.)
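As a rough sketch of that workaround (the network name "isolated" is a placeholder; the host-only network itself has to be defined separately, e.g. with virsh net-define):

    <!-- sketch: second guest interface on an assumed host-only network named "isolated" -->
    <interface type='network'>
      <source network='isolated'/>
      <model type='virtio'/>
    </interface>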
So now I fall back to the question of whether I should enable virtio where I've currently got (working, but slowish) bridges defined.
Yes. Any time the guest supports virtio-net, use it (unless you are using PCI passthrough to directly use the physical device). Note that the use of the guest virtio driver is completely orthogonal to whether you use macvtap, a host bridge, or a libvirt-managed virtual network.
(I've got virtio in use for memballoon, but not for the bridges.) I look at http://wiki.libvirt.org/page/Virtio and while helpful it underspecifies:
In the <interface> section, add a virtio model, like this:
    <interface type='network'>
      ...
      <model type='virtio' />
    </interface>
That's all it says. So ... I've got entries that currently look about like:
    <interface type='bridge'>
      <mac address='52:54:00:25:0a:a2'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
What if anything should carry across to the ellipsis in the doc?
That <interface> definition will use the default model for the hypervisor, which in this case is rtl8139. To change an existing interface definition to use a different model, just add the <model .../> line and leave the rest of the <interface> section as it is. If you are adding a new interface, you don't need to add a <mac address='...'/> or <address type='pci' .../> line, as those are automatically generated and added for you (in particular, don't add the <address> line, because each guest device needs to have a unique address, and there are some odd rules about placement of some devices that libvirt knows about and accounts for).
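In other words, the existing definition with only the model line added would look roughly like this (a sketch built from the definition quoted above; nothing else changes):

    <!-- sketch: the existing definition from above, with only <model type='virtio'/> added -->
    <interface type='bridge'>
      <mac address='52:54:00:25:0a:a2'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>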
The doc at http://www.linux-kvm.org/page/Tuning_KVM is similarly terse, and puts it in terms of the qemu command line rather than libvirt XML, so it's not completely helpful short of studying how libvirt translates the XML to that line. Ah, here's a more useful doc at http://wiki.libvirt.org/page/Networking#Guest_configuration_2:
Guest configuration
In order to let your virtual machines use this bridge, their configuration should include the interface definition as described in Bridge to LAN. In essence you are specifying the bridge name to connect to. Assuming a shared physical device where the bridge is called "br0", the following guest XML would be used:
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='00:16:3e:1a:b3:4a'/>
      <model type='virtio'/>   # try this if you experience problems with VLANs
    </interface>
So the <model type='virtio'/> can just go under <interface type='bridge'>; the type doesn't need to change to <interface type='network'>.
Right. Those two things are orthogonal. <model type='xxx'/> is defining what kind of device to present to the guest. The type='nnn' attribute directly in <interface> is defining how to connect the guest's network data stream to the physical network.
Then should the current line like:
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
stay or go?
If this line is already there, don't remove it. If there isn't one, don't add it - an address appropriate for the situation will be automatically added.
Do others share my perception that it's a shame there's so little interest in writing good docs in this whole area?
As much documentation as exists, there could still be a lot more, and it could stand to be better organized. We welcome any contributions to the libvirt wiki. Some specific resources that may be helpful to you:

http://www.libvirt.org/formatdomain.html#elementsNICS
(This is, or should be, a full reference of every option supported in <interface> definitions. Pay close attention to all of the "since x.x.x" tags - many features have been added to libvirt since 0.8.3, so a lot of what's there won't be supported by your host's libvirt anyway.)

http://wiki.libvirt.org/page/Troubleshooting
(In particular, there are several items in the middle of the list specifically dealing with networking problems.)

http://wiki.libvirt.org/page/Networking
(I think you've already found that one.)
participants (6): Andrew Cathrow, Dax Kelson, Dennis Jacobfeuerborn, Henrik Ahlgren, Laine Stump, Whit Blauvelt