On Tue, Apr 07, 2009 at 06:32:43PM -0700, David Lutterkort wrote:
> On Tue, 2009-04-07 at 18:39 -0300, Klaus Heinrich Kiwi wrote:
> > I was thinking about the semantics I described above. It ultimately
> > means that we'll have a bridge for each VLAN tag that crosses the trunk
> > interface. So for example if guests A, B and C are all associated with
> > VLAN ID 20, then:
> >
> > eth0 -> eth0.20 -> br0 -> [tap0, tap1, tap2]
> >
> > (where tap[0-2] are associated with guests A, B, C respectively)
> Yes, I think that's how it should work; it would also mean that you'd
> first set up eth0 as a separate interface, and add new bridge/VLAN
> interface combos afterwards. AFAIK, for the bridge, only bootproto=none
> would make sense.
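
For the record, in ifcfg terms that would come out to something like the
following (a sketch only - device names and VLAN 20 are just the example
above):

  # ifcfg-eth0 - the physical trunk interface, carrying no address itself
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=none

  # ifcfg-eth0.20 - the tagged sub-interface, enslaved to the bridge
  DEVICE=eth0.20
  VLAN=yes
  ONBOOT=yes
  BRIDGE=br0

  # ifcfg-br0 - the bridge the guest taps get attached to
  DEVICE=br0
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none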
> > The things that concern me the most are:
> > 1) How scalable this really is
> I don't know either ... we'll find out ;)
I don't think that's really a scalability problem from libvirt's POV. I
know people use this setup quite widely already, even with plain ifcfg-XXX
scripts. Any scalability problems most likely fall into the kernel /
networking code and whether it is good at avoiding unnecessary data
copies when you have a stacked NIC -> VLAN -> BRIDGE -> TAP.
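
For reference, that whole stack can be built by hand along these lines
(illustrative device names; vconfig, brctl and tunctl come from the vlan,
bridge-utils and uml-utilities packages respectively):

  # tagged sub-interface for VLAN 20 on the trunk
  vconfig add eth0 20
  ifconfig eth0.20 up

  # bridge with the VLAN interface enslaved to it
  brctl addbr br0
  brctl addif br0 eth0.20
  ifconfig br0 up

  # one persistent tap device per guest, attached to the bridge
  tunctl -t tap0
  brctl addif br0 tap0
  ifconfig tap0 up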
> > 2) The semantics are really different from how physical, 802.1Q-enabled
> > switches would work.
> >
> > Because (2) really creates new switches for each new VLAN tag, I wonder
> > how management would be different from what we have today with physical
> > switches (i.e., defining a port with a VLAN ID, assigning that port to a
> > physical machine) - unless we hide it behind libvirt somehow.
I think one thing to consider is the difference between the physical and
logical models. The libvirt API / representation here is fairly low
level, dealing in individual NICs. I think management apps would likely
want to present this in a slightly different way, dealing more in logical
entities than physical NICs, e.g. oVirt's network model is closer to the
one you describe, where the user defines a new switch for each VLAN tag.
It then maps this into the low level physical model of individual NICs
as needed. I think it is important that libvirt use the physical model
here to give apps flexibility in how they expose it to users.
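
To make that concrete, the low level physical model for the eth0.20 + br0
example might be described with interface XML roughly like this (the exact
schema is still being settled, so treat the structure as illustrative):

  <interface type="bridge" name="br0">
    <start mode="onboot"/>
    <bridge>
      <interface type="vlan" name="eth0.20">
        <vlan tag="20">
          <interface name="eth0"/>
        </vlan>
      </interface>
    </bridge>
  </interface>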
> The reason we are creating all those bridges isn't the VLANs - it's
> that we want to share the same physical interface amongst several
> guests. And I don't know of another way to do that.
> > Are there other options? Since a tagged interface like eth0.20 is kind
> > of a virtual interface itself, would it be appropriate to use those
> > directly?
> You can use it directly, I just don't know how else you would share it
> amongst VMs without a bridge.
In the (nearish) future NICs will start appearing with SR-IOV capabilities.
This gives you one physical PCI device which exposes multiple functions,
so a single physical NIC appears as, say, 8 NICs to the OS. You can thus
assign each of these virtual NICs directly to a different VM, avoiding the
need to bridge them.
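
Each virtual function is just a PCI device at that point, so it could be
handed to a guest with the existing PCI passthrough support, along these
lines (the PCI address is of course illustrative):

  <hostdev mode='subsystem' type='pci'>
    <source>
      <address domain='0x0000' bus='0x07' slot='0x10' function='0x0'/>
    </source>
  </hostdev>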
I don't think it's worth spending too much time trying to come up with other
non-bridged NIC sharing setups when hardware is about to do it all for us :-)
Daniel
--
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org        -o-        http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|