Thank you for your comprehensive reply!
On Tue, Sep 17, 2013 at 4:15 AM, Laine Stump <laine@laine.org> wrote:
> On 09/16/2013 07:34 PM, Ajith Antony wrote:
> > The resulting ephemeral bridge (virbr1) looks like the following when my
> > network (w/o vlans) and two domains are started. I don't know if the
> > portgroup was meaningful, but it was accepted in the definition:
> >
> > $ sudo ovs-vsctl show
> > <...>
> >     Bridge "virbr1"
> >         Port "vnet23"
> >             Interface "vnet23"
> >         Port "vnet25"
> >             Interface "vnet25"
> >         Port "virbr1"
> >             Interface "virbr1"
> >                 type: internal
> >         Port "virbr1-nic"
> >             Interface "virbr1-nic"
> >     ovs_version: "1.9.3"
> You apparently have openvswitch's "Linux host bridge compatibility"
> package installed on your machine. If you didn't, the network definition
> you have would have created a Linux host bridge rather than an
> openvswitch bridge. libvirt doesn't contain any code that can create an
> openvswitch bridge directly, so that's the only possible way this could
> be happening. The problem is that when you use compatibility mode,
> you're limited to the Linux bridge-utils API, which has no method of
> specifying a vlan tag for individual ports (because Linux host bridges
> lack that capability).
Aha! I was under the impression that libvirt was managing this; I understand
now.
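
For my own notes (and anyone finding this thread later), the supported route
seems to be creating the openvswitch bridge outside of libvirt and pointing
the network definition at it with <virtualport type='openvswitch'/>. A rough
sketch of what I have in mind, with made-up bridge/network names and vlan id:

  $ sudo ovs-vsctl add-br ovsbr0

  <network>
    <name>testnet</name>
    <!-- use (don't create) the existing openvswitch bridge -->
    <forward mode='bridge'/>
    <bridge name='ovsbr0'/>
    <virtualport type='openvswitch'/>
    <!-- the vlan tag is applied to any domain interface that
         references this portgroup -->
    <portgroup name='vlan-100'>
      <vlan>
        <tag id='100'/>
      </vlan>
    </portgroup>
  </network>

A domain interface would then reference it with
<source network='testnet' portgroup='vlan-100'/>.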
> > Ultimately my goal is to prepare isolated test environments that consist of
> > several VMs attached to a similar qty of vlans. I intend to create many of
> > these environments per host. I also recognize that instead of portgroups, I
> > could use separate networks altogether. From an administrative standpoint,
> > I'd prefer to have one "network" per test environment, with several
> > portgroups, instead of *many* networks.
> Since this is all just numbers in memory (no real cables / switches),
> there is little to no practical difference between having a single
> bridge with lots of vlans and having lots of bridges with no vlans.
> One big difference is that you can do the latter today with existing
> libvirt code (and you don't even need to have openvswitch installed on
> your host). Unless you have > 255 guests on a single vlan, or need some
> other openvswitch-specific feature not available with Linux host
> bridges, I would just set up multiple networks and use the existing
> libvirt network support.
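
To make sure I follow, the multiple-networks version of one test environment
would be something like the following, repeated once per "vlan" (the network
and bridge names below are placeholders I invented):

  <network>
    <name>env1-netA</name>
    <!-- no <forward> element: guests attached here can reach each
         other and the host, but not the physical LAN -->
    <bridge name='virbr10'/>
  </network>

  $ virsh net-define env1-netA.xml
  $ virsh net-start env1-netA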
Yes, I'll probably go with the regular bridge behavior for now. One very
attractive feature of using openvswitch is the ability to "re-wire" the whole
setup by reassigning the vlan tags on the fly. My base use case should be
consistent with the libvirt workflow, where things like changing domain
interface configs take effect when a domain is destroyed and started again, but
the opportunity to move interfaces around without a hard power-cycle could
prove valuable.
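
For example, my understanding is that an openvswitch port can be retagged
live with something like this (vnet23 is the tap device from the output
above; the tag value is arbitrary):

  $ sudo ovs-vsctl set port vnet23 tag=200

which would move that guest interface onto vlan 200 without touching the
domain itself.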