See my comments below.
Thank you,
Oved
----- Original Message -----
From: "Laine Stump" <laine(a)laine.org>
To: libvir-list(a)redhat.com
Sent: Friday, April 29, 2011 3:45:50 PM
Subject: Re: [libvirt] migration of vnlink VMs
On 04/28/2011 04:15 AM, Oved Ourfalli wrote:
>> From: "Laine Stump"<lstump(a)redhat.com
>> On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
>>> Laine, hello.
>>>
>>> We read your proposal for abstraction of guest <--> host network
>>> connection in libvirt.
>>>
>>> You have an open issue there regarding the vepa/vnlink attributes:
>>> "3) What about the parameters in the<virtualport> element that
are
>>> currently used by vepa/vnlink. Do those belong with the host, or
>>> with the guest?"
>>>
>>> The parameters for the virtualport element should be on the guest,
>>> and not the host, because a specific interface can run multiple
>>> profiles,
>> Are you talking about host interface or guest interface? If you mean
>> that multiple different profiles can be used when connecting to a
>> particular switch - as long as there are only a few different
>> profiles, rather than each guest having its own unique profile, then
>> it still seems better to have the port profile live with the network
>> definition (and just define multiple networks, one for each port
>> profile).
>>
> The profile names can be changed regularly, so it looks like it will
> be better to put them in the guest level, so that the host network
> file won't have to be changed on all hosts once something has
> changed in the profiles.
>
> Also, you will have a duplication of data, writing all the profile
> names on all the hosts that are connected to the vn-link/vepa switch.
But is it potentially the same for many/all guests, or is it necessarily
different for every guest? If it's the former, then do you have more
guests, or more hosts?
I guess it will be the same for many guests. There will be some profiles, and each
group of guests will use the same profile, according to its demands.
>>> so it will be a mistake to define a profile to be interface-specific
>>> on the host. Moreover, putting it in the guest level will enable us
>>> in the future (if supported by libvirt/qemu) to migrate a vm from a
>>> host with vepa/vnlink interfaces, to another host with a bridge, for
>>> example.
>> It seems to me like doing exactly the opposite would make it easier
>> to migrate to a host that used a different kind of switching (from
>> vepa to vnlink, or from a bridged interface to vepa, etc), since the
>> port profile required for a particular host's network would be at the
>> host waiting to be used.
> You are right, but we would want to have the option to prevent that
> from happening in cases where we don't want to allow it.
> We can make the ability to migrate between different network types
> configurable, and we would like an easy way to tell libvirt -
> "please allow/don't allow it".
I *think* what you're getting at is this situation:
HostA has a group of interfaces that are connected to a vepa-capable
switch, HostB has a group of interfaces connected to a vnlink-capable
switch. Guest1 is allowed to connect either via a vnlink switch or a
vepa switch, but Guest2 should only use vepa.
In that case, HostA would have a network that had a pool of interfaces
and type "vepa", while HostB would have a pool of interfaces and a type
"vnlink". Guest1 could be accommodated by giving both networks the same
name, or Guest2 could be accommodated by giving each network a different
name (when migrating, if the dest. host doesn't have the desired
network, the migration would fail). However, using just the network
naming, it wouldn't be possible to allow both.
I don't think keeping the virtualport parameters only with the guest
would help (or hurt) this though. What would be needed would be to have
the information about network type *optionally* specified in the guest
interface config (as well as in the network config); if present, the
migration would only succeed if the given network on the dest host
matched the given type (and parameters, if any) in the guest config.
That would be great. It will enable the flexibility we need.
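For concreteness, a rough sketch of what that optional type constraint
could look like on the guest side (the mode attribute on <source> below
is hypothetical, not existing libvirt syntax; it only marks where such a
requirement could live):

<interface type='network'>
  <source network='red-network' mode='vepa'/>
  <!-- hypothetical: migration would only succeed if 'red-network' on
       the destination host is also a vepa network -->
</interface>

If the attribute were omitted, any network named 'red-network' on the
destination would be acceptable.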
>>> So, in the networks at the host level you will have:
>>> <network type='direct'>
>>>   <name>red-network</name>
>>>   <source mode='vepa'>
>>>     <pool>
>>>       <interface>
>>>         <name>eth0</name>
>>>         .....
>>>       </interface>
>>>       <interface>
>>>         <name>eth4</name>
>>>         .....
>>>       </interface>
>>>       <interface>
>>>         <name>eth18</name>
>>>         .....
>>>       </interface>
>>>     </pool>
>>>   </source>
>>> </network>
>>>
>>> And in the guest you will have (for vepa):
>>> <interface type='network'>
>>>   <source network='red-network'/>
>>>   <virtualport type="802.1Qbg">
>>>     <parameters managerid="11" typeid="1193047" typeidversion="2"
>>>                 instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
>>>   </virtualport>
>>> </interface>
>>>
>>> Or (for vnlink):
>>> <interface type='network'>
>>>   <source network='red-network'/>
>>>   <virtualport type="802.1Qbh">
>>>     <parameters profile_name="profile1"/>
>>>   </virtualport>
>>> </interface>
What would the interface for a 2nd guest of each type look like? Could
it be identical? Or might some parameters change for every single
guest?
For vn-link it will be the same, just the profile_name.
As for vepa, the instanceid is vm specific so it should be on the guest (taken from
http://libvirt.org/formatdomain.html):
"managerid - The VSI Manager ID identifies the database containing the VSI type and
instance definitions. This is an integer value and the value 0 is reserved.
typeid - The VSI Type ID identifies a VSI type characterizing the network access. VSI
types are typically managed by network administrator. This is an integer value.
typeidversion - The VSI Type Version allows multiple versions of a VSI Type. This is an
integer value.
instanceid - The VSI Instance ID Identifier is generated when a VSI instance (i.e. a
virtual interface of a virtual machine) is created. This is a globally unique
identifier."
That's what we know about vepa and vn-link now. I guess that when we have the
possibility to test these environments we will learn more about them.
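For example (values made up), two guests using the same vepa network
would share managerid/typeid/typeidversion and differ only in
instanceid:

Guest 1:
<virtualport type="802.1Qbg">
  <parameters managerid="11" typeid="1193047" typeidversion="2"
              instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
</virtualport>

Guest 2:
<virtualport type="802.1Qbg">
  <parameters managerid="11" typeid="1193047" typeidversion="2"
              instanceid="3b6fdea2-7c44-4f0a-9f61-0d8a2c1b5e77"/>
</virtualport>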
Perhaps it would be best to have virtualport parameters in both the
network and guest interface XML, and merge the two to arrive at what's
used (the network definition could contain all the attributes that
would be common to all guests using that network on that host, and the
guest interface definition would contain extra parameters specific to
that guest. In the case of a parameter being specified in both places,
if they were not identical, the migration would fail).
Sounds good.
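As a sketch of that merge (using the proposed pool syntax from above;
putting a <virtualport> element inside <network> is part of the
proposal, not something libvirt supports today), the network on each
host would carry the parameters common to all guests:

<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    <pool>
      .....
    </pool>
  </source>
  <virtualport type="802.1Qbg">
    <parameters managerid="11" typeid="1193047" typeidversion="2"/>
  </virtualport>
</network>

and the guest interface would only add the per-guest piece:

<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbg">
    <parameters instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
  </virtualport>
</interface>

At migration time the two sets would be merged, and a parameter given
different values in both places would cause the migration to fail.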
>> This illustrates the problem I was wondering about - in your example
>> it would not be possible for the guest to migrate from the host using
>> a vepa switch to the host using a vnlink switch (and it would be
>> possible
> You are right. When trying to migrate between vepa and vnlink there
> will be missing attributes in each in case we leave it on the host.
(you mean if we leave the config on the *guest*, I guess...)
>> to migrate to a host using a standard bridge only if the virtualport
>> element was ignored). If the virtualport element lived with the
>> network definition of red-network on each host, it could be migrated
>> without problem.
>>
>> The only problematic thing would be if any of the attributes within
>> <parameters> was unique for each guest (I don't know anything about
>> the individual attributes, but "instanceid" sounds like it might be
>> different for each guest).
>>> Then, when migrating from a vepa/vnlink host to another vepa/vnlink
>>> host containing red-network, the profile attributes will be
>>> available in the guest domain xml.
>>> In case the target host has a red-network, which isn't vepa/vnlink,
>>> we want to be able to choose whether to make the use of the profile
>>> attributes optional (i.e., libvirt won't fail in case of migrating
>>> to a network of another type), or mandatory (i.e., libvirt will fail
>>> in case of migration to a non-vepa/vnlink network).
>>>
>>> We have something similar in CPU flags:
>>> <cpu match="exact">
>>>   <model>qemu64</model>
>>>   <topology sockets="S" cores="C" threads="T"/>
>>>   <feature policy="require/optional/disable......" name="sse2"/>
>>> </cpu>
>> In this analogy, does "CPU flags" == "mode
(vepa/vnlink/bridge)" or
>> does
>> "CPU flags" == "virtualport parameters" ? It seems like what
you're
>> wanting can be satisfied by simply not defining "red-network" on
>> the
>> hosts that don't have the proper networking setup available (maybe
>> what
>> you *really* want to call it is "red-vnlink-network").
> What I meant to say there is that we would like to have the
> ability to say if an attribute must be used, or not.
Sure, it sounds useful. Would what I outlined above be sufficient? (It
would allow you to say "this guest must have a vepa network connection"
or "this guest can have any network connection, as long as it's named
'red-network'". It *won't* allow saying "this guest must have vepa or
vnlink, bridge is not allowed, even if the network name is the same".)
You could also put most of the config with the host network definition,
but allow, e.g., instanceid to be specified in the guest config.
I think this would indeed be enough.
> The issues you mention are indeed interesting. I'm cc-ing
> libvirt-list to see what other people think.
> Putting it on the guest will indeed make it problematic to migrate
> between networks that need different parameters (vnlink/vepa for
> example).
--
libvir-list mailing list
libvir-list(a)redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list