Inline...
-----Original Message-----
From: libvir-list-bounces(a)redhat.com [mailto:libvir-list-bounces(a)redhat.com] On Behalf Of Oved Ourfalli
Sent: Thursday, April 28, 2011 1:15 AM
To: Laine Stump; libvir-list(a)redhat.com
Subject: Re: [libvirt] migration of vnlink VMs
----- Original Message -----
> From: "Laine Stump" <lstump(a)redhat.com>
> To: "Oved Ourfalli" <ovedo(a)redhat.com>
> Cc: "Ayal Baron" <abaron(a)redhat.com>, "Barak Azulay"
<bazulay(a)redhat.com>, "Shahar Havivi" <shaharh(a)redhat.com>,
> "Itamar Heim" <iheim(a)redhat.com>, "Dan Kenigsberg"
<danken(a)redhat.com>
> Sent: Thursday, April 28, 2011 10:20:35 AM
> Subject: Re: migration of vnlink VMs
> Oved,
>
> Would it be okay to repost this message to the thread on libvir-list
> so
> that other parties can add their thoughts?
>
Of course. I'm sending my answer to the libvirt list.
> On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
> > Laine, hello.
> >
> > We read your proposal for abstraction of guest<--> host network
> > connection in libvirt.
> >
> > You have an open issue there regarding the vepa/vnlink attributes:
> > "3) What about the parameters in the<virtualport> element that are
> > currently used by vepa/vnlink. Do those belong with the host, or
> > with the guest?"
> >
> > The parameters for the virtualport element should be on the guest,
> > and not the host, because a specific interface can run multiple
> > profiles,
>
> Are you talking about host interface or guest interface? If you mean
> that multiple different profiles can be used when connecting to a
> particular switch - as long as there are only a few different profiles,
> rather than each guest having its own unique profile, then it still
> seems better to have the port profile live with the network definition
> (and just define multiple networks, one for each port profile).
>
The profile names can be changed regularly, so it looks like it will be
better to put them in the guest level, so that the network host file
won't have to be changed on all hosts once something has changed in the
profiles.
Also, you will have a duplication of data, writing all the profile names
on all the hosts that are connected to the vn-link/vepa switch.
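Just to illustrate the duplication (a rough sketch only - I haven't
checked where exactly the element would sit in the network XML, or the
right mode value): with one network per port profile, every host
attached to that switch would need something like
<network type='direct'>
  <name>red-network-profile1</name>
  <source mode='private'>
    ...
  </source>
  <virtualport type="802.1Qbh">
    <parameters profile_name="profile1"/>
  </virtualport>
</network>
repeated for every profile, and any rename of a profile would have to be
pushed to all of those hosts.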
>
> > so it will be a mistake to define a profile to be interface
> > specific on the host. Moreover, putting it in the guest level will
> > enable us in the future (if supported by libvirt/qemu) to migrate
> > a vm from a host with vepa/vnlink interfaces, to another host with
> > a bridge, for example.
>
> It seems to me like doing exactly the opposite would make it easier to
> migrate to a host that used a different kind of switching (from vepa
> to vnlink, or from a bridged interface to vepa, etc), since the port
> profile required for a particular host's network would be at the host
> waiting to be used.
You are right, but we would want the option to prevent that from
happening in cases where we don't want to allow it.
We can make the ability to migrate between different network types
configurable, and we would like an easy way to tell libvirt - "please
allow/don't allow it".
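An easy way could be something like this on the guest interface (just a
sketch, the "required" attribute is made up):
<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbh" required="no">
    <parameters profile_name="profile1"/>
  </virtualport>
</interface>
where required="no" would let libvirt ignore the virtualport data when
the target red-network is of a different type, and required="yes" would
make such a migration fail.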
>
> > So, in the networks at the host level you will have:
> > <network type='direct'>
> >   <name>red-network</name>
> >   <source mode='vepa'>
> >     <pool>
> >       <interface>
> >         <name>eth0</name>
> >         .....
> >       </interface>
> >       <interface>
> >         <name>eth4</name>
> >         .....
> >       </interface>
> >       <interface>
> >         <name>eth18</name>
> >         .....
> >       </interface>
> >     </pool>
> >   </source>
> > </network>
> >
> > And in the guest you will have (for vepa):
> > <interface type='network'>
> >   <source network='red-network'/>
> >   <virtualport type="802.1Qbg">
> >     <parameters managerid="11" typeid="1193047" typeidversion="2"
> >                 instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
> >   </virtualport>
> > </interface>
> >
> > Or (for vnlink):
> > <interface type='network'>
> >   <source network='red-network'/>
> >   <virtualport type="802.1Qbh">
> >     <parameters profile_name="profile1"/>
> >   </virtualport>
> > </interface>
>
> This illustrates the problem I was wondering about - in your example it
> would not be possible for the guest to migrate from the host using a
> vepa switch to the host using a vnlink switch (and it would be possible
> to migrate to a host using a standard bridge only if the virtualport
> element was ignored). If the virtualport element lived with the network
> definition of red-network on each host, it could be migrated without
> problem.
You are right. When trying to migrate between vepa and vnlink there
will be missing attributes in each of them in case we leave it on the
host.
>
> The only problematic thing would be if any of the attributes within
> <parameters> was unique for each guest (I don't know anything about
> the individual attributes, but "instanceid" sounds like it might be
> different for each guest).
Whether a given parameter is unique for each guest (or let's say,
whether it can be shared by two or more guests) may also be a
config/policy detail for certain parameters.
You should take into account that new parameters may be added later on
(not only for vepa/vn-link), and they could be either unique or shared.
For this reason your design should be able to handle such a case.
BTW, "instanceid" identifies the virtual nic, therefore it is unique
(it is a UUID).
instanceid
The VSI Instance ID Identifier is generated when a VSI instance
(i.e. a virtual interface of a virtual machine) is created.
This is a globally unique identifier.
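So, if you did want to split things up, one option (just a sketch to
make the distinction concrete, not a syntax proposal) would be to keep
the per-guest instanceid in the domain XML and take the shared
switch-level values from the network definition:
<network type='direct'>
  <name>red-network</name>
  <source mode='vepa'>
    ...
  </source>
  <virtualport type="802.1Qbg">
    <parameters managerid="11" typeid="1193047" typeidversion="2"/>
  </virtualport>
</network>
<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbg">
    <parameters instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
  </virtualport>
</interface>
Whether libvirt would actually merge the two sets of parameters is of
course a separate question.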
Please note that the IEEE specs (802.1Qbh/1Qbg) have undergone some
changes in the last few months. The two protocols are going to share the
same protocol (VDP) for the port profile configuration. Also, new
parameters may be introduced and already existing ones may change type.
Later this year, when the IEEE drafts are closer to their final
versions, there will be updates to the configuration parameters to
reflect the changes in the drafts.
> > Then, when migrating from a vepa/vnlink host to another vepa/vnlink
> > host containing red-network, the profile attributes will be
> > available at the guest domain xml.
> > In case the target host has a red-network, which isn't vepa/vnlink,
> > we want to be able to choose whether to make the use of the profile
> > attributes optional (i.e., libvirt won't fail in case of migrating
> > to a network of another type), or mandatory (i.e., libvirt will fail
> > in case of migration to a non-vepa/vnlink network).
> >
> > We have something similar in CPU flags:
> > <cpu match="exact">
> >   <model>qemu64</model>
> >   <topology sockets="S" cores="C" threads="T"/>
> >   <feature policy="require/optional/disable......" name="sse2"/>
> > </cpu>
>
> In this analogy, does "CPU flags" == "mode (vepa/vnlink/bridge)"
or
> does
> "CPU flags" == "virtualport parameters" ? It seems like what
you're
> wanting can be satisfied by simply not defining "red-network" on the
> hosts that don't have the proper networking setup available (maybe
> what
> you *really* want to call it is "red-vnlink-network").
What I meant to say is that we would like to have the ability to say
whether an attribute must be used, or not.
"must" in the sense that it is "mandatory"?
I can see two cases:
1) src and dst networks are of different types
   (NOTE: I consider vepa/vnlink different for now, but this will
   probably change when 1Qbh and 1Qbg both use VDP)
   In this case I do not see why you need to worry about whether
   parameter param-X used by the src host should or should not be used
   by the dst host: it should only if it is a generic parameter and
   (as such) does not fall inside the config section that is
   specific to the network type.
   Trying to translate parameters between different network types
   may not always be easy and clean. Even the property
   "mandatory vs optional" may change with different network types.
2) src and dst networks are of the same type
   In this case it _does_ make sense to have the possibility of
   specifying whether a given param is needed or not. However, I
   believe it would make sense mainly for those parameters that
   represent optional/desirable features of the proto/net: such
   config would then be used to decide whether migration will or
   will not be possible, right?
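If you go that way, a per-parameter policy could be expressed much like
the CPU feature policy quoted above (again just a sketch, element and
attribute names made up):
<interface type='network'>
  <source network='red-network'/>
  <virtualport type="802.1Qbh">
    <parameters>
      <parameter name="profile_name" value="profile1" policy="require"/>
    </parameters>
  </virtualport>
</interface>
with policy="require/optional" deciding whether the migration fails or
the parameter is simply dropped on the dst host.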
I like the idea of the abstraction, especially the pool of interfaces.
However, I think you would have to lose a bit of the abstraction in
order to make it possible to have migrations between networks of
different types.
I guess your goal is not to make migration possible between every
possible combination of network types, is it?
There are parameters that are specific to a given network type.
How can you expect a migration from network type X to network type Y
(Y != X) if you only configure the parameters for type X (and assuming
Y comes with at least one mandatory parameter)?
What are the combinations of (src/net, dst/net) that you would like to
support?
Out of curiosity, have you taken into consideration the possibility of
defining an abstracted network config as a pool of network types?
For example something like this:
HOST: Pool of three network sub-types
<network type='network'>
  <name>red-network</name>
  <source type='direct' mode='vepa'>
    ...
  </source>
  <source type=XXX ...>
    ...
  </source>
  <source type=YYY ...>
    ...
  </source>
</network>
GUEST WHICH ONLY ACCEPTS ONE SPECIFIC TYPE (direct/private):
<interface type='network'>
  <source network='red-network'/>
  <option prio=1 type='direct' mode='private'>
    <virtualport type="802.1Qbh">
      <parameters profile_name="profile_123"/>
    </virtualport>
  </option>
</interface>
GUEST WHICH ACCEPTS TWO TYPES (direct/private, direct/vepa):
<interface type='network'>
  <source network='red-network'/>
  <option prio=1 type='direct' mode='private'>
    <virtualport type="802.1Qbh">
      <parameters profile_name="profile_123"/>
    </virtualport>
  </option>
  <option prio=2 type='direct' mode='vepa'>
    <virtualport type="802.1Qbg">
      <parameters managerid="11" typeid="1193047" typeidversion="2"
                  instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
    </virtualport>
  </option>
</interface>
During migration, the dst host would use the options/prios in the guest
config XML to select among the options available for the same 'network'
on the dst host, just like any other similar handshake.
I am not suggesting this approach, I just would like to know if you ever
thought about this option and, if you did, why you discarded it.
An example of a scenario where the above pool of network types may make
sense would be the combination vepa+bridge or vnlink+bridge: if
something goes wrong during migration and the vepa or vnlink port
profile cannot be associated, the guest can at least have a backup net
connection through the bridge. It could optionally also re-try the
vepa/vnlink association a number of times (libvirt would do it if
configured to do so) ... while maintaining temporary connectivity
through the bridge.
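As a concrete (made-up) instance of that fallback scenario, the guest
side could look something like:
<interface type='network'>
  <source network='red-network'/>
  <option prio=1 type='direct' mode='vepa'>
    <virtualport type="802.1Qbg">
      <parameters managerid="11" typeid="1193047" typeidversion="2"
                  instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
    </virtualport>
  </option>
  <option prio=2 type='bridge'>
    ...
  </option>
</interface>
with prio=2 only used if the 1Qbg association fails, or while it is
being retried.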
/Chris
The issues you mention are indeed interesting. I'm cc-ing libvirt-list
to see what other people think.
Putting it on the guest will indeed make it problematic to migrate
between networks that need different parameters (vnlink/vepa for
example).
Oved
--
libvir-list mailing list
libvir-list(a)redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list