On Tue, Feb 11, 2020 at 10:05:53AM +0100, Martin Kletzander wrote:
> On Wed, Feb 05, 2020 at 05:32:50PM +0000, Daniel P. Berrangé wrote:
> > On Mon, Feb 03, 2020 at 12:43:32PM +0000, Daniel P. Berrangé wrote:
> > > From: Shaju Abraham <shaju.abraham@nutanix.com>
> > >
> > > There are various config paths that a VM uses. The monitor paths and
> > > other lib paths are examples. These paths are tied to the VM name or
> > > UUID. The local migration breaks the assumption that there will be only
> > > one VM with a unique UUID and name. During local migrations there can be
> > > multiple VMs with the same name and UUID on the same host. Append the
> > > domain-id field to the path so that there is no duplication of path
> > > names.
> >
> > This is the really critical problem with localhost migration.
> >
> > Appending the domain-id looks "simple" but this is a significant
> > behavioural / functional change for applications, and I don't think
> > it can fully solve the problem either.
> >
> > This is changing the paths used in various places where libvirt
> > internally generates unique paths (eg the QMP socket, huge page or
> > file based memory paths, and the defaults used for auto-filling
> > device paths, such as <channel> when not specified).
> >
> > Some of these paths are functionally important to external
> > applications and cannot be changed in this way. eg stuff
> > integrating with QEMU can be expecting certain memory backing
> > file paths, or certain <channel> paths, & is liable to break
> > if we change the naming convention.
> >
> > For sake of argument, let's assume we can change the naming
> > convention without breaking anything...
> >
> This was already done in (I would say) most places, as they use
> virDomainDefGetShortName() to get a short, unique name for a directory --
> it uses the domain ID as a prefix.
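
For illustration, the kind of per-run path this yields (a standalone
sketch, not the actual driver code; "libDir", "domid" and "name" stand
in for the driver config and domain definition fields, and the
"<id>-<name>" format mimics what virDomainDefGetShortName() returns):

  #include <glib.h>
  #include <stdio.h>

  int main(void)
  {
      const char *libDir = "/var/lib/libvirt/qemu"; /* cfg->libDir stand-in */
      int domid = 1;                                /* vm->def->id stand-in */
      const char *name = "myguest";                 /* vm->def->name stand-in */

      /* "<id>-<name>" is the per-run unique short name */
      g_autofree char *shortName = g_strdup_printf("%d-%s", domid, name);
      g_autofree char *monPath = g_strdup_printf("%s/domain-%s/monitor.sock",
                                                 libDir, shortName);

      /* -> /var/lib/libvirt/qemu/domain-1-myguest/monitor.sock */
      printf("%s\n", monPath);
      return 0;
  }

Since the source and destination of a localhost migration run with
different domain IDs, the two QEMU processes get distinct directories.
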
> > This only applies to paths libvirt generates at VM startup.
> >
> > There are plenty of configuration elements in the guest XML
> > that are end user / mgmt app defined, and reference paths in
> > the host OS.
> >
> > For example <graphics>, <serial>, <console> and <channel>
> > all support UNIX domain sockets and TCP sockets. A UNIX
> > domain socket cannot be listened on by multiple VMs
> > at once. If the UNIX socket is in client mode, we cannot
> > assume the thing QEMU is connecting to allows multiple
> > concurrent connections. eg 2 QEMUs could have their
> > <serial> connected together over a UNIX socket pair.
> > Similarly, if automatic TCP port assignment is not used,
> > we cannot have multiple QEMUs listening on the same
> > host port.
> >
> > One answer is to say that localhost migration is just
> > not supported for such VMs, but I don't find that very
> > convincing because the UNIX domain socket configs
> > affected are in common use.
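
To make the clash concrete: two processes cannot hold the same UNIX
socket path at once (a standalone sketch, not QEMU code; /tmp/demo.sock
is an arbitrary example path):

  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  int main(void)
  {
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      strncpy(sa.sun_path, "/tmp/demo.sock", sizeof(sa.sun_path) - 1);
      unlink(sa.sun_path);  /* clean up any stale path from a prior run */

      int a = socket(AF_UNIX, SOCK_STREAM, 0);
      int b = socket(AF_UNIX, SOCK_STREAM, 0);

      if (bind(a, (struct sockaddr *)&sa, sizeof(sa)) == 0)
          printf("first bind: ok\n");
      if (bind(b, (struct sockaddr *)&sa, sizeof(sa)) < 0)
          perror("second bind");  /* fails: EADDRINUSE */
      return 0;
  }

The second process can only "win" by unlinking the path first, which is
exactly the rollback hazard described below.
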
> I would be okay with saying that these either need to be changed in a
> provided destination XML or the migration will probably break. I do not
> think it is unreasonable to say that if users are trying to shoot
> themselves in the foot, we should not spend a ridiculous amount of time
> preventing that. Otherwise we will end up where we are now: there might
> be cases where a local migration would work, but users cannot execute
> it, because even if they were very cautious and dealt with everything
> that could technically prevent it, libvirt will still disallow it.

If there are clashing resources, we can't rely on QEMU reporting an
error. For example, with a UNIX domain socket, the first thing QEMU
does is unlink(/socket/path), which will blow away the UNIX domain
socket belonging to the original QEMU. As a result, if migration
fails and we roll back, the original QEMU will be broken.
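
In outline, the listener setup pattern involved (a sketch of the
classic server-side idiom; QEMU's internal code differs in detail,
and "path", "fd", "sa" and "backlog" are placeholders):

  unlink(path);                                  /* destroys whatever socket
                                                    currently owns the path */
  bind(fd, (struct sockaddr *)&sa, sizeof(sa));  /* takes over the path */
  listen(fd, backlog);

The original QEMU still holds its listening fd after the unlink(), but
the filesystem name now points at the new socket, so clients can no
longer reach the old instance -- hence rollback can't restore it.
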
Preventing users from shooting themselves in the foot is a core
part of the value that libvirt adds for QEMU migration. We do
this with the stable device ABI, controlled locking / disk
labelling, CPU compatibility checking, and so on.

We're not perfect by any means, but the one thing we've tried
very hard to ensure is that if the destination QEMU fails for
any reason during migration, the user should always be able to
rollback to use the original source QEMU without problems.

The localhost migration support makes it harder to guarantee
that the source QEMU is not broken, so I do think we need to
make an extra effort to protect users, if we're going to try
to allow this.

This series has taken the approach of trying to make the localhost
migration work as if it were just a normal remote migration, with
just the minimum change to alter some auto-generated paths on disk,
and keeping a second list of domains.

So we still have the same begin/prepare/perform/finish/confirm
phases fully separated. IOW, essentially the migration code has
very little, almost no, knowledge of the fact that this is a
localhost migration.
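
For reference, the v3 phases and the side each runs on (a simplified
summary, not the exact driver entry points):

  /*
   * Begin    - source: generate the domain XML + migration cookie
   * Prepare  - dest:   launch the target QEMU with -incoming
   * Perform  - source: stream the VM state across
   * Finish   - dest:   on success, start vCPUs on the target
   * Confirm  - source: kill the source QEMU, or resume it on failure
   *
   * With localhost migration "source" and "dest" are the same daemon,
   * which is what the second domain list is working around.
   */
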
This is understandable as a way to minimize the invasiveness
of any changes, but I think it misses the point that localhost
migration needs more than just changing a few paths on disk.

> Adding at least partial support, where we could say we rely on QEMU
> failing reasonably, would allow a couple of mgmt apps to do more than
> they can do now. And they might have taken care of the path collisions
> (e.g. when running libvirtd in containers or so).

If libvirtd is running inside a container, then from libvirt's
POV this is no longer localhost migration - it is regular
cross-host migration. The only caveat is that the container
must be reporting a unique hostname + UUID. We can at least
override the UUID in libvirtd.conf if that isn't the case.
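
For example, in /etc/libvirt/libvirtd.conf (the UUID below is just a
randomly generated example value):

  # Report a fixed host UUID instead of the one probed from DMI, so
  # each containerized libvirtd looks like a distinct host.
  host_uuid = "aab1655a-9cbe-4eb9-b25e-1b4d77b0f25c"
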
> > If localhost migration is only usable in a small subset of
> > scenarios, I'm not convinced it is worth the support
> > burden. Rarely used & tested features are liable to
> > bitrot, and migration is already a serious point of
> > instability in QEMU in general.

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|