On 10/10/2015 02:40 PM, Spencer Baugh wrote:
> Cole Robinson <crobinso(a)redhat.com> writes:
>> The proper way to make sure shared VMs aren't started across multiple machines
>> is libvirt's locking support:
>> https://libvirt.org/locking.html
>>
>> It requires running a separate daemon though, so it isn't trivial, and I have
>> no idea if it can be made to work with qemu:///session.
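For reference, enabling the lockd driver is mostly a config change plus running the virtlockd daemon. A minimal sketch, following the locking docs (exact paths and defaults may vary by distro; the lockspace directory must be on storage shared between the hosts for cross-host protection to work):

```
# /etc/libvirt/qemu.conf — tell the QEMU driver to use the lockd plugin
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf — take leases on disk contents
# automatically, using a lockspace directory shared between hosts
auto_disk_leases = 1
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
```

virtlockd then needs to be running on each host (e.g. `systemctl start virtlockd`), and libvirtd restarted to pick up the config.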
> Hmm, but what do you mean by shared VMs here? Just shared disk images,
> or also shared configuration, or also marked autostart on multiple
> hosts?
Just shared libvirt XML and disk images, not specifically autostart.
> It seems like there are two closely related problems here:
> 1. For some shared VM and host, should we start that VM on that host?
> 2. Preventing VMs from running on multiple hosts at the same time
> Problem 2 is definitely solved pretty well by libvirt's locking support;
> even if that locking doesn't currently work properly with
> qemu:///session, I accept the idea that the correct answer is to get
> that working.
> And problem 1 is solved as a side effect of solving problem 2, because
> now one can just mark the VMs autostart on all hosts and let them race
> to get a lock on the disk image, arbitrarily distributing them over the
> hosts.
> But surely there is a better way to solve problem 1 than that!
> I suppose one could punt that question to a cross-host VM scheduler like
> OpenStack. In that case I guess what one might do is not mark anything
> autostart, and have the external scheduler come in on bootup and kick
> specific things into action. Then you would rely on the cross-host
> scheduler knowing what should run where.
> But that doesn't really scale elegantly between the one-host and
> multi-host cases, since in the one-host case it would be somewhat
> awkward and unnecessary to use a multi-host scheduler.
> Hmm, I suppose if the external scheduler instead specified a mapping
> from /etc/machine-id to a list of autostarted VMs, that mapping could be
> shared on all hosts, and so would work for both cases rather nicely.
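As a concrete sketch of that mapping idea: the file name, format, and VM names below are invented for illustration, nothing like this exists in libvirt today. A file keyed by machine-id, shared across all hosts, could be filtered down to the current host's list with a one-liner:

```shell
# Invented format: one line per host, "<machine-id> <vm> <vm> ...",
# shared between hosts via NFS, config management, etc.
cat > /tmp/autostart-map <<'EOF'
0123456789abcdef0123456789abcdef web-vm db-vm
fedcba9876543210fedcba9876543210 build-vm
EOF

# On a real host this would be: machine_id=$(cat /etc/machine-id)
machine_id=0123456789abcdef0123456789abcdef

# Print the VMs this host should autostart, one per line; a consumer
# could then pipe this into e.g. "xargs -r -n1 virsh start".
awk -v id="$machine_id" '$1 == id { for (i = 2; i <= NF; i++) print $i }' \
    /tmp/autostart-map
```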
> Alternatively, even better: the autostart mechanism could be made
> generic, allowing multiple ways to get the list of VMs that should be
> autostarted. Then the simplest way (certainly not the best way) to
> get the desired functionality would be to write a script that returns a
> list of VMs that should be running on the current host.
> Does that seem like a plausible approach?
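A sketch of what such a hook might look like; the path, contract, and VM names are all invented for illustration, not an existing libvirt interface. The script's only job would be to print the names of the VMs that should run on this host, one per line:

```shell
# Invented contract: an executable that prints domain names, one per
# line; an autostart driver would then start each printed name.
cat > /tmp/autostart-hook <<'EOF'
#!/bin/sh
# Trivial policy for illustration: a static list. A real hook could
# query an inventory service, parse a shared map file, etc.
echo web-vm
echo db-vm
EOF
chmod +x /tmp/autostart-hook

# The consuming side would then be roughly:
#   /tmp/autostart-hook | while read -r vm; do virsh start "$vm"; done
/tmp/autostart-hook
```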
Plausible/workable, sure. But is it worth implementing and maintaining in
libvirt? IMO no. I see libvirt's built-in autostart as just a simple
convenience feature for end users, and this strays from that. Also, you would
already need to configure each host to provide this custom list of VMs to
autostart, so you aren't getting away from per-host config anyway. At that
point, how much more work is it really to implement this external to libvirt?
And it doesn't seem useful for existing tools: high-level multi-host
management layers like OpenStack and oVirt don't need libvirt's autostart
feature, since it's trivial for them to roll their own. They already have a
daemon running on each physical machine and need to perform a ton of other
host tasks, so adding a 'virsh start' equivalent is easy.
So really I think if the existing autostart impl isn't sufficient, and the use
case is niche (which it is), it's better to roll your own solution.
Of course, I'm not the authority here, just a guy that writes libvirt patches
on occasion :)
- Cole