On Tue, Nov 13, 2018 at 03:21:03PM -0500, John Ferlan wrote:
Sending as an RFC primarily because I'm looking for feedback on whether
either or both mechanisms in the series are desired. Likewise,
if it's felt that the current process of telling customers to just
delete the cache is acceptable, then so be it. If there are other ideas
I'm willing to give them a go too. I did consider adding a virsh
option to "virsh capabilities" (still possible) and/or a virt-admin
option to force the refresh. These were just "easier" and didn't
require an API adjustment to implement.
Patch1 is essentially a means to determine if the kernel config
was changed to allow nested virtualization and to force a refresh
of the capabilities in that case. Without that refresh, the CPU settings
for a guest may not gain vmx=on, depending on configuration, which makes
no sense to the user. There is a private bz on this,
so I won't bother posting it.
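
To illustrate the scenario Patch1 targets, here is a rough sketch of the
kind of check involved (kvm_intel shown; kvm_amd exposes the same
parameter, and depending on kernel version the value reads as 0/1 or N/Y):

  # value the kvm module was loaded with; the cached QEMU capabilities
  # were generated against this setting
  cat /sys/module/kvm_intel/parameters/nested

  # reloading the module with nesting enabled (no guests may be running)
  # changes the kernel side, but libvirt's cache does not notice on its own
  modprobe -r kvm_intel && modprobe kvm_intel nested=1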
Patch2 and Patch3 make use of the 'service libvirtd reload' function
in order to invalidate all the entries in the internal QEMU capabilities
hash table and force a reread. This perhaps has downsides for running
guests, since reload previously would not refresh anything while a guest
was running. On the other hand, we tell people to just clear the QEMU
capabilities cache (e.g. rm /var/cache/libvirt/qemu/capabilities/*.xml)
and restart libvirtd, which in essence has the same result. It's not clear
how frequently reload is used (it's essentially a SIGHUP to libvirtd).
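
For comparison, the two workflows end up equivalent as far as the cache is
concerned; roughly (systemd unit names and the default cache path assumed):

  # current advice: throw away the cached capabilities and restart
  rm /var/cache/libvirt/qemu/capabilities/*.xml
  systemctl restart libvirtd

  # with these patches: reload (a SIGHUP) invalidates the in-memory hash
  # table entries and forces the capabilities to be re-probed
  systemctl reload libvirtd    # or: kill -HUP $(pidof libvirtd)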
IMHO the fact that we cache stuff should be completely invisible outside
of libvirt. Sure we've had some bugs in this area, but they are not very
frequent, so I'm not enthusiastic about exposing any knob to force a rebuild
beyond just deleting files.
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|