On Wed, Jul 05, 2023 at 04:27:46PM +0200, Boris Fiuczynski wrote:
> On 7/5/23 3:08 PM, Daniel P. Berrangé wrote:
> > On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
> > > On Wed, 5 Jul 2023 13:26:32 +0100
> > > Daniel P. Berrangé <berrange(a)redhat.com> wrote:
> > >
> > > [...]
> > >
> > > > > > I rather think mgmt apps need to explicitly opt-in to async teardown,
> > > > > > so they're aware that they need to take account of delayed RAM
> > > > > > availability in their accounting / guest placement logic.
> > > > >
> > > > > what would you think about enabling it by default only for guests that
> > > > > are capable of running in Secure Execution mode?
> > > >
> > > > IIUC, that's basically /all/ guests if running on new enough hardware
> > > > with prot_virt=1 enabled on the host OS, so will still present challenges
> > > > to mgmt apps needing to be aware of this behaviour AFAICS.
> > >
> > > I think there is some fencing still? I don't think it's automatic
> >
> > IIUC, the following sequence is possible
> >
> >   1. Start QEMU with -m 500G
> >      -> QEMU spawns the async teardown helper process
> >   2. Stop QEMU
> >      -> The async teardown helper process remains running while
> >         the kernel releases RAM
> >   3. Start QEMU with -m 500G
> >      -> Fails with ENOMEM
> >         ...time passes...
> >   4. Async teardown helper finally terminates
> >      -> The full original 500G is only now released for use
> >
> > Basically if you can't do
> >
> >   while true
> >   do
> >     virsh start $guest
> >     virsh destroy $guest
> >   done
> >
> > then it is a change in libvirt API semantics, and so will require
> > explicit opt-in from the mgmt app to use this feature.
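
(As an aside, a rough way to observe that window is to compare available
memory right after destroying the guest; a sketch, untested, with the
sleep interval purely illustrative:

  virsh destroy $guest                 # returns once QEMU itself has exited
  grep MemAvailable /proc/meminfo      # may still be ~500G short at this point
  sleep 60                             # ...wait for the teardown helper to exit
  grep MemAvailable /proc/meminfo      # only now reflects the freed guest RAM
)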
> What is your expectation if libvirt ["virsh destroy $guest"] gives up
> waiting for qemu to terminate, e.g. after 20+ minutes? I think that
> libvirt does have a timeout when trying to stop qemu and then gives up.
> Wouldn't you encounter the same problem that way?
Yes, that would be a bug, and we've tried to address such cases in the
past. For example, when PCI host devices are assigned, the kernel takes
quite a bit longer to terminate QEMU, so we extended the timeout we use
when waiting for QEMU to exit.
Essentially the idea is that when 'virsh destroy' returns we want the
caller to have a strong guarantee that all resources are released.
IOW, if it sees an error code the expectation is that QEMU has suffered
a serious problem - such as being stuck in an uninterruptible sleep in
kernel space. We don't want the caller to see errors in "normal" scenarios.
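
To make the opt-in concrete at the QEMU level, it is just an extra command
line option (a sketch; it assumes the "-run-with async-teardown=on"
spelling from recent QEMU, while earlier versions spelled it as a bare
"-async-teardown" flag):

  # explicit opt-in to the async teardown helper when launching the guest
  qemu-system-s390x -m 500G -run-with async-teardown=on ...

Nothing selects this silently, so a mgmt app that asks for it knows to
expect the delayed RAM release.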
With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|