On 01/31/2012 03:30 AM, Michal Privoznik wrote:
> On 30.01.2012 21:30, Laine Stump wrote:
>> On 01/30/2012 06:02 AM, Daniel P. Berrange wrote:
>>> On Fri, Jan 27, 2012 at 01:35:35PM -0500, Laine Stump wrote:
>>>> When libvirt is shutting down the qemu process, it first sends
>>>> SIGTERM, then waits for 1.6 seconds and, if it sees the process still
>>>> there, sends a SIGKILL.
>>>>
>>>> There have been reports that this behavior can lead to data loss
>>>> because the guest running in qemu doesn't have time to flush its disk
>>>> cache buffers before it's unceremoniously whacked.
>>>>
>>>> One suggestion on how to solve that problem was to remove SIGKILL from
>>>> the normal virDomainDestroyFlags, but still provide the ability to
>>>> kill qemu with SIGKILL by using a new flag to virDomainDestroyFlags.
>>>> This patch is a quick attempt at that in order to start a
>>>> conversation on the topic.
>>>>
>>>> So what are your opinions? Is this the right way to solve the problem?
>>> No, we can't change the default semantics of virDomainDestroy in
>>> this case. Applications expect that we do absolutely everything
>>> possible to kill off the guest.
But not that we get it all done within 1.6 seconds, right? :-)
>>> This is particularly important for
>>> cluster fencing usage. If we only use SIGTERM, then we're introducing
>>> unacceptable risk to apps relying on this.
>>>
>>> We could do the opposite though - have a flag to do a graceful
>>> destroy, leaving the default as un-graceful.
>> virDomainShutdown ends up calling qemuProcessKill() too. So, I guess we
>> need to add a flag there too.
>>
>> In the meantime, shouldn't we at least wait longer before resorting to
>> SIGKILL? (especially since it appears the current timeout is quite often
>> too short). (If we don't at least do that, what we're saying is "the
>> behavior of virDomainShutdown / virDomainDestroy is to lose your data
>> unless you're lucky. If you don't want this behavior, you need to use
>> virDomainXXXFlags, and specify the VIR_DOMAIN_DONT_TRASH_MY_DATA flag"
>> :-P).
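To make the flag idea concrete, here's a rough sketch, from the calling
application's side, of what Dan's "graceful flag, un-graceful default"
suggestion might look like. The VIR_DOMAIN_DESTROY_GRACEFUL name and the
shut_down_guest() helper below are placeholders for illustration only -
nothing about the name or semantics is decided; only virDomainDestroyFlags()
itself exists today:

#include <libvirt/libvirt.h>

/* placeholder value for a hypothetical "don't escalate to SIGKILL" flag */
#define VIR_DOMAIN_DESTROY_GRACEFUL (1 << 0)

int
shut_down_guest(virDomainPtr dom)
{
    /* first ask for a graceful destroy (SIGTERM only, no SIGKILL)... */
    if (virDomainDestroyFlags(dom, VIR_DOMAIN_DESTROY_GRACEFUL) == 0)
        return 0;

    /* ...and only if that fails, fall back to the default behavior,
     * which keeps the SIGKILL safety net for fencing-style users */
    return virDomainDestroyFlags(dom, 0);
}

Whether the application or libvirt itself should do that fallback is of
course part of the same question.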
> I should probably hop into this, as I tried to solve this issue
> earlier but got sidetracked and then forgot about it.
> Increasing the delay could be a temporary workaround, but we should keep
> in mind that if we change the delay to X (units of time), I bet in some
> cases it will take qemu X+1 units to flush its caches.
> Therefore I lean toward a DONT_SEND_SIGKILL flag and leaving the default
> as it is now.
Sure, even if you increase the timeout, there will still be cases where
it fails. But we can't magically cause every piece of management
software to switch to using the DONT_SEND_SIGKILL flag overnight; in the
meantime, increasing the timeout will lead to fewer failures, and it
won't cause a longer delay than we have now *unless it was going to
cause a failure anyway* (since the loop exits as soon as it detects that
the process has exited).

So I don't see a downside to increasing the timeout (in addition to
adding the flag, *not* instead of it). The current timeout is arbitrary,
so why not just pick a somewhat larger arbitrary value?
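For the sake of argument, here is a minimal standalone sketch of that kind
of loop. This is not libvirt's actual qemuProcessKill(); the
terminate_process() name, the 200 ms poll interval and the timeout parameter
are all made up - it's just meant to show why a bigger timeout only costs
extra time in the case where we were headed for SIGKILL anyway:

#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <unistd.h>

/* Send SIGTERM, poll until the process is gone or timeout_ms elapses,
 * then optionally escalate to SIGKILL. Assumes pid is not an unreaped
 * child of the caller (otherwise waitpid() would be the right check). */
static bool
terminate_process(pid_t pid, unsigned int timeout_ms, bool allow_sigkill)
{
    unsigned int waited = 0;

    if (kill(pid, SIGTERM) < 0 && errno == ESRCH)
        return true;                 /* already gone */

    while (waited < timeout_ms) {
        usleep(200 * 1000);          /* poll every 200 ms */
        waited += 200;
        if (kill(pid, 0) < 0 && errno == ESRCH)
            return true;             /* exited early, so no extra delay */
    }

    if (!allow_sigkill)
        return false;                /* graceful-only mode: report failure */

    kill(pid, SIGKILL);
    usleep(200 * 1000);
    return kill(pid, 0) < 0 && errno == ESRCH;
}

The only guests that ever wait out the full timeout are the ones that were
going to be SIGKILLed under the current code anyway.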
> However, as a qemu developer pointed out (Luiz?), even
> with -no-shutdown qemu will terminate itself after receiving SIGINT and
> flushing its own caches. So this might be the right way to solve this.
Please define "this" more specifically :-) (maybe in patch form?)