On 27.10.2016 16:00, John Ferlan wrote:
On 10/04/2016 10:27 AM, Nikolay Shirokovskiy wrote:
> This fixes the crash [1].
>
> The crash happens due to a race when libvirtd exits at the same time as a qemu process.
> Let's finish all workers that hold the driver pointer before disposing of the
> driver object or any of its members.
>
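To make the race concrete: a worker thread can still be dereferencing
qemu_driver while qemuStateCleanup() is tearing it down. Below is a minimal
standalone sketch of the ordering the patch enforces, using plain pthreads
instead of libvirt's thread pool (struct driver and the worker here are
illustrative, not libvirt code):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the qemu driver object; not libvirt code. */
struct driver {
    pthread_mutex_t lock;
    int closed;
};

static void *worker(void *opaque)
{
    struct driver *drv = opaque;     /* worker keeps a raw driver pointer */
    pthread_mutex_lock(&drv->lock);  /* would crash if main freed drv already */
    printf("worker sees closed=%d\n", drv->closed);
    pthread_mutex_unlock(&drv->lock);
    return NULL;
}

int main(void)
{
    struct driver *drv = calloc(1, sizeof(*drv));
    pthread_t tid;

    pthread_mutex_init(&drv->lock, NULL);
    pthread_create(&tid, NULL, worker, drv);

    /* The fix in one line: wait for the workers first, the way moving
     * virThreadPoolFree() to the top of qemuStateCleanup() does... */
    pthread_join(tid, NULL);

    /* ...and only then dispose of the state the workers were using. */
    pthread_mutex_destroy(&drv->lock);
    free(drv);
    return 0;
}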
> [1] crash backtrace: (bt is shortened a bit):
Before pushing, I'll reword the above a little for readability.
>
> 0 0x00007ffff7282f2b in virClassIsDerivedFrom (klass=0xdeadbeef, parent=0x55555581d650) at util/virobject.c:169
> 1 0x00007ffff72835fd in virObjectIsClass (anyobj=0x7fffd024f580, klass=0x55555581d650) at util/virobject.c:365
> 2 0x00007ffff7283498 in virObjectLock (anyobj=0x7fffd024f580) at util/virobject.c:317
> 3 0x00007ffff722f0a3 in virCloseCallbacksUnset (closeCallbacks=0x7fffd024f580, vm=0x7fffd0194db0, cb=0x7fffdf1af765 <qemuProcessAutoDestroy>) at util/virclosecallbacks.c:164
> 4 0x00007fffdf1afa7b in qemuProcessAutoDestroyRemove (driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_process.c:6365
> 5 0x00007fffdf1adff1 in qemuProcessStop (driver=0x7fffd00f3a60, vm=0x7fffd0194db0, reason=VIR_DOMAIN_SHUTOFF_CRASHED, asyncJob=QEMU_ASYNC_JOB_NONE, flags=0) at qemu/qemu_process.c:5877
> 6 0x00007fffdf1f711c in processMonitorEOFEvent (driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_driver.c:4545
> 7 0x00007fffdf1f7313 in qemuProcessEventHandler (data=0x555555832710, opaque=0x7fffd00f3a60) at qemu/qemu_driver.c:4589
> 8 0x00007ffff72a84c4 in virThreadPoolWorker (opaque=0x555555805da0) at util/virthreadpool.c:167
>
> Thread 1 (Thread 0x7ffff7fb1880 (LWP 494472)):
> 1 0x00007ffff72a7898 in virCondWait (c=0x7fffd01c21f8, m=0x7fffd01c21a0) at util/virthread.c:154
> 2 0x00007ffff72a8a22 in virThreadPoolFree (pool=0x7fffd01c2160) at util/virthreadpool.c:290
> 3 0x00007fffdf1edd44 in qemuStateCleanup () at qemu/qemu_driver.c:1102
> 4 0x00007ffff736570a in virStateCleanup () at libvirt.c:807
> 5 0x000055555556f991 in main (argc=1, argv=0x7fffffffe458) at libvirtd.c:1660
> ---
> src/qemu/qemu_driver.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 12ddbc0..58753ea 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -1068,6 +1068,7 @@ qemuStateCleanup(void)
>      if (!qemu_driver)
>          return -1;
> 
> +    virThreadPoolFree(qemu_driver->workerPool);
For this one - your thoughts on swapping the order to be
NWFilterUnRegister and then ThreadPoolFree(workerPool)?
Nothing in the NW filter callback jumps out at me as generating a
workerPool job, but since the initialization order is workerPool, then
NWFilterCallbackRegister, I figure the cleanup order should just be the
reverse.
Looks like these two are independent, so either order works; still,
cleaning up in the reverse order of initialization is probably good style.
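A quick sketch of that convention, with hypothetical init/cleanup pairs
standing in for the real libvirt calls (the names are illustrative only):

#include <stdio.h>

/* Hypothetical init/cleanup pairs; not libvirt's APIs. */
static int  pool_init(void)            { puts("pool init"); return 0; }
static void pool_free(void)            { puts("pool free"); }
static int  callbacks_register(void)   { puts("callbacks register"); return 0; }
static void callbacks_unregister(void) { puts("callbacks unregister"); }

/* Initialization order: pool first, then callbacks... */
static int state_init(void)
{
    if (pool_init() < 0)
        return -1;
    if (callbacks_register() < 0) {
        pool_free();
        return -1;
    }
    return 0;
}

/* ...so cleanup mirrors it in reverse: callbacks first, pool last. */
static void state_cleanup(void)
{
    callbacks_unregister();
    pool_free();
}

int main(void)
{
    if (state_init() == 0)
        state_cleanup();
    return 0;
}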
ACK for what's here, but I'll await your (or anyone else's) feedback on
the swap before pushing.
John
>      virNWFilterUnRegisterCallbackDriver(&qemuCallbackDriver);
>      virObjectUnref(qemu_driver->config);
>      virObjectUnref(qemu_driver->hostdevMgr);
> @@ -1099,7 +1100,6 @@
>      virLockManagerPluginUnref(qemu_driver->lockManager);
> 
>      virMutexDestroy(&qemu_driver->lock);
> -    virThreadPoolFree(qemu_driver->workerPool);
>      VIR_FREE(qemu_driver);
> 
>      return 0;
> 