I ran clang on the very latest sources and it spotted this problem.
From qemu_driver.c, around line 11100:
else {
    /* qemu is a little funny with running guests and the restoration
     * of snapshots. If the snapshot was taken online,
     * then after a "loadvm" monitor command, the VM is set running
     * again. If the snapshot was taken offline, then after a "loadvm"
     * monitor command the VM is left paused. Unpausing it leads to
     * the memory state *before* the loadvm with the disk *after* the
     * loadvm, which obviously is bound to corrupt something.
     * Therefore we destroy the domain and set it to "off" in this case.
     */
    if (virDomainObjIsActive(vm)) {
        qemudShutdownVMDaemon(driver, vm);
        event = virDomainEventNewFromObj(vm,
                                         VIR_DOMAIN_EVENT_STOPPED,
                                         VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
        if (!vm->persistent) {
            if (qemuDomainObjEndJob(vm) > 0)
                virDomainRemoveInactive(&driver->domains, vm);
            vm = NULL;
        }
    }

    if (qemuDomainSnapshotSetActive(vm, driver->snapshotDir) < 0)
        goto endjob;
}

vm->state = snap->def->state;

ret = 0;

endjob:
if (vm && qemuDomainObjEndJob(vm) == 0)
    vm = NULL;
Note how "vm" is set to NULL when the domain is not persistent.
After that, it can be dereferenced while NULL, both in the call to
qemuDomainSnapshotSetActive() and in the vm->state = ... assignment.
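One way to close the hole, sketched only to show the shape of a fix: guard
both dereferences on vm still being non-NULL. The if (vm) checks and the
choice to still report success after the domain object has been removed are
my assumptions, not a proposed upstream patch.

    if (vm &&
        qemuDomainSnapshotSetActive(vm, driver->snapshotDir) < 0)
        goto endjob;
}

/* vm may have been removed above (transient domain); only touch it
 * if the object still exists */
if (vm)
    vm->state = snap->def->state;

ret = 0;

endjob:
if (vm && qemuDomainObjEndJob(vm) == 0)
    vm = NULL;

Whether it is better to skip the snapshot bookkeeping like this or to bail
out to endjob as soon as vm is cleared is a judgment call I will leave to
someone who knows this code.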