[libvirt] [PATCHv2 0/2] Enable compression of external snapshots and managed save images
by Peter Krempa
Version 2 treats save images and managed save images the same way, using the same config option for both.
Peter Krempa (2):
qemu: managedsave: Add support for compressing managed save images
qemu: snapshot: Add support for compressing external snapshot memory
src/qemu/qemu.conf | 12 +++++++++---
src/qemu/qemu_conf.c | 2 ++
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_driver.c | 46 ++++++++++++++++++++++++++++++++++++++++++----
4 files changed, 54 insertions(+), 7 deletions(-)
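For context, a minimal sketch of how such a knob is set in /etc/libvirt/qemu.conf. The option names below are assumptions (save_image_format already exists for save/dump images; the snapshot option name is inferred from the series' intent, not verified against the patches):

    # pick a compressor for save and managed-save images
    save_image_format = "lzop"
    # with this series, external snapshot memory images would honour a
    # similar setting
    snapshot_image_format = "gzip"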
--
1.8.3.2
[libvirt] [PATCH] virsh: support readonly in attach-disk command
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
Support attaching a disk read-only in the virsh attach-disk command
via a new --readonly option.
Signed-off-by: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
---
tools/virsh-domain.c | 7 +++++++
tools/virsh.pod | 5 +++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 3479a1c..d334ebe 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -315,6 +315,10 @@ static const vshCmdOptDef opts_attach_disk[] = {
.type = VSH_OT_BOOL,
.help = N_("shareable between domains")
},
+ {.name = "readonly",
+ .type = VSH_OT_BOOL,
+ .help = N_("allow guest read-only access to disk")
+ },
{.name = "rawio",
.type = VSH_OT_BOOL,
.help = N_("needs rawio capability")
@@ -612,6 +616,9 @@ cmdAttachDisk(vshControl *ctl, const vshCmd *cmd)
if (vshCommandOptBool(cmd, "shareable"))
virBufferAddLit(&buf, " <shareable/>\n");
+ if (vshCommandOptBool(cmd, "readonly"))
+ virBufferAddLit(&buf, " <readonly/>\n");
+
if (straddr) {
if (str2DiskAddress(straddr, &diskAddr) != 0) {
vshError(ctl, _("Invalid address."));
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 0ae5178..91b4429 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1908,8 +1908,8 @@ expected.
[[[I<--live>] [I<--config>] | [I<--current>]] | [I<--persistent>]]
[I<--driver driver>] [I<--subdriver subdriver>] [I<--cache cache>]
[I<--type type>] [I<--mode mode>] [I<--config>] [I<--sourcetype soucetype>]
-[I<--serial serial>] [I<--wwn wwn>] [I<--shareable>] [I<--rawio>]
-[I<--address address>] [I<--multifunction>] [I<--print-xml>]
+[I<--serial serial>] [I<--wwn wwn>] [I<--shareable>] [I<--readonly>]
+[I<--rawio>] [I<--address address>] [I<--multifunction>] [I<--print-xml>]
Attach a new disk device to the domain.
I<source> is path for the files and devices. I<target> controls the bus or
@@ -1931,6 +1931,7 @@ I<cache> can be one of "default", "none", "writethrough", "writeback",
"directsync" or "unsafe".
I<serial> is the serial of disk device. I<wwn> is the wwn of disk device.
I<shareable> indicates the disk device is shareable between domains.
+I<readonly> indicates the disk device is read-only.
I<rawio> indicates the disk needs rawio capability.
I<address> is the address of disk device in the form of pci:domain.bus.slot.function,
scsi:controller.bus.unit or ide:controller.bus.unit.
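For illustration, once this patch is applied the flag would be used along these lines (domain name, image path and target are made-up examples):

    virsh attach-disk demo-guest /var/lib/libvirt/images/data.img vdb --readonly

which adds a <readonly/> element to the generated <disk> XML.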
--
1.8.2.1
Re: [libvirt] [Users] Migration issues with ovirt 3.3
by Dan Kenigsberg
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
> On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
>
> >
> >>
> >> But migration still fails
> >>
> >
> > It seems like an unrelated failure. I do not know what's blocking
> > migration traffic. Could you see if libvirtd.log and qemu logs at source
> > and destination have clues?
> >
>
> It seems that in the VM's log under qemu on the destination host I have:
> ...
> -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
>
>
> See all:
> - In libvirtd.log of source host
> 2013-10-07 23:20:54.471+0000: 1209: debug :
> qemuMonitorOpenInternal:751 : QEMU_MONITOR_NEW: mon=0x7fc66412e820
> refs=2 fd=30
> 2013-10-07 23:20:54.472+0000: 1209: warning :
> qemuDomainObjEnterMonitorInternal:1136 : This thread seems to be the
> async job owner; entering monitor without asking for a nested job is
> dangerous
> 2013-10-07 23:20:54.472+0000: 1209: debug :
> qemuMonitorSetCapabilities:1145 : mon=0x7fc66412e820
> 2013-10-07 23:20:54.472+0000: 1209: debug : qemuMonitorSend:887 :
> QEMU_MONITOR_SEND_MSG: mon=0x7fc66412e820
> msg={"execute":"qmp_capabilities","id":"libvirt-1"}
> fd=-1
> 2013-10-07 23:20:54.769+0000: 1199: error : qemuMonitorIORead:505 :
> Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:638 : Error
> on monitor Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:672 :
> Triggering error callback
> 2013-10-07 23:20:54.769+0000: 1199: debug :
> qemuProcessHandleMonitorError:351 : Received error on 0x7fc664124fb0
> 'c8again32'
> 2013-10-07 23:20:54.769+0000: 1209: debug : qemuMonitorSend:899 : Send
> command resulted in error Unable to read from monitor: Connection
> reset by peer
> 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:638 : Error
> on monitor Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.770+0000: 1209: debug : virFileMakePathHelper:1283
> : path=/var/run/libvirt/qemu mode=0777
> 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:661 :
> Triggering EOF callback
> 2013-10-07 23:20:54.770+0000: 1199: debug :
> qemuProcessHandleMonitorEOF:294 : Received EOF on 0x7fc664124fb0
> 'c8again32'
> 2013-10-07 23:20:54.770+0000: 1209: debug : qemuProcessStop:3992 :
> Shutting down VM 'c8again32' pid=18053 flags=0
> 2013-10-07 23:20:54.771+0000: 1209: error :
> virNWFilterDHCPSnoopEnd:2135 : internal error ifname "vnet0" not in
> key map
> 2013-10-07 23:20:54.782+0000: 1209: debug : virCommandRunAsync:2251 :
> About to run /bin/sh -c 'IPT="/usr/sbin/iptables"
> $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> vnet0 -g FO-vnet0
> $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
> $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
> $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
> $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
> $IPT -F FO-vnet0
> $IPT -X FO-vnet0
> $IPT -F FI-vnet0
> $IPT -X FI-vnet0
> $IPT -F HI-vnet0
> $IPT -X HI-vnet0
> IPT="/usr/sbin/ip6tables"
> $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> vnet0 -g FO-vnet0
> $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
> $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
> $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
> $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
> $IPT -F FO-vnet0
> $IPT -X FO-vnet0
> $IPT -F FI-vnet0
> $IPT -X FI-vnet0
> $IPT -F HI-vnet0
> $IPT -X HI-vnet0
> EBT="/usr/sbin/ebtables"
> $EBT -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0
> $EBT -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0
> EBT="/usr/sbin/ebtables"
> collect_chains()
> {
> for tmp2 in $*; do
> for tmp in $($EBT -t nat -L $tmp2 | \
> sed -n "/Bridge chain/,\$ s/.*-j \\([IO]-.*\\)/\\1/p");
> do
> echo $tmp
> collect_chains $tmp
> done
> done
> }
> rm_chains()
> {
> for tmp in $*; do $EBT -t nat -F $tmp; done
> for tmp in $*; do $EBT -t nat -X $tmp; done
> }
> tmp='\''
> '\''
> IFS='\'' '\'''\'' '\''$tmp
> chains="$(collect_chains libvirt-I-vnet0 libvirt-O-vnet0)"
> $EBT -t nat -F libvirt-I-vnet0
> $EBT -t nat -F libvirt-O-vnet0
> rm_chains $chains
> $EBT -t nat -F libvirt-I-vnet0
> $EBT -t nat -X libvirt-I-vnet0
> $EBT -t nat -F libvirt-O-vnet0
> $EBT -t nat -X libvirt-O-vnet0
> '
> 2013-10-07 23:20:54.784+0000: 1209: debug : virCommandRunAsync:2256 :
> Command result 0, with PID 18076
> 2013-10-07 23:20:54.863+0000: 1209: debug : virCommandRun:2125 :
> Result exit status 255, stdout: '' stderr: 'iptables v1.4.18: goto
> 'FO-vnet0' is not a chain
>
> Try `iptables -h' or 'iptables --help' for more information.
> iptables v1.4.18: goto 'FO-vnet0' is not a chain
>
> Try `iptables -h' or 'iptables --help' for more information.
> iptables v1.4.18: goto 'FI-vnet0' is not a chain
> Try `iptables -h' or 'iptables --help' for more information.
> iptables v1.4.18: goto 'HI-vnet0' is not a chain
>
> Try `iptables -h' or 'iptables --help' for more information.
> iptables: Bad rule (does a matching rule exist in that chain?).
> iptables: No chain/target/match by that name.
> iptables: No chain/target/match by that name.
> iptables: No chain/target/match by that name.
> iptables: No chain/target/match by that name.
> iptables: No chain/target/match by that name.
> iptables: No chain/target/match by that name.
> ip6tables v1.4.18: goto 'FO-vnet0' is not a chain
>
> Try `ip6tables -h' or 'ip6tables --help' for more information.
> ip6tables v1.4.18: goto 'FO-vnet0' is not a chain
>
> Try `ip6tables -h' or 'ip6tables --help' for more information.
> ip6tables v1.4.18: goto 'FI-vnet0' is not a chain
>
> Try `ip6tables -h' or 'ip6tables --help' for more information.
> ip6tables v1.4.18: goto 'HI-vnet0' is not a chain
>
> Try `ip6tables -h' or 'ip6tables --help' for more information.
> ip6tables: Bad rule (does a matching rule exist in that chain?).
> ip6tables: No chain/target/match by that name.
> ip6tables: No chain/target/match by that name.
> ip6tables: No chain/target/match by that name.
> ip6tables: No chain/target/match by that name.
> ip6tables: No chain/target/match by that name.
> ip6tables: No chain/target/match by that name.
> Illegal target name 'libvirt-O-vnet0'.
> Chain 'libvirt-O-vnet0' doesn't exist.
> Chain 'libvirt-O-vnet0' doesn't exist.
> Chain 'libvirt-O-vnet0' doesn't exist.
> Chain 'libvirt-O-vnet0' doesn't exist.
> '
> 2013-10-07 23:20:54.863+0000: 1209: debug : qemuMonitorClose:821 :
> QEMU_MONITOR_CLOSE: mon=0x7fc66412e820 refs=2
> 2013-10-07 23:20:54.863+0000: 1209: debug : qemuProcessKill:3951 :
> vm=c8again32 pid=18053 flags=5
> 2013-10-07 23:20:54.863+0000: 1209: debug :
> virProcessKillPainfully:269 : vpid=18053 force=1
> 2013-10-07 23:20:54.863+0000: 1209: debug : qemuDomainCleanupRun:2132
> : driver=0x7fc664024cd0, vm=c8again32
> 2013-10-07 23:20:54.863+0000: 1209: debug :
> qemuProcessAutoDestroyRemove:4504 : vm=c8again32
> 2013-10-07 23:20:54.863+0000: 1209: debug :
> virQEMUCloseCallbacksUnset:744 : vm=c8again32,
> uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, cb=0x7fc66b6fe570
> 2013-10-07 23:20:54.864+0000: 1209: error :
> virPortAllocatorRelease:174 : port 0 must be in range (5900, 65535)
> 2013-10-07 23:20:54.865+0000: 1209: debug : qemuDomainObjEndJob:1070 :
> Stopping job: none (async=migration in)
> 2013-10-07 23:20:54.865+0000: 1209: debug :
> qemuDomainObjEndAsyncJob:1088 : Stopping async job: migration in
> 2013-10-07 23:20:54.865+0000: 1199: debug :
> qemuProcessHandleMonitorEOF:306 : Domain 0x7fc664124fb0 is not active,
> ignoring EOF
> Caught Segmentation violation dumping internal log buffer:
This last line seems ominous. Can libvir-list help with it?
[libvirt] [PATCH] lxc: fix an improper comment in lxc_process
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
Fix an improper comment at the point where libvirt has released all
resources for an LXC domain.
The original comment says "stopped" rather than "released".
Signed-off-by: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
---
src/lxc/lxc_process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/lxc/lxc_process.c b/src/lxc/lxc_process.c
index d07ff13..7746c9b 100644
--- a/src/lxc/lxc_process.c
+++ b/src/lxc/lxc_process.c
@@ -217,7 +217,7 @@ static void virLXCProcessCleanup(virLXCDriverPtr driver,
virSystemdTerminateMachine(vm->def->name, "lxc", true);
- /* now that we know it's stopped call the hook if present */
+ /* The "release" hook cleans up additional resources */
if (virHookPresent(VIR_HOOK_DRIVER_LXC)) {
char *xml = virDomainDefFormat(vm->def, 0);
--
1.8.2.1
[libvirt] Guide to view the libvirt source.
by cooldharma06
hi all,
I am new to libvirt. Is there any guide or reference available for learning
more about libvirt and its individual programs? It would give me some idea
of libvirt and its corresponding components.
If such a guide is available, please point me to it.
I want to contribute to this community.
Thanks a lot in advance.
Regards,
cooldharma06.
[libvirt] [PATCH 0/3] Improve LXC startup error reporting
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
LXC has long suffered from pretty poor error reporting of failures
at startup. This series addresses those problems.
Daniel P. Berrange (3):
Fix exit status of lxc controller
Improve error reporting with LXC controller
Don't ignore all dbus connection errors
src/lxc/lxc_controller.c | 2 +-
src/lxc/lxc_process.c | 31 +++++++++++++++++++++++++------
src/nwfilter/nwfilter_driver.c | 5 +++--
src/util/virdbus.c | 22 +++++++++++++++++++---
src/util/virsystemd.c | 6 ++++--
5 files changed, 52 insertions(+), 14 deletions(-)
--
1.8.3.1
[libvirt] [PATCH] rpc: Retrieve peer PID via new getsockopt() for Mac
by Doug Goldstein
While LOCAL_PEERCRED on the BSDs does not return the pid of the peer,
Mac OS X 10.8 added LOCAL_PEERPID to retrieve it, so we should use that
option when it is available.
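As a standalone illustration of the call this patch wires up (not libvirt code; on Mac OS X both SOL_LOCAL and LOCAL_PEERPID come from <sys/un.h>):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Return the PID of the peer on a connected UNIX-domain socket,
 * or -1 if the lookup fails or is unsupported on this platform. */
static pid_t
get_peer_pid(int fd)
{
#ifdef LOCAL_PEERPID
    pid_t pid = -1;
    socklen_t len = sizeof(pid);

    if (getsockopt(fd, SOL_LOCAL, LOCAL_PEERPID, &pid, &len) < 0)
        return -1;
    return pid;
#else
    (void)fd;
    return -1;
#endif
}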
---
src/rpc/virnetsocket.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index e8cdfa6..09a0a12 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -1195,12 +1195,26 @@ int virNetSocketGetUNIXIdentity(virNetSocketPtr sock,
return -1;
}
- /* PID and process creation time are not supported on BSDs */
+ /* PID and process creation time are not supported on BSDs by
+ * LOCAL_PEERCRED.
+ */
*pid = -1;
*timestamp = -1;
*uid = cr.cr_uid;
*gid = cr.cr_gid;
+# ifdef LOCAL_PEERPID
+ /* Exists on Mac OS X 10.8 for retrieving the peer's PID */
+ cr_len = sizeof(*pid);
+
+ if (getsockopt(sock->fd, VIR_SOL_PEERCRED, LOCAL_PEERPID, pid, &cr_len) < 0) {
+ virReportSystemError(errno, "%s",
+ _("Failed to get client socket PID"));
+ virObjectUnlock(sock);
+ return -1;
+ }
+# endif
+
virObjectUnlock(sock);
return 0;
}
--
1.8.1.5
[libvirt] PATCH: better error checking for LOCAL_PEERCRED
by Brian Candler
I was debugging libvirt with OSX today, and got as far as finding the
problem with LOCAL_PEERCRED, then googled this only to find that Ryota
Ozaki had fixed the problems a few days ago!
However, you may still find the following patch useful. It tightens up
the checking in the LOCAL_PEERCRED block, and in particular fixes the
unlocking of the socket in the error return path for invalid groups, by
using the same logic as the SO_PEERCRED block - a 'goto cleanup' in all
return paths.
(Detail: I found that when getsockopt was being called with SOL_SOCKET,
cr_ngroups was typically <0, probably because it was uninitialised.
However, once the check for this was tightened, it hung because the
socket wasn't being unlocked on return. So it is better to (a) initialise it
to a negative value anyway, and (b) fix the return path.)
However I have not checked that NGROUPS is defined on other BSD-like
systems.
Regards,
Brian Candler.
--- src/rpc/virnetsocket.c.orig 2013-10-10 22:37:49.000000000 +0100
+++ src/rpc/virnetsocket.c 2013-10-12 22:51:57.000000000 +0100
@@ -1157,8 +1157,10 @@
{
struct xucred cr;
socklen_t cr_len = sizeof(cr);
+ int ret = -1;
virObjectLock(sock);
+ cr.cr_ngroups = -1;
# if defined(__APPLE__)
if (getsockopt(sock->fd, SOL_LOCAL, LOCAL_PEERCRED, &cr, &cr_len)
< 0) {
# else
@@ -1166,20 +1168,19 @@
# endif
virReportSystemError(errno, "%s",
_("Failed to get client socket identity"));
- virObjectUnlock(sock);
- return -1;
+ goto cleanup;
}
if (cr.cr_version != XUCRED_VERSION) {
virReportError(VIR_ERR_SYSTEM_ERROR, "%s",
_("Failed to get valid client socket identity"));
- return -1;
+ goto cleanup;
}
- if (cr.cr_ngroups == 0) {
+ if (cr.cr_ngroups <= 0 || cr.cr_ngroups > NGROUPS) {
virReportError(VIR_ERR_SYSTEM_ERROR, "%s",
_("Failed to get valid client socket identity
groups"));
- return -1;
+ goto cleanup;
}
/* PID and process creation time are not supported on BSDs */
@@ -1188,8 +1189,11 @@
*uid = cr.cr_uid;
*gid = cr.cr_gid;
+ ret = 0;
+
+cleanup:
virObjectUnlock(sock);
- return 0;
+ return ret;
}
#else
int virNetSocketGetUNIXIdentity(virNetSocketPtr sock ATTRIBUTE_UNUSED,
[libvirt] [BUG] libvirtd on destination crashes frequently while migrating VMs concurrently
by Wangyufei (A)
Hello,
I found a problem: libvirtd on the destination crashes frequently while migrating VMs concurrently. For example, if I migrate 10 VMs concurrently and ceaselessly, then after about 30 minutes libvirtd on the destination will crash. So I analyzed it and found two bugs in the migration process.
First, during the migration prepare phase on the destination, libvirtd assigns ports to the qemu processes to be started there. But the port-increment operation is not atomic, so there is a chance that multiple VMs get the same port, and only the first one can start successfully; the others fail to start. I've written a patch to solve this bug and tested it, and it works well. If only this bug existed, libvirtd would not crash. The second bug is the fatal one.
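(As an aside on bug one, a rough sketch of the kind of serialisation that closes that window; invented names and plain pthreads, not the patch that was actually posted:

#include <pthread.h>

static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned short next_port = 49152;   /* first migration port */

/* Hand out migration ports under a lock so two prepare jobs can never
 * observe and use the same value. */
static unsigned short
acquire_migration_port(void)
{
    unsigned short port;

    pthread_mutex_lock(&port_lock);
    port = next_port++;
    pthread_mutex_unlock(&port_lock);

    return port;
}

A real fix would also return ports to the pool and refuse ports that are still bound, but the essential point is that the read-and-increment must be atomic.)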
Second, I found that libvirtd crashes because of a segmentation fault produced by accessing a vm object that has already been released. It is caused by a multi-thread race: thread A accesses vm data that has already been released by thread B. In the end I proved this theory right.
Step 1: Because of bug one, the port is already occupied, so qemu on the destination fails to start and a HANGUP is delivered to libvirtd. libvirtd receives this VIR_EVENT_HANDLE_HANGUP event, and thread A, which handles events, calls qemuProcessHandleMonitorEOF as follows:
#0 qemuProcessHandleMonitorEOF (mon=0x7f4dcd9c3130, vm=0x7f4dcd9c9780)
at qemu/qemu_process.c:399
#1 0x00007f4dc18d9e87 in qemuMonitorIO (watch=68, fd=27, events=8,
opaque=0x7f4dcd9c3130) at qemu/qemu_monitor.c:668
#2 0x00007f4dccae6604 in virEventPollDispatchHandles (nfds=18,
fds=0x7f4db4017e70) at util/vireventpoll.c:500
#3 0x00007f4dccae7ff2 in virEventPollRunOnce () at util/vireventpoll.c:646
#4 0x00007f4dccae60e4 in virEventRunDefaultImpl () at util/virevent.c:273
#5 0x00007f4dccc40b25 in virNetServerRun (srv=0x7f4dcd8d26b0)
at rpc/virnetserver.c:1106
#6 0x00007f4dcd6164c9 in main (argc=3, argv=0x7fff8d8f9f88)
at libvirtd.c:1518
static int virEventPollDispatchHandles(int nfds, struct pollfd *fds) {
    ......
    /*
     * deleted flag is still false now, so we pass through to
     * qemuProcessHandleMonitorEOF
     */
    if (eventLoop.handles[i].deleted) {
        EVENT_DEBUG("Skip deleted n=%d w=%d f=%d", i,
                    eventLoop.handles[i].watch, eventLoop.handles[i].fd);
        continue;
    }
Step 2: Thread B, handling the incoming migration on the destination, sets the deleted flag in virEventPollRemoveHandle as follows:
#0 virEventPollRemoveHandle (watch=74) at util/vireventpoll.c:176
#1 0x00007f4dccae5e6f in virEventRemoveHandle (watch=74)
at util/virevent.c:97
#2 0x00007f4dc18d8ca8 in qemuMonitorClose (mon=0x7f4dbc030910)
at qemu/qemu_monitor.c:831
#3 0x00007f4dc18bec63 in qemuProcessStop (driver=0x7f4dcd9bd400,
vm=0x7f4dbc00ed20, reason=VIR_DOMAIN_SHUTOFF_FAILED, flags=0)
at qemu/qemu_process.c:4302
#4 0x00007f4dc18c1a83 in qemuProcessStart (conn=0x7f4dbc031020,
driver=0x7f4dcd9bd400, vm=0x7f4dbc00ed20,
migrateFrom=0x7f4dbc01af90 "tcp:[::]:49152", stdin_fd=-1,
stdin_path=0x0, snapshot=0x0,
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, flags=6)
at qemu/qemu_process.c:4145
#5 0x00007f4dc18cc688 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,
Step 3: Thread B cleans up the vm in qemuMigrationPrepareAny after qemuProcessStart has failed.
#0 virDomainObjDispose (obj=0x7f4dcd9c9780) at conf/domain_conf.c:2009
#1 0x00007f4dccb0ccd9 in virObjectUnref (anyobj=0x7f4dcd9c9780)
at util/virobject.c:266
#2 0x00007f4dccb42340 in virDomainObjListRemove (doms=0x7f4dcd9bd4f0,
dom=0x7f4dcd9c9780) at conf/domain_conf.c:2342
#3 0x00007f4dc189ac33 in qemuDomainRemoveInactive (driver=0x7f4dcd9bd400,
vm=0x7f4dcd9c9780) at qemu/qemu_domain.c:1993
#4 0x00007f4dc18ccad5 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,
Step 4: Thread A accesses priv, which has already been freed by thread B, and libvirtd crashes. Boom!
static void
qemuProcessHandleMonitorEOF(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                            virDomainObjPtr vm)
{
    virQEMUDriverPtr driver = qemu_driver;
    virDomainEventPtr event = NULL;
    qemuDomainObjPrivatePtr priv;
    int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN;
    int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
    const char *auditReason = "shutdown";

    VIR_DEBUG("Received EOF on %p '%s'", vm, vm->def->name);

    virObjectLock(vm);

    priv = vm->privateData;

    (gdb) p priv
    $1 = (qemuDomainObjPrivatePtr) 0x0

    if (priv->beingDestroyed) {
In the end, if anything goes wrong that makes qemuProcessStart fail during migration on the destination, we are in big trouble: freed memory gets accessed. I did not find any lock or flag that could stop this from happening. Please help me out, thanks a lot.
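One pattern that would prevent this, and which libvirt's own virObjectRef()/virObjectUnref() (visible in the step 3 backtrace) already support, is to keep the domain object refcounted so the EOF callback in thread A always works on memory that thread B cannot free underneath it. A rough, hypothetical sketch of that pattern, not libvirt's actual API:

#include <stdlib.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   /* initialised when the object is created */
    int refs;               /* starts at 1, held by the domain list */
    void *privateData;      /* stands in for qemuDomainObjPrivatePtr */
} DomainObj;

static void
domain_ref(DomainObj *vm)
{
    pthread_mutex_lock(&vm->lock);
    vm->refs++;
    pthread_mutex_unlock(&vm->lock);
}

static void
domain_unref(DomainObj *vm)
{
    int last;

    pthread_mutex_lock(&vm->lock);
    last = (--vm->refs == 0);
    pthread_mutex_unlock(&vm->lock);

    if (last) {
        free(vm->privateData);  /* disposed only when the last user is gone */
        free(vm);
    }
}

If the monitor held such a reference (taken when its callback is registered and dropped only after qemuMonitorClose()), then virDomainObjListRemove() in step 3 could not free the object while thread A in step 1 is still inside the handler, and priv could not be NULL there.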
Best Regards,
-WangYufei