[libvirt] [PATCH] conf: fix memleak in qemuRestoreCgroupState
by Luyao Huang
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x52A026F: virReallocN (viralloc.c:245)
by 0x52BFCB5: saferead_lim (virfile.c:1268)
by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
by 0x52C019A: virFileReadAll (virfile.c:1351)
by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
by 0x52FD171: virThreadHelper (virthread.c:206)
by 0x82B8DF4: start_thread (pthread_create.c:308)
by 0x85C31AC: clone (clone.S:113)
Signed-off-by: Luyao Huang <lhuang(a)redhat.com>
---
src/qemu/qemu_cgroup.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 7d64ce7..f872525 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -796,6 +796,7 @@ qemuRestoreCgroupState(virDomainObjPtr vm)
virCgroupSetCpusetMems(cgroup_temp, nodeset) < 0)
goto cleanup;
+ VIR_FREE(nodeset);
virCgroupFree(&cgroup_temp);
}
@@ -806,6 +807,7 @@ qemuRestoreCgroupState(virDomainObjPtr vm)
virCgroupSetCpusetMems(cgroup_temp, nodeset) < 0)
goto cleanup;
+ VIR_FREE(nodeset);
virCgroupFree(&cgroup_temp);
}
--
1.8.3.1
9 years, 8 months
[libvirt] [PATCHv2 0/2] vbox: Add support for virDomainSendKey
by Dawid Zamirski
These patches implement support for virDomainSendKey in the VBOX driver. Since the
VBOX SDK does not support "holdtime" natively, it is simulated by using usleep to
wait for that duration before the "key-up" scancodes are sent. The "key-up"
scancodes are generated automatically by adding 0x80 to the "key-down" scancodes.
This is done to match the behavior of the QEMU driver, and therefore differs from
what the native VBoxManage command does - there, one has to type in the "key-up"
scancodes explicitly and there is no hold time support at all.
---
v1 (for reference):
https://www.redhat.com/archives/libvir-list/2015-March/msg01028.html
v2:
- add virReportError in all potentially failing code paths
- coding style adjustment
- mark for 1.2.15
Dawid Zamirski (2):
vbox: Register IKeyboard with the unified API.
vbox: Implement virDomainSendKey
src/vbox/vbox_common.c | 120 ++++++++++++++++++++++++++++++++++++++++++
src/vbox/vbox_common.h | 1 +
src/vbox/vbox_tmpl.c | 27 ++++++++++
src/vbox/vbox_uniformed_api.h | 8 +++
4 files changed, 156 insertions(+)
--
2.3.4
[libvirt] [PATCH] qemu_driver: check caps after starting block job
by Michael Chapman
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:
# virsh domstate example
shut off
# virsh blockjob example vda
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.
Signed-off-by: Michael Chapman <mike(a)very.puzzling.org>
---
src/qemu/qemu_driver.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index becf415..cb9295e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16476,12 +16476,6 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path,
}
priv = vm->privateData;
- if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC) &&
- !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_SYNC)) {
- virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
- _("block jobs not supported with this QEMU binary"));
- goto cleanup;
- }
if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
@@ -16492,6 +16486,13 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path,
goto endjob;
}
+ if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC) &&
+ !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_SYNC)) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("block jobs not supported with this QEMU binary"));
+ goto endjob;
+ }
+
device = qemuDiskPathToAlias(vm, path, &idx);
if (!device)
goto endjob;
--
2.1.0
[libvirt] [PATCH] qemu_migrate: use nested job when adding NBD to cookie
by Michael Chapman
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike(a)very.puzzling.org>
---
src/qemu/qemu_migration.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 5607d1a..d770aea 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -571,7 +571,9 @@ qemuMigrationCookieAddNBD(qemuMigrationCookiePtr mig,
if (!(stats = virHashCreate(10, virHashValueFree)))
goto cleanup;
- qemuDomainObjEnterMonitor(driver, vm);
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ priv->job.asyncJob) < 0)
+ goto cleanup;
rc = qemuMonitorBlockStatsUpdateCapacity(priv->mon, stats, false);
if (qemuDomainObjExitMonitor(driver, vm) < 0)
goto cleanup;
--
2.1.0
[libvirt] [PATCH v2 0/3] A few NUMA fixes
by Michal Privoznik
diff to v1:
-reworked to follow Jan's review. Hopefully.
Michal Privoznik (3):
vircgroup: Introduce virCgroupControllerAvailable
qemuProcessHook: Call virNuma*() iff needed
virLXCControllerSetupResourceLimits: Call virNuma*() iff needed
src/libvirt_private.syms | 1 +
src/lxc/lxc_controller.c | 22 ++++++++++++++++------
src/qemu/qemu_process.c | 21 +++++++++++++++++----
src/util/vircgroup.c | 19 +++++++++++++++++++
src/util/vircgroup.h | 1 +
tests/vircgrouptest.c | 31 +++++++++++++++++++++++++++++++
6 files changed, 85 insertions(+), 10 deletions(-)
--
2.0.5
[libvirt] [PATCH 0/3] qemu: fix broken block job handling
by Peter Krempa
Block job handling violates our usage of domain jobs and changes disk source
definition behind our back.
Peter Krempa (3):
qemu: process: Export qemuProcessFindDomainDiskByAlias
qemu: event: Don't fiddle with disk backing trees without a job
qemu: Disallow concurrent block jobs on a single disk
src/conf/domain_conf.h | 1 +
src/qemu/qemu_domain.c | 23 +++++++
src/qemu/qemu_domain.h | 4 ++
src/qemu/qemu_driver.c | 170 +++++++++++++++++++++++++++++++++++++++++++-----
src/qemu/qemu_process.c | 131 +++++++------------------------------
src/qemu/qemu_process.h | 3 +
6 files changed, 211 insertions(+), 121 deletions(-)
--
2.2.2
[libvirt] [PATCH] parallels: delete old networks in prlsdkDoApplyConfig before adding new ones
by Maxim Nestratov
In order to change an existing domain, we delete all existing devices and add new
ones from scratch. In the case of network devices, we should also delete the
corresponding virtual networks (if any) before removing the actual devices from the
XML. In this patch, we do so by extending prlsdkDoApplyConfig with a new parameter
that stands for the old XML, and calling prlsdkDelNet whenever the old XML is specified.
Signed-off-by: Maxim Nestratov <mnestratov(a)parallels.com>
---
src/parallels/parallels_sdk.c | 24 +++++++++++++++---------
1 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/src/parallels/parallels_sdk.c b/src/parallels/parallels_sdk.c
index c36b772..64a2d15 100644
--- a/src/parallels/parallels_sdk.c
+++ b/src/parallels/parallels_sdk.c
@@ -2935,7 +2935,8 @@ prlsdkAddFS(PRL_HANDLE sdkdom, virDomainFSDefPtr fs)
static int
prlsdkDoApplyConfig(virConnectPtr conn,
PRL_HANDLE sdkdom,
- virDomainDefPtr def)
+ virDomainDefPtr def,
+ virDomainDefPtr olddef)
{
PRL_RESULT pret;
size_t i;
@@ -2997,6 +2998,16 @@ prlsdkDoApplyConfig(virConnectPtr conn,
if (prlsdkRemoveBootDevices(sdkdom) < 0)
goto error;
+ if(olddef) {
+ for (i = 0; i < olddef->nnets; i++)
+ prlsdkDelNet(conn->privateData, olddef->nets[i]);
+ }
+
+ for (i = 0; i < def->nnets; i++) {
+ if (prlsdkAddNet(sdkdom, conn->privateData, def->nets[i]) < 0)
+ goto error;
+ }
+
if (prlsdkApplyGraphicsParams(sdkdom, def) < 0)
goto error;
@@ -3008,11 +3019,6 @@ prlsdkDoApplyConfig(virConnectPtr conn,
goto error;
}
- for (i = 0; i < def->nnets; i++) {
- if (prlsdkAddNet(sdkdom, conn->privateData, def->nets[i]) < 0)
- goto error;
- }
-
for (i = 0; i < def->ndisks; i++) {
bool bootDisk = false;
@@ -3060,7 +3066,7 @@ prlsdkApplyConfig(virConnectPtr conn,
if (PRL_FAILED(waitJob(job, privconn->jobTimeout)))
return -1;
- ret = prlsdkDoApplyConfig(conn, sdkdom, new);
+ ret = prlsdkDoApplyConfig(conn, sdkdom, new, dom->def);
if (ret == 0) {
job = PrlVm_CommitEx(sdkdom, PVCF_DETACH_HDD_BUNDLE);
@@ -3100,7 +3106,7 @@ prlsdkCreateVm(virConnectPtr conn, virDomainDefPtr def)
pret = PrlVmCfg_SetOfflineManagementEnabled(sdkdom, 0);
prlsdkCheckRetGoto(pret, cleanup);
- ret = prlsdkDoApplyConfig(conn, sdkdom, def);
+ ret = prlsdkDoApplyConfig(conn, sdkdom, def, NULL);
if (ret)
goto cleanup;
@@ -3162,7 +3168,7 @@ prlsdkCreateCt(virConnectPtr conn, virDomainDefPtr def)
}
- ret = prlsdkDoApplyConfig(conn, sdkdom, def);
+ ret = prlsdkDoApplyConfig(conn, sdkdom, def, NULL);
if (ret)
goto cleanup;
--
1.7.1
[libvirt] [PATCH 0/4] Various bugfixes for cancelled VM migrations
by Michael Chapman
This patch series contains fixes for several bugs I encountered while deliberately forcing a VM migration to abort by killing the libvirt client. My VM has storage on local disk, which needs to be mirrored during the migration, and this gives ample time for this abort to take place. The particular bug I encountered depended on precisely which phase the migration had made it to (e.g. whether disk mirroring had actually commenced).
Patch 1 fixes a crash on the destination libvirt daemon due to a use-after-free of the domain object. Patches 2 and 4 fix some bugs related to the close callback handling on the source libvirt side. Patch 3 ensures that the VM on the source libvirt does not get into an invalid state if its migration is aborted during disk mirroring.
All patches are independent of one another and can be applied separately.
Michael Chapman (4):
qemu: fix crash in qemuProcessAutoDestroy
qemu: fix error propagation in qemuMigrationBegin
qemu: fix race between disk mirror fail and cancel
util: fix removal of callbacks in virCloseCallbacksRun
src/qemu/qemu_domain.c | 5 +++++
src/qemu/qemu_migration.c | 12 +++++++++++-
src/qemu/qemu_process.c | 4 +++-
src/util/virclosecallbacks.c | 10 ++++++----
4 files changed, 25 insertions(+), 6 deletions(-)
--
2.1.0
[libvirt] [PATCH 0/2] fail out if enable userns but disable netns
by Chen Hanxiao
Chen Hanxiao (2):
Revert "LXC: create a bind mount for sysfs when enable userns but
disable netns"
LXC: make sure netns been enabled when trying to enable userns
src/lxc/lxc_container.c | 45 ++++++++++++++++-----------------------------
1 file changed, 16 insertions(+), 29 deletions(-)
--
2.1.0
[libvirt] [RFC PATCH 0/2] Vbox: Add support for virDomainSendKey
by Dawid Zamirski
Hello,
These small patches implement virDomainSendKey support in the VBOX driver. However,
the VBOX SDK does not support "holdtime", so I used usleep to wait for that time
before sending the "key-up" scancodes. This makes it behave similarly to the QEMU
driver; however, I'm not sure if this way of handling it would be preferred.
Another option would be to ignore the holdtime argument and make virDomainSendKey
work the same as the VBoxManage CLI tool, where one has to send
"key-down" scancodes followed by "key-up" scancodes. For these RFC patches, I've
chosen to make it work as close to the public API documentation as possible.
Dawid Zamirski (2):
vbox: Register IKeyboard with the unified API.
vbox: Implement virDomainSendKey
src/vbox/vbox_common.c | 107 ++++++++++++++++++++++++++++++++++++++++++
src/vbox/vbox_common.h | 1 +
src/vbox/vbox_tmpl.c | 27 +++++++++++
src/vbox/vbox_uniformed_api.h | 8 ++++
4 files changed, 143 insertions(+)
--
2.3.3