[libvirt] [PATCH 0/4] Couple of trivial c89 style fixes
by Michal Privoznik
Michal Privoznik (4):
examples: Properly include getopt.h
vmwarevertest: drop VIR_FROM_THIS definition
lib: Fix c99 style comments
cpu: Avoid c99 style of assembler
examples/admin/logging.c | 14 +++++++-------
examples/openauth/openauth.c | 2 +-
src/cpu/cpu_ppc64.c | 4 ++--
src/cpu/cpu_x86.c | 44 +++++++++++++++++++++----------------------
src/lxc/lxc_controller.c | 2 +-
src/remote/remote_driver.c | 2 +-
src/rpc/virnetservermdns.c | 2 +-
src/security/security_stack.c | 8 ++++----
src/uml/uml_conf.c | 2 +-
src/util/virhostcpu.c | 4 ++--
src/util/virhostmem.c | 4 ++--
src/vbox/vbox_snapshot_conf.c | 2 +-
tests/eventtest.c | 2 +-
tests/virhostcpumock.c | 2 +-
tests/vmwarevertest.c | 2 --
15 files changed, 47 insertions(+), 49 deletions(-)
--
2.10.2
7 years, 8 months
[libvirt] [PATCH 0/2] qemu: migration: bugfixes for cancelling drive mirror
by Nikolay Shirokovskiy
Nikolay Shirokovskiy (2):
qemu: take current async job into account in qemuBlockNodeNamesDetect
qemu: migration: fix race on cancelling drive mirror
src/qemu/qemu_block.c | 6 ++++--
src/qemu/qemu_block.h | 4 +++-
src/qemu/qemu_blockjob.c | 9 ++++++---
src/qemu/qemu_blockjob.h | 4 ++++
src/qemu/qemu_driver.c | 11 ++++++-----
src/qemu/qemu_migration.c | 43 +++++++++++++++++++++++++++++++------------
src/qemu/qemu_process.c | 2 +-
7 files changed, 55 insertions(+), 24 deletions(-)
--
1.8.3.1
Re: [libvirt] [PATCH v3] Add support for Veritas HyperScale (VxHS) block device protocol
by ashish mittal
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs
patch to support passing TLS X509 arguments to qemu, similar to the
following -
Sample QEMU command line passing TLS credentials to the VxHS block
device (run in secure mode):
./qemu-io --object
tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read
-v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999",
"vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but was not able to
locate it. Any pointers would be appreciated.
Thanks,
Ashish
On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan(a)redhat.com> wrote:
> [...]
> Pressed send too soon, sigh.
>
>
>>>>
>>>> #1. Based on Peter's v2 comments, we don't want to support the
>>>> older/legacy syntax for VxHS, so it's something that should be removed -
>>>> although we should check for it being present and fail if found.
>>>>
>>>
>>> I am testing with changed code to return error if legacy syntax is
>>> found for VxHS. Also added a test case to check for failure on legacy
>>> syntax and it seems to pass (test #41 below).
>>>
>>> Then I added a pass test case to check conversion from new native
>>> syntax to XML (test #40 below). That test fails with error
>>> 'qemuParseCommandLineDisk:901 : internal error: missing file parameter
>>> in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b...'
>>
>> The qemu_parse_command.c changes, while nice to have, weren't even updated
>> when multiple gluster servers were added (e.g. commit id '' or '7b7da9e28')
>> Check the changes to add the new s
>>
>> IOW: This code knows how to parse something like:
>>
>> -drive
>> 'file=gluster+unix:///Volume2/Image?socket=/path/to/sock,format=raw,if=none,id=drive-virtio-disk1'
>>
>> but it's clueless for:
>>
>> -drive file.driver=gluster,file.volume=Volume3,file.path=/Image.qcow2,\
>> file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
>> file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
>> file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
>> if=none,id=drive-virtio-disk2 \
>> -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,\
>> id=virtio-disk2
>>
>> See
>>>
>>> Looks like none of the existing tests in qemuargv2xmltest test for the
>>> parsing of the new syntax, and qemuParseCommandLineDisk() expects to find
>>> 'file=' for a drive or it errors out. If this is true, will it be able
>>> to parse the new syntax? Some help here please!
>
> So I wouldn't expect the VxHS code to be able to do that unless you
> wanted to be adventurous. The good news is that this code is primarily
> for developers that need to take a qemu command line to generate the
> libvirt syntax. It has not really been kept up to date with all the most
> recent command line changes. I started to try over a year ago, but got
> very side tracked.
>
>>>
>>> Output from the newly added test cases (40 should pass and 41 checks
>>> for error) :
>>>
>>> 40) QEMU ARGV-2-XML disk-drive-network-vxhs
>>> ... Got unexpected warning from qemuParseCommandLineString:
>>> 2017-01-28 00:57:30.814+0000: 10391: info : libvirt version: 3.0.0
>>> 2017-01-28 00:57:30.814+0000: 10391: info : hostname: localhost.localdomain
>>> 2017-01-28 00:57:30.814+0000: 10391: error :
>>> qemuParseCommandLineDisk:901 : internal error: missing file parameter
>>> in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
>>> libvirt: QEMU Driver error : internal error: missing file parameter in
>>> drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
>>> FAILED
>>>
>>> 41) QEMU ARGV-2-XML disk-drive-network-vxhs-fail
>>> ... Got expected error from qemuParseCommandLineString:
>>> libvirt: QEMU Driver error : internal error: VxHS protocol does not
>>> support URI syntax
>>> 'vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251'
>>> OK
>>> 42) QEMU ARGV-2-XML disk-usb ... OK
>>>
>>>
>>>
>>>> #2. Is the desire to ever support more than 1 host? If not, then is the
>>>> "server" syntax you've borrowed from the Gluster code necessary? Could
>>>> you just go with the single "host" like NBD and SSH. As it relates to
>>>> the qemu command line - I'm not quite as clear. From the example I see
>>>> in commit id '7b7da9e28', the gluster syntax would have:
>>>>
>>>
>>> Present understanding is to have only one host. You are right, the
>>> "server" part is not necessary. Will have to check with the qemu
>>> community on this change.
>>>
>>>> +file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
>>>> +file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
>>>> +file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
>>>>
>>>> whereas, the VxHS syntax is:
>>>> +file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
>>>>
>>>> FWIW: I also note there is no ".type=tcp" in your output - so perhaps
>>>> the "default" is tcp unless otherwise specified, but I'm not sure of the
>>>> qemu syntax requirements in this area. I assume that since there's only
>>>> 1 server, the ".0, .1, .2" become unnecessary (something added by commit
>>>> id 'f1bbc7df4' for multiple gluster hosts).
>>>>
>>>
>>> That's correct. TCP is the default.
>>>
>>>> I haven't closely followed the qemu syntax discussion, but would it
>>>> be possible to use:
>>>>
>>>> +file.host=192.168.0.1,file.port=9999
>>>>
>>>
>>> That is correct. The above syntax would also work for us. I will pose this
>>> suggestion to the qemu community and update with their response.
>>>
>
> It's not that important... I was looking for a simplification and
> generation of only what's required. You can continue using the server
> syntax - perhaps just leave a note/comment in the code indicating the
> decision point and move on.
>
> [...]
>
> John
[libvirt] [PATCH] qemu: numa: Don't return automatic nodeset for inactive domain
by Peter Krempa
qemuDomainGetNumaParameters would return the automatic nodeset even for
the persistent config if the domain was running. This is incorrect since
the automatic nodeset will be re-queried upon starting the vm.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1445325
---
src/qemu/qemu_driver.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e39de625d..1ba3e0943 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9461,6 +9461,8 @@ qemuDomainGetNumaParameters(virDomainPtr dom,
char *nodeset = NULL;
int ret = -1;
virDomainDefPtr def = NULL;
+ bool live = false;
+ virBitmapPtr autoNodeset = NULL;
virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
VIR_DOMAIN_AFFECT_CONFIG |
@@ -9473,9 +9475,12 @@ qemuDomainGetNumaParameters(virDomainPtr dom,
if (virDomainGetNumaParametersEnsureACL(dom->conn, vm->def) < 0)
goto cleanup;
- if (!(def = virDomainObjGetOneDef(vm, flags)))
+ if (!(def = virDomainObjGetOneDefState(vm, flags, &live)))
goto cleanup;
+ if (live)
+ autoNodeset = priv->autoNodeset;
+
if ((*nparams) == 0) {
*nparams = QEMU_NB_NUMA_PARAM;
ret = 0;
@@ -9496,8 +9501,7 @@ qemuDomainGetNumaParameters(virDomainPtr dom,
break;
case 1: /* fill numa nodeset here */
- nodeset = virDomainNumatuneFormatNodeset(def->numa,
- priv->autoNodeset, -1);
+ nodeset = virDomainNumatuneFormatNodeset(def->numa, autoNodeset, -1);
if (!nodeset ||
virTypedParameterAssign(param, VIR_DOMAIN_NUMA_NODESET,
VIR_TYPED_PARAM_STRING, nodeset) < 0)
--
2.12.2
[libvirt] [PATCH] qemu: Properly reset non-p2p migration
by Jiri Denemark
While peer-to-peer migration enters the Confirm phase even if the
Perform phase fails, the client which initiated a non-p2p migration will
never call the virDomainMigrateConfirm* API if the Perform phase failed.
Thus we need to explicitly reset migration before reporting a failure
from the Perform phase API.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_migration.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 09adb0484..42da706c9 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -4914,10 +4914,13 @@ qemuMigrationPerformPhase(virQEMUDriverPtr driver,
goto endjob;
endjob:
- if (ret < 0)
+ if (ret < 0) {
+ qemuMigrationReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT);
qemuMigrationJobFinish(driver, vm);
- else
+ } else {
qemuMigrationJobContinue(vm);
+ }
+
if (!virDomainObjIsActive(vm))
qemuDomainRemoveInactive(driver, vm);
--
2.12.2
[libvirt] [PATCH] util: Remove unused variable @errbuf in virPCIGetDeviceAddressFromSysfsLink
by Wang King
From: w00185384 <king.wang(a)huawei.com>
Since refactoring by commit id 'a7035662', @errbuf is no longer used.
---
src/util/virpci.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/util/virpci.c b/src/util/virpci.c
index c89b94b..83c7e74 100644
--- a/src/util/virpci.c
+++ b/src/util/virpci.c
@@ -2618,7 +2618,6 @@ virPCIGetDeviceAddressFromSysfsLink(const char *device_link)
virPCIDeviceAddressPtr bdf = NULL;
char *config_address = NULL;
char *device_path = NULL;
- char errbuf[64];
if (!virFileExists(device_link)) {
VIR_DEBUG("'%s' does not exist", device_link);
@@ -2627,7 +2626,6 @@ virPCIGetDeviceAddressFromSysfsLink(const char *device_link)
device_path = canonicalize_file_name(device_link);
if (device_path == NULL) {
- memset(errbuf, '\0', sizeof(errbuf));
virReportSystemError(errno,
_("Failed to resolve device link '%s'"),
device_link);
--
2.8.3
[libvirt] [PATCH] locking: Add support for sanlock_strerror
by Jiri Denemark
The recently added sanlock_strerror function can be used to translate
sanlock's numeric errors into human-readable strings.
https://bugzilla.redhat.com/show_bug.cgi?id=1409511
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
m4/virt-sanlock.m4 | 7 ++
src/locking/lock_driver_sanlock.c | 154 ++++++++++++++++++++++++++------------
2 files changed, 115 insertions(+), 46 deletions(-)
diff --git a/m4/virt-sanlock.m4 b/m4/virt-sanlock.m4
index e4476cae4..00de7980e 100644
--- a/m4/virt-sanlock.m4
+++ b/m4/virt-sanlock.m4
@@ -61,6 +61,13 @@ AC_DEFUN([LIBVIRT_CHECK_SANLOCK],[
[whether sanlock supports sanlock_write_lockspace])
fi
+ AC_CHECK_LIB([sanlock_client], [sanlock_strerror],
+ [sanlock_strerror=yes], [sanlock_strerror=no])
+ if test "x$sanlock_strerror" = "xyes" ; then
+ AC_DEFINE_UNQUOTED([HAVE_SANLOCK_STRERROR], 1,
+ [whether sanlock supports sanlock_strerror])
+ fi
+
CPPFLAGS="$old_cppflags"
LIBS="$old_libs"
fi
diff --git a/src/locking/lock_driver_sanlock.c b/src/locking/lock_driver_sanlock.c
index 280219f72..b5e69c472 100644
--- a/src/locking/lock_driver_sanlock.c
+++ b/src/locking/lock_driver_sanlock.c
@@ -97,6 +97,25 @@ struct _virLockManagerSanlockPrivate {
bool registered;
};
+
+static bool
+ATTRIBUTE_NONNULL(2)
+virLockManagerSanlockError(int err,
+ char **message)
+{
+ if (err <= -200) {
+#if HAVE_SANLOCK_STRERROR
+ ignore_value(VIR_STRDUP_QUIET(*message, sanlock_strerror(err)));
+#else
+ ignore_value(virAsprintfQuiet(message, _("sanlock error %d"), err));
+#endif
+ return true;
+ } else {
+ return false;
+ }
+}
+
+
/*
* sanlock plugin for the libvirt virLockManager API
*/
@@ -263,14 +282,17 @@ virLockManagerSanlockSetupLockspace(virLockManagerSanlockDriverPtr driver)
}
if ((rv = sanlock_align(&ls.host_id_disk)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to query sector size %s: error %d"),
- path, rv);
- else
+ _("Unable to query sector size %s: %s"),
+ path, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to query sector size %s"),
path);
+ }
goto error_unlink;
}
@@ -292,14 +314,17 @@ virLockManagerSanlockSetupLockspace(virLockManagerSanlockDriverPtr driver)
}
if ((rv = virLockManagerSanlockInitLockspace(driver, &ls) < 0)) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to initialize lockspace %s: error %d"),
- path, rv);
- else
+ _("Unable to initialize lockspace %s: %s"),
+ path, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to initialize lockspace %s"),
path);
+ }
goto error_unlink;
}
VIR_DEBUG("Lockspace %s has been initialized", path);
@@ -362,14 +387,17 @@ virLockManagerSanlockSetupLockspace(virLockManagerSanlockDriverPtr driver)
goto retry;
}
if (-rv != EEXIST) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to add lockspace %s: error %d"),
- path, rv);
- else
+ _("Unable to add lockspace %s: %s"),
+ path, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to add lockspace %s"),
path);
+ }
goto error;
} else {
VIR_DEBUG("Lockspace %s is already registered", path);
@@ -694,14 +722,17 @@ virLockManagerSanlockCreateLease(virLockManagerSanlockDriverPtr driver,
}
if ((rv = sanlock_align(&res->disks[0])) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to query sector size %s: error %d"),
- res->disks[0].path, rv);
- else
+ _("Unable to query sector size %s: %s"),
+ res->disks[0].path, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to query sector size %s"),
res->disks[0].path);
+ }
goto error_unlink;
}
@@ -723,14 +754,17 @@ virLockManagerSanlockCreateLease(virLockManagerSanlockDriverPtr driver,
}
if ((rv = sanlock_init(NULL, res, 0, 0)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to initialize lease %s: error %d"),
- res->disks[0].path, rv);
- else
+ _("Unable to initialize lease %s: %s"),
+ res->disks[0].path, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to initialize lease %s"),
res->disks[0].path);
+ }
goto error_unlink;
}
VIR_DEBUG("Lease %s has been initialized", res->disks[0].path);
@@ -867,10 +901,12 @@ virLockManagerSanlockRegisterKillscript(int sock,
}
if ((rv = sanlock_killpath(sock, 0, path, args)) < 0) {
- if (rv <= -200) {
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to register lock failure action:"
- " error %d"), rv);
+ _("Failed to register lock failure action: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
} else {
virReportSystemError(-rv, "%s",
_("Failed to register lock failure"
@@ -934,13 +970,16 @@ static int virLockManagerSanlockAcquire(virLockManagerPtr lock,
if (priv->vm_pid == getpid()) {
VIR_DEBUG("Register sanlock %d", flags);
if ((sock = sanlock_register()) < 0) {
- if (sock <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(sock, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to open socket to sanlock daemon: error %d"),
- sock);
- else
+ _("Failed to open socket to sanlock daemon: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-sock, "%s",
_("Failed to open socket to sanlock daemon"));
+ }
goto error;
}
@@ -971,14 +1010,17 @@ static int virLockManagerSanlockAcquire(virLockManagerPtr lock,
if ((rv = sanlock_state_to_args((char *)state,
&res_count,
&res_args)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to parse lock state %s: error %d"),
- state, rv);
- else
+ _("Unable to parse lock state %s: %s"),
+ state, NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv,
_("Unable to parse lock state %s"),
state);
+ }
goto error;
}
res_free = true;
@@ -992,12 +1034,16 @@ static int virLockManagerSanlockAcquire(virLockManagerPtr lock,
if ((rv = sanlock_acquire(sock, priv->vm_pid, 0,
priv->res_count, priv->res_args,
opt)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_RESOURCE_BUSY,
- _("Failed to acquire lock: error %d"), rv);
- else
+ _("Failed to acquire lock: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv, "%s",
_("Failed to acquire lock"));
+ }
goto error;
}
}
@@ -1016,12 +1062,16 @@ static int virLockManagerSanlockAcquire(virLockManagerPtr lock,
if (flags & VIR_LOCK_MANAGER_ACQUIRE_RESTRICT) {
if ((rv = sanlock_restrict(sock, SANLK_RESTRICT_ALL)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to restrict process: error %d"), rv);
- else
+ _("Failed to restrict process: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv, "%s",
_("Failed to restrict process"));
+ }
goto error;
}
}
@@ -1068,12 +1118,16 @@ static int virLockManagerSanlockRelease(virLockManagerPtr lock,
if (state) {
if ((rv = sanlock_inquire(-1, priv->vm_pid, 0, &res_count, state)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to inquire lock: error %d"), rv);
- else
+ _("Failed to inquire lock: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv, "%s",
_("Failed to inquire lock"));
+ }
return -1;
}
@@ -1083,12 +1137,16 @@ static int virLockManagerSanlockRelease(virLockManagerPtr lock,
if ((rv = sanlock_release(-1, priv->vm_pid, 0, res_count,
priv->res_args)) < 0) {
- if (rv <= -200)
+ char *err = NULL;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to release lock: error %d"), rv);
- else
+ _("Failed to release lock: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv, "%s",
_("Failed to release lock"));
+ }
return -1;
}
@@ -1118,12 +1176,16 @@ static int virLockManagerSanlockInquire(virLockManagerPtr lock,
}
if ((rv = sanlock_inquire(-1, priv->vm_pid, 0, &res_count, state)) < 0) {
- if (rv <= -200)
+ char *err;
+ if (virLockManagerSanlockError(rv, &err)) {
virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to inquire lock: error %d"), rv);
- else
+ _("Failed to inquire lock: %s"),
+ NULLSTR(err));
+ VIR_FREE(err);
+ } else {
virReportSystemError(-rv, "%s",
_("Failed to inquire lock"));
+ }
return -1;
}
--
2.12.2
[libvirt] [PATCH] qemu: Ignore missing query-migrate-parameters
by Jiri Denemark
Trivially, no migration parameters are supported when the
query-migrate-parameters QMP command is missing. There's no need to
report an error in such a case, especially when doing so breaks
compatibility with old QEMU.
https://bugzilla.redhat.com/show_bug.cgi?id=1441934
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_monitor_json.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 98e3c53f5..083729003 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2660,6 +2660,11 @@ qemuMonitorJSONGetMigrationParams(qemuMonitorPtr mon,
if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
goto cleanup;
+ if (qemuMonitorJSONHasError(reply, "CommandNotFound")) {
+ ret = 0;
+ goto cleanup;
+ }
+
if (qemuMonitorJSONCheckError(cmd, reply) < 0)
goto cleanup;
--
2.12.2
[libvirt] [RFC] migration: set cpu throttle value by workload
by Chao Fan
Hi all,
When migrating a guest which consumes too much CPU & memory, dirty
pages amount will increase significantly, so does the migration
time, migration can not even complete, at worst.
So I made an RFC patch in QEMU to set cpu throttle value by workload
when migration. The test result and the RFC patch are here:
https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03479.html
But this idea was not accepted by the QEMU community. So I want to do a
similar feature in libvirt:
Step 1: Add an --auto-converge-smart parameter to migrate.
Step 2: Add a timer in the qemu driver that runs 'info migrate' every
0.5 or 1 second during migration to check whether the
dirty-pages-rate has been updated.
Step 3: If it was updated, change the cpu throttle value according to
the dirty-pages-rate and page-size via
'migrate_set_parameter cpu-throttle-increment'.
I think this feature makes auto-converge smarter than leaving the
cpu throttle value at the default 20/10 or having users set it.
It can also save time.
Any comments will be welcome.
Thanks,
Chao Fan