[libvirt] [PATCH v2 0/8] libxl: PVHv2 support
by Marek Marczykowski-Górecki
This is a respin of my old PVHv1 patch[1], converted to PVHv2.
Should the code use the "PVH" name (as libxl does internally), or "PVHv2", as
in many places in the Xen documentation? I've chosen the former, but would
like to confirm it.
Also, I'm not sure about VIR_DOMAIN_OSTYPE_XENPVH (as discussed on the PVHv1
patch) - while it would be messy in many cases, there is
libxl_domain_build_info.u.{hvm,pv,pvh}, which would be good not to mix up.
Also, PVHv2 needs different kernel features than PV (CONFIG_XEN_PVH vs
CONFIG_XEN_PV), so keeping the same VIR_DOMAIN_OSTYPE_XEN could be
confusing.
On the other hand, libxl_domain_build_info.u.pv is used in very few places (one
section of libxlMakeDomBuildInfo), so guarding u.hvm access with
VIR_DOMAIN_OSTYPE_HVM may be enough.
For now I've reused VIR_DOMAIN_OSTYPE_XEN - in the driver itself, most of
the code is the same as for PV.
Since PVHv2 relies on features in newer Xen versions, I also needed to convert
some older code. For example, b_info->u.hvm.nested_hvm was deprecated in favor
of b_info->nested_hvm. While the code does handle both old and new versions
(obviously refusing PVHv2 if Xen is too old), this isn't the case for the
tests. How should this be handled, if at all?
The first few preparatory patches can be applied independently.
[1] https://www.redhat.com/archives/libvir-list/2016-August/msg00376.html
Changes in v2:
- drop the "docs: don't refer to deprecated 'linux' ostype in example" patch -
migrating further away from the "linux" os type is off-topic for this series
and apparently a controversial thing
- drop "docs: update domain schema for machine attribute" patch -
already applied
- apply review comments from Jim
- rebase on master
Marek Marczykowski-Górecki (8):
docs: add documentation of arch element of capabilities.xml
libxl: set shadow memory for any guest type, not only HVM
libxl: prefer new location of nested_hvm in libxl_domain_build_info
libxl: reorder libxlMakeDomBuildInfo for upcoming PVH support
libxl: add support for PVH
tests: add basic Xen PVH test
xenconfig: add support for parsing type= xl config entry
xenconfig: add support for type="pvh"
docs/formatcaps.html.in | 22 ++++-
docs/formatdomain.html.in | 11 +-
docs/schemas/domaincommon.rng | 1 +-
src/libxl/libxl_capabilities.c | 33 ++++++-
src/libxl/libxl_conf.c | 81 ++++++++++++-----
src/libxl/libxl_driver.c | 6 +-
src/xenconfig/xen_common.c | 27 +++++-
src/xenconfig/xen_xl.c | 5 +-
tests/libxlxml2domconfigdata/basic-pv.json | 1 +-
tests/libxlxml2domconfigdata/basic-pvh.json | 49 ++++++++++-
tests/libxlxml2domconfigdata/basic-pvh.xml | 28 ++++++-
tests/libxlxml2domconfigdata/fullvirt-cpuid.json | 2 +-
tests/libxlxml2domconfigdata/multiple-ip.json | 1 +-
tests/libxlxml2domconfigdata/vnuma-hvm.json | 2 +-
tests/libxlxml2domconfigtest.c | 1 +-
tests/testutilsxen.c | 3 +-
tests/xlconfigdata/test-fullvirt-type.cfg | 21 ++++-
tests/xlconfigdata/test-fullvirt-type.xml | 27 ++++++-
tests/xlconfigdata/test-paravirt-type.cfg | 13 +++-
tests/xlconfigdata/test-paravirt-type.xml | 25 +++++-
tests/xlconfigdata/test-pvh-type.cfg | 13 +++-
tests/xlconfigdata/test-pvh-type.xml | 25 +++++-
tests/xlconfigtest.c | 3 +-
23 files changed, 359 insertions(+), 41 deletions(-)
create mode 100644 tests/libxlxml2domconfigdata/basic-pvh.json
create mode 100644 tests/libxlxml2domconfigdata/basic-pvh.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-type.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-type.xml
create mode 100644 tests/xlconfigdata/test-paravirt-type.cfg
create mode 100644 tests/xlconfigdata/test-paravirt-type.xml
create mode 100644 tests/xlconfigdata/test-pvh-type.cfg
create mode 100644 tests/xlconfigdata/test-pvh-type.xml
base-commit: 16858439deaec0832de61c5ddb93d8e80adccf6c
--
git-series 0.9.1
6 years, 6 months
[libvirt] [PATCH] qemu: agent: Reset agentError when qemuConnectAgent success
by Wang Yechao
qemuAgentClose and qemuAgentIO have a race condition, as follows:
main thread:                          second thread:

virEventPollRunOnce                   processSerialChangedEvent
  virEventPollDispatchHandles
    virMutexUnlock(&eventLoop.lock)
                                        qemuAgentClose
                                          virObjectLock(mon)
                                          virEventRemoveHandle
                                          VIR_FORCE_CLOSE(mon->fd)
                                          virObjectUnlock(mon)
                                        priv->agentError = false
    qemuAgentIO
      virObjectLock(mon)
      mon->fd != fd --> error = true
      qemuProcessHandleAgentError
        priv->agentError = true
      virObjectUnlock(mon)
    virMutexLock(&eventLoop.lock)
qemuAgentClose sets mon->fd to '-1', and then qemuAgentIO finds that
mon->fd does not equal the fd registered before.
qemuProcessHandleAgentError is then called to set priv->agentError to
'true', and priv->agentError stays 'true' until libvirtd is restarted or
the qemu-guest-agent process in the guest is restarted. We can't send any
qemu-agent-command anymore, even if qemuConnectAgent returns
success later.
This occasionally occurs when hot-adding vCPUs on Windows 2012:
virsh setvcpus ...
virsh qemu-agent-command $vm '{"execute":"guest-get-vcpus"}'
Reset priv->agentError to 'false' when qemuConnectAgent succeeds
to fix this problem.
Signed-off-by: Wang Yechao <wang.yechao255@zte.com.cn>
---
src/qemu/qemu_process.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 29b0ba1..4fbb955 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -269,6 +269,7 @@ qemuConnectAgent(virQEMUDriverPtr driver, virDomainObjPtr vm)
virResetLastError();
}
+ priv->agentError = false;
return 0;
}
--
1.8.3.1
[libvirt] [PATCH] Revert "vircgroup: cleanup controllers not managed by systemd on error"
by Pavel Hrdina
This reverts commit 1602aa28f820ada66f707cef3e536e8572fbda1e.
There is no need to call virCgroupRemove() or virCgroupFree() if
virCgroupEnableMissingControllers() fails, because it will not modify
'group' at all. The cleanup is done in virCgroupMakeGroup().
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
src/util/vircgroup.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f90906e4ad..548c873da8 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1055,7 +1055,6 @@ virCgroupNewMachineSystemd(const char *name,
int rv;
virCgroupPtr init;
VIR_AUTOFREE(char *) path = NULL;
- virErrorPtr saved = NULL;
VIR_DEBUG("Trying to setup machine '%s' via systemd", name);
if ((rv = virSystemdCreateMachine(name,
@@ -1088,24 +1087,20 @@ virCgroupNewMachineSystemd(const char *name,
if (virCgroupEnableMissingControllers(path, pidleader,
controllers, group) < 0) {
- goto error;
+ return -1;
}
- if (virCgroupAddProcess(*group, pidleader) < 0)
- goto error;
+ if (virCgroupAddProcess(*group, pidleader) < 0) {
+ virErrorPtr saved = virSaveLastError();
+ virCgroupRemove(*group);
+ virCgroupFree(group);
+ if (saved) {
+ virSetError(saved);
+ virFreeError(saved);
+ }
+ }
return 0;
-
- error:
- saved = virSaveLastError();
- virCgroupRemove(*group);
- virCgroupFree(group);
- if (saved) {
- virSetError(saved);
- virFreeError(saved);
- }
-
- return -1;
}
--
2.17.1
[libvirt] [PATCH] vircgroup: fix NULL pointer dereferencing
by Marc Hartmayer
When virCgroupEnableMissingControllers() fails, it's possible that *group
is still set to NULL. Therefore, let's add a guard and a nonnull attribute
for this.
[#0] virCgroupRemove(group=0x0)
[#1] virCgroupNewMachineSystemd
[#2] virCgroupNewMachine
[#3] qemuInitCgroup
[#4] qemuSetupCgroup
[#5] qemuProcessLaunch
[#6] qemuProcessStart
[#7] qemuDomainObjStart
[#8] qemuDomainCreateWithFlags
[#9] qemuDomainCreate
...
Fixes: 1602aa28f820ada66f707cef3e536e8572fbda1e
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.ibm.com>
Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
---
src/util/vircgroup.c | 3 ++-
src/util/vircgroup.h | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index 23957c82c7fa..06e1d158febb 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1104,7 +1104,8 @@ virCgroupNewMachineSystemd(const char *name,
error:
saved = virSaveLastError();
- virCgroupRemove(*group);
+ if (*group)
+ virCgroupRemove(*group);
virCgroupFree(group);
if (saved) {
virSetError(saved);
diff --git a/src/util/vircgroup.h b/src/util/vircgroup.h
index 1f676f21c380..9e1ae3706b1e 100644
--- a/src/util/vircgroup.h
+++ b/src/util/vircgroup.h
@@ -268,7 +268,8 @@ int virCgroupGetCpusetMemoryMigrate(virCgroupPtr group, bool *migrate);
int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus);
int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus);
-int virCgroupRemove(virCgroupPtr group);
+int virCgroupRemove(virCgroupPtr group)
+ ATTRIBUTE_NONNULL(1);
int virCgroupKillRecursive(virCgroupPtr group, int signum);
int virCgroupKillPainfully(virCgroupPtr group);
--
2.17.0
[libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
by Kirti Wankhede
Generally, a single mdev device instance, a share of a physical device, is
assigned to a user space application or a VM. There are cases where multiple
mdev device instances, of the same or different types, are required by a user
space application or VM. For example, in the case of vGPU, multiple mdev
devices of the type that represents the whole GPU can be assigned to one
instance of an application or VM.
Not all mdev device types support assigning multiple mdev devices to a
single user space application. In that case the vendor driver can fail the
open() call on the mdev device, but there is no way for the user space
application to know which configuration the vendor driver supports.
To expose the supported configuration, the vendor driver should add a
'multiple_mdev_support' attribute to the type-id directory if it supports
assigning multiple mdev devices of that particular type-id to one instance
of a user space application or a VM.
A user space application should check whether the 'multiple_mdev_support'
attribute is present in the type-id directory of every mdev device it is
going to use. If all of them read 1, the application can proceed with
multiple mdev devices.
This is an optional, read-only attribute.
Signed-off-by: Neo Jia <cjia@nvidia.com>
Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
---
Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
index 452dbe39270e..69e1291479ce 100644
--- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
+++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
@@ -85,6 +85,19 @@ Users:
a particular <type-id> that can help in understanding the
features provided by that type of mediated device.
+What: /sys/.../mdev_supported_types/<type-id>/multiple_mdev_support
+Date: September 2018
+Contact: Kirti Wankhede <kwankhede@nvidia.com>
+Description:
+ Reading this attribute will return 0 or 1. Returning 1 indicates
+ multiple mdev devices of a particular <type-id> assigned to one
+ User space application is supported by vendor driver. This is
+ optional and readonly attribute.
+Users:
+ Userspace applications interested in knowing if multiple mdev
+ devices of a particular <type-id> can be assigned to one
+ instance of application.
+
What: /sys/.../<device>/<UUID>/
Date: October 2016
Contact: Kirti Wankhede <kwankhede@nvidia.com>
--
2.7.0
[libvirt] [PATCH 0/4] Couple of metadata locking fixes
by Michal Privoznik
Strictly speaking, only the last patch actually fixes a real problem.
The first three are more of a cleanup than anything else. However, I am
still sending them because they make the code better.
Anyway, if my explanation in 4/4 is not clear enough, here it is in
simpler terms:
It's important to bear in mind that whenever a connection to virtlockd is
closed, virtlockd goes through its hash table and, if it finds any
resources acquired by the process that closed the connection, it kills
that process. In other words, resources are tracked per PID, not per
connection.
For instance, if libvirtd opened two connections, acquired two different
resources through each of them, and then closed one, it would be killed
instantly because it still owns some resources.
So now that this virtlockd behaviour is clear (an intended and desired
behaviour, I must say), we can look into how libvirtd runs qemu.
Basically, the problem is that instead of opening a connection to
virtlockd once and keeping it open for the whole run of libvirtd, I used
this shortcut: open the connection in virSecurityManagerMetadataLock(),
dup() the connection FD, and close it in
virSecurityManagerMetadataUnlock().
In theory, this works pretty well - when the duplicated FD is closed,
libvirtd doesn't own any locks anymore, so virtlockd doesn't kill it.
What my design did not account for is fork(). On fork() the duplicated
FD is cloned into the child, and thus the connection is closed at some
unpredictable point. Meanwhile, libvirtd might continue its execution
(e.g. starting another domain from another thread) and call
virSecurityManagerMetadataLock(). However, before it gets to call
Unlock(), the child process closes the FD (either via exec() or exit()).
Given the virtlockd behaviour described a couple of paragraphs above,
it should be clear now why virtlockd kills libvirtd.
My solution is to always run secdriver transactions from a separate
child process. On fork() only the calling thread is cloned, so only the
thread that runs the transaction ends up in the child, not the one that
is starting a different domain. Therefore no other fork() can occur
there and we are safe.
I know, I know, it's complicated. But it always is around fork() and
IPC.
If you want to see my fix in action, just enable metadata locking, and
try to start two or more domains in a loop:
for ((i=0; i<1000; i++)); do
    virsh start u1604-1 & virsh start u1604-2 &
    sleep 3
    virsh destroy u1604-1; virsh destroy u1604-2
done
At some point you'll see virsh reporting an I/O error (this is because
virtlockd killed libvirtd). With my patch, I have run the test multiple
times without any hiccup.
@Bjoern: I'm still unable to reproduce the issue you reported. However,
while trying to do so I came across this bug. My gut feeling is that this
might help you.
Michal Prívozník (4):
security: Grab a reference to virSecurityManager for transactions
virNetSocket: Be more safe with fork() around virNetSocketDupFD()
virLockManagerLockDaemonAcquire: Duplicate client FD with CLOEXEC flag
security: Always spawn process for transactions
src/libxl/libxl_migration.c | 4 ++--
src/locking/domain_lock.c | 18 ++++++++++++++++++
src/locking/lock_driver_lockd.c | 2 +-
src/qemu/qemu_migration.c | 2 +-
src/rpc/virnetclient.c | 5 +++--
src/rpc/virnetclient.h | 2 +-
src/rpc/virnetsocket.c | 7 ++-----
src/rpc/virnetsocket.h | 2 +-
src/security/security_dac.c | 17 +++++++++--------
src/security/security_selinux.c | 17 +++++++++--------
10 files changed, 47 insertions(+), 29 deletions(-)
--
2.16.4
[libvirt] [PATCH 0/2] A few spec file fixes
by Jiri Denemark
Jiri Denemark (2):
spec: Set correct TLS priority
spec: Build ceph and gluster support everywhere
libvirt.spec.in | 20 +++-----------------
1 file changed, 3 insertions(+), 17 deletions(-)
--
2.19.0
[libvirt] [PATCH v2 0/9] cgroup cleanups and preparation for v2
by Pavel Hrdina
Pavel Hrdina (9):
vircgroup: cleanup controllers not managed by systemd on error
vircgroup: fix bug in virCgroupEnableMissingControllers
vircgroup: rename virCgroupAdd.*Task to virCgroupAdd.*Process
vircgroup: introduce virCgroupTaskFlags
vircgroup: introduce virCgroupAddThread
vircgroupmock: cleanup unused cgroup files
vircgroupmock: rewrite cgroup fopen mocking
vircgrouptest: call virCgroupDetectMounts directly
vircgrouptest: call virCgroupNewSelf instead virCgroupDetectMounts
src/libvirt-lxc.c | 2 +-
src/libvirt_private.syms | 6 +-
src/lxc/lxc_controller.c | 4 +-
src/qemu/qemu_process.c | 4 +-
src/qemu/qemu_tpm.c | 2 +-
src/util/vircgroup.c | 143 +++++++-----
src/util/vircgroup.h | 5 +-
src/util/vircgrouppriv.h | 4 -
tests/vircgroupdata/all-in-one.cgroups | 7 +
tests/vircgroupdata/all-in-one.mounts | 2 +-
tests/vircgroupdata/all-in-one.parsed | 12 +-
tests/vircgroupdata/all-in-one.self.cgroup | 1 +
tests/vircgroupdata/cgroups1.cgroups | 11 +
tests/vircgroupdata/cgroups1.self.cgroup | 11 +
tests/vircgroupdata/cgroups2.cgroups | 10 +
tests/vircgroupdata/cgroups2.self.cgroup | 10 +
tests/vircgroupdata/cgroups3.cgroups | 12 +
tests/vircgroupdata/cgroups3.self.cgroup | 12 +
tests/vircgroupdata/fedora-18.cgroups | 10 +
tests/vircgroupdata/fedora-18.self.cgroup | 9 +
tests/vircgroupdata/fedora-21.cgroups | 12 +
tests/vircgroupdata/fedora-21.self.cgroup | 10 +
tests/vircgroupdata/kubevirt.cgroups | 10 +
tests/vircgroupdata/kubevirt.self.cgroup | 10 +
tests/vircgroupdata/logind.cgroups | 10 +
tests/vircgroupdata/logind.mounts | 2 +
tests/vircgroupdata/logind.self.cgroup | 1 +
tests/vircgroupdata/no-cgroups.cgroups | 8 +
tests/vircgroupdata/no-cgroups.parsed | 10 -
tests/vircgroupdata/no-cgroups.self.cgroup | 0
tests/vircgroupdata/ovirt-node-6.6.cgroups | 9 +
.../vircgroupdata/ovirt-node-6.6.self.cgroup | 8 +
tests/vircgroupdata/ovirt-node-7.1.cgroups | 11 +
.../vircgroupdata/ovirt-node-7.1.self.cgroup | 10 +
tests/vircgroupdata/rhel-7.1.cgroups | 11 +
tests/vircgroupdata/rhel-7.1.self.cgroup | 10 +
tests/vircgroupdata/systemd.cgroups | 8 +
tests/vircgroupdata/systemd.mounts | 11 +
tests/vircgroupdata/systemd.self.cgroup | 6 +
tests/vircgroupmock.c | 206 ++----------------
tests/vircgrouptest.c | 48 ++--
41 files changed, 399 insertions(+), 289 deletions(-)
create mode 100644 tests/vircgroupdata/all-in-one.cgroups
create mode 100644 tests/vircgroupdata/all-in-one.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups1.cgroups
create mode 100644 tests/vircgroupdata/cgroups1.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups2.cgroups
create mode 100644 tests/vircgroupdata/cgroups2.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups3.cgroups
create mode 100644 tests/vircgroupdata/cgroups3.self.cgroup
create mode 100644 tests/vircgroupdata/fedora-18.cgroups
create mode 100644 tests/vircgroupdata/fedora-18.self.cgroup
create mode 100644 tests/vircgroupdata/fedora-21.cgroups
create mode 100644 tests/vircgroupdata/fedora-21.self.cgroup
create mode 100644 tests/vircgroupdata/kubevirt.cgroups
create mode 100644 tests/vircgroupdata/kubevirt.self.cgroup
create mode 100644 tests/vircgroupdata/logind.cgroups
create mode 100644 tests/vircgroupdata/logind.mounts
create mode 100644 tests/vircgroupdata/logind.self.cgroup
create mode 100644 tests/vircgroupdata/no-cgroups.cgroups
delete mode 100644 tests/vircgroupdata/no-cgroups.parsed
create mode 100644 tests/vircgroupdata/no-cgroups.self.cgroup
create mode 100644 tests/vircgroupdata/ovirt-node-6.6.cgroups
create mode 100644 tests/vircgroupdata/ovirt-node-6.6.self.cgroup
create mode 100644 tests/vircgroupdata/ovirt-node-7.1.cgroups
create mode 100644 tests/vircgroupdata/ovirt-node-7.1.self.cgroup
create mode 100644 tests/vircgroupdata/rhel-7.1.cgroups
create mode 100644 tests/vircgroupdata/rhel-7.1.self.cgroup
create mode 100644 tests/vircgroupdata/systemd.cgroups
create mode 100644 tests/vircgroupdata/systemd.mounts
create mode 100644 tests/vircgroupdata/systemd.self.cgroup
--
2.17.1
[libvirt] [PATCH for 4.8.0] qemu: Temporarily disable metadata locking
by Michal Privoznik
Turns out there are a couple of bugs that prevent this feature from
being operational. Given how close to the release we are, disable the
feature temporarily. Hopefully, it can be enabled again after all the
bugs are fixed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
There are two major problems:
1) when using regular locking, the lockspace code gets confused and
starting a domain is denied after hitting a 60 second timeout;
2) the virtlockd connection FD might leak into a child process,
resulting in libvirtd suddenly being killed.
I posted a patch for 2), and I'm working on a patch for 1).
src/qemu/libvirtd_qemu.aug | 1 -
src/qemu/qemu.conf | 9 ---------
src/qemu/qemu_conf.c | 11 -----------
src/qemu/test_libvirtd_qemu.aug.in | 1 -
4 files changed, 22 deletions(-)
diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 42e325d4fb..ddc4bbfd1d 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -98,7 +98,6 @@ module Libvirtd_qemu =
| bool_entry "relaxed_acs_check"
| bool_entry "allow_disk_format_probing"
| str_entry "lock_manager"
- | str_entry "metadata_lock_manager"
let rpc_entry = int_entry "max_queued"
| int_entry "keepalive_interval"
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index 84492719c4..8391332cb4 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -659,15 +659,6 @@
#lock_manager = "lockd"
-# To serialize two or more daemons trying to change metadata on a
-# file (e.g. a file on NFS share), libvirt offers a locking
-# mechanism. Currently, only "lockd" is supported (or no locking
-# at all if unset). Note that this is independent of lock_manager
-# described above.
-#
-#metadata_lock_manager = "lockd"
-
-
# Set limit of maximum APIs queued on one domain. All other APIs
# over this threshold will fail on acquiring job lock. Specially,
# setting to zero turns this feature off.
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 33508174cb..fc84186a7e 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -838,17 +838,6 @@ int virQEMUDriverConfigLoadFile(virQEMUDriverConfigPtr cfg,
if (virConfGetValueString(conf, "lock_manager", &cfg->lockManagerName) < 0)
goto cleanup;
- if (virConfGetValueString(conf, "metadata_lock_manager",
- &cfg->metadataLockManagerName) < 0)
- goto cleanup;
- if (cfg->metadataLockManagerName &&
- STRNEQ(cfg->metadataLockManagerName, "lockd")) {
- virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
- _("unknown metadata lock manager name %s"),
- cfg->metadataLockManagerName);
- goto cleanup;
- }
-
if (virConfGetValueString(conf, "stdio_handler", &stdioHandler) < 0)
goto cleanup;
if (stdioHandler) {
diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index 451e73126e..f1e8806ad2 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -81,7 +81,6 @@ module Test_libvirtd_qemu =
{ "mac_filter" = "1" }
{ "relaxed_acs_check" = "1" }
{ "lock_manager" = "lockd" }
-{ "metadata_lock_manager" = "lockd" }
{ "max_queued" = "0" }
{ "keepalive_interval" = "5" }
{ "keepalive_count" = "5" }
--
2.16.4