[libvirt] [PATCH] qemu: agent: Reset agentError when qemuConnectAgent succeeds
by Wang Yechao
qemuAgentClose and qemuAgentIO have a race condition, as follows:

main thread:                            second thread:

virEventPollRunOnce                     processSerialChangedEvent
  virEventPollDispatchHandles
    virMutexUnlock(&eventLoop.lock)
                                        qemuAgentClose
                                          virObjectLock(mon)
                                          virEventRemoveHandle
                                          VIR_FORCE_CLOSE(mon->fd)
                                          virObjectUnlock(mon)
                                        priv->agentError = false
    qemuAgentIO
      virObjectLock(mon)
      mon->fd != fd --> error = true
      qemuProcessHandleAgentError
        priv->agentError = true
      virObjectUnlock(mon)
    virMutexLock(&eventLoop.lock)
qemuAgentClose sets mon->fd to '-1', and then qemuAgentIO sees that
mon->fd no longer equals the fd registered before, so
qemuProcessHandleAgentError is called and sets priv->agentError to
'true'. From then on priv->agentError stays 'true' until libvirtd is
restarted or the qemu-guest-agent process in the guest is restarted;
we can't send any qemu-agent-command anymore, even if qemuConnectAgent
returns success later.
This occasionally occurs when hot-adding vcpus to a Windows Server 2012 guest:
virsh setvcpus ...
virsh qemu-agent-command $vm '{"execute":"guest-get-vcpus"}'
Reset priv->agentError to 'false' when qemuConnectAgent succeeds
to fix this problem.
Signed-off-by: Wang Yechao <wang.yechao255(a)zte.com.cn>
---
src/qemu/qemu_process.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 29b0ba1..4fbb955 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -269,6 +269,7 @@ qemuConnectAgent(virQEMUDriverPtr driver, virDomainObjPtr vm)
         virResetLastError();
     }
 
+    priv->agentError = false;
     return 0;
 }
--
1.8.3.1
6 years, 1 month
[libvirt] [PATCH] Revert "vircgroup: cleanup controllers not managed by systemd on error"
by Pavel Hrdina
This reverts commit 1602aa28f820ada66f707cef3e536e8572fbda1e.
There is no need to call virCgroupRemove() nor virCgroupFree() if
virCgroupEnableMissingControllers() fails because it will not modify
'group' at all. The cleanup is done in virCgroupMakeGroup().
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
src/util/vircgroup.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f90906e4ad..548c873da8 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1055,7 +1055,6 @@ virCgroupNewMachineSystemd(const char *name,
     int rv;
     virCgroupPtr init;
     VIR_AUTOFREE(char *) path = NULL;
-    virErrorPtr saved = NULL;
 
     VIR_DEBUG("Trying to setup machine '%s' via systemd", name);
 
     if ((rv = virSystemdCreateMachine(name,
@@ -1088,24 +1087,20 @@ virCgroupNewMachineSystemd(const char *name,
 
     if (virCgroupEnableMissingControllers(path, pidleader,
                                           controllers, group) < 0) {
-        goto error;
+        return -1;
     }
 
-    if (virCgroupAddProcess(*group, pidleader) < 0)
-        goto error;
+    if (virCgroupAddProcess(*group, pidleader) < 0) {
+        virErrorPtr saved = virSaveLastError();
+        virCgroupRemove(*group);
+        virCgroupFree(group);
+        if (saved) {
+            virSetError(saved);
+            virFreeError(saved);
+        }
+    }
 
     return 0;
-
- error:
-    saved = virSaveLastError();
-    virCgroupRemove(*group);
-    virCgroupFree(group);
-    if (saved) {
-        virSetError(saved);
-        virFreeError(saved);
-    }
-
-    return -1;
 }
--
2.17.1
[libvirt] [PATCH] vircgroup: fix NULL pointer dereferencing
by Marc Hartmayer
When virCgroupEnableMissingControllers fails, it's possible that *group
is still set to NULL. Therefore, let's add a guard and a nonnull
function attribute for this.
[#0] virCgroupRemove(group=0x0)
[#1] virCgroupNewMachineSystemd
[#2] virCgroupNewMachine
[#3] qemuInitCgroup
[#4] qemuSetupCgroup
[#5] qemuProcessLaunch
[#6] qemuProcessStart
[#7] qemuDomainObjStart
[#8] qemuDomainCreateWithFlags
[#9] qemuDomainCreate
...
Fixes: 1602aa28f820ada66f707cef3e536e8572fbda1e
Reviewed-by: Boris Fiuczynski <fiuczy(a)linux.ibm.com>
Reviewed-by: Bjoern Walk <bwalk(a)linux.ibm.com>
Signed-off-by: Marc Hartmayer <mhartmay(a)linux.ibm.com>
---
src/util/vircgroup.c | 3 ++-
src/util/vircgroup.h | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index 23957c82c7fa..06e1d158febb 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -1104,7 +1104,8 @@ virCgroupNewMachineSystemd(const char *name,
 
  error:
     saved = virSaveLastError();
-    virCgroupRemove(*group);
+    if (*group)
+        virCgroupRemove(*group);
     virCgroupFree(group);
     if (saved) {
         virSetError(saved);
diff --git a/src/util/vircgroup.h b/src/util/vircgroup.h
index 1f676f21c380..9e1ae3706b1e 100644
--- a/src/util/vircgroup.h
+++ b/src/util/vircgroup.h
@@ -268,7 +268,8 @@ int virCgroupGetCpusetMemoryMigrate(virCgroupPtr group, bool *migrate);
 int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus);
 int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus);
 
-int virCgroupRemove(virCgroupPtr group);
+int virCgroupRemove(virCgroupPtr group)
+    ATTRIBUTE_NONNULL(1);
 
 int virCgroupKillRecursive(virCgroupPtr group, int signum);
 int virCgroupKillPainfully(virCgroupPtr group);
--
2.17.0
[libvirt] [RFC PATCH v1 1/1] Add attribute multiple_mdev_support for mdev type-id
by Kirti Wankhede
Generally a single instance of mdev device, a share of physical device, is
assigned to user space application or a VM. There are cases when multiple
instances of mdev devices of same or different types are required by User
space application or VM. For example in case of vGPU, multiple mdev devices
of type which represents whole GPU can be assigned to one instance of
application or VM.
All types of mdev devices may not support assigning multiple mdev devices
to a user space application. In that case vendor driver can fail open()
call of mdev device. But there is no way to know User space application
about the configuration supported by vendor driver.
To expose supported configuration, vendor driver should add
'multiple_mdev_support' attribute to type-id directory if vendor driver
supports assigning multiple mdev devices of a particular type-id to one
instance of user space application or a VM.
User space application should read if 'multiple_mdev_support' attibute is
present in type-id directory of all mdev devices which are going to be
used. If all read 1 then user space application can proceed with multiple
mdev devices.
This is optional and readonly attribute.
Signed-off-by: Neo Jia <cjia(a)nvidia.com>
Signed-off-by: Kirti Wankhede <kwankhede(a)nvidia.com>
---
Documentation/ABI/testing/sysfs-bus-vfio-mdev | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
index 452dbe39270e..69e1291479ce 100644
--- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
+++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
@@ -85,6 +85,19 @@ Users:
 		a particular <type-id> that can help in understanding the
 		features provided by that type of mediated device.
 
+What:		/sys/.../mdev_supported_types/<type-id>/multiple_mdev_support
+Date:		September 2018
+Contact:	Kirti Wankhede <kwankhede(a)nvidia.com>
+Description:
+		Reading this attribute will return 0 or 1. Returning 1
+		indicates that the vendor driver supports assigning multiple
+		mdev devices of a particular <type-id> to one user space
+		application. This is an optional and readonly attribute.
+Users:
+		Userspace applications interested in knowing if multiple mdev
+		devices of a particular <type-id> can be assigned to one
+		instance of an application.
+
 What:		/sys/.../<device>/<UUID>/
 Date:		October 2016
 Contact:	Kirti Wankhede <kwankhede(a)nvidia.com>
--
2.7.0
[libvirt] [PATCH 0/4] Couple of metadata locking fixes
by Michal Privoznik
Strictly speaking, only the last patch actually fixes a real problem.
The first three are more of a cleanup than anything. However, I am still
sending them because they make the code better.
Anyway, if my explanation in 4/4 is not clear enough, here it is in
simpler terms:
It's important to bear in mind that whenever a connection to virtlockd
is closed, virtlockd goes through its hash table and, if it finds any
resources acquired by the process that closed the connection, it kills
that process. In other words, resources are per PID, not per connection.
For instance, if libvirtd opened two connections, acquired two different
resources through each of them, and then closed one, it would get killed
instantly because it still owns some resources.
So now that this virtlockd behaviour is clear (an intended and desired
behaviour, I must say), we can look into how libvirtd runs qemu.
Basically, the problem is that instead of opening the connection to
virtlockd once and keeping it open for the whole run of libvirtd, I used
a shortcut: open the connection in virSecurityManagerMetadataLock(),
dup() the connection FD, and close it in virSecurityManagerMetadataUnlock().
In theory this works pretty well: when the duplicated FD is closed,
libvirtd doesn't own any locks anymore, so virtlockd doesn't kill it.
What my design did not account for is fork(). On fork() the duplicated
FD is cloned into the child, and thus nobody knows when the connection
gets closed. Meanwhile, libvirtd might continue its execution (e.g.
starting another domain from another thread) and call
virSecurityManagerMetadataLock(). However, before it gets to call
Unlock(), the child process closes the FD (either via exec() or exit()).
Given the virtlockd behaviour described a couple of paragraphs above, it
should now be clear that virtlockd kills libvirtd.
My solution is to always run secdriver transactions from a separate
child process, because on fork() only the thread that calls it is
cloned. So only the thread that runs the transaction is cloned, not
the one that is starting a different domain. Therefore no other fork()
can occur there and we are safe.
I know, I know, it's complicated. But it always is around fork() and
IPC.
If you want to see my fix in action, just enable metadata locking, and
try to start two or more domains in a loop:
for ((i=0; i<1000; i++)); do
    virsh start u1604-1 & virsh start u1604-2 &
    sleep 3
    virsh destroy u1604-1
    virsh destroy u1604-2
done
At some point you'll see virsh reporting an I/O error (this is because
virtlockd killed libvirtd). With my patch, I've run the test multiple
times without any hiccup.
@Bjoern: I'm still unable to reproduce the issue you reported. However,
whilst trying to do so I came across this bug. Still, my gut feeling is
that this might help you.
Michal Prívozník (4):
security: Grab a reference to virSecurityManager for transactions
virNetSocket: Be more safe with fork() around virNetSocketDupFD()
virLockManagerLockDaemonAcquire: Duplicate client FD with CLOEXEC flag
security: Always spawn process for transactions
src/libxl/libxl_migration.c | 4 ++--
src/locking/domain_lock.c | 18 ++++++++++++++++++
src/locking/lock_driver_lockd.c | 2 +-
src/qemu/qemu_migration.c | 2 +-
src/rpc/virnetclient.c | 5 +++--
src/rpc/virnetclient.h | 2 +-
src/rpc/virnetsocket.c | 7 ++-----
src/rpc/virnetsocket.h | 2 +-
src/security/security_dac.c | 17 +++++++++--------
src/security/security_selinux.c | 17 +++++++++--------
10 files changed, 47 insertions(+), 29 deletions(-)
--
2.16.4
[libvirt] [PATCH 0/2] A few spec file fixes
by Jiri Denemark
Jiri Denemark (2):
spec: Set correct TLS priority
spec: Build ceph and gluster support everywhere
libvirt.spec.in | 20 +++-----------------
1 file changed, 3 insertions(+), 17 deletions(-)
--
2.19.0
[libvirt] [PATCH v2 0/9] cgroup cleanups and preparation for v2
by Pavel Hrdina
Pavel Hrdina (9):
vircgroup: cleanup controllers not managed by systemd on error
vircgroup: fix bug in virCgroupEnableMissingControllers
vircgroup: rename virCgroupAdd.*Task to virCgroupAdd.*Process
vircgroup: introduce virCgroupTaskFlags
vircgroup: introduce virCgroupAddThread
vircgroupmock: cleanup unused cgroup files
vircgroupmock: rewrite cgroup fopen mocking
vircgrouptest: call virCgroupDetectMounts directly
vircgrouptest: call virCgroupNewSelf instead virCgroupDetectMounts
src/libvirt-lxc.c | 2 +-
src/libvirt_private.syms | 6 +-
src/lxc/lxc_controller.c | 4 +-
src/qemu/qemu_process.c | 4 +-
src/qemu/qemu_tpm.c | 2 +-
src/util/vircgroup.c | 143 +++++++-----
src/util/vircgroup.h | 5 +-
src/util/vircgrouppriv.h | 4 -
tests/vircgroupdata/all-in-one.cgroups | 7 +
tests/vircgroupdata/all-in-one.mounts | 2 +-
tests/vircgroupdata/all-in-one.parsed | 12 +-
tests/vircgroupdata/all-in-one.self.cgroup | 1 +
tests/vircgroupdata/cgroups1.cgroups | 11 +
tests/vircgroupdata/cgroups1.self.cgroup | 11 +
tests/vircgroupdata/cgroups2.cgroups | 10 +
tests/vircgroupdata/cgroups2.self.cgroup | 10 +
tests/vircgroupdata/cgroups3.cgroups | 12 +
tests/vircgroupdata/cgroups3.self.cgroup | 12 +
tests/vircgroupdata/fedora-18.cgroups | 10 +
tests/vircgroupdata/fedora-18.self.cgroup | 9 +
tests/vircgroupdata/fedora-21.cgroups | 12 +
tests/vircgroupdata/fedora-21.self.cgroup | 10 +
tests/vircgroupdata/kubevirt.cgroups | 10 +
tests/vircgroupdata/kubevirt.self.cgroup | 10 +
tests/vircgroupdata/logind.cgroups | 10 +
tests/vircgroupdata/logind.mounts | 2 +
tests/vircgroupdata/logind.self.cgroup | 1 +
tests/vircgroupdata/no-cgroups.cgroups | 8 +
tests/vircgroupdata/no-cgroups.parsed | 10 -
tests/vircgroupdata/no-cgroups.self.cgroup | 0
tests/vircgroupdata/ovirt-node-6.6.cgroups | 9 +
.../vircgroupdata/ovirt-node-6.6.self.cgroup | 8 +
tests/vircgroupdata/ovirt-node-7.1.cgroups | 11 +
.../vircgroupdata/ovirt-node-7.1.self.cgroup | 10 +
tests/vircgroupdata/rhel-7.1.cgroups | 11 +
tests/vircgroupdata/rhel-7.1.self.cgroup | 10 +
tests/vircgroupdata/systemd.cgroups | 8 +
tests/vircgroupdata/systemd.mounts | 11 +
tests/vircgroupdata/systemd.self.cgroup | 6 +
tests/vircgroupmock.c | 206 ++----------------
tests/vircgrouptest.c | 48 ++--
41 files changed, 399 insertions(+), 289 deletions(-)
create mode 100644 tests/vircgroupdata/all-in-one.cgroups
create mode 100644 tests/vircgroupdata/all-in-one.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups1.cgroups
create mode 100644 tests/vircgroupdata/cgroups1.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups2.cgroups
create mode 100644 tests/vircgroupdata/cgroups2.self.cgroup
create mode 100644 tests/vircgroupdata/cgroups3.cgroups
create mode 100644 tests/vircgroupdata/cgroups3.self.cgroup
create mode 100644 tests/vircgroupdata/fedora-18.cgroups
create mode 100644 tests/vircgroupdata/fedora-18.self.cgroup
create mode 100644 tests/vircgroupdata/fedora-21.cgroups
create mode 100644 tests/vircgroupdata/fedora-21.self.cgroup
create mode 100644 tests/vircgroupdata/kubevirt.cgroups
create mode 100644 tests/vircgroupdata/kubevirt.self.cgroup
create mode 100644 tests/vircgroupdata/logind.cgroups
create mode 100644 tests/vircgroupdata/logind.mounts
create mode 100644 tests/vircgroupdata/logind.self.cgroup
create mode 100644 tests/vircgroupdata/no-cgroups.cgroups
delete mode 100644 tests/vircgroupdata/no-cgroups.parsed
create mode 100644 tests/vircgroupdata/no-cgroups.self.cgroup
create mode 100644 tests/vircgroupdata/ovirt-node-6.6.cgroups
create mode 100644 tests/vircgroupdata/ovirt-node-6.6.self.cgroup
create mode 100644 tests/vircgroupdata/ovirt-node-7.1.cgroups
create mode 100644 tests/vircgroupdata/ovirt-node-7.1.self.cgroup
create mode 100644 tests/vircgroupdata/rhel-7.1.cgroups
create mode 100644 tests/vircgroupdata/rhel-7.1.self.cgroup
create mode 100644 tests/vircgroupdata/systemd.cgroups
create mode 100644 tests/vircgroupdata/systemd.mounts
create mode 100644 tests/vircgroupdata/systemd.self.cgroup
--
2.17.1
[libvirt] [PATCH for 4.8.0] qemu: Temporarily disable metadata locking
by Michal Privoznik
Turns out, there are a couple of bugs that prevent this feature
from being operational. Given how close to the release we are,
disable the feature temporarily. Hopefully it can be enabled
again after all the bugs are fixed.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
There are two major problems:
1) when using regular locking, the lockspace code gets confused and
starting a domain is denied after hitting a 60 second timeout.
2) the virtlockd connection FD might leak into a child process,
resulting in sudden killing of libvirtd.
I posted a patch for 2), and I'm working on a patch for 1).
src/qemu/libvirtd_qemu.aug | 1 -
src/qemu/qemu.conf | 9 ---------
src/qemu/qemu_conf.c | 11 -----------
src/qemu/test_libvirtd_qemu.aug.in | 1 -
4 files changed, 22 deletions(-)
diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 42e325d4fb..ddc4bbfd1d 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -98,7 +98,6 @@ module Libvirtd_qemu =
| bool_entry "relaxed_acs_check"
| bool_entry "allow_disk_format_probing"
| str_entry "lock_manager"
- | str_entry "metadata_lock_manager"
let rpc_entry = int_entry "max_queued"
| int_entry "keepalive_interval"
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index 84492719c4..8391332cb4 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -659,15 +659,6 @@
#lock_manager = "lockd"
-# To serialize two or more daemons trying to change metadata on a
-# file (e.g. a file on NFS share), libvirt offers a locking
-# mechanism. Currently, only "lockd" is supported (or no locking
-# at all if unset). Note that this is independent of lock_manager
-# described above.
-#
-#metadata_lock_manager = "lockd"
-
-
# Set limit of maximum APIs queued on one domain. All other APIs
# over this threshold will fail on acquiring job lock. Specially,
# setting to zero turns this feature off.
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 33508174cb..fc84186a7e 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -838,17 +838,6 @@ int virQEMUDriverConfigLoadFile(virQEMUDriverConfigPtr cfg,
     if (virConfGetValueString(conf, "lock_manager", &cfg->lockManagerName) < 0)
         goto cleanup;
 
-    if (virConfGetValueString(conf, "metadata_lock_manager",
-                              &cfg->metadataLockManagerName) < 0)
-        goto cleanup;
-    if (cfg->metadataLockManagerName &&
-        STRNEQ(cfg->metadataLockManagerName, "lockd")) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                       _("unknown metadata lock manager name %s"),
-                       cfg->metadataLockManagerName);
-        goto cleanup;
-    }
-
     if (virConfGetValueString(conf, "stdio_handler", &stdioHandler) < 0)
         goto cleanup;
     if (stdioHandler) {
diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index 451e73126e..f1e8806ad2 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -81,7 +81,6 @@ module Test_libvirtd_qemu =
{ "mac_filter" = "1" }
{ "relaxed_acs_check" = "1" }
{ "lock_manager" = "lockd" }
-{ "metadata_lock_manager" = "lockd" }
{ "max_queued" = "0" }
{ "keepalive_interval" = "5" }
{ "keepalive_count" = "5" }
--
2.16.4
[libvirt] [PATCH v4 00/23] Introduce metadata locking
by Michal Privoznik
Technically, this is v4 of:
https://www.redhat.com/archives/libvir-list/2018-August/msg01627.html
However, this is implementing different approach than any of the
previous versions.
One of the problems with the previous version was that it was too
complicated. The main reason for that was that we could not close the
connection whilst a file was locked, so we had to invent a mechanism
(on the client side) to prevent that.
These patches implement a different approach. They rely on the
secdriver's transactions, which bring all the paths we want to label
into one place so that they can be relabelled within a different
namespace.
I'm extending this idea so that transactions run all the time
(regardless of domain namespacing) and only at the very last moment is
it decided which namespace the relabelling runs in.
Metadata locking is then as easy as putting lock/unlock calls around one
function.
You can find the patches at my github too:
https://github.com/zippy2/libvirt/tree/disk_metadata_lock_v4_alt
Michal Prívozník (23):
qemu_security: Fully implement qemuSecurityDomainSetPathLabel
qemu_security: Fully implement
qemuSecurity{Set,Restore}SavedStateLabel
qemu_security: Require full wrappers for APIs that might touch a file
virSecurityManagerTransactionCommit: Accept pid == -1
qemu_security: Run transactions more frequently
virlockspace: Allow caller to specify start and length offset in
virLockSpaceAcquireResource
lock_driver_lockd: Introduce
VIR_LOCK_SPACE_PROTOCOL_ACQUIRE_RESOURCE_METADATA flag
lock_driver: Introduce new VIR_LOCK_MANAGER_OBJECT_TYPE_DAEMON
_virLockManagerLockDaemonPrivate: Move @hasRWDisks into dom union
lock_driver: Introduce VIR_LOCK_MANAGER_RESOURCE_TYPE_METADATA
lock_driver: Introduce VIR_LOCK_MANAGER_ACQUIRE_ROLLBACK
lock_daemon_dispatch: Check for ownerPid rather than ownerId
lock_manager: Allow disabling configFile for virLockManagerPluginNew
qemu_conf: Introduce metadata_lock_manager
security_manager: Load lock plugin on init
security_manager: Introduce metadata locking APIs
security_dac: Move transaction handling up one level
security_dac: Fix info messages when chown()-ing
security_dac: Lock metadata when running transaction
virSecuritySELinuxRestoreFileLabel: Rename 'err' label
virSecuritySELinuxRestoreFileLabel: Adjust code pattern
security_selinux: Move transaction handling up one level
security_dac: Lock metadata when running transaction
cfg.mk | 4 +-
src/locking/lock_daemon_dispatch.c | 25 ++-
src/locking/lock_driver.h | 12 ++
src/locking/lock_driver_lockd.c | 417 +++++++++++++++++++++++++------------
src/locking/lock_driver_lockd.h | 1 +
src/locking/lock_driver_sanlock.c | 44 ++--
src/locking/lock_manager.c | 10 +-
src/lxc/lxc_controller.c | 3 +-
src/lxc/lxc_driver.c | 2 +-
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf | 8 +
src/qemu/qemu_conf.c | 13 ++
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_domain.c | 3 +-
src/qemu/qemu_driver.c | 10 +-
src/qemu/qemu_process.c | 15 +-
src/qemu/qemu_security.c | 272 +++++++++++++++++-------
src/qemu/qemu_security.h | 18 +-
src/qemu/test_libvirtd_qemu.aug.in | 1 +
src/security/security_dac.c | 134 ++++++++----
src/security/security_manager.c | 171 ++++++++++++++-
src/security/security_manager.h | 9 +
src/security/security_selinux.c | 118 ++++++++---
src/util/virlockspace.c | 15 +-
src/util/virlockspace.h | 4 +
tests/seclabeltest.c | 2 +-
tests/securityselinuxlabeltest.c | 2 +-
tests/securityselinuxtest.c | 2 +-
tests/testutilsqemu.c | 2 +-
tests/virlockspacetest.c | 29 ++-
30 files changed, 1006 insertions(+), 342 deletions(-)
--
2.16.4