[libvirt] [PATCH 0/8] Memory locking fixes

A problem reported recently to the mailing list[1] uncovered some flaws
in the way we deal with setting the memory locking limit for QEMU
guests.

This series solves those issues and adds test suite coverage to
hopefully prevent more bugs from popping up in the future.

[1] https://www.redhat.com/archives/libvir-list/2017-March/msg00534.html

Andrea Bolognani (8):
  Revert "qemu: Forbid <memoryBacking><locked> without
    <memtune><hard_limit>"
  qemu: Remove qemuDomainRequiresMemLock()
  qemu: Fix memory locking limit calculation
  process: Translate "unlimited" correctly
  command: Introduce virCommandGetMaxMemLock()
  tests: Introduce virTestCompareToULL()
  tests: Introduce QEMU memory locking limit tests
  docs: Improve documentation related to memory locking

 docs/formatdomain.html.in                          |  36 +++--
 src/libvirt_private.syms                           |   1 +
 src/qemu/qemu_command.c                            |   4 +-
 src/qemu/qemu_domain.c                             |  75 ++++-----
 src/qemu/qemu_domain.h                             |   1 -
 src/util/vircommand.c                              |   9 ++
 src/util/vircommand.h                              |   2 +
 src/util/virprocess.c                              |  20 ++-
 tests/Makefile.am                                  |   9 ++
 .../qemumemlock-pc-hardlimit+hostdev.xml           |  20 +++
 .../qemumemlock-pc-hardlimit+locked+hostdev.xml    |  23 +++
 .../qemumemlock-pc-hardlimit+locked.xml            |  17 ++
 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml |  14 ++
 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml   |  17 ++
 tests/qemumemlockdata/qemumemlock-pc-kvm.xml       |  11 ++
 .../qemumemlock-pc-locked+hostdev.xml              |  20 +++
 tests/qemumemlockdata/qemumemlock-pc-locked.xml    |  14 ++
 tests/qemumemlockdata/qemumemlock-pc-tcg.xml       |  11 ++
 .../qemumemlock-pseries-hardlimit+hostdev.xml      |  20 +++
 ...emumemlock-pseries-hardlimit+locked+hostdev.xml |  23 +++
 .../qemumemlock-pseries-hardlimit+locked.xml       |  17 ++
 .../qemumemlock-pseries-hardlimit.xml              |  14 ++
 .../qemumemlock-pseries-hostdev.xml                |  17 ++
 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml  |  11 ++
 .../qemumemlock-pseries-locked+hostdev.xml         |  20 +++
 .../qemumemlockdata/qemumemlock-pseries-locked.xml |  14 ++
 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml  |  11 ++
 tests/qemumemlocktest.c                            | 172 +++++++++++++++++++++
 tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml   |   3 -
 tests/testutils.c                                  |  27 ++++
 tests/testutils.h                                  |   2 +
 31 files changed, 583 insertions(+), 72 deletions(-)
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-kvm.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-tcg.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml
 create mode 100644 tests/qemumemlocktest.c

-- 
2.7.4

This reverts commit c2e60ad0e5124482942164e5fec088157f5e716a.

Turns out this check is excessively strict: there are ways other than
<memtune><hard_limit> to raise the memory locking limit for QEMU
processes, one prominent example being tweaking
/etc/security/limits.conf.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1431793
---
 src/qemu/qemu_domain.c                           | 10 ----------
 tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml |  3 ---
 2 files changed, 13 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index c239a06..8fa43f2 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2919,16 +2919,6 @@ qemuDomainDefValidate(const virDomainDef *def,
         }
     }
 
-    /* Memory locking can only work properly if the memory locking limit
-     * for the QEMU process has been raised appropriately: the default one
-     * is extrememly low, so there's no way the guest will fit in there */
-    if (def->mem.locked && !virMemoryLimitIsSet(def->mem.hard_limit)) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                       _("Setting <memoryBacking><locked> requires "
-                         "<memtune><hard_limit> to be set as well"));
-        goto cleanup;
-    }
-
     if (qemuDomainDefValidateVideo(def) < 0)
         goto cleanup;

diff --git a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
index 2046663..20a5eaa 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
@@ -3,9 +3,6 @@
   <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
   <memory unit='KiB'>219136</memory>
   <currentMemory unit='KiB'>219136</currentMemory>
-  <memtune>
-    <hard_limit unit='KiB'>256000</hard_limit>
-  </memtune>
   <memoryBacking>
     <locked/>
   </memoryBacking>

-- 
2.7.4

On Thu, 23 Mar 2017 19:16:40 +0100 Andrea Bolognani <abologna@redhat.com> wrote:
> This reverts commit c2e60ad0e5124482942164e5fec088157f5e716a.
> 
> Turns out this check is excessively strict: there are ways other than
> <memtune><hard_limit> to raise the memory locking limit for QEMU
> processes, one prominent example being tweaking
> /etc/security/limits.conf.
Actually, it seems that limits.conf doesn't work with libvirt, as
mentioned by Daniel in another thread. I didn't know this myself, btw.
This makes this series even more important, because only through
libvirt can we set this limit to infinity.
[...]

On Fri, 2017-03-24 at 13:36 -0400, Luiz Capitulino wrote:
> > Turns out this check is excessively strict: there are ways other
> > than <memtune><hard_limit> to raise the memory locking limit for
> > QEMU processes, one prominent example being tweaking
> > /etc/security/limits.conf.
> 
> Actually, it seems that limits.conf doesn't work with libvirt, as
> mentioned by Daniel in another thread. I didn't know this myself,
> btw. This makes this series even more important, because only
> through libvirt can we set this limit to infinity.
Well, it *does* work if you set it up properly, eg. raise the memory
locking limit for the user under which libvirtd will run instead of
the user under which QEMU processes will run.

Doing so is very counter-intuitive, though.

-- 
Andrea Bolognani / Red Hat / Virtualization

On Mon, 27 Mar 2017 10:33:52 +0200 Andrea Bolognani <abologna@redhat.com> wrote:
> On Fri, 2017-03-24 at 13:36 -0400, Luiz Capitulino wrote:
> > > Turns out this check is excessively strict: there are ways other
> > > than <memtune><hard_limit> to raise the memory locking limit for
> > > QEMU processes, one prominent example being tweaking
> > > /etc/security/limits.conf.
> > 
> > Actually, it seems that limits.conf doesn't work with libvirt, as
> > mentioned by Daniel in another thread. I didn't know this myself,
> > btw. This makes this series even more important, because only
> > through libvirt can we set this limit to infinity.
> Well, it *does* work if you set it up properly, eg. raise the memory
> locking limit for the user under which libvirtd will run instead of
> the user under which QEMU processes will run.
Doesn't libvirtd run as root?

On Tue, 2017-03-28 at 09:27 -0400, Luiz Capitulino wrote:
> > Well, it *does* work if you set it up properly, eg. raise the
> > memory locking limit for the user under which libvirtd will run
> > instead of the user under which QEMU processes will run.
> 
> Doesn't libvirtd run as root?
Yes. You can set the memory locking limit to whatever you want for the
root user, and then QEMU processes spawned by libvirtd will inherit
it.
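For anyone wanting to go down that road anyway, a minimal
/etc/security/limits.conf sketch of the setup described above
(assuming pam_limits is applied to the session libvirtd is started
from; the values are purely illustrative):

    # Raise RLIMIT_MEMLOCK for root; QEMU processes spawned by
    # libvirtd, which runs as root, will inherit the raised limit
    root    soft    memlock    unlimited
    root    hard    memlock    unlimited

-- 
Andrea Bolognani / Red Hat / Virtualization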

Instead of having a separate function, we can simply return zero from
the existing qemuDomainGetMemLockLimitBytes() to signal the caller
that the memory locking limit doesn't need to be set for the guest.

Having a single function instead of two makes it less likely that we
will use the wrong value, which is exactly what happened when we
started applying the limit that was meant for VFIO-using guests to
<memoryBacking><locked>-using guests.
---
 src/qemu/qemu_command.c |  4 +---
 src/qemu/qemu_domain.c  | 62 ++++++++++++++++++-------------------------------
 src/qemu/qemu_domain.h  |  1 -
 3 files changed, 24 insertions(+), 43 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 2045c2e..52f6e00 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -9740,7 +9740,6 @@ qemuBuildCommandLine(virQEMUDriverPtr driver,
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
     unsigned int bootHostdevNet = 0;
 
-
     VIR_DEBUG("driver=%p def=%p mon=%p json=%d "
               "qemuCaps=%p migrateURI=%s snapshot=%p vmop=%d",
               driver, def, monitor_chr, monitor_json,
@@ -9966,8 +9965,7 @@ qemuBuildCommandLine(virQEMUDriverPtr driver,
 
     /* In some situations, eg. VFIO passthrough, QEMU might need to lock a
      * significant amount of memory, so we need to set the limit accordingly */
-    if (qemuDomainRequiresMemLock(def))
-        virCommandSetMaxMemLock(cmd, qemuDomainGetMemLockLimitBytes(def));
+    virCommandSetMaxMemLock(cmd, qemuDomainGetMemLockLimitBytes(def));
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_MSG_TIMESTAMP) &&
         cfg->logTimestamp)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 8fa43f2..2e2ba4f 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6199,18 +6199,20 @@ qemuDomainUpdateCurrentMemorySize(virQEMUDriverPtr driver,
 
 /**
  * qemuDomainGetMemLockLimitBytes:
- *
  * @def: domain definition
  *
- * Returns the size of the memory in bytes that needs to be set as
- * RLIMIT_MEMLOCK for the QEMU process.
- * If a mem.hard_limit is set, then that value is preferred; otherwise, the
- * value returned may depend upon the architecture or devices present.
+ * Calculate the memory locking limit that needs to be set in order for
+ * the guest to operate properly. The limit depends on a number of factors,
+ * including certain configuration options and less immediately apparent ones
+ * such as the guest architecture or the use of certain devices.
+ *
+ * Returns: the memory locking limit, or 0 if setting the limit is not needed
  */
 unsigned long long
 qemuDomainGetMemLockLimitBytes(virDomainDefPtr def)
 {
-    unsigned long long memKB;
+    unsigned long long memKB = 0;
+    size_t i;
 
     /* prefer the hard limit */
     if (virMemoryLimitIsSet(def->mem.hard_limit)) {
@@ -6218,13 +6220,17 @@ qemuDomainGetMemLockLimitBytes(virDomainDefPtr def)
         goto done;
     }
 
-    if (ARCH_IS_PPC64(def->os.arch)) {
+    if (def->mem.locked) {
+        memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
+        goto done;
+    }
+
+    if (ARCH_IS_PPC64(def->os.arch) && def->virtType == VIR_DOMAIN_VIRT_KVM) {
         unsigned long long maxMemory;
         unsigned long long memory;
         unsigned long long baseLimit;
         unsigned long long passthroughLimit;
         size_t nPCIHostBridges;
-        size_t i;
         bool usesVFIO = false;
 
         /* TODO: Detect at runtime once we start using more than just
@@ -6314,44 +6320,21 @@ qemuDomainGetMemLockLimitBytes(virDomainDefPtr def)
      *
      * Note that this may not be valid for all platforms.
      */
-    memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
-
- done:
-    return memKB << 10;
-}
-
-
-/**
- * @def: domain definition
- *
- * Returns true if the locked memory limit needs to be set or updated because
- * of domain configuration, VFIO passthrough devices or architecture-specific
- * requirements.
- *
- */
-bool
-qemuDomainRequiresMemLock(virDomainDefPtr def)
-{
-    size_t i;
-
-    if (def->mem.locked)
-        return true;
-
     for (i = 0; i < def->nhostdevs; i++) {
         virDomainHostdevDefPtr dev = def->hostdevs[i];
 
         if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
             dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
-            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
-            return true;
+            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
+            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
+        }
     }
 
-    /* ppc64 KVM domains need to lock some memory even when VFIO is not used */
-    if (ARCH_IS_PPC64(def->os.arch) && def->virtType == VIR_DOMAIN_VIRT_KVM)
-        return true;
-
-    return false;
+ done:
+    return memKB << 10;
 }
 
+
 /**
  * qemuDomainAdjustMaxMemLock:
  * @vm: domain
@@ -6372,7 +6355,9 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
     unsigned long long bytes = 0;
     int ret = -1;
 
-    if (qemuDomainRequiresMemLock(vm->def)) {
+    bytes = qemuDomainGetMemLockLimitBytes(vm->def);
+
+    if (bytes) {
         /* If this is the first time adjusting the limit, save the current
          * value so that we can restore it once memory locking is no longer
          * required. Failing to obtain the current limit is not a critical
@@ -6381,7 +6366,6 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
             if (virProcessGetMaxMemLock(vm->pid, &(vm->original_memlock)) < 0)
                 vm->original_memlock = 0;
         }
-        bytes = qemuDomainGetMemLockLimitBytes(vm->def);
     } else {
         /* Once memory locking is no longer required, we can restore the
          * original, usually very low, limit */
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 1f266bf..b9c8f16 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -681,7 +681,6 @@ int qemuDomainUpdateCurrentMemorySize(virQEMUDriverPtr driver,
                                       virDomainObjPtr vm);
 
 unsigned long long qemuDomainGetMemLockLimitBytes(virDomainDefPtr def);
-bool qemuDomainRequiresMemLock(virDomainDefPtr def);
 int qemuDomainAdjustMaxMemLock(virDomainObjPtr vm);
 
 int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
-- 
2.7.4
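In short, the new calling convention (a sketch stitched together from
the hunks above, not code from the series itself): zero now means
"leave RLIMIT_MEMLOCK alone", so callers can pass the value straight
through:

    /* hypothetical caller, mirroring the qemu_command.c hunk above */
    static void
    exampleApplyMemLockLimit(virCommandPtr cmd, virDomainDefPtr def)
    {
        /* returns 0 when no limit needs to be set for this guest... */
        unsigned long long bytes = qemuDomainGetMemLockLimitBytes(def);

        /* ...and 0 leaves the default RLIMIT_MEMLOCK untouched when
         * the limit is eventually applied to the spawned process */
        virCommandSetMaxMemLock(cmd, bytes);
    }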

On Thu, Mar 23, 2017 at 07:16:41PM +0100, Andrea Bolognani wrote:
> Instead of having a separate function, we can simply return zero from
> the existing qemuDomainGetMemLockLimitBytes() to signal the caller
> that the memory locking limit doesn't need to be set for the guest.
> Having a single function instead of two makes it less likely that we
> will use the wrong value, which is exactly what happened when we
> started applying the limit that was meant for VFIO-using guests to
> <memoryBacking><locked>-using guests.
[...]

>      for (i = 0; i < def->nhostdevs; i++) {
>          virDomainHostdevDefPtr dev = def->hostdevs[i];
>          if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
>              dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
> -            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
> -            return true;
> +            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
> +            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
Shouldn't this be raising memKB for _each_ host device?
> +        }
>      }
[...]

> @@ -6372,7 +6355,9 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
>      unsigned long long bytes = 0;
>      int ret = -1;
> -    if (qemuDomainRequiresMemLock(vm->def)) {
> +    bytes = qemuDomainGetMemLockLimitBytes(vm->def);
> +
> +    if (bytes) {
>          /* If this is the first time adjusting the limit, save the current
>           * value so that we can restore it once memory locking is no longer
>           * required. Failing to obtain the current limit is not a critical
> @@ -6381,7 +6366,6 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
>              if (virProcessGetMaxMemLock(vm->pid, &(vm->original_memlock)) < 0)
>                  vm->original_memlock = 0;
>          }
> -        bytes = qemuDomainGetMemLockLimitBytes(vm->def);
>      } else {
>          /* Once memory locking is no longer required, we can restore the
>           * original, usually very low, limit */
This function has weird behaviour, even now that it's documented. It
does make sense, it just takes a while to grasp.

On Mon, 2017-03-27 at 14:19 +0200, Martin Kletzander wrote: [...]
> >      for (i = 0; i < def->nhostdevs; i++) {
> >          virDomainHostdevDefPtr dev = def->hostdevs[i];
> > 
> >          if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
> >              dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
> > -            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
> > -            return true;
> > +            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
> > +            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
> 
> Shouldn't this be raising memKB for _each_ host device?
Nope, it's guest memory + 1 GiB regardless of the number of VFIO
devices.

There should be a 'goto done' after setting memKB to quit the loop
early, though. Consider that squashed in.
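With that squashed in, the loop would look roughly like this (same
identifiers as in the patch; a sketch, not the final commit):

    for (i = 0; i < def->nhostdevs; i++) {
        virDomainHostdevDefPtr dev = def->hostdevs[i];

        if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
            dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
            /* guest memory + 1 GiB, no matter how many VFIO devices
             * are present: the amount is not per-device */
            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
            goto done;
        }
    }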
> > @@ -6381,7 +6366,6 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
> >              if (virProcessGetMaxMemLock(vm->pid, &(vm->original_memlock)) < 0)
> >                  vm->original_memlock = 0;
> >          }
> > -        bytes = qemuDomainGetMemLockLimitBytes(vm->def);
> >      } else {
> >          /* Once memory locking is no longer required, we can restore the
> >           * original, usually very low, limit */
> 
> This function has weird behaviour, even now that it's documented. It
> does make sense, it just takes a while to grasp.
Yeah, it's slightly confusing even to me, and I'm the one who wrote
it! If you have any suggestions on how to improve it, I'm all ears :)

-- 
Andrea Bolognani / Red Hat / Virtualization

For guests that use <memoryBacking><locked>, our only option is to
remove the memory locking limit altogether.
---
 src/qemu/qemu_domain.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2e2ba4f..01681a5 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6220,10 +6220,13 @@ qemuDomainGetMemLockLimitBytes(virDomainDefPtr def)
         goto done;
     }
 
-    if (def->mem.locked) {
-        memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
-        goto done;
-    }
+    /* If the guest wants its memory to be locked, we need to raise the memory
+     * locking limit so that the OS will not refuse allocation requests;
+     * however, there is no reliable way for us to figure out how much memory
+     * the QEMU process will allocate for its own use, so our only way out is
+     * to remove the limit altogether. Use with extreme care */
+    if (def->mem.locked)
+        return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
 
     if (ARCH_IS_PPC64(def->os.arch) && def->virtType == VIR_DOMAIN_VIRT_KVM) {
         unsigned long long maxMemory;
-- 
2.7.4

On Thu, Mar 23, 2017 at 07:16:42PM +0100, Andrea Bolognani wrote:
> For guests that use <memoryBacking><locked>, our only option is to
> remove the memory locking limit altogether.
[...]
> -    if (def->mem.locked) {
> -        memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
> -        goto done;
> -    }
> +    /* If the guest wants its memory to be locked, we need to raise the memory
> +     * locking limit so that the OS will not refuse allocation requests;
> +     * however, there is no reliable way for us to figure out how much memory
> +     * the QEMU process will allocate for its own use, so our only way out is
> +     * to remove the limit altogether. Use with extreme care */
> +    if (def->mem.locked)
> +        return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
So there is no way to limit the size of the memlock, other than
setting the hard limit? Are you planning on adding a new element to
the domain XML which would allow setting this number as well?

On Mon, 2017-03-27 at 14:24 +0200, Martin Kletzander wrote: [...]
> > [...]
> > +    if (def->mem.locked)
> > +        return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
> 
> So there is no way to limit the size of the memlock, other than
> setting the hard limit?
Correct.
> Are you planning on adding a new element to the domain XML which
> would allow setting this number as well?
I do. Unless I forget about it again, of course :)

-- 
Andrea Bolognani / Red Hat / Virtualization

On Mon, Mar 27, 2017 at 03:17:12PM +0200, Andrea Bolognani wrote:
> On Mon, 2017-03-27 at 14:24 +0200, Martin Kletzander wrote:
> [...]
> > > [...]
> > > +    if (def->mem.locked)
> > > +        return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
> > 
> > So there is no way to limit the size of the memlock, other than
> > setting the hard limit?
> Correct.
> > Are you planning on adding a new element to the domain XML which
> > would allow setting this number as well?
> I do. Unless I forget about it again, of course :)
Well, honestly, then I feel really bad about forcing people to make
different choices and changing it between releases. I don't think
anyone wants to be checking all documentation changes every release.
But since there are only two releases with the patch in, it's probably
okay. Still, it would *at least* be nice to mention that the lock
limit is on its way to the XML.

On Mon, 2017-03-27 at 15:42 +0200, Martin Kletzander wrote:
> > > Are you planning on adding a new element to the domain XML which
> > > would allow setting this number as well?
> > 
> > I do. Unless I forget about it again, of course :)
> 
> Well, honestly, then I feel really bad about forcing people to make
> different choices and changing it between releases. I don't think
> anyone wants to be checking all documentation changes every release.
> But since there are only two releases with the patch in, it's
> probably okay. Still, it would *at least* be nice to mention that
> the lock limit is on its way to the XML.
I'm not going to advertise a feature before it has materialized: this
is not Hollywood, we don't need to get people hyped using teasers and
trailers ;)

Moreover, I'm actually starting to question the utility of such a
feature altogether.

If you're using <memoryBacking><locked>, all the memory allocated by
QEMU is going to be locked, so there's no point in setting the memory
locking limit to anything but <memtune><hard_limit>.

For all other cases our calculations should be correct, and if it
turns out that they aren't, we want people to get back to us and let
us know, so we can fix it for everybody rather than work around the
bug using the new setting.

-- 
Andrea Bolognani / Red Hat / Virtualization

On Mon, Mar 27, 2017 at 04:24:57PM +0200, Andrea Bolognani wrote:
> On Mon, 2017-03-27 at 15:42 +0200, Martin Kletzander wrote:
> > [...]
> I'm not going to advertise a feature before it has materialized:
> this is not Hollywood, we don't need to get people hyped using
> teasers and trailers ;)
> Moreover, I'm actually starting to question the utility of such a
> feature altogether.
> If you're using <memoryBacking><locked>, all the memory allocated by
> QEMU is going to be locked, so there's no point in setting the
> memory locking limit to anything but <memtune><hard_limit>.
Since we can only lock the whole memory, it makes sense.

The value we use internally to represent the lack of a memory locking
limit, VIR_DOMAIN_MEMORY_PARAM_UNLIMITED, doesn't match the value
setrlimit() and prlimit() use for the same purpose, RLIM_INFINITY, so
we have to handle the translation ourselves.
---
 src/util/virprocess.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/src/util/virprocess.c b/src/util/virprocess.c
index 16eb412..1fbbbb3 100644
--- a/src/util/virprocess.c
+++ b/src/util/virprocess.c
@@ -747,7 +747,15 @@ virProcessSetMaxMemLock(pid_t pid, unsigned long long bytes)
     if (bytes == 0)
         return 0;
 
-    rlim.rlim_cur = rlim.rlim_max = bytes;
+    /* We use VIR_DOMAIN_MEMORY_PARAM_UNLIMITED internally to represent
+     * unlimited memory amounts, but setrlimit() and prlimit() use
+     * RLIM_INFINITY for the same purpose, so we need to translate between
+     * the two conventions */
+    if (virMemoryLimitIsSet(bytes))
+        rlim.rlim_cur = rlim.rlim_max = bytes;
+    else
+        rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
+
     if (pid == 0) {
         if (setrlimit(RLIMIT_MEMLOCK, &rlim) < 0) {
             virReportSystemError(errno,
@@ -810,8 +818,14 @@ virProcessGetMaxMemLock(pid_t pid,
     }
 
     /* virProcessSetMaxMemLock() sets both rlim_cur and rlim_max to the
-     * same value, so we can retrieve just rlim_max here */
-    *bytes = rlim.rlim_max;
+     * same value, so we can retrieve just rlim_max here. We use
+     * VIR_DOMAIN_MEMORY_PARAM_UNLIMITED internally to represent unlimited
+     * memory amounts, but setrlimit() and prlimit() use RLIM_INFINITY for the
+     * same purpose, so we need to translate between the two conventions */
+    if (rlim.rlim_max == RLIM_INFINITY)
+        *bytes = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
+    else
+        *bytes = rlim.rlim_max;
 
     return 0;
 }
-- 
2.7.4

On Thu, 23 Mar 2017 19:16:43 +0100 Andrea Bolognani <abologna@redhat.com> wrote:
> The value we use internally to represent the lack of a memory
> locking limit, VIR_DOMAIN_MEMORY_PARAM_UNLIMITED, doesn't match the
> value setrlimit() and prlimit() use for the same purpose,
> RLIM_INFINITY, so we have to handle the translation ourselves.

[...]
> @@ -747,7 +747,15 @@ virProcessSetMaxMemLock(pid_t pid, unsigned long long bytes)
>      if (bytes == 0)
>          return 0;
> -    rlim.rlim_cur = rlim.rlim_max = bytes;
> +    /* We use VIR_DOMAIN_MEMORY_PARAM_UNLIMITED internally to represent
> +     * unlimited memory amounts, but setrlimit() and prlimit() use
> +     * RLIM_INFINITY for the same purpose, so we need to translate between
> +     * the two conventions */
> +    if (virMemoryLimitIsSet(bytes))
> +        rlim.rlim_cur = rlim.rlim_max = bytes;
> +    else
> +        rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
I know I'm not very smart, but I had trouble parsing this. What about:

    if (virMemoryLimitIsInfinity(bytes))
        rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
    ...

This reads better, and avoids using virMemoryLimitIsSet(), which seems
very error-prone: it doesn't check for zero, and it's strange that
"limit < infinity" means "limit is set".

On Fri, 2017-03-24 at 13:47 -0400, Luiz Capitulino wrote:
> > [...]
> > +    if (virMemoryLimitIsSet(bytes))
> > +        rlim.rlim_cur = rlim.rlim_max = bytes;
> > +    else
> > +        rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
> 
> I know I'm not very smart, but I had trouble parsing this. What
> about:
> 
>     if (virMemoryLimitIsInfinity(bytes))
>         rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
>     ...
> 
> This reads better, and avoids using virMemoryLimitIsSet(), which
> seems very error-prone: it doesn't check for zero, and it's strange
> that "limit < infinity" means "limit is set".
I would love to have something like that, and in fact I spent a
significant amount of time trying to clean up the way libvirt stores
and represents memory limits. The short version is: not gonna
happen ;)

Moreover, while the current API looks poorly named in this context,
it's also used for other memory limits, and the names make more sense
there, so it's not a terrible API when you look at the big picture.

The case where bytes is zero is accounted for, though. You can see it
right at the beginning of the hunk.

-- 
Andrea Bolognani / Red Hat / Virtualization
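For context, virMemoryLimitIsSet() boils down to a comparison against
the "unlimited" sentinel, which is where the "limit < infinity means
limit is set" reading above comes from (paraphrased from libvirt's
utility helpers, so treat the exact shape as an assumption):

    bool
    virMemoryLimitIsSet(unsigned long long value)
    {
        return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }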

This will be used later on in the test suite.
---
 src/libvirt_private.syms | 1 +
 src/util/vircommand.c    | 9 +++++++++
 src/util/vircommand.h    | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index a1c7624..a978d61 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1433,6 +1433,7 @@ virCommandDaemonize;
 virCommandDoAsyncIO;
 virCommandExec;
 virCommandFree;
+virCommandGetMaxMemLock;
 virCommandHandshakeNotify;
 virCommandHandshakeWait;
 virCommandNew;
diff --git a/src/util/vircommand.c b/src/util/vircommand.c
index e1bbc05..af97049 100644
--- a/src/util/vircommand.c
+++ b/src/util/vircommand.c
@@ -1083,6 +1083,15 @@ virCommandSetUID(virCommandPtr cmd, uid_t uid)
     cmd->uid = uid;
 }
 
+unsigned long long
+virCommandGetMaxMemLock(virCommandPtr cmd)
+{
+    if (!cmd || cmd->has_error)
+        return 0;
+
+    return cmd->maxMemLock;
+}
+
 void
 virCommandSetMaxMemLock(virCommandPtr cmd, unsigned long long bytes)
 {
diff --git a/src/util/vircommand.h b/src/util/vircommand.h
index 99dcdeb..0fb4c8d 100644
--- a/src/util/vircommand.h
+++ b/src/util/vircommand.h
@@ -72,6 +72,8 @@ void virCommandSetGID(virCommandPtr cmd, gid_t gid);
 
 void virCommandSetUID(virCommandPtr cmd, uid_t uid);
 
+unsigned long long virCommandGetMaxMemLock(virCommandPtr cmd);
+
 void virCommandSetMaxMemLock(virCommandPtr cmd, unsigned long long bytes);
 void virCommandSetMaxProcesses(virCommandPtr cmd, unsigned int procs);
 void virCommandSetMaxFiles(virCommandPtr cmd, unsigned int files);
-- 
2.7.4
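The intended use shows up in the test introduced later in this series:
the getter simply exposes whatever a previous virCommandSetMaxMemLock()
call stored in cmd->maxMemLock, so a test can compare it against the
expected limit:

    /* from qemumemlocktest.c, added in a later patch of this series */
    if (virTestCompareToULL(info->memlock, virCommandGetMaxMemLock(cmd)) < 0)
        goto cleanup;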

This will be used later on in the test suite.
---
 tests/testutils.c | 27 +++++++++++++++++++++++++++
 tests/testutils.h |  2 ++
 2 files changed, 29 insertions(+)

diff --git a/tests/testutils.c b/tests/testutils.c
index 13eff9e..f3feb6d 100644
--- a/tests/testutils.c
+++ b/tests/testutils.c
@@ -722,6 +722,33 @@ virTestCompareToFile(const char *strcontent,
 }
 
 /*
+ * @param content: Input content
+ * @param src: Source to compare @content against
+ */
+int
+virTestCompareToULL(unsigned long long content,
+                    unsigned long long src)
+{
+    char *strcontent = NULL;
+    char *strsrc = NULL;
+    int ret = -1;
+
+    if (virAsprintf(&strcontent, "%llu", content) < 0)
+        goto cleanup;
+
+    if (virAsprintf(&strsrc, "%llu", src) < 0)
+        goto cleanup;
+
+    ret = virTestCompareToString(strcontent, strsrc);
+
+ cleanup:
+    VIR_FREE(strcontent);
+    VIR_FREE(strsrc);
+
+    return ret;
+}
+
+/*
  * @param strcontent: String input content
  * @param strsrc: String source to compare strcontent against
  */
diff --git a/tests/testutils.h b/tests/testutils.h
index c7c641c..c16fe6c 100644
--- a/tests/testutils.h
+++ b/tests/testutils.h
@@ -77,6 +77,8 @@ int virTestCompareToFile(const char *strcontent,
                          const char *filename);
 int virTestCompareToString(const char *strcontent,
                            const char *strsrc);
+int virTestCompareToULL(unsigned long long content,
+                        unsigned long long src);
 
 unsigned int virTestGetDebug(void);
 unsigned int virTestGetVerbose(void);
-- 
2.7.4

These tests cover a number of scenarios where we care about the memory
locking limit being set correctly for the guest to work properly.
---
 tests/Makefile.am                                  |   9 ++
 .../qemumemlock-pc-hardlimit+hostdev.xml           |  20 +++
 .../qemumemlock-pc-hardlimit+locked+hostdev.xml    |  23 +++
 .../qemumemlock-pc-hardlimit+locked.xml            |  17 ++
 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml |  14 ++
 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml   |  17 ++
 tests/qemumemlockdata/qemumemlock-pc-kvm.xml       |  11 ++
 .../qemumemlock-pc-locked+hostdev.xml              |  20 +++
 tests/qemumemlockdata/qemumemlock-pc-locked.xml    |  14 ++
 tests/qemumemlockdata/qemumemlock-pc-tcg.xml       |  11 ++
 .../qemumemlock-pseries-hardlimit+hostdev.xml      |  20 +++
 ...emumemlock-pseries-hardlimit+locked+hostdev.xml |  23 +++
 .../qemumemlock-pseries-hardlimit+locked.xml       |  17 ++
 .../qemumemlock-pseries-hardlimit.xml              |  14 ++
 .../qemumemlock-pseries-hostdev.xml                |  17 ++
 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml  |  11 ++
 .../qemumemlock-pseries-locked+hostdev.xml         |  20 +++
 .../qemumemlockdata/qemumemlock-pseries-locked.xml |  14 ++
 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml  |  11 ++
 tests/qemumemlocktest.c                            | 172 +++++++++++++++++++++
 20 files changed, 475 insertions(+)
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-kvm.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pc-tcg.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked.xml
 create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml
 create mode 100644 tests/qemumemlocktest.c

diff --git a/tests/Makefile.am b/tests/Makefile.am
index af69a3a..ccf5ee1 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -127,6 +127,7 @@ EXTRA_DIST = \
 	qemumonitorjsondata \
 	qemuxml2argvdata \
 	qemuxml2xmloutdata \
+	qemumemlockdata \
 	secretxml2xmlin \
 	securityselinuxhelperdata \
 	securityselinuxlabeldata \
@@ -276,6 +277,7 @@ test_programs += qemuxml2argvtest qemuxml2xmltest \
 	qemuargv2xmltest qemuhelptest domainsnapshotxml2xmltest \
 	qemumonitortest qemumonitorjsontest qemuhotplugtest \
 	qemuagenttest qemucapabilitiestest qemucaps2xmltest \
+	qemumemlocktest \
 	qemucommandutiltest
 test_helpers += qemucapsprobe
 test_libraries += libqemumonitortestutils.la \
@@ -658,6 +660,12 @@ domainsnapshotxml2xmltest_SOURCES = \
 	domainsnapshotxml2xmltest.c testutilsqemu.c testutilsqemu.h \
 	testutils.c testutils.h
 domainsnapshotxml2xmltest_LDADD = $(qemu_LDADDS) $(LDADDS)
+
+qemumemlocktest_SOURCES = \
+	qemumemlocktest.c \
+	testutilsqemu.c testutilsqemu.h \
+	testutils.c testutils.h
+qemumemlocktest_LDADD = $(qemu_LDADDS) $(LDADDS)
 else ! WITH_QEMU
 EXTRA_DIST += qemuxml2argvtest.c qemuxml2xmltest.c qemuargv2xmltest.c \
 	qemuhelptest.c domainsnapshotxml2xmltest.c \
@@ -665,6 +673,7 @@ EXTRA_DIST += qemuxml2argvtest.c qemuxml2xmltest.c qemuargv2xmltest.c \
 	qemumonitorjsontest.c qemuhotplugtest.c \
 	qemuagenttest.c qemucapabilitiestest.c \
 	qemucaps2xmltest.c qemucommandutiltest.c \
+	qemumemlocktest.c \
 	$(QEMUMONITORTESTUTILS_SOURCES)
 endif ! WITH_QEMU
diff --git a/tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml
new file mode 100644
index 0000000..5443145
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml
@@ -0,0 +1,20 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml
new file mode 100644
index 0000000..8184eef
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml
@@ -0,0 +1,23 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml
new file mode 100644
index 0000000..78bee62
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml
@@ -0,0 +1,17 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml b/tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml
new file mode 100644
index 0000000..b1e3867
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml
@@ -0,0 +1,14 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-hostdev.xml b/tests/qemumemlockdata/qemumemlock-pc-hostdev.xml
new file mode 100644
index 0000000..6c058a9
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-hostdev.xml
@@ -0,0 +1,17 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-kvm.xml b/tests/qemumemlockdata/qemumemlock-pc-kvm.xml
new file mode 100644
index 0000000..7fa4e24
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-kvm.xml
@@ -0,0 +1,11 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml
new file mode 100644
index 0000000..fbc1dc3
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml
@@ -0,0 +1,20 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-locked.xml b/tests/qemumemlockdata/qemumemlock-pc-locked.xml
new file mode 100644
index 0000000..febb036
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-locked.xml
@@ -0,0 +1,14 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pc-tcg.xml b/tests/qemumemlockdata/qemumemlock-pc-tcg.xml
new file mode 100644
index 0000000..1934269
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pc-tcg.xml
@@ -0,0 +1,11 @@
+<domain type='qemu'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml
new file mode 100644
index 0000000..ddd3b47
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml
@@ -0,0 +1,20 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml
new file mode 100644
index 0000000..73c28c1
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml
@@ -0,0 +1,23 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml
new file mode 100644
index 0000000..fe984a2
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml
@@ -0,0 +1,17 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml
new file mode 100644
index 0000000..b23de1d
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml
@@ -0,0 +1,14 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memtune>
+    <hard_limit unit='KiB'>2097152</hard_limit>
+  </memtune>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml b/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
new file mode 100644
index 0000000..daf70a4
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
@@ -0,0 +1,17 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml b/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
new file mode 100644
index 0000000..5a065da
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
@@ -0,0 +1,11 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml b/tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml
new file mode 100644
index 0000000..74212f1
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml
@@ -0,0 +1,20 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-locked.xml b/tests/qemumemlockdata/qemumemlock-pseries-locked.xml
new file mode 100644
index 0000000..befaefd
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-locked.xml
@@ -0,0 +1,14 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <memoryBacking>
+    <locked/>
+  </memoryBacking>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-tcg.xml b/tests/qemumemlockdata/qemumemlock-pseries-tcg.xml
new file mode 100644
index 0000000..a3b03dd
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-tcg.xml
@@ -0,0 +1,11 @@
+<domain type='qemu'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
diff --git a/tests/qemumemlocktest.c b/tests/qemumemlocktest.c
new file mode 100644
index 0000000..9e65622
--- /dev/null
+++ b/tests/qemumemlocktest.c
@@ -0,0 +1,172 @@
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include <sys/types.h>
+#include <fcntl.h>
+
+#include "testutils.h"
+
+#ifdef WITH_QEMU
+
+# include "datatypes.h"
+# include "internal.h"
+# include "virstring.h"
+# include "conf/domain_conf.h"
+# include "qemu/qemu_capabilities.h"
+# include "qemu/qemu_process.h"
+
+# include "testutilsqemu.h"
+
+# define VIR_FROM_THIS VIR_FROM_QEMU
+
+static const char *abs_top_srcdir;
+static virQEMUDriver driver;
+
+struct testInfo {
+    const char *name;
+    unsigned long long memlock;
+};
+
+static int
+testCompareMemLock(const void *data)
+{
+    const struct testInfo *info = data;
+    virConnectPtr conn = NULL;
+    virDomainObjPtr vm = NULL;
+    virCommandPtr cmd = NULL;
+    char *xml = NULL;
+    int ret = -1;
+
+    if (!(conn = virGetConnect()))
+        goto cleanup;
+
+    if (virAsprintf(&xml, "%s/qemumemlockdata/qemumemlock-%s.xml",
+                    abs_srcdir, info->name) < 0)
+        goto cleanup;
+
+    if (!(vm = virDomainObjNew(driver.xmlopt)))
+        goto cleanup;
+
+    if (!(vm->def = virDomainDefParseFile(xml, driver.caps, driver.xmlopt, NULL,
+                                          VIR_DOMAIN_DEF_PARSE_INACTIVE))) {
+        goto cleanup;
+    }
+
+    if (!(cmd = qemuProcessCreatePretendCmd(conn, &driver, vm, NULL, 0, false,
+                                            VIR_QEMU_PROCESS_START_COLD))) {
+        goto cleanup;
+    }
+
+    if (virTestCompareToULL(info->memlock, virCommandGetMaxMemLock(cmd)) < 0)
+        goto cleanup;
+
+    ret = 0;
+
+ cleanup:
+    virCommandFree(cmd);
+    virObjectUnref(vm);
+    virObjectUnref(conn);
+    VIR_FREE(xml);
+
+    return ret;
+}
+
+
+int
+main(void)
+{
+    virQEMUCapsPtr qemuCaps = NULL;
+    int ret = 0;
+
+    abs_top_srcdir = getenv("abs_top_srcdir");
+    if (!abs_top_srcdir)
+        abs_top_srcdir = abs_srcdir "/..";
+
+    if (qemuTestDriverInit(&driver) < 0)
+        return EXIT_FAILURE;
+
+    driver.privileged = true;
+
+    /* QEMU capabilites are the same for all tests */
+    if (!(qemuCaps = virQEMUCapsNew()))
+        return EXIT_FAILURE;
+
+    virQEMUCapsSetList(qemuCaps,
+                       QEMU_CAPS_KVM, QEMU_CAPS_REALTIME_MLOCK,
+                       QEMU_CAPS_DEVICE_VFIO_PCI);
+
+    if (qemuTestCapsCacheInsert(driver.qemuCapsCache, "qemumemlock", qemuCaps) < 0)
+        return EXIT_FAILURE;
+
+    virObjectUnref(qemuCaps);
+
+# define DO_TEST(name, memlock) \
+    do { \
+        static struct testInfo info = { \
+            name, memlock \
+        }; \
+        if (virTestRun("QEMU MEMLOCK " name, testCompareMemLock, &info) < 0) \
+            ret = -1; \
+    } while (0)
+
+    /* The tests below make sure that the memory locking limit is being
+     * calculated correctly in a number of situations. Each test is
+     * performed both on x86_64/pc and ppc64/pseries in order to account
+     * for some architecture-specific details.
+     *
+     *   kvm: simple KMV guest
+     *   tcg: simple TCG guest
+     *
+     *   hardlimit: guest where <memtune><hard_limit> has been configured
+     *   locked:    guest where <memoryBacking><locked> has been enabled
+     *   hostdev:   guest that has some hostdev assigned
+     *
+     * The remaining tests cover different combinations of the above to
+     * ensure settings are prioritized as expected. */
+
+    qemuTestSetHostArch(driver.caps, VIR_ARCH_X86_64);
+
+    DO_TEST("pc-kvm", 0);
+    DO_TEST("pc-tcg", 0);
+
+    DO_TEST("pc-hardlimit", 2147483648);
+    DO_TEST("pc-locked", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+    DO_TEST("pc-hostdev", 2147483648);
+
+    DO_TEST("pc-hardlimit+locked", 2147483648);
+    DO_TEST("pc-hardlimit+hostdev", 2147483648);
+    DO_TEST("pc-hardlimit+locked+hostdev", 2147483648);
+    DO_TEST("pc-locked+hostdev", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+
+    qemuTestSetHostArch(driver.caps, VIR_ARCH_PPC64);
+
+    DO_TEST("pseries-kvm", 20971520);
+    DO_TEST("pseries-tcg", 0);
+
+    DO_TEST("pseries-hardlimit", 2147483648);
+    DO_TEST("pseries-locked", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+    DO_TEST("pseries-hostdev", 2168455168);
+
+    DO_TEST("pseries-hardlimit+locked", 2147483648);
+    DO_TEST("pseries-hardlimit+hostdev", 2147483648);
+    DO_TEST("pseries-hardlimit+locked+hostdev", 2147483648);
+    DO_TEST("pseries-locked+hostdev", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+
+    qemuTestDriverFree(&driver);
+
+    return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
+}
+
+#else
+
+int main(void)
+{
+    return EXIT_AM_SKIP;
+}
+
+#endif /* WITH_QEMU */
-- 
2.7.4

On Thu, 2017-03-23 at 19:16 +0100, Andrea Bolognani wrote:
These tests cover a number of scenarios where we care about the memory locking limit being set correctly for the guest to work properly.
[...]
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml b/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
new file mode 100644
index 0000000..daf70a4
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml
@@ -0,0 +1,17 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+    <hostdev mode='subsystem' type='pci' managed='yes'>
+      <driver name='vfio'/>
+      <source>
+        <address domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
+      </source>
+    </hostdev>
+  </devices>
+</domain>
diff --git a/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml b/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
new file mode 100644
index 0000000..5a065da
--- /dev/null
+++ b/tests/qemumemlockdata/qemumemlock-pseries-kvm.xml
@@ -0,0 +1,11 @@
+<domain type='kvm'>
+  <name>guest</name>
+  <memory unit='KiB'>1048576</memory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='ppc64' machine='pseries'>hvm</type>
+  </os>
+  <devices>
+    <emulator>/usr/bin/qemu-system-ppc64</emulator>
+  </devices>
+</domain>
[...]
+    qemuTestSetHostArch(driver.caps, VIR_ARCH_PPC64);
+
+    DO_TEST("pseries-kvm", 20971520);
[...]
+    DO_TEST("pseries-hostdev", 2168455168);
Hi David, since I'm finally introducing test suite coverage for this, it would be great if you could double-check that the values we're going to test against actually make sense :) You can see the configuration for the guests above: they both have 1 GiB of RAM and use KVM, but one of the two is using device assignment. The sizes are in bytes, so that's ~20 MiB for the basic guest and ~2 GiB for the one using device assignment. For reference, the code doing the calculation is https://github.com/libvirt/libvirt/blob/9b93c4c26483308371aae3ae30bf5536c88b... -- Andrea Bolognani / Red Hat / Virtualization
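If it helps, the numbers decompose like this — my assumption being that the hostdev case is simply the base pseries limit plus a fixed 2 GiB passthrough allowance, which is exactly the part I'd like you to confirm:

    #include <assert.h>

    int
    main(void)
    {
        /* All values in bytes; the guest has 1 GiB of RAM in both cases */
        unsigned long long base = 20ULL * 1024 * 1024;          /* 20 MiB */
        unsigned long long passthrough = 2048ULL * 1024 * 1024; /* 2 GiB  */

        assert(base == 20971520ULL);                 /* pseries-kvm     */
        assert(base + passthrough == 2168455168ULL); /* pseries-hostdev */
        return 0;
    }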

On Thu, Mar 23, 2017 at 07:16:46PM +0100, Andrea Bolognani wrote:
These tests cover a number of scenarios where we care about the memory locking limit being set correctly for the guest to work properly. --- tests/Makefile.am | 9 ++ .../qemumemlock-pc-hardlimit+hostdev.xml | 20 +++ .../qemumemlock-pc-hardlimit+locked+hostdev.xml | 23 +++ .../qemumemlock-pc-hardlimit+locked.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pc-hostdev.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pc-kvm.xml | 11 ++ .../qemumemlock-pc-locked+hostdev.xml | 20 +++ tests/qemumemlockdata/qemumemlock-pc-locked.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pc-tcg.xml | 11 ++ .../qemumemlock-pseries-hardlimit+hostdev.xml | 20 +++ ...emumemlock-pseries-hardlimit+locked+hostdev.xml | 23 +++ .../qemumemlock-pseries-hardlimit+locked.xml | 17 ++ .../qemumemlock-pseries-hardlimit.xml | 14 ++ .../qemumemlock-pseries-hostdev.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pseries-kvm.xml | 11 ++ .../qemumemlock-pseries-locked+hostdev.xml | 20 +++ .../qemumemlockdata/qemumemlock-pseries-locked.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pseries-tcg.xml | 11 ++ tests/qemumemlocktest.c | 172 +++++++++++++++++++++ 20 files changed, 475 insertions(+) create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-kvm.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-tcg.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml create mode 100644 tests/qemumemlocktest.c
[...]
diff --git a/tests/qemumemlocktest.c b/tests/qemumemlocktest.c
new file mode 100644
index 0000000..9e65622
--- /dev/null
+++ b/tests/qemumemlocktest.c
@@ -0,0 +1,172 @@
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include <sys/types.h>
+#include <fcntl.h>
+
+#include "testutils.h"
+
+#ifdef WITH_QEMU
+
+# include "datatypes.h"
+# include "internal.h"
+# include "virstring.h"
+# include "conf/domain_conf.h"
+# include "qemu/qemu_capabilities.h"
+# include "qemu/qemu_process.h"
+
+# include "testutilsqemu.h"
+
+# define VIR_FROM_THIS VIR_FROM_QEMU
+
+static const char *abs_top_srcdir;
+static virQEMUDriver driver;
+
+struct testInfo {
+    const char *name;
+    unsigned long long memlock;
+};
+
+static int
+testCompareMemLock(const void *data)
+{
+    const struct testInfo *info = data;
+    virConnectPtr conn = NULL;
+    virDomainObjPtr vm = NULL;
+    virCommandPtr cmd = NULL;
+    char *xml = NULL;
+    int ret = -1;
+
+    if (!(conn = virGetConnect()))
+        goto cleanup;
+
+    if (virAsprintf(&xml, "%s/qemumemlockdata/qemumemlock-%s.xml",
+                    abs_srcdir, info->name) < 0)
+        goto cleanup;
+
+    if (!(vm = virDomainObjNew(driver.xmlopt)))
+        goto cleanup;
+
+    if (!(vm->def = virDomainDefParseFile(xml, driver.caps, driver.xmlopt, NULL,
+                                          VIR_DOMAIN_DEF_PARSE_INACTIVE))) {
+        goto cleanup;
+    }
+
+    if (!(cmd = qemuProcessCreatePretendCmd(conn, &driver, vm, NULL, 0, false,
+                                            VIR_QEMU_PROCESS_START_COLD))) {
+        goto cleanup;
+    }
+
Going through all this just to check one number seems like too much. And it doesn't actually test much more than a simple call to qemuDomainGetMemLockLimitBytes(def) would.
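Something along these lines (an untested sketch; it assumes qemuDomainGetMemLockLimitBytes() from qemu/qemu_domain.h can be called directly from the test, and it plugs into the same file in place of the function above) would cover the same calculation with a lot less machinery:

    static int
    testCompareMemLock(const void *data)
    {
        const struct testInfo *info = data;
        virDomainDefPtr def = NULL;
        char *xml = NULL;
        int ret = -1;

        if (virAsprintf(&xml, "%s/qemumemlockdata/qemumemlock-%s.xml",
                        abs_srcdir, info->name) < 0)
            goto cleanup;

        if (!(def = virDomainDefParseFile(xml, driver.caps, driver.xmlopt, NULL,
                                          VIR_DOMAIN_DEF_PARSE_INACTIVE)))
            goto cleanup;

        /* Compare the calculated limit directly instead of going through
         * qemuProcessCreatePretendCmd() and virCommandGetMaxMemLock() */
        ret = virTestCompareToULL(info->memlock,
                                  qemuDomainGetMemLockLimitBytes(def));

     cleanup:
        virDomainDefFree(def);
        VIR_FREE(xml);
        return ret;
    }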
+    if (virTestCompareToULL(info->memlock, virCommandGetMaxMemLock(cmd)) < 0)
+        goto cleanup;
+
+    ret = 0;
+
+ cleanup:
+    virCommandFree(cmd);
+    virObjectUnref(vm);
+    virObjectUnref(conn);
+    VIR_FREE(xml);
+
+    return ret;
+}
+
+
+int
+main(void)
Use different name (e.g. mymain) and VIRT_TEST_MAIN()
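i.e. the usual pattern used by the other tests (sketch):

    static int
    mymain(void)
    {
        int ret = 0;

        /* ... driver setup and DO_TEST() invocations go here, unchanged ... */

        return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

    VIRT_TEST_MAIN(mymain)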
+{
+    virQEMUCapsPtr qemuCaps = NULL;
+    int ret = 0;
+
+    abs_top_srcdir = getenv("abs_top_srcdir");
+    if (!abs_top_srcdir)
+        abs_top_srcdir = abs_srcdir "/..";
+
+    if (qemuTestDriverInit(&driver) < 0)
+        return EXIT_FAILURE;
+
+    driver.privileged = true;
+
+    /* QEMU capabilities are the same for all tests */
+    if (!(qemuCaps = virQEMUCapsNew()))
+        return EXIT_FAILURE;
+
+    virQEMUCapsSetList(qemuCaps,
+                       QEMU_CAPS_KVM, QEMU_CAPS_REALTIME_MLOCK,
+                       QEMU_CAPS_DEVICE_VFIO_PCI);
You should add QEMU_CAPS_LAST here. Do you even need capabilities with the upper part removed? Other than that it looks fine.
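i.e. terminate the variadic list with the sentinel:

    virQEMUCapsSetList(qemuCaps,
                       QEMU_CAPS_KVM, QEMU_CAPS_REALTIME_MLOCK,
                       QEMU_CAPS_DEVICE_VFIO_PCI,
                       QEMU_CAPS_LAST);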
+
+    if (qemuTestCapsCacheInsert(driver.qemuCapsCache, "qemumemlock", qemuCaps) < 0)
+        return EXIT_FAILURE;
+
+    virObjectUnref(qemuCaps);
+
+# define DO_TEST(name, memlock) \
+    do { \
+        static struct testInfo info = { \
+            name, memlock \
+        }; \
+        if (virTestRun("QEMU MEMLOCK " name, testCompareMemLock, &info) < 0) \
+            ret = -1; \
+    } while (0)
+
+    /* The tests below make sure that the memory locking limit is being
+     * calculated correctly in a number of situations. Each test is
+     * performed both on x86_64/pc and ppc64/pseries in order to account
+     * for some architecture-specific details.
+     *
+     * kvm: simple KVM guest
+     * tcg: simple TCG guest
+     *
+     * hardlimit: guest where <memtune><hard_limit> has been configured
+     * locked: guest where <memoryBacking><locked> has been enabled
+     * hostdev: guest that has some hostdev assigned
+     *
+     * The remaining tests cover different combinations of the above to
+     * ensure settings are prioritized as expected.
+     */
+
+    qemuTestSetHostArch(driver.caps, VIR_ARCH_X86_64);
+
+    DO_TEST("pc-kvm", 0);
+    DO_TEST("pc-tcg", 0);
+
+    DO_TEST("pc-hardlimit", 2147483648);
+    DO_TEST("pc-locked", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+    DO_TEST("pc-hostdev", 2147483648);
+
+    DO_TEST("pc-hardlimit+locked", 2147483648);
+    DO_TEST("pc-hardlimit+hostdev", 2147483648);
+    DO_TEST("pc-hardlimit+locked+hostdev", 2147483648);
+    DO_TEST("pc-locked+hostdev", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+
+    qemuTestSetHostArch(driver.caps, VIR_ARCH_PPC64);
+
+    DO_TEST("pseries-kvm", 20971520);
+    DO_TEST("pseries-tcg", 0);
+
+    DO_TEST("pseries-hardlimit", 2147483648);
+    DO_TEST("pseries-locked", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+    DO_TEST("pseries-hostdev", 2168455168);
+
+    DO_TEST("pseries-hardlimit+locked", 2147483648);
+    DO_TEST("pseries-hardlimit+hostdev", 2147483648);
+    DO_TEST("pseries-hardlimit+locked+hostdev", 2147483648);
+    DO_TEST("pseries-locked+hostdev", VIR_DOMAIN_MEMORY_PARAM_UNLIMITED);
+
+    qemuTestDriverFree(&driver);
+
+    return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
+}
+
+#else
+
+int main(void)
+{
+    return EXIT_AM_SKIP;
+}
+
+#endif /* WITH_QEMU */
--
2.7.4

On Mon, 2017-03-27 at 15:26 +0200, Martin Kletzander wrote: [...]
+    if (!(cmd = qemuProcessCreatePretendCmd(conn, &driver, vm, NULL, 0, false,
+                                            VIR_QEMU_PROCESS_START_COLD))) {
+        goto cleanup;
+    }
+

Going through all this just to check one number seems like too much. And it doesn't actually test much more than a simple call to qemuDomainGetMemLockLimitBytes(def) would.
You're of course completely right :) [...]
+int
+main(void)

Use different name (e.g. mymain) and VIRT_TEST_MAIN()
Done. -- Andrea Bolognani / Red Hat / Virtualization

---
 docs/formatdomain.html.in | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 4a3123e..5906de6 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -937,14 +937,18 @@
       <dt><code>locked</code></dt>
       <dd>When set and supported by the hypervisor, memory pages belonging
         to the domain will be locked in host's memory and the host will not
-        be allowed to swap them out. For QEMU/KVM this requires
-        <code>hard_limit</code> <a href="#elementsMemoryTuning">memory tuning</a>
-        element to be used and set to the maximum memory configured for the
-        domain plus any memory consumed by the QEMU process itself. Beware of
-        setting the memory limit too high (and thus allowing the domain to lock
-        most of the host's memory). Doing so may be dangerous to both the
-        domain and the host itself since the host's kernel may run out of
-        memory. <span class="since">Since 1.0.6</span></dd>
+        be allowed to swap them out, which might be required for some
+        workloads such as RT. For QEMU/KVM guests, the memory used by the QEMU
+        process itself will be locked too: unlike guest memory, this is an
+        amount libvirt has no way of figuring out in advance, so it has to
+        remove the limit on locked memory altogether. This can be very
+        dangerous as the host might run out of memory and be unable to reclaim
+        it from the guest, so using this option is discouraged unless your
+        workload demands it; even then, it's highly recommended to set a
+        <code>hard_limit</code> (see
+        <a href="#elementsMemoryTuning">memory tuning</a>) on memory allocation
+        suitable for the specific environment at the same time to mitigate
+        the risks described above. <span class="since">Since 1.0.6</span></dd>
       <dt><code>source</code></dt>
       <dd>In this attribute you can switch to file memorybacking or keep
         default anonymous.</dd>
       <dt><code>access</code></dt>
@@ -989,12 +993,18 @@
       <dt><code>hard_limit</code></dt>
       <dd> The optional <code>hard_limit</code> element is the maximum memory
         the guest can use. The units for this value are kibibytes (i.e. blocks
-        of 1024 bytes). <strong>However, users of QEMU and KVM are strongly
-        advised not to set this limit as domain may get killed by the kernel
-        if the guess is too low. To determine the memory needed for a process
-        to run is an
+        of 1024 bytes). Users of QEMU and KVM are strongly advised not to set
+        this limit as the domain may get killed by the kernel if the guess is
+        too low, and determining the memory needed for a process to run is an
         <a href="http://en.wikipedia.org/wiki/Undecidable_problem">
-        undecidable problem</a>.</strong></dd>
+        undecidable problem</a>; that said, if you already set
+        <code>locked</code> in
+        <a href="#elementsMemoryBacking">memory backing</a> because your
+        workload demands it, you'll have to take into account the specifics of
+        your deployment and figure out a value for <code>hard_limit</code> that
+        balances the risk of your guest being killed because the limit was set
+        too low and the risk of your host crashing because it cannot reclaim
+        the memory used by the guest due to <code>locked</code>. Good luck!</dd>
       <dt><code>soft_limit</code></dt>
       <dd> The optional <code>soft_limit</code> element is the memory limit to
         enforce during memory contention. The units for this value are
--
2.7.4
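For reference, a guest that follows the advice above would combine the two knobs along these lines (illustrative values only, not part of the patch):

    <domain type='kvm'>
      <name>guest</name>
      <memory unit='KiB'>1048576</memory>
      <memtune>
        <!-- hard cap for the whole QEMU process, in KiB (2 GiB here) -->
        <hard_limit unit='KiB'>2097152</hard_limit>
      </memtune>
      <memoryBacking>
        <locked/>
      </memoryBacking>
      <vcpu placement='static'>1</vcpu>
      <os>
        <type arch='x86_64' machine='pc'>hvm</type>
      </os>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
      </devices>
    </domain>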

On Thu, 23 Mar 2017 19:16:47 +0100 Andrea Bolognani <abologna@redhat.com> wrote:
--- docs/formatdomain.html.in | 36 +++++++++++++++++++++++------------- 1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 4a3123e..5906de6 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -937,14 +937,18 @@
       <dt><code>locked</code></dt>
       <dd>When set and supported by the hypervisor, memory pages belonging
         to the domain will be locked in host's memory and the host will not
-        be allowed to swap them out. For QEMU/KVM this requires
-        <code>hard_limit</code> <a href="#elementsMemoryTuning">memory tuning</a>
-        element to be used and set to the maximum memory configured for the
-        domain plus any memory consumed by the QEMU process itself. Beware of
-        setting the memory limit too high (and thus allowing the domain to lock
-        most of the host's memory). Doing so may be dangerous to both the
-        domain and the host itself since the host's kernel may run out of
-        memory. <span class="since">Since 1.0.6</span></dd>
+        be allowed to swap them out, which might be required for some
+        workloads such as RT. For QEMU/KVM guests, the memory used by the QEMU
Minor, but I'd do s/RT/real-time. As this doc is for the general population, RT may not be a known term for everyone.
+        process itself will be locked too: unlike guest memory, this is an
+        amount libvirt has no way of figuring out in advance, so it has to
+        remove the limit on locked memory altogether. This can be very
+        dangerous as the host might run out of memory and be unable to reclaim
+        it from the guest,
I'd rewrite this to:

"""
This option has a drawback and poses a possible security risk for the host.
If the host comes under memory pressure, it will be unable to reclaim the
memory locked by this guest, which could cause the host to run out of memory.
In particular, a malicious guest could lock as much memory as it wants,
causing a denial-of-service attack on the host. For setups where this may
have a significant impact, it is highly recommended to use <hard_limit> to
prevent this attack.
"""
so using this option is discouraged unless your
+        workload demands it; even then, it's highly recommended to set a
+        <code>hard_limit</code> (see
+        <a href="#elementsMemoryTuning">memory tuning</a>) on memory allocation
+        suitable for the specific environment at the same time to mitigate
+        the risks described above. <span class="since">Since 1.0.6</span></dd>
       <dt><code>source</code></dt>
       <dd>In this attribute you can switch to file memorybacking or keep
         default anonymous.</dd>
       <dt><code>access</code></dt>
@@ -989,12 +993,18 @@
       <dt><code>hard_limit</code></dt>
       <dd> The optional <code>hard_limit</code> element is the maximum memory
         the guest can use. The units for this value are kibibytes (i.e. blocks
-        of 1024 bytes). <strong>However, users of QEMU and KVM are strongly
-        advised not to set this limit as domain may get killed by the kernel
-        if the guess is too low. To determine the memory needed for a process
-        to run is an
+        of 1024 bytes). Users of QEMU and KVM are strongly advised not to set
+        this limit as the domain may get killed by the kernel if the guess is
+        too low, and determining the memory needed for a process to run is an
         <a href="http://en.wikipedia.org/wiki/Undecidable_problem">
-        undecidable problem</a>.</strong></dd>
+        undecidable problem</a>; that said, if you already set
+        <code>locked</code> in
+        <a href="#elementsMemoryBacking">memory backing</a> because your
+        workload demands it, you'll have to take into account the specifics of
+        your deployment and figure out a value for <code>hard_limit</code> that
+        balances the risk of your guest being killed because the limit was set
+        too low and the risk of your host crashing because it cannot reclaim
+        the memory used by the guest due to <code>locked</code>. Good luck!</dd>
       <dt><code>soft_limit</code></dt>
       <dd> The optional <code>soft_limit</code> element is the memory limit to
         enforce during memory contention. The units for this value are

On Fri, 2017-03-24 at 13:58 -0400, Luiz Capitulino wrote: [...]
+        be allowed to swap them out, which might be required for some
+        workloads such as RT. For QEMU/KVM guests, the memory used by the QEMU

Minor, but I'd do s/RT/real-time. As this doc is for the general population, RT may not be a known term for everyone.
Sure.
+        process itself will be locked too: unlike guest memory, this is an
+        amount libvirt has no way of figuring out in advance, so it has to
+        remove the limit on locked memory altogether. This can be very
+        dangerous as the host might run out of memory and be unable to reclaim
+        it from the guest,

I'd rewrite this to:

"""
This option has a drawback and poses a possible security risk for the host.
If the host comes under memory pressure, it will be unable to reclaim the
memory locked by this guest, which could cause the host to run out of memory.
In particular, a malicious guest could lock as much memory as it wants,
causing a denial-of-service attack on the host. For setups where this may
have a significant impact, it is highly recommended to use <hard_limit> to
prevent this attack.
"""
Another stab at it (which plugs into my original version):

[...] remove the limit on locked memory altogether. Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it's running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using the option is discouraged unless your [...]

Does it look reasonable? -- Andrea Bolognani / Red Hat / Virtualization

On Mon, 27 Mar 2017 12:33:48 +0200 Andrea Bolognani <abologna@redhat.com> wrote:
On Fri, 2017-03-24 at 13:58 -0400, Luiz Capitulino wrote: [...]
+        be allowed to swap them out, which might be required for some
+        workloads such as RT. For QEMU/KVM guests, the memory used by the QEMU

Minor, but I'd do s/RT/real-time. As this doc is for the general population, RT may not be a known term for everyone.
Sure.
+        process itself will be locked too: unlike guest memory, this is an
+        amount libvirt has no way of figuring out in advance, so it has to
+        remove the limit on locked memory altogether. This can be very
+        dangerous as the host might run out of memory and be unable to reclaim
+        it from the guest,

I'd rewrite this to:

"""
This option has a drawback and poses a possible security risk for the host.
If the host comes under memory pressure, it will be unable to reclaim the
memory locked by this guest, which could cause the host to run out of memory.
In particular, a malicious guest could lock as much memory as it wants,
causing a denial-of-service attack on the host. For setups where this may
have a significant impact, it is highly recommended to use <hard_limit> to
prevent this attack.
"""
Another stab at it (which plugs into my original version):
[...] remove the limit on locked memory altogether. Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it's running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using the option is discouraged unless your [...]
Does it look reasonable?
That looks fine, although I'd drop "discouraged" because that's not helpful to those who must use the feature. I think it's better to objectively explain what the problems are and how to prevent or mitigate them. That's what I tried to do in my paragraph.

On Tue, 2017-03-28 at 09:43 -0400, Luiz Capitulino wrote:
Another stab at it (which plugs into my original version): [...] remove the limit on locked memory altogether. Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it's running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using the option is discouraged unless your [...] Does it look reasonable?

That looks fine, although I'd drop "discouraged" because that's not helpful to those who must use the feature. I think it's better to objectively explain what the problems are and how to prevent or mitigate them. That's what I tried to do in my paragraph.
The strong wording is intentional: we really, really don't want people to enable this unless their setup can't work without it. -- Andrea Bolognani / Red Hat / Virtualization

On Tue, 28 Mar 2017 18:26:12 +0200 Andrea Bolognani <abologna@redhat.com> wrote:
On Tue, 2017-03-28 at 09:43 -0400, Luiz Capitulino wrote:
Another stab at it (which plugs into my original version): [...] remove the limit on locked memory altogether. Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it's running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using the option is discouraged unless your [...] Does it look reasonable?

That looks fine, although I'd drop "discouraged" because that's not helpful to those who must use the feature. I think it's better to objectively explain what the problems are and how to prevent or mitigate them. That's what I tried to do in my paragraph.
The strong wording is intentional: we really, really don't want people to enable this unless their setup can't work without it.
I don't see how using strong wording is going to be helpful to users who need to use the feature. The documentation should be helpful and technically clear. It should instruct, not cause fear. Your last paragraph is an improvement, but I really think this mindset against <locked/> is very wrong.

On Tue, Mar 28, 2017 at 12:40:04PM -0400, Luiz Capitulino wrote:
On Tue, 28 Mar 2017 18:26:12 +0200 Andrea Bolognani <abologna@redhat.com> wrote:
On Tue, 2017-03-28 at 09:43 -0400, Luiz Capitulino wrote:
Another stab at it (which plugs into my original version): [...] remove the limit on locked memory altogether. Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it's running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using the option is discouraged unless your [...] Does it look reasonable?

That looks fine, although I'd drop "discouraged" because that's not helpful to those who must use the feature. I think it's better to objectively explain what the problems are and how to prevent or mitigate them. That's what I tried to do in my paragraph.
The strong wording is intentional: we really, really don't want people to enable this unless their setup can't work without it.
I don't see how using strong wording is going to be helpful to users who need to use the feature.
The documentation should be helpful and technically clear. It should instruct, not cause fear.
Your last paragraph is an improvement, but I really think this mindset against <locked/> is very wrong.
I must say I'm kinda on Luiz's side here. It is clear what everything does, how it works, and what the risks are. The discouragement is subjective *and* redundant, as all the risks and technicalities are explained. If, for some admin/developer, "your machine may crash" is not enough, but "your machine may crash, you should not do this" will make everything rainbows and sunshine, then such a person should rather be a user of their own system and not anything else.

On Tue, 2017-03-28 at 12:40 -0400, Luiz Capitulino wrote:
The strong wording is intentional: we really, really don't want people to enable this unless their setup can't work without it. I don't see how using strong wording is going to be helpful to users who need to use the feature. The documentation should be helpful and technically clear. It should instruct, not cause fear. Your last paragraph is an improvement, but I really think this mindset against <locked/> is very wrong.
I believe the current wording strikes a good balance between informing users who really need the feature about the potential risks involved and mitigation options that are available to them, and dissuading people who don't actually need memory locking from messing with it. Based on that, I'm not going to be spending any more time on it, but you're of course welcome to improve it further and post the resulting patches on the list :) -- Andrea Bolognani / Red Hat / Virtualization

On Thu, 23 Mar 2017 19:16:39 +0100 Andrea Bolognani <abologna@redhat.com> wrote:
A problem reported recently to the mailing list[1] uncovered some flaws in the way we deal with setting the memory locking limit for QEMU guests.
This series solves those issues and adds test suite coverage to hopefully prevent more bugs from popping up in the future.
Tested-by: Luiz Capitulino <lcapitulino@redhat.com>
[1] https://www.redhat.com/archives/libvir-list/2017-March/msg00534.html
Andrea Bolognani (8): Revert "qemu: Forbid <memoryBacking><locked> without <memtune><hard_limit>" qemu: Remove qemuDomainRequiresMemLock() qemu: Fix memory locking limit calculation process: Translate "unlimited" correctly command: Introduce virCommandGetMaxMemLock() tests: Introduce virTestCompareToULL() tests: Introduce QEMU memory locking limit tests docs: Improve documentation related to memory locking
docs/formatdomain.html.in | 36 +++-- src/libvirt_private.syms | 1 + src/qemu/qemu_command.c | 4 +- src/qemu/qemu_domain.c | 75 ++++----- src/qemu/qemu_domain.h | 1 - src/util/vircommand.c | 9 ++ src/util/vircommand.h | 2 + src/util/virprocess.c | 20 ++- tests/Makefile.am | 9 ++ .../qemumemlock-pc-hardlimit+hostdev.xml | 20 +++ .../qemumemlock-pc-hardlimit+locked+hostdev.xml | 23 +++ .../qemumemlock-pc-hardlimit+locked.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pc-hostdev.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pc-kvm.xml | 11 ++ .../qemumemlock-pc-locked+hostdev.xml | 20 +++ tests/qemumemlockdata/qemumemlock-pc-locked.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pc-tcg.xml | 11 ++ .../qemumemlock-pseries-hardlimit+hostdev.xml | 20 +++ ...emumemlock-pseries-hardlimit+locked+hostdev.xml | 23 +++ .../qemumemlock-pseries-hardlimit+locked.xml | 17 ++ .../qemumemlock-pseries-hardlimit.xml | 14 ++ .../qemumemlock-pseries-hostdev.xml | 17 ++ tests/qemumemlockdata/qemumemlock-pseries-kvm.xml | 11 ++ .../qemumemlock-pseries-locked+hostdev.xml | 20 +++ .../qemumemlockdata/qemumemlock-pseries-locked.xml | 14 ++ tests/qemumemlockdata/qemumemlock-pseries-tcg.xml | 11 ++ tests/qemumemlocktest.c | 172 +++++++++++++++++++++ tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml | 3 - tests/testutils.c | 27 ++++ tests/testutils.h | 2 + 31 files changed, 583 insertions(+), 72 deletions(-) create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hardlimit.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-kvm.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pc-tcg.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hardlimit.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-kvm.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-locked.xml create mode 100644 tests/qemumemlockdata/qemumemlock-pseries-tcg.xml create mode 100644 tests/qemumemlocktest.c

On Fri, Mar 24, 2017 at 02:01:34PM -0400, Luiz Capitulino wrote:
On Thu, 23 Mar 2017 19:16:39 +0100 Andrea Bolognani <abologna@redhat.com> wrote:
A problem reported recently to the mailing list[1] uncovered some flaws in the way we deal with setting the memory locking limit for QEMU guests.
This series solves those issues and adds test suite coverage to hopefully prevent more bugs from popping up in the future.
Tested-by: Luiz Capitulino <lcapitulino@redhat.com>
[1] https://www.redhat.com/archives/libvir-list/2017-March/msg00534.html
Andrea Bolognani (8): Revert "qemu: Forbid <memoryBacking><locked> without <memtune><hard_limit>" qemu: Remove qemuDomainRequiresMemLock() qemu: Fix memory locking limit calculation process: Translate "unlimited" correctly command: Introduce virCommandGetMaxMemLock() tests: Introduce virTestCompareToULL()
ACK to these first 6. I have to at least mention that I don't like the fact that we need the sixth one, but we do and that's how the compare functions work =)
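(For anyone curious, the helper presumably just renders both values as strings and reuses the existing string comparison — a sketch from memory, the actual patch may differ in detail:)

    int
    virTestCompareToULL(unsigned long long expect,
                        unsigned long long actual)
    {
        char *expectStr = NULL;
        char *actualStr = NULL;
        int ret = -1;

        /* Format both values so the existing diff-style reporting on
         * mismatch can be reused */
        if (virAsprintf(&expectStr, "%llu", expect) < 0)
            goto cleanup;

        if (virAsprintf(&actualStr, "%llu", actual) < 0)
            goto cleanup;

        ret = virTestCompareToString(expectStr, actualStr);

     cleanup:
        VIR_FREE(expectStr);
        VIR_FREE(actualStr);
        return ret;
    }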
tests: Introduce QEMU memory locking limit tests docs: Improve documentation related to memory locking