On Mon, 2017-03-27 at 14:19 +0200, Martin Kletzander wrote:
[...]
> >      for (i = 0; i < def->nhostdevs; i++) {
> >          virDomainHostdevDefPtr dev = def->hostdevs[i];
> > 
> >          if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
> >              dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
> > -            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
> > -            return true;
> > +            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
> > +            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
> 
> Shouldn't this be raising memKB for _each_ host device?

Nope, it's guest memory + 1 GiB regardless of the number of
VFIO devices.

There should be a 'goto done' after setting memKB to quit
the loop early, though. Consider that squashed in.
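
That is, the squashed-in version of the loop would look roughly
like this (just a sketch off the top of my head, not compile-tested):

    for (i = 0; i < def->nhostdevs; i++) {
        virDomainHostdevDefPtr dev = def->hostdevs[i];

        if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
            dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
            dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO) {
            /* Guest memory plus 1 GiB, in KiB. A single VFIO device is
             * enough to require this much, and additional devices don't
             * raise the limit any further, so we can stop looking */
            memKB = virDomainDefGetMemoryTotal(def) + 1024 * 1024;
            goto done;
        }
    }
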
> > @@ -6381,7 +6366,6 @@ qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
> >              if (virProcessGetMaxMemLock(vm->pid, &(vm->original_memlock)) < 0)
> >                  vm->original_memlock = 0;
> >          }
> > -        bytes = qemuDomainGetMemLockLimitBytes(vm->def);
> >      } else {
> >          /* Once memory locking is no longer required, we can restore the
> >           * original, usually very low, limit */
> 
> This function has weird behaviour, even when it's documented. But it
> makes sense, it just takes a while.

Yeah, it's slightly confusing even to me, and I'm the one
who wrote it! If you have any suggestions on how to improve
it, I'm all ears :)
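
In case it helps untangle it: with this series applied, the whole
function should boil down to more or less the following (paraphrasing
from memory rather than pasting the actual code, so take it with a
grain of salt):

    bytes = qemuDomainGetMemLockLimitBytes(vm->def);

    if (bytes) {
        /* Locking is required: save the current limit the first time
         * around so it can be restored later. Failing to read it is
         * not fatal, it just means we won't be able to lower it back */
        if (!vm->original_memlock &&
            virProcessGetMaxMemLock(vm->pid, &(vm->original_memlock)) < 0)
            vm->original_memlock = 0;
    } else {
        /* Locking is no longer required: go back to the limit we saved
         * earlier and forget about it */
        bytes = vm->original_memlock;
        vm->original_memlock = 0;
    }

    if (virProcessSetMaxMemLock(vm->pid, bytes) < 0)
        return -1;

    return 0;
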
--
Andrea Bolognani / Red Hat / Virtualization