Hi,

I've tested these patches again, twice, in setups similar to the ones I
used for the first version (first on a Power8, then on a Power9 server).

Same results, though. Libvirt will not prevent the launch of a pseries
guest with numatune mode 'strict', even if the NUMA node does not have
enough available RAM. If I stress-test the memory of the guest to force
the allocation, QEMU exits with an error as soon as the memory of the
host NUMA node is exhausted.

If I change the numatune mode to 'preferred' and repeat the test, QEMU
doesn't exit with an error - the process starts to take memory from other
NUMA nodes. This indicates that the NUMA policy is apparently being
enforced on the QEMU process - however, it is not enforced at VM boot.
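For reference, this is the kind of domain XML I mean by the two modes; a
minimal sketch, with the nodeset value being illustrative:

```xml
<domain type='kvm'>
  <!-- ... other domain elements elided ... -->
  <numatune>
    <!-- mode='strict' should fail allocation once node 0 is exhausted;
         mode='preferred' falls back to other host NUMA nodes instead -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```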

I've debugged it a little and haven't found anything wrong that jumps
out. All functions that run after qemuSetupCpusetMems exit with
ret = 0. Unfortunately, I don't have access to an x86 server with more
than one NUMA node to compare results.

Since I can't say for sure whether what I'm seeing is pseries-specific
behavior, I see no problem with pushing this series upstream
if it works as expected on x86. We can debug/fix the Power side later.

Thanks,


DHB

On 4/10/19 1:10 PM, Michal Privoznik wrote:
v2 of:

https://www.redhat.com/archives/libvir-list/2019-April/msg00658.html

diff to v1:
- Fixed the reported problem. Basically, even though emulator CGroup was
  created qemu was not running in it. Now qemu is moved into the CGroup
  even before exec()

Michal Prívozník (2):
  qemuSetupCpusetMems: Use VIR_AUTOFREE()
  qemu: Set up EMULATOR thread and cpuset.mems before exec()-ing qemu

 src/qemu/qemu_cgroup.c  |  5 ++---
 src/qemu/qemu_process.c | 12 ++++++++----
 2 files changed, 10 insertions(+), 7 deletions(-)