On Linux, changing the nodeset via 'numatune' does not imply that
the guest memory will be migrated on the spot to the new nodeset.
The memory migration is tied to guest usage of the memory pages,
and an idle guest will take longer to have its memory migrated
to the new nodeset.
This behavior is explained in detail in the Linux kernel documentation
in Documentation/admin-guide/cgroup-v1/cpusets.rst. The user doesn't
need that level of detail, but their expectations need to be kept in
check: running 'numastat' right after the change and expecting the
memory to have already moved from the previous nodeset to the new one
is not realistic.
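To illustrate, a sequence like the one below can be used to watch the
gradual migration (the domain name 'vm1' and the x86_64 QEMU binary
are placeholders for this example):

    # virsh numatune vm1 --nodeset 1 --live
    # numastat -p $(pidof qemu-system-x86_64)

Repeating the 'numastat' call while the guest touches its pages shows
the per-node counters shifting towards the new nodeset over time.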
There are also parts of the memory that QEMU locks in place, e.g.
when VFIO devices are present. Let's also mention that as another
factor that impacts the results the user might expect from NUMA
memory migration with numatune.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
docs/manpages/virsh.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 969a4d5543..ff32338f43 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -3395,6 +3395,13 @@ If *--live* is specified, set scheduler information of a running
guest.
If *--config* is specified, affect the next boot of a persistent guest.
If *--current* is specified, affect the current guest state.
+For running guests on Linux hosts, changing the domain's NUMA
+parameters does not imply that the guest memory will be moved to a
+different nodeset immediately. The memory migration depends on guest
+activity, and the memory of an idle guest will remain in its previous
+nodeset for longer. The presence of VFIO devices will also lock parts
+of the guest memory in the nodeset used to start the guest, regardless
+of nodeset changes.
perf
----
--
2.26.2