In v6.7.0-rc1~86 I've tried to fix a problem where we were not
detecting NUMA nodes properly because we misused the behaviour of
a libnuma API; as it turned out, that behaviour was correct for
hosts with 64 CPUs in one NUMA node. So I changed the code to use
nodemask_isset(&numa_all_nodes, ..) instead and it fixed the
problem on such hosts.

However, what I did not realize is that numa_all_nodes does not
reflect all NUMA nodes visible to userspace; it contains only
those nodes that the process (libvirtd) can allocate memory from,
which can be only a subset of all NUMA nodes. The bitmask that
contains all NUMA nodes visible to userspace, and which I should
have used, is numa_nodes_ptr.

For the curious:
https://github.com/numactl/numactl/commit/4a22f2238234155e11e3e2717c01186...
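
To see the difference, here is a minimal standalone sketch (not part
of this patch) that uses plain libnuma to print both bitmasks side by
side; on a host where libvirtd's allocation mask is restricted, for
example via cpusets, the two can disagree:

/* numa-masks.c: hypothetical demo, build with: gcc numa-masks.c -lnuma */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    int node;

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this host\n");
        return 1;
    }

    for (node = 0; node <= numa_max_node(); node++) {
        /* numa_all_nodes (old nodemask API): only the nodes the
         * calling process may currently allocate memory from. */
        int allocatable = nodemask_isset(&numa_all_nodes, node);

        /* numa_nodes_ptr: all NUMA nodes visible to userspace,
         * regardless of the process' allocation mask. */
        int visible = numa_bitmask_isbitset(numa_nodes_ptr, node);

        printf("node %d: allocatable=%d visible=%d\n",
               node, allocatable, visible);
    }

    return 0;
}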

And as I was fixing virNumaGetNodeCPUs(), I came to realize that
we already have a function that wraps the correct bitmask:
virNumaNodeIsAvailable().
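
For context, the change boils down to swapping the guard used in
virNumaGetNodeCPUs(); a rough sketch of the before/after shape (the
exact hunk is in the patch itself):

/* Before: a node outside libvirtd's allocation mask is treated as
 * unavailable even though it exists and is visible to userspace. */
if (!nodemask_isset(&numa_all_nodes, node)) {
    /* ... treat @node as unavailable ... */
}

/* After: virNumaNodeIsAvailable() consults numa_nodes_ptr, so only
 * nodes that really are unknown to userspace are rejected. */
if (!virNumaNodeIsAvailable(node)) {
    /* ... treat @node as unavailable ... */
}
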
Fixes: 24d7d85208f812a45686b32a0561cc9c5c9a49c9
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1876956
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/util/virnuma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Regards,
Daniel