
On Fri, Sep 11, 2020 at 01:45:12PM +0200, Michal Privoznik wrote:
In v6.7.0-rc1~86 I tried to fix a problem where we were not detecting NUMA nodes properly, because we misused the behaviour of a libnuma API which, as it turned out, was correct only for hosts with 64 CPUs in one NUMA node. So I changed the code to use nodemask_isset(&numa_all_nodes, ..) instead, which fixed the problem on such hosts.

However, what I did not realize is that numa_all_nodes does not reflect all NUMA nodes visible to userspace; it contains only those nodes that the process (libvirtd) can allocate memory from, which can be a subset of all NUMA nodes. The bitmask that contains all NUMA nodes visible to userspace, and the one I should have used, is numa_nodes_ptr. For the curious:
https://github.com/numactl/numactl/commit/4a22f2238234155e11e3e2717c01186472...
And as I was fixing virNumaGetNodeCPUs(), I came to realize that we already have a function that wraps the correct bitmask: virNumaNodeIsAvailable().
Fixes: 24d7d85208f812a45686b32a0561cc9c5c9a49c9
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1876956
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
 src/util/virnuma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|