It all started as a simple bug: trying to move domain memory
between NUMA nodes (e.g. via virsh numatune) did not work. I
traced the problem to qemuProcessHook(), because that is where we
decide whether to rely on CGroups or to use the numactl APIs to
satisfy <numatune/>. The problem was that
virCgroupControllerAvailable() was telling us that the cpuset
controller is unavailable. This was on CGroupsV2, which was
pretty weird, because CGroupsV2 definitely does support the
cpuset controller and I had it mounted in the standard way. What
I found out (with Pavel's help) was that virCgroupNewSelf() was
looking into the following path to detect the supported
controllers:

  /sys/fs/cgroup/system.slice/cgroup.controllers
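For illustration, here is a minimal standalone sketch (not part
of the patch; the helper name and the hard-coded paths are just
for this example) that dumps the controllers enabled at a cgroup
v2 path by reading its cgroup.controllers file:

  #include <stdio.h>

  /* Print the space-separated controller list found in the
   * given cgroup.controllers file. */
  static void
  dump_controllers(const char *path)
  {
      char buf[256];
      FILE *fp = fopen(path, "r");

      if (!fp) {
          perror(path);
          return;
      }

      if (fgets(buf, sizeof(buf), fp))
          printf("%s: %s", path, buf);

      fclose(fp);
  }

  int
  main(void)
  {
      /* The slice we run in may expose only a subset... */
      dump_controllers("/sys/fs/cgroup/system.slice/cgroup.controllers");
      /* ...while the root cgroup lists every controller. */
      dump_controllers("/sys/fs/cgroup/cgroup.controllers");
      return 0;
  }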
However, if no other VM is running, then system.slice has only
the 'memory' and 'pids' controllers enabled, and therefore we saw
'cpuset' as unavailable. The fix is to look at the topmost path
instead, which has the full set of controllers:

  /sys/fs/cgroup/cgroup.controllers
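To make the change concrete, here is a rough standalone
approximation (assumed names; this is not the actual libvirt code
path) of the check virCgroupControllerAvailable() effectively
performs after this patch on CGroupsV2: parse the root
cgroup.controllers rather than the one of the slice we happen to
run in:

  #include <stdio.h>
  #include <string.h>
  #include <stdbool.h>

  /* Rough approximation of the fixed check: a controller counts
   * as available if it appears in the root cgroup's controller
   * list, not merely in the slice the current process runs in. */
  static bool
  controller_available(const char *controller)
  {
      char buf[256];
      char *tok;
      FILE *fp = fopen("/sys/fs/cgroup/cgroup.controllers", "r");

      if (!fp)
          return false;

      if (!fgets(buf, sizeof(buf), fp)) {
          fclose(fp);
          return false;
      }
      fclose(fp);

      buf[strcspn(buf, "\n")] = '\0';

      /* The file is a single space-separated list of names. */
      for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
          if (strcmp(tok, controller) == 0)
              return true;
      }
      return false;
  }

  int
  main(void)
  {
      printf("cpuset available: %s\n",
             controller_available("cpuset") ? "yes" : "no");
      return 0;
  }

Had the same parsing been applied to the system.slice path from
above, it would have reported 'cpuset' as missing whenever no
other VM was running, which is exactly the bug.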
Resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1976690
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/util/vircgroup.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index 65a81e8d6f..1b3b28342e 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -2996,7 +2996,10 @@ virCgroupControllerAvailable(int controller)
 {
     g_autoptr(virCgroup) cgroup = NULL;
 
-    if (virCgroupNewSelf(&cgroup) < 0)
+    /* Don't use virCgroupNewSelf() - we might be running in a slice that
+     * doesn't have @controller but the domain (for which we will later use
+     * virCgroupHasController()) might have it. */
+    if (virCgroupNew("/", -1, &cgroup) < 0)
         return false;
 
     return virCgroupHasController(cgroup, controller);
--
2.31.1