On 2/26/20 4:07 PM, Pavel Hrdina wrote:
The default memlock limit is 64k, which is not enough to start even a
single VM. The requirements for one VM are 12k (8k for the eBPF map and
4k for the eBPF program), yet with the 64k limit creating the eBPF map
and program fails. By testing I figured out that the minimal limit to
start a single VM with functional eBPF is 80k, and each additional 12k
allows starting one more VM.
This leads to the following calculation:

80k as a memlock limit was enough to start one VM with eBPF, which means
there is a 68k baseline of locked memory that I was not able to attribute
to anything specific. So to get a number for 4096 VMs:

    68k + 12k * 4096 = 49220k

Rounding that up gives a 49M memory lock limit to support 4096 VMs with
the default map size, which can hold 64 entries for devices.

This should be good enough as a sane default and users can change it if
they need to.
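As a side note, if the default ever proves too low for a deployment, the
limit can be raised with a standard systemd drop-in; the file name and
value below are purely illustrative:

    # /etc/systemd/system/libvirtd.service.d/memlock.conf
    [Service]
    LimitMEMLOCK=128M

followed by "systemctl daemon-reload" and "systemctl restart libvirtd".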
Resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1807090
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
src/remote/libvirtd.service.in | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
index 9c8c54a2ef..8a3ace5bdb 100644
--- a/src/remote/libvirtd.service.in
+++ b/src/remote/libvirtd.service.in
@@ -40,6 +40,11 @@ LimitNOFILE=8192
# A conservative default of 8 tasks per guest results in a TasksMax of
# 32k to support 4096 guests.
TasksMax=32768
+# With cgroups v2 there is no devices controller anymore, we have to use
+# eBPF to control access to devices. In order to do that we create a eBPF
+# hash MAP which locked memory. The default map size for 64 devices together
s/locked/locks/
+# with program takes 12k per guest which results in 49M to support 4096 guests.
+LimitMEMLOCK=49M
Should we round this up to the nearest power of two? 49MB just looks
ugly. This is only a limit; it doesn't mean that libvirtd will lock the
whole 49MB (or 64MB, as I suggest) right from the beginning.
[Install]
WantedBy=multi-user.target
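For anyone who wants to see the memlock accounting in action, here is a
minimal C sketch (not libvirt code; the key/value sizes are made up and
only the 64-entry map size comes from the comment above) that creates an
eBPF hash map via the raw bpf() syscall. On kernels that charge eBPF
allocations against RLIMIT_MEMLOCK, running it under a low memlock limit
makes the map creation fail with EPERM:

    /* Create a small eBPF hash map; its kernel memory is charged against
     * RLIMIT_MEMLOCK on kernels using rlimit-based BPF accounting. */
    #include <linux/bpf.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.map_type = BPF_MAP_TYPE_HASH;
        attr.key_size = sizeof(unsigned long long); /* e.g. a device key (illustrative) */
        attr.value_size = sizeof(unsigned int);     /* e.g. access flags (illustrative) */
        attr.max_entries = 64;                      /* default map size from the patch */

        int fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        if (fd < 0) {
            perror("BPF_MAP_CREATE"); /* EPERM here typically means the memlock limit was hit */
            return 1;
        }

        printf("map created, fd=%d\n", fd);
        close(fd);
        return 0;
    }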
Michal