Fangge Jin <fjin(a)redhat.com> writes:
> On Thu, Aug 18, 2022 at 2:46 PM Milan Zamazal
> <mzamazal(a)redhat.com> wrote:
> > Fangge Jin <fjin(a)redhat.com> writes:
> >
> > > I can share some test results with you:
> > > 1. If no memtune->hard_limit is set when starting a VM, the default
> > > memlock hard limit is 64MB
> > > 2. If memtune->hard_limit is set when starting a VM, the memlock
> > > hard limit will be set to the value of memtune->hard_limit
> > > 3. If memtune->hard_limit is updated at run time, the memlock hard
> > > limit won't be changed accordingly
> > >
> > > And some additional knowledge:
> > > 1. memlock hard limit can be shown by ‘prlimit -p <pid-of-qemu> -l’
> > > 2. The default value of memlock hard limit can be changed by setting
> > > LimitMEMLOCK in /usr/lib/systemd/system/virtqemud.service
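
For illustration, checking the limit and raising the default might look
roughly like this; the pgrep match and the 4G value are only examples:

    # MEMLOCK soft/hard limits of a running qemu process
    # (the process name varies: qemu-kvm, qemu-system-x86_64, ...)
    prlimit -p "$(pgrep -of qemu)" -l

    # raise the default for VMs started later, e.g. with a drop-in
    # created by `systemctl edit virtqemud.service':
    #   [Service]
    #   LimitMEMLOCK=4G
    systemctl restart virtqemud.service
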
> >
> > Ah, that explains it to me, thank you. And since in the default case
> > the systemd limit is not reported in <memtune> of a running VM, I assume
> > libvirt takes it as "not set" and sets the higher limit when setting up
> > a zero-copy migration. Good.
> >
> Not sure whether you already know this, but I had a hard time
> differentiating the two concepts:
> 1. memlock hard limit (shown by prlimit): the hard limit on locked host
> memory
> 2. memtune hard limit (memtune->hard_limit): the hard limit on in-use
> host memory; this memory can be swapped out.

No, I didn't know that, thank you for pointing it out. Indeed, point 2
is what both the libvirt and the kernel documentation seem to say,
although not so clearly.

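If it helps to make the distinction concrete, the two limits also show
up in different places; as far as I understand, the memtune value is
enforced through the memory cgroup rather than through RLIMIT_MEMLOCK.
Roughly (the domain name is only an example):

    # 1. memlock hard limit (RLIMIT_MEMLOCK) of the qemu process
    prlimit -p "$(pgrep -of qemu)" -l

    # 2. memtune hard limit as known to libvirt
    virsh memtune guest01
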
But when I add <memtune> with <hard_limit> to the domain XML and then
start the VM, I can see the limit shown by `prlimit -l' is increased
accordingly. This is good for my use case, but does it match what you
say about the two concepts?
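
For reference, the fragment I mean looks roughly like this (the 8 GiB
value and the domain name are only examples), and the effect can be
checked once the VM is running:

      <memtune>
        <hard_limit unit='GiB'>8</hard_limit>
      </memtune>

    # after `virsh start guest01':
    prlimit -p "$(pgrep -of qemu)" -l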