The limits are controlled by the operating system via the
getrlimit/setrlimit system calls.
If you want libvirt to set the limit for QEMU after QEMU is forked, you
have to change libvirt, which means invoking those system calls; making
that change permanent also requires agreeing on a plan with the libvirt
community for how those calls are used.
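For illustration only (this is not code that exists in libvirt today), a
minimal sketch of the setrlimit() call such a change would need, run in
the child process between fork() and exec() of QEMU; raising the hard
limit requires privilege (e.g. CAP_SYS_RESOURCE):

    #include <sys/resource.h>
    #include <stdio.h>

    /* Sketch: raise the RLIMIT_MEMLOCK soft (and, if needed, hard)
     * limit for the current process before exec'ing QEMU. */
    static int raise_memlock(rlim_t bytes)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
            perror("getrlimit");
            return -1;
        }
        if (bytes > rl.rlim_max) {
            rl.rlim_max = bytes;   /* needs privilege to raise */
        }
        rl.rlim_cur = bytes;
        if (setrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
            perror("setrlimit");
            return -1;
        }
        return 0;
    }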
/*
* Michael R. Hines
* Platform Engineer, DigitalOcean.
*/
On 02/19/2016 04:37 PM, Roy Shterman wrote:
Yes,
I also tried running it as the root user, and that didn't work either.
Do you know where libvirt (or QEMU) gets the value for the process
MEMLOCK limit? Maybe I can change this value in the libvirt code?
Regards,
Roy
On Fri, Feb 19, 2016 at 11:15 PM, Michael R. Hines
<mhines(a)digitalocean.com> wrote:
Is the QEMU process (after startup) actually running as the QEMU
userid?
/*
* Michael R. Hines
* Platform Engineer, DigitalOcean.
*/
On 02/19/2016 02:43 PM, Roy Shterman wrote:
> First of all, thank you for your answer,
>
> I couldn't figure out how to start a virtual machine with an increased
> MEMLOCK limit.
>
> I tried adding the following to /etc/security/limits.d:
>
> qemu soft memlock 3221225
> qemu hard memlock 3221225
>
> so the maximum locked-in-memory would be about 3 GB, but it didn't work.
>
> Each VM's QEMU process still has a MEMLOCK limit of only 64 KB (65536 bytes).
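> (As an illustrative sanity check, not part of the original mail: the
> effective limit of a running QEMU process can be read from /proc; the
> binary name below is just an example.)
>
>     $ grep "Max locked memory" /proc/$(pidof qemu-system-x86_64)/limits
>     Max locked memory      65536      65536      bytes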
>
> Maybe you can spot what I'm doing wrong?
>
> On Tue, Feb 9, 2016 at 5:16 PM, Michael R. Hines
> <michael(a)hinespot.com> wrote:
>
> Hi Roy,
>
> On 02/09/2016 03:57 AM, Roy Shterman wrote:
>
> Hi,
>
> I tried to understand the RDMA migration code in QEMU, and I
> have two questions about it:
>
> 1. I'm working with qemu-kvm through libvirt, and the QEMU
> process reports
>
> MEMLOCK   max locked-in-memory address space   65536   65536 bytes
>
> so I don't understand how you can use rdma-pin-all with such a
> low MEMLOCK limit.
>
> I found a solution in libvirt to lock all VM memory in
> advance and to enlarge MEMLOCK.
> It uses memoryBacking locking and the memory-tuning
> hard_limit of the VM's memory, but I couldn't find any use of
> this in the RDMA migration code.
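> (For reference, a minimal sketch of that libvirt domain XML, with an
> illustrative 3 GiB hard limit; the exact value depends on the guest:)
>
>     <memoryBacking>
>       <locked/>
>     </memoryBacking>
>     <memtune>
>       <hard_limit unit='KiB'>3145728</hard_limit>
>     </memtune>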
>
>
> You're absolutely right: the RDMA migration code itself
> doesn't set this lock limit explicitly, because there are
> system-wide restrictions in AppArmor, /etc/security, and
> SELinux that prevent applications from arbitrarily raising
> their maximum memory-lock limits.
>
> The other problem is cgroups: if someone sets a cgroup
> control for maximum memory and forgets about the mlock()
> limits, then there will be a conflict.
>
> So, libvirt must have a policy to deal with all of these
> possibilities, not just handle a special case for RDMA migration.
>
> The only "simple" way (without patching the problems
> above) to apply a higher lock limit to QEMU is to set the
> ulimit for libvirt (or for QEMU, if starting QEMU manually)
> in your environment or on the command line with $ ulimit
> before attempting the migration; the RDMA subsystem will
> then be able to lock the memory successfully.
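> (Illustrative shell example: run in the shell that will start
> libvirtd or QEMU; -l is the locked-memory limit, in KB, and raising
> it above the current hard limit needs root.)
>
>     $ ulimit -l unlimited
>     $ ulimit -l
>     unlimited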
>
> The other option is to use /etc/security/limits.conf: set the
> memlock limit for the specific user that the libvirt/QEMU
> processes run as, and make sure they are not running as root.
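> (A minimal sketch of such an entry, assuming the processes run as a
> user named 'qemu'; values are in KB, here roughly 3 GB:)
>
>     qemu  soft  memlock  3145728
>     qemu  hard  memlock  3145728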
>
> QEMU itself also has an "mlock" option built into the command
> line, but it suffers from the same problem: you currently have
> to find a way to increase the limit before using the option.
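> (On QEMU versions of that era the option is spelled as below; the
> rest of the command line is only illustrative:)
>
>     $ qemu-system-x86_64 -m 2048 -realtime mlock=on ...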
>
> 2. Do you have any comparison of IOPS and bandwidth
> between TCP migration and RDMA migration?
>
> Yes, lots of comparisons.
>
> http://wiki.qemu.org/Features/RDMALiveMigration
> http://www.canturkisci.com/ETC/papers/IBMJRD2011/preprint.pdf
>
>
> Regards,
> Roy
>