Re: [libvirt] question about rdma migration

Hi Roy,

On 02/09/2016 03:57 AM, Roy Shterman wrote:
Hi,
I tried to understand the RDMA migration code in QEMU and I have two questions about it:
1. I'm working with qemu-kvm through libvirt and I'm seeing

    MEMLOCK  max locked-in-memory address space  65536  65536  bytes

in the QEMU process, so I don't understand how you can use rdma-pin-all with such a low MEMLOCK.
I found a way in libvirt to lock all VM memory in advance and enlarge MEMLOCK: it uses memoryBacking locking together with the memtune hard_limit of the VM memory, but I couldn't find any use of this in the RDMA migration code.
You're absolutely right: the RDMA migration code itself doesn't set this lock limit explicitly, because there are system-wide restrictions in AppArmor, /etc/security, and SELinux that prevent applications from arbitrarily raising their maximum memory lock limits.

The other problem is cgroups: if someone sets a cgroup control for maximum memory and forgets about the mlock() limit, the two will conflict.

So libvirt must have a policy to deal with all of these possibilities, not just handle a special case for RDMA migration.

The only "simple" way (without patching the problems above) to apply a higher lock limit to QEMU is to raise the ulimit for libvirt (or for QEMU, if starting QEMU manually) in your environment or on the command line with ulimit before attempting the migration; then the RDMA subsystem will be able to lock the memory successfully. The other option is to use /etc/security/limits.conf to set the limit for the specific user your libvirt/QEMU processes run as, and to make sure they are not running as root. QEMU itself also has an "mlock" option built into the command line, but it suffers from the same problem: you still have to find a way (currently) to increase the limit before using the option.
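For concreteness, a minimal sketch of both approaches ("myuser" is a placeholder, the 4 GiB value is only illustrative, and the QEMU flag shown is the 2.x-era syntax as far as I recall):

    # Option 1: raise the locked-memory ulimit (in KiB) in the shell
    # that will start libvirtd or QEMU, then launch from that shell:
    $ ulimit -l 4194304

    # Option 2: /etc/security/limits.conf (or a drop-in file under
    # /etc/security/limits.d/), for the user the processes run as:
    # <domain>  <type>  <item>    <value, KiB>
    myuser      soft    memlock   4194304
    myuser      hard    memlock   4194304

    # QEMU's own mlock option, which still needs one of the above first:
    $ qemu-system-x86_64 -realtime mlock=on ...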
2. Do you have any comparison of IOPS and bandwidth between TCP migration and RDMA migration?
Yes, lots of comparisons:

http://wiki.qemu.org/Features/RDMALiveMigration
http://www.canturkisci.com/ETC/papers/IBMJRD2011/preprint.pdf
Regards, Roy

First of all, thank you for your answer. I couldn't figure out how to start a virtual machine with an increased MEMLOCK. I tried adding to /etc/security/limits.d:

    qemu  soft  memlock  3221225
    qemu  hard  memlock  3221225

so the max locked-in-memory would be 3G, but it didn't work; each VM still has a MEMLOCK of only 64 KiB. Maybe you can spot what I'm doing wrong?

Is the QEMU process (after startup) actually running as the QEMU userid?

/*
 * Michael R. Hines
 * Platform Engineer, DigitalOcean.
 */

Yes, I also tried running it as the root user and it didn't work either. Do you know where libvirt (or QEMU) gets the value for the process MEMLOCK? Maybe I can change this value in the libvirt code?

Regards, Roy

The limits are controlled by the operating system via the getrlimit/setrlimit system calls. If you want libvirt to set the limit for QEMU after QEMU is forked, you have to change libvirt, and that requires invoking those system calls; more permanently, it requires a plan with the libvirt community on their use.
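As a minimal sketch of those calls (a standalone program; raising the hard limit requires CAP_SYS_RESOURCE, e.g. running as root):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Read the current locked-memory limit. */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Try to raise both the soft and hard limits to 3 GiB. */
        rl.rlim_cur = rl.rlim_max = 3ULL << 30;
        if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }

/*
 * Michael R. Hines
 * Platform Engineer, DigitalOcean.
 */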

Besides, if it didn't work as root or as qemu, then you simply didn't get the configuration set up correctly. I advise you to get it working first (by opening another shell and verifying that the limits are set by default) before embarking on a change to libvirt.
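For example (a quick check; the process name may differ on your system):

    # in a fresh shell for the user in question:
    $ ulimit -l          # prints the soft memlock limit, in KiB

    # and for an already-running QEMU process:
    $ grep "locked memory" /proc/$(pidof qemu-kvm)/limits
    Max locked memory    65536    65536    bytes

/*
 * Michael R. Hines
 * Platform Engineer, DigitalOcean.
 */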

You can change it by specifying /domain/memtune/hard_limit in the domain XML; see http://libvirt.org/formatdomain.html#elementsMemoryTuning

Libvirt will use the hard_limit value for RLIMIT_MEMLOCK, but only if needed. So it will be set only if any of the following is true:

- you are on PPC64
- /domain/memoryBacking/locked is set
- a VFIO device passthrough is used
- an RDMA migration is initiated
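For example (the 4 GiB value is illustrative):

    <memtune>
      <!-- also used for RLIMIT_MEMLOCK in the cases listed above -->
      <hard_limit unit='KiB'>4194304</hard_limit>
    </memtune>

Jirka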

Hi Jirka,

Thanks for your answer, but I'm looking for a way to set RLIMIT_MEMLOCK without locking all the VM memory, i.e. without using the methods you listed above.

Regards, Roy

No, libvirt doesn't provide any way to do this. But why would you even want to do that? RLIMIT_MEMLOCK is irrelevant when you don't lock the memory.

Jirka

Correct me if I'm wrong, but the locked option pins all VM memory in host RAM. For example, if I have a VM with 4G of memory and I want to run some QEMU code which needs to pin 500M, I will have to lock all 4G in host memory instead of only 500M. Any idea how to solve my problem?

Regards, Roy

So the question is which code wants to lock part of the memory, why, and whether it's something that can be influenced by the user.

For example, we know that if you ask for all memory to be locked, we need to set the limit. The same applies when an RDMA migration is started. On PPC we know some amount of memory will always need to be locked, so we compute the amount and set the limit accordingly.

We can't really expect the user to have deep knowledge of QEMU and know what limit needs to be set when they use a specific device, QMP command, or whatever. So if the limit is predictable and deterministic, we can automatically compute the amount of memory and use it when starting QEMU. Forcing users to set the limit when all memory needs to be locked is already bad enough that I don't think we should add a new option to explicitly set an arbitrary lock limit.

Jirka

OK, I will describe it better. I'm developing an iSER transport option inside libiscsi, and as part of that I'm planning to implement it in the QEMU block layer as well. The iSER RDMA components (QP, CQ, MR) need to lock a predictable amount of memory, so the memory that needs to be locked is (number of libiscsi-iser devices in the VM) * (constant amount per device). For now I'm using the locked option in libvirt, although I don't really need to lock all the VM memory.

Regards, Roy

Well, it should be easy enough to compute and automatically set the limit from libvirt then, unless the "constant amount per device" changes any time someone touches the code in QEMU or libiscsi.
For testing you could perhaps try playing with hooks and prlimit, but forcing libvirt to lock all memory is still easier.
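For example, something like this for a one-off test (prlimit is from util-linux; the memlock values are in bytes):

    # raise the memlock limit of a running QEMU process to 3 GiB
    # (soft:hard), as root:
    $ prlimit --pid $(pidof qemu-kvm) --memlock=3221225472:3221225472

    # verify:
    $ prlimit --pid $(pidof qemu-kvm) | grep MEMLOCK

Jirka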

On 3/22/2016 3:58 PM, Jiri Denemark wrote:

Well, it should be easy enough to compute and automatically set the limit from libvirt then. Unless the "constant amount per device" changes any time someone touches the code in QEMU or libiscsi.

Yes, it should be constant.

For testing you could perhaps try playing with hooks and prlimit, but forcing libvirt to lock all memory is still easier.

I tried setrlimit inside QEMU without any success. I'm working with the libvirt package from Red Hat 7.0, so as you said, locking all memory was the easiest choice for me. I'm not familiar with the libvirt code; where do you suggest I add the calculation and setting?

Regards, Roy

On Tue, Mar 22, 2016 at 16:50:40 +0000, Roy Shterman wrote:

Yes, it should be constant.

Perfect.

I tried setrlimit inside QEMU without any success.

Yeah, QEMU is not allowed to increase the limit if it was started by libvirt.

I'm not familiar with the libvirt code; where do you suggest I add the calculation and setting?

The code which calculates how much memory needs to be locked should go in qemuDomainGetMemLockLimitBytes. In addition to that, you should modify qemuDomainRequiresMemLock to return true in case the domain has iSCSI devices that will use the new transport option. In other words, the code which sets the memory locking limit is:

    if (qemuDomainRequiresMemLock(def))
        virCommandSetMaxMemLock(cmd, qemuDomainGetMemLockLimitBytes(def));
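A rough sketch of the kind of calculation that could back it; the helper name and the 64 MiB per-device constant are placeholders I'm making up here, not existing code:

    #include <stddef.h>

    /* Hypothetical: fixed amount of pinned memory per iSER-backed disk
     * (QP/CQ/MR resources); 64 MiB is an assumed constant, not a
     * measured value. */
    #define ISER_MEMLOCK_PER_DEVICE_BYTES (64ULL << 20)

    unsigned long long
    iserMemLockBytes(size_t niserDisks)
    {
        return (unsigned long long)niserDisks * ISER_MEMLOCK_PER_DEVICE_BYTES;
    }

Jirka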

Hi Jiri,

Sorry about the late response; only now have I managed to push iSER into QEMU. Because iSER is registered as a different protocol than iSCSI, with the iser:// prefix, I want to add support for it in libvirt. The libvirt code is pretty new to me; would adding VIR_STORAGE_NET_PROTOCOL_ISER as another virStorageNetProtocol be enough? Of course, I also added ISER to all the necessary switch cases in the code.
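Roughly this kind of change, I mean (the neighboring enum values shown are only illustrative; the real list is longer and lives in src/util/virstoragefile.h, if I'm reading the tree right):

    typedef enum {
        VIR_STORAGE_NET_PROTOCOL_NONE,
        VIR_STORAGE_NET_PROTOCOL_NBD,
        VIR_STORAGE_NET_PROTOCOL_ISCSI,
        VIR_STORAGE_NET_PROTOCOL_ISER,   /* new: iser:// URIs */
        /* ... remaining protocols ... */
        VIR_STORAGE_NET_PROTOCOL_LAST
    } virStorageNetProtocol;

plus the matching string in the corresponding VIR_ENUM_IMPL table, on top of the switch cases I mentioned.

Thanks, Roy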
participants (5)

- Jiri Denemark
- Michael R. Hines
- Michael R. Hines
- Roy Shterman
- Roy Shterman