OK, I will describe it better.
I'm developing an iSER transport option inside libiscsi, and as part of that I'm
planning to implement it in the QEMU block layer as well.
The iSER RDMA components (QP, CQ, MR) need to lock a predictable amount of
memory, so the total memory that needs to be locked is
(number of libiscsi-iser devices in the VM) * (constant amount per device).
For now, I'm using the locked option in libvirt, although I don't really need to
lock all of the VM's memory.
Regards,
Roy
-----Original Message-----
From: Jiri Denemark [mailto:jdenemar@redhat.com]
Sent: Tuesday, March 22, 2016 2:56 PM
To: Roy Shterman <roy.shterman(a)gmail.com>
Cc: libvir-list(a)redhat.com; Roy Shterman <roysh(a)mellanox.com>
Subject: Re: [libvirt] question about rdma migration
On Tue, Mar 22, 2016 at 14:21:52 +0200, Roy Shterman wrote:
> Correct me if I'm wrong, but the locked option pins all VM memory in
> host RAM. For example, if I have a VM with 4G of memory and I want to
> run some QEMU code which needs to pin 500M, I will need to lock all 4G
> in host memory instead of locking only 500M.
So the question is which code wants to lock part of the memory, why, and whether
it's something that can be influenced by the user.
For example, we know that if you ask for all memory to be locked, we need to set
the limit. The same applies when RDMA migration is started.
On PPC we know some amount of memory will always need to be locked, so we
compute the amount and set the limit accordingly. We can't really expect users
to have deep knowledge of QEMU and to know what limit needs to be set when they
use a specific device, QMP command, or whatever. So if the limit is predictable
and deterministic, we can automatically compute the amount of memory and use it
when starting QEMU. Forcing users to set the limit when all memory needs to be
locked is already bad enough; I don't think we should add a new option to
explicitly set an arbitrary lock limit.
Jirka