
On Mon, 13 Mar 2017 16:43:46 +0000 "Daniel P. Berrange" <berrange@redhat.com> wrote:
On Mon, Mar 13, 2017 at 12:35:42PM -0400, Luiz Capitulino wrote:
On Mon, 13 Mar 2017 16:08:58 +0000 "Daniel P. Berrange" <berrange@redhat.com> wrote:
2. Drop commit c2e60ad0e51 and automatically increase the memory locking limit to infinity when seeing <memoryBacking><locked/>
pros: makes all cases work; no more <hard_limit> requirement
cons: allows guests with <locked/> to lock all the memory assigned to them, plus QEMU's own allocations. While this seems undesirable, or even a security issue, using <hard_limit> has the same effect anyway
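For reference, the two elements in question look like this in the domain XML (the sizes are purely illustrative):

    <memory unit='GiB'>8</memory>
    <memtune>
      <!-- currently required whenever <locked/> is present -->
      <hard_limit unit='GiB'>10</hard_limit>
    </memtune>
    <memoryBacking>
      <locked/>
    </memoryBacking>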
I think this is the only viable approach, given that no one can provide a way to reliably calculate QEMU's peak memory usage. Unless we want to take guest RAM + $LARGE_NUMBER, e.g. just blindly assume that 2 GB is enough for QEMU's working set, so for an 8 GB guest we'd set 10 GB as the limit.
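As a rough sketch of that heuristic, assuming the process spawning QEMU raises its own RLIMIT_MEMLOCK before exec (the helper name and the 2 GB headroom constant are just the illustrative figures from above, not tested values):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Illustrative headroom for QEMU's own allocations on top of guest RAM;
     * the 2 GB figure is the example from this thread, not a derived value. */
    #define QEMU_MEMLOCK_HEADROOM (2ULL << 30)

    /* Hypothetical helper: raise RLIMIT_MEMLOCK in the child before exec'ing
     * QEMU for a guest configured with <memoryBacking><locked/>. */
    static int raise_memlock_limit(unsigned long long guest_ram_bytes)
    {
        struct rlimit rl;

        rl.rlim_cur = guest_ram_bytes + QEMU_MEMLOCK_HEADROOM;
        rl.rlim_max = rl.rlim_cur;

        /* Raising the hard limit requires CAP_SYS_RESOURCE. */
        if (setrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
            perror("setrlimit(RLIMIT_MEMLOCK)");
            return -1;
        }
        return 0;
    }

For the 8 GB guest above, raise_memlock_limit(8ULL << 30) yields the 10 GB limit.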
I forgot to say that I'm fine with this solution, provided that we drop the requirement to use <hard_limit> with <locked/> and revert commit c2e60ad0e51.
Better to set it to infinity and be done with it.
Not necessarily, no. If we set $RAM + $LARGE_NUMBER, we are still likely to be well below the overall physical RAM of the host. IOW, a single compromised QEMU would still be restricted in how much it can pin. If we set it to infinity, then a compromised QEMU can lock all of physical RAM, a very effective DoS on the host, since the host can't even swap the guest out to recover.
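(For contrast, the infinity option would just replace the computed cap in the sketch above with:

    rl.rlim_cur = RLIM_INFINITY;
    rl.rlim_max = RLIM_INFINITY;

which removes the kernel-enforced bound entirely, exactly the exposure described here.)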
OK, you're right. I personally don't like that we're putting an arbitrary cap on QEMU's memory allocations, but if it's large enough it shouldn't be a problem (I hope).