
On 12/10/2015 05:43 AM, Andrea Bolognani wrote:
On Wed, 2015-12-09 at 15:15 -0500, John Ferlan wrote:
On 11/24/2015 08:56 AM, Andrea Bolognani wrote:
Replace all uses of the qemuDomainRequiresMlock/virProcessSetMaxMemLock
combination with the equivalent qemuDomainAdjustMaxMemLock() call.
---
 src/qemu/qemu_hotplug.c | 40 +++++++++++++---------------------------
 1 file changed, 13 insertions(+), 27 deletions(-)
OK - so now I see how this is going to be used... patch 3's multi-purpose MaxMemLock makes a bit more sense; however, I'm still wondering how we'd ever get to a point where the last removal wouldn't have set the lockbytes value back to the original (yes, patch 5 is missing from the current removal path)...
Seems like perhaps original_memlock is fixing the bug that was shown in patch 5 - that the hostdev removal didn't restore our value properly.
IOW: if the original_memlock adjustment were taken out, is there a case (after patch 5 is applied) where, once memory locking is no longer required, the value after the removal wouldn't be what was saved in original_memlock?
Hi John,
first of all, thanks for the review. I'm in the process of going through all your comments, but I wanted to address this one first.
I tried to give a somewhat detailed overview of the series in the cover letter to help reviewers understand how the single commits fit together, but it's always hard to balance the amount of information when you've just finished implementing something and all the details are still completely clear in your head. Any tips on how the overview could have been more helpful are appreciated!
I always appreciate extra comments and cover letters, but they are a delicate balance. It's hard, when you're so embroiled in your patches, to step back and take that 10,000 foot view. There's no secret sauce for that. Yes, I read cover letters, but then I start reviewing patches and that cover letter can easily get paged out of short-term memory. When I review I try to take patches one at a time, but I will peek ahead as well when I get curious about why something is being done. I'll also try to provide enough context in reviews to "show" what I was thinking while reviewing. Terse reviews of "this is wrong" or "I wouldn't do it that way" are, in my mind, not very helpful.
The whole issue stems from the use of this kind of construct:
    if (qemuDomainRequiresMlock(vm->def))
        virProcessSetMaxMemLock(vm->pid,
                                qemuDomainGetMlockLimitBytes(vm->def));
qemuDomainRequiresMlock() is always true on ppc64, but on x86 it's true only if at least one VFIO device is currently assigned to the guest, which means it can return false either because no VFIO device has been assigned to the guest yet or because we just removed the last one.
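Roughly speaking, the check looks something like this (a simplified sketch; the actual implementation may cover additional cases beyond the two described above):

    /* Sketch: return true when the memory locking limit has to be
     * raised for the guest described by @def */
    bool
    qemuDomainRequiresMlock(virDomainDefPtr def)
    {
        size_t i;

        /* ppc64 guests always need memory locking */
        if (ARCH_IS_PPC64(def->os.arch))
            return true;

        /* On other architectures, only VFIO passthrough requires it */
        for (i = 0; i < def->nhostdevs; i++) {
            virDomainHostdevDefPtr dev = def->hostdevs[i];

            if (dev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
                dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
                dev->source.subsys.u.pci.backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
                return true;
        }

        return false;
    }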
As I got to patches 4 & 5, I think it started to sink in again that the issue was more because of x86. Once I read and thought about patch 5, I went back to my patch 4 comments and added the extra wording around whether or not using original_memlock would be useful and, of course, to separate out "code motion" from "new code". Of course, by that time my comments on patch 3 had already been sent, so I couldn't go back and "adjust" them.
In the former case, we don't need to adjust the memory locking limit and everything is peachy; in the latter case, though, we *had* increased the memory locking limit when we assigned the first VFIO device, and we should lower it now that it's no longer required, but because of the code above we don't.
qemuDomainAdjustMaxMemLock() solves this by storing the previous memory locking limit when first increasing it, so now we can tell whether memory locking is simply not needed or *no longer* needed, and we can behave accordingly - not doing anything in the former case, restoring the previous value in the latter.
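In pseudo-C, the new helper works roughly like this (a simplified sketch, assuming a virProcessGetMaxMemLock() counterpart for reading the current limit and an original_memlock field on the domain object):

    static int
    qemuDomainAdjustMaxMemLock(virDomainObjPtr vm)
    {
        unsigned long long bytes = 0;

        if (qemuDomainRequiresMlock(vm->def)) {
            /* The first time we raise the limit, save the current value
             * so it can be restored later on */
            if (vm->original_memlock == 0 &&
                virProcessGetMaxMemLock(vm->pid, &vm->original_memlock) < 0)
                vm->original_memlock = 0;
            bytes = qemuDomainGetMlockLimitBytes(vm->def);
        } else {
            /* Memory locking is not needed: either it never was (bytes
             * stays zero and we do nothing) or it no longer is, in which
             * case we restore the saved value */
            bytes = vm->original_memlock;
            vm->original_memlock = 0;
        }

        if (bytes > 0 &&
            virProcessSetMaxMemLock(vm->pid, bytes) < 0)
            return -1;

        return 0;
    }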
Perhaps it would have been easier to add the Adjust code without original_memlock, then replace the code as is done here, then add the hostdev case, and finally add the original value handling. Just trying to separate new functionality from code motion/creation of helper code.
It's never easy to order changes, is it? :)
Well, better than what my former job had - here's a steaming pile of changes, with maybe an explanation attached. Sometimes you got a good explanation of what the changes did, while other times it was "this fixes bug #NNNNN". Um, gee, thanks!

John
I can see how using the ordering you propose would make it easier to follow along, though. I will shuffle commits according to your suggestion for v2.
Cheers.