2011/1/27 Justin Clift <jclift(a)redhat.com>:
Addresses BZ # 622534:
https://bugzilla.redhat.com/show_bug.cgi?id=622534
---
tools/virsh.pod | 17 ++++++++++++++---
1 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 4e8b295..0d37512 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -584,9 +584,20 @@ update the XML <currentMemory> element.
=item B<setmaxmem> I<domain-id> B<kilobytes>
-Change the maximum memory allocation limit in the guest domain. This should
-not change the current memory use. The memory limit is specified in
-kilobytes.
+Change the maximum memory allocation limit for an inactive guest domain.
+
+This command works for at least the Xen and vSphere/ESX hypervisors,
+but not for QEMU/KVM.
+
+Some hypervisors require a larger granularity than kilobytes, rounding down
+or rejecting requests that are not an even multiple of the desired amount.
+vSphere/ESX is one of these, requiring the parameter to be evenly divisible
+by 4MB. For example with vSphere/ESX, 262144 (256MB) is valid as it's a
+multiple of 4MB; 263168 (257MB) is not valid as it's not a multiple of 4MB;
+266240 (260MB) is also valid, as it's a multiple of 4MB.
Well, currently this is true. But I'm preparing a patch that makes
libvirt round memory/storage requests up instead of down when they have
to be adjusted to a larger granularity. That way you'll get at least
what you requested.
I basically applied the pattern we started to use in commit "storage:
Round up capacity for LVM volume creation" everywhere it applies. I
also removed the strict checks from the ESX driver. As a result, the
ESX driver will automatically round 257MB up to 260MB instead of
reporting an error.
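Roughly speaking, the change boils down to something like the following
standalone sketch (just for illustration, not the actual ESX driver code,
and the helper name is made up):

#include <stdio.h>

/* Illustrative helper, not libvirt's actual code: round a request in
 * kilobytes up to the next multiple of the hypervisor's granularity. */
static unsigned long long
round_up(unsigned long long kilobytes, unsigned long long granularity)
{
    return ((kilobytes + granularity - 1) / granularity) * granularity;
}

int
main(void)
{
    unsigned long long granularity = 4096;  /* 4MB granularity on ESX */
    unsigned long long request = 263168;    /* 257MB, not 4MB aligned */

    /* Old behaviour (strict check): request % granularity != 0 -> error.
     * New behaviour: round up, so 263168 becomes 266240 (260MB). */
    printf("%llu -> %llu\n", request, round_up(request, granularity));

    return 0;
}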
I haven't posted the patch yet, as it isn't finished.
Once my patch gets accepted, your patch will need some small tweaks :)
Matthias