2009/5/22 Daniel P. Berrange <berrange@redhat.com>:
Actually QEMU, KVM, Xen PV and Xen FV all follow the same model, provided
you have the balloon driver available in the guest. 'maxmem', confusingly
called <memory> in the XML, sets the maximum possible memory for the
guest, as exposed in the e820 maps. While the guest is running, this
maximum can be reduced by setting 'memory', confusingly called
<currentMemory> in the XML, to a lower value. The host talks to the
balloon driver in the guest and asks it to release memory. This isn't a
guaranteed lower limit, since it relies on guest cooperation, but at
least the guest is aware of what the host is telling it to do.
Depending on bugs in the guest balloon driver, the 'free' command may or
may not update the 'total memory' value reported in the guest.
What Matthias is talking about wrt VMware ESX is how the hypervisor
satisfies the memory allocation for the guest, e.g. how much real RAM it
guarantees, with the rest of guest RAM susceptible to swapping. This is
more of a tuning parameter, and does not map onto the libvirt memory/maxmem
settings.
Well, ESX has support for ballooning, too. If the balloon driver is
installed inside the guest, then ESX does the same as you described
for QEMU etc. If you set the memory value (the limit in ESX terms)
below the max-memory value, ESX lets the balloon driver allocate
memory in the guest to "steal" it from the guest. But ESX can do this
even without the balloon driver by swapping, as you described in the
third paragraph. So IMHO this maps somewhat onto the libvirt
memory/max-memory semantics, if the balloon driver is installed. See also
http://www.vmware.com/pdf/esx3_memory.pdf, pages 3-4, "Memory Balloon
Driver" and "Swapping".
Regards,
Matthias