On Thu, Feb 05, 2015 at 20:54:11 -0700, Eric Blake wrote:
> On 01/30/2015 06:20 AM, Peter Krempa wrote:
> > Add a XML element that will allow to specify maximum supportable memory
> s/a XML/an XML/
> > and the count of memory slots to use with memory hotplug.
> Might be nice to demonstrate that XML here in the commit message, not
> just in formatdomain.html.
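Okay, I'll put an example into the commit message, along these lines
(values illustrative only):

  <domain>
    ...
    <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
    <memory unit='KiB'>1048576</memory>
    <currentMemory unit='KiB'>1048576</currentMemory>
    ...
  </domain>
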
> >
> > To avoid possible confusion and misuse of the new element this patch
> > also explicitly forbids the use of the maxMemory setting in individual
> > drivers's post parse callbacks. This limitation will be lifted when the
> s/drivers's/drivers'/
> > support will be implemented.
> > ---
> >  docs/formatdomain.html.in            | 19 +++++++++++
> >  docs/schemas/domaincommon.rng        |  8 +++++
> >  src/bhyve/bhyve_domain.c             |  4 +++
> >  src/conf/domain_conf.c               | 64 ++++++++++++++++++++++++++++++++++++
> >  src/conf/domain_conf.h               |  7 ++++
> >  src/libvirt_private.syms             |  1 +
> >  src/libxl/libxl_domain.c             |  5 +++
> >  src/lxc/lxc_domain.c                 |  4 +++
> >  src/openvz/openvz_driver.c           | 11 +++++--
> >  src/parallels/parallels_driver.c     |  6 +++-
> >  src/phyp/phyp_driver.c               |  6 +++-
> >  src/qemu/qemu_domain.c               |  4 +++
> >  src/uml/uml_driver.c                 |  6 +++-
> >  src/vbox/vbox_common.c               |  6 +++-
> >  src/vmware/vmware_driver.c           |  6 +++-
> >  src/vmx/vmx.c                        |  6 +++-
> >  src/xen/xen_driver.c                 |  4 +++
> >  src/xenapi/xenapi_driver.c           |  6 +++-
> >  tests/domainschemadata/maxMemory.xml | 19 +++++++++++
> >  19 files changed, 183 insertions(+), 9 deletions(-)
> >  create mode 100644 tests/domainschemadata/maxMemory.xml
> >
> > diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
> > index f8d5f89..12f7ede 100644
> > --- a/docs/formatdomain.html.in
> > +++ b/docs/formatdomain.html.in
> > @@ -664,6 +664,7 @@
> >  <pre>
> >  <domain>
> >  ...
> > +  <maxMemory slots='123' unit='KiB'>1524288</maxMemory>
> 123 is unusual; the example would look more realistic with a power of 2.
> >    <memory unit='KiB'>524288</memory>
> >    <currentMemory unit='KiB'>524288</currentMemory>
> Hmm. Historically, <memory> was the maximum memory, then ballooning was
> introduced and <currentMemory> was added to show the live difference
> between the ballooned current value and the boot-up maximum.
> But with the idea of hot-plug, I see where you are coming from - the
> balloon can only inflate or deflate up to the amount of memory currently
> plugged in, so <maxMemory> is the startup maximum, <memory> becomes the

yes, maxMemory is basically the size of the guest's address space

> amount plugged in, and <currentMemory> reflects the balloon value (that
Now <memory> is a thing we need to clarify; there are two options:

1) <memory> will still determine the amount of startup memory and thus
   will not include any memory added via "memory modules" or hotplug.
   Pros: - no change to semantics
         - no need to take care of changing the value when adding devices
         - the value will not change with hotplug or other operations
   Cons: - I'll have to add a way to express the "current maximum
           memory" - the memory amount with the balloon deflated

2) <memory> will become the total of initial and added memory. This will
   change the semantics and require us to recalculate the total every
   time.
   Pros: - you are able to see the total memory at any time
         - no need to introduce any new parameter for balloon setup
         - recalculating the total would actually fix the bug when you
           specify /domain/memory less than the sum of
           /domain/cpu/numa/cell/@memory (see the snippet below):

           error: internal error: process exited while connecting to
           monitor: 2015-02-06T17:00:55.971851Z qemu-system-x86_64: total
           memory for NUMA nodes (0x40000000) should equal RAM size
           (0x100000)

   Cons: - we would need to calculate the total memory every time (thus
           overwriting its value with the sum of the NUMA node memory
           and the memory device memory)
         - applications would need to actually check for the change of
           the max memory
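For illustration, with option 2 the following (hypothetical values) would
have to stay consistent - <memory> equals the sum of the NUMA cell sizes:

  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <cpu>
    <numa>
      <cell cpus='0-1' memory='1048576'/>
      <cell cpus='2-3' memory='1048576'/>
    </numa>
  </cpu>
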
> is, current <= memory <= max). So I guess this makes sense; it may be
> more interesting figuring out how to expose it all through virsh.
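Under that reading (option 2 above) a runtime state with one module
plugged in and the balloon partially deflated could look like this
(illustrative values, satisfying currentMemory <= memory <= maxMemory):

  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>  <!-- address space limit -->
  <memory unit='KiB'>4194304</memory>                    <!-- currently plugged in -->
  <currentMemory unit='KiB'>2097152</currentMemory>      <!-- balloon value -->
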
> >  ...
> > @@ -697,6 +698,24 @@
> >          <span class='since'><code>unit</code> since 0.9.11</span>,
> >          <span class='since'><code>dumpCore</code> since 0.10.2
> >          (QEMU only)</span></dd>
> > +      <dt><code>maxMemory</code></dt>
> > +      <dd>The run time maximum memory allocation of the guest. The initial
> > +        memory specified by <code><memory></code> can be increased by
> > +        hot-plugging of memory to the limit specified by this element.
> > +
> > +        The <code>unit</code> attribute behaves the same as for
> > +        <code>memory</code>.
> > +
> > +        The <code>slots</code> attribute specifies the number of slots
> > +        available for adding memory to the guest. The bounds are hypervisor
> > +        specific.
> > +
> > +        Note that due to alignment of the memory chunks added via memory
> > +        hotplug the full size allocation specified by this element may be
> > +        impossible to achieve.
> Is a hypervisor free to reject requests that aren't aligned properly?
> With <memory>, we had the odd situation that we allowed the hypervisor
> to round requests up, so that live numbers were different than the
> original startup numbers; it might be easier to not repeat that.
> > +  <optional>
> > +    <element name="maxMemory">
> > +      <ref name="scaledInteger"/>
> > +      <attribute name="slots">
> > +        <ref name="unsignedInt"/>
> > +      </attribute>
> This says the slots attribute is mandatory; is there ever a reason to
> allow it to be optional (defaulting to one slot, an all-or-none hotplug)?
qemu enforces it that way, so I went for a strict all-or-none approach,
with the possibility to relax it once a different hypervisor that does
not enforce it comes along.
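If that ever happens, relaxing the schema should be a trivial follow-up,
roughly (sketch only, not part of this patch):

  <optional>
    <element name="maxMemory">
      <ref name="scaledInteger"/>
      <optional>
        <attribute name="slots">
          <ref name="unsignedInt"/>
        </attribute>
      </optional>
    </element>
  </optional>
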
> > +/**
> > + * virDomainDefCheckUnsupportedMemoryHotplug:
> > + * @def: domain definition
> > + *
> > + * Returns -1 if the domain definition would enable memory hotplug via the
> > + * <maxMemory> tunable and reports an error. Otherwise returns 0.
> > + */
> > +int
> > +virDomainDefCheckUnsupportedMemoryHotplug(virDomainDefPtr def)
> > +{
> > +    /* memory hotplug tunables are not supported by this driver */
> > +    if (def->mem.max_memory > 0 || def->mem.memory_slots > 0) {
> Based on the XML, mem.memory_slots cannot be specified unless max_memory
> is also present (at least, assuming that you enforce that <maxMemory>
> be >= <memory>). But I guess it doesn't hurt to check both values.
Hmm, yeah, the part after the logical OR is dead code.
Peter