On Thu, Sep 23, 2010 at 04:38:45PM -0600, Eric Blake wrote:
Some hypervisors have the ability to hot-plug VCPUs exposed to the
guest. Right now, libvirt XML only has the ability to describe the
total number of vcpus assigned to a domain (the <vcpu> element under
<domain>). It has the following APIs:
virConnectGetMaxVcpus - provide maximum that host can assign to guests
virDomainGetMaxVcpus - if domain is active, then max it was booted
with; if inactive, then same as virConnectGetMaxVcpus
virDomainSetVcpus - change current vcpus assigned to domain; active
domain only
virDomainPinVcpu - control how vcpus are pinned; active domain only
virDomainGetVcpus - detailed map of how vcpus are mapped to host cpus
And virsh has these commands:
setvcpus - maps to virDomainSetVcpus
vcpuinfo - maps to virDomainGetVcpus
vcpupin - maps to virDomainPinVcpu
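(For reference, here is a minimal sketch of how the existing query
calls fit together today; error handling is trimmed, and the domain
name "example-guest" is a placeholder.)

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpenReadOnly(NULL); /* default URI */
      virDomainPtr dom;
      virVcpuInfoPtr info;
      int max, ncpus;

      if (conn == NULL)
          return 1;
      dom = virDomainLookupByName(conn, "example-guest");
      if (dom == NULL)
          goto cleanup;

      /* for an active domain: the max it was booted with */
      max = virDomainGetMaxVcpus(dom);
      if (max <= 0)
          goto cleanup;

      info = malloc(max * sizeof(*info));
      if (info != NULL) {
          /* one virVcpuInfo entry per currently plugged vcpu;
             NULL/0 skips the pinning maps */
          ncpus = virDomainGetVcpus(dom, info, max, NULL, 0);
          if (ncpus >= 0)
              printf("%d of %d vcpus in use\n", ncpus, max);
          free(info);
      }

  cleanup:
      if (dom != NULL)
          virDomainFree(dom);
      virConnectClose(conn);
      return 0;
  }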
https://bugzilla.redhat.com/show_bug.cgi?id=545118 describes the use
case of booting a Xen HV with one value set for the maximum vcpu
count, but another value for the current count. Technically, this
can already be approximated by calling virDomainSetVcpus immediately
after the guest is booted, but that can be resource-intensive,
compared to the alternative of using Xen's command-line options to
boot with a smaller current value than the maximum, and only later
hot-plugging additional vcpus when needed (up to the maximum set at
boot time). And it is not persistent, so the extra vcpus must be
manually unplugged every boot.
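(A sketch of that workaround, for concreteness; it assumes "dom" is
a guest defined with <vcpu>2</vcpu> that should run with only one
vcpu.)

  #include <libvirt/libvirt.h>

  /* boot the guest, then immediately hot-unplug down to one vcpu */
  static int start_with_one_vcpu(virDomainPtr dom)
  {
      if (virDomainCreate(dom) < 0)
          return -1;
      return virDomainSetVcpus(dom, 1);
  }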
At the XML layer, I'm proposing the addition of a new element <currentVcpu>:
<domain ...>
<vcpu>2</vcpu>
<currentVcpu>1</currentVcpu>
...
Hum, we already have a cpuset attribute on <vcpu> which is used to
carry extra semantics; I would rather stay with an attribute here:
<vcpu current="2">4</vcpu>
instead.
If absent, then we keep the status quo of starting the domain with
the same number of vcpus as the maximum. If present, it must be
between 1 and <vcpu> inclusive (where supported; exactly <vcpu> for
hypervisors that lack vcpu hot-plug support), and dumping the XML of
a domain will update the value to reflect any virDomainSetVcpus
calls; this provides the persistence aspect, and allows domain
startup to take advantage of any command-line options to start with
a reduced current vcpu count rather than having to unplug vcpus
after the fact.
okay
At the library API layer, I plan on adding:
virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for
next boot; only affects persistent state
the fact that this can't change the limit for a running domain will
have to be made clear in the doc, yes
virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a
domain, with a flag to state whether the change is persistent
(inactive domains or affecting next boot of active domain) or live
(active domains only).
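(To make the shape of that proposal concrete, a possible signature;
the flag names here are illustrative only, not an existing API.)

  typedef enum {
      VIR_DOMAIN_VCPU_LIVE   = (1 << 0), /* change the running domain */
      VIR_DOMAIN_VCPU_CONFIG = (1 << 1), /* change the persistent config */
  } virDomainVcpuFlags;

  int virDomainSetVcpusFlags(virDomainPtr domain,
                             unsigned int nvcpus,
                             unsigned int flags);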
that really overlaps with virDomainSetVcpus. If the domain isn't
running, a redefine with <vcpu current="...">..</vcpu> is equivalent,
but less convenient.
and altering:
virDomainSetVcpus - can additionally be used on inactive domains to
affect next boot; no change to active semantics, basically now a
wrapper for virDomainSetVcpusFlags(..., 0)
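(The wrapper would then be a one-liner, sketched under the signature
above:)

  int virDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
  {
      /* flags == 0: keep today's live-only behavior */
      return virDomainSetVcpusFlags(domain, nvcpus, 0);
  }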
Sounds reasonable,
virDomainGetMaxVcpus - on inactive domains, this value now matches
the <vcpu> setting rather than blindly matching
virConnectGetMaxVcpus
Hum... there we are really changing the semantics, the documented
semantics. That I think we can't do!
I think that the existing virDomainGetVcpus is adequate for
determining the number of current vcpus in an active domain. Any
other API changes that you think might be necessary?
virDomainSetVcpusFlags could be used to set the maximum vcpus of
the persistent domain definition with a 3rd flag. Maybe we can find
a better name for that function, though the Flags suffix is in line
with other API function extensions.
What we really want is to have convenient functions to get
- max vcpus on stopped guests
- max vcpus on running guests
- current vcpus on stopped guests
- current vcpus on running guests
and set
- max vcpus on stopped guests
- max vcpus (persistent only) on running guests
- current vcpus on stopped guests
- current vcpus on running guests
A priori the only thing we can't do is set the max vcpus on running
guests, because if that were feasible it would just mean the notion
of max vcpus doesn't exist on that hypervisor (though there is
always at least a realistic limit provided by
virConnectGetMaxVcpus()).
Maybe we ought to just make 2 functions allowing the extended Set
and Get, using flags for those, and not touch the other entry
points, since their semantics are already defined.
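(Sketched out, such a Get/Set pair might look like the following;
the MAXIMUM bit is the hypothetical 3rd flag mentioned above,
selecting the boot-time limit rather than the current count, and
all names remain illustrative.)

  typedef enum {
      VIR_DOMAIN_VCPU_LIVE    = (1 << 0), /* affect the running guest */
      VIR_DOMAIN_VCPU_CONFIG  = (1 << 1), /* affect the stored definition */
      VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* max limit, not current count */
  } virDomainVcpuFlags;

  int virDomainSetVcpusFlags(virDomainPtr domain,
                             unsigned int nvcpus,
                             unsigned int flags);

  /* returns the requested vcpu count, or -1 on error */
  int virDomainGetVcpusFlags(virDomainPtr domain,
                             unsigned int flags);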
Another thing is that when setting the current vcpu count on a
running guest we should also save this to the persistent data so
that on domain restart one gets the expected state.
Finally, at the virsh layer, I plan on:
vcpuinfo: add a --count flag; if the flag is present, then inactive
domains show current and max vcpus rather than erroring out, and
active domains add current and max vcpu information to the overall
output
Not sure; vcpuinfo is really about the pinning, which is a rather
complex operation that may not be available on all hypervisors. I
would not tie something as simple as providing the count to the
pinning.
A separate command
vcpucount [--max] domain
would be more orthogonal and convenient I think
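(Hypothetical invocations, assuming that command name and flag:)

  virsh vcpucount example-guest         # current vcpu count
  virsh vcpucount --max example-guest   # maximum vcpu count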
setvcpus: add --max and --persistent flags; without flags, this
still maps to virDomainSetVcpus and only affects active domains;
with --max, it maps to virDomainSetMaxVcpus, with --persistent, it
maps to virDomainSetVcpusFlags
yes that sounds fine
thanks !
Daniel
--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel(a)veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/