[libvirt] RFC: add <currentVcpu> element

Some hypervisors have the ability to hot-plug VCPUs exposed to the guest. Right now, libvirt XML only has the ability to describe the total number of vcpus assigned to a domain (the <vcpu> element under <domain>). It has the following APIs:

virConnectGetMaxVcpus - provide maximum that host can assign to guests
virDomainGetMaxVcpus - if domain is active, then max it was booted with; if inactive, then same as virConnectGetMaxVcpus
virDomainSetVcpus - change current vcpus assigned to domain; active domain only
virDomainPinVcpu - control how vcpus are pinned; active domain only
virDomainGetVcpus - detailed map of how vcpus are mapped to host cpus

And virsh has these commands:

setvcpus - maps to virDomainSetVcpus
vcpuinfo - maps to virDomainGetVcpus
vcpupin - maps to virDomainPinVcpu

https://bugzilla.redhat.com/show_bug.cgi?id=545118 describes the use case of booting a Xen HV with one value set for the maximum vcpu count, but another value for the current count. Technically, this can already be approximated by calling virDomainSetVcpus immediately after the guest is booted, but that can be resource-intensive, compared to the alternative of using xen's command line options to boot with a smaller current value than the maximum, and only later hot-plugging additional vcpus when needed (up to the maximum set at boot time). And it is not persistent, so the extra vcpus must be manually unplugged on every boot.

At the XML layer, I'm proposing the addition of a new element <currentVcpu>:

<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>1</currentVcpu>
  ...

If absent, then we keep the status quo of starting the domain with the same number of vcpus as the maximum. If present, it must be between 1 and <vcpu> inclusive (where supported, and exactly <vcpu> for hypervisors that lack vcpu hot-plug support), and dumping the XML of a domain will update the element to match virDomainSetVcpus; this provides the persistence aspect, and allows domain startup to take advantage of any command-line options to start with a reduced current vcpu count rather than having to unplug vcpus after the fact.

At the library API layer, I plan on adding:

virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for next boot; only affects persistent state
virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a domain, with a flag to state whether the change is persistent (inactive domains, or affecting the next boot of an active domain) or live (active domains only)

and altering:

virDomainSetVcpus - can additionally be used on inactive domains to affect next boot; no change to active semantics, basically now a wrapper for virDomainSetVcpusFlags(,0)
virDomainGetMaxVcpus - on inactive domains, this value now matches the <vcpu> setting rather than blindly matching virConnectGetMaxVcpus

I think that the existing virDomainGetVcpus is adequate for determining the number of current vcpus in an active domain. Any other API changes that you think might be necessary?

Finally, at the virsh layer, I plan on:

vcpuinfo: add --count flag; if the flag is present, then inactive domains show current and max vcpus rather than erroring out, and active domains add current and max vcpu information to the overall output
setvcpus: add --max and --persistent flags; without flags, this still maps to virDomainSetVcpus and only affects active domains; with --max, it maps to virDomainSetMaxVcpus; with --persistent, it maps to virDomainSetVcpusFlags

Any thoughts on this plan of attack before I start submitting code patches?
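For concreteness, a minimal sketch of that status-quo approximation, using only existing libvirt calls (the domain name and counts are hypothetical, and error handling is trimmed):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen(NULL);   /* default hypervisor URI */
    virDomainPtr dom;

    if (!conn)
        return 1;
    dom = virDomainLookupByName(conn, "demo");   /* hypothetical domain */
    if (dom && virDomainCreate(dom) == 0) {
        /* Booted with the full <vcpu> maximum; now hot-unplug down to the
         * desired current count.  Not persistent: must be redone per boot. */
        if (virDomainSetVcpus(dom, 1) < 0)
            fprintf(stderr, "vcpu unplug failed\n");
    }
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}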
-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On 09/23/2010 04:38 PM, Eric Blake wrote:
At the library API layer, I plan on adding:
virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for next boot; only affects persistent state
As I start to code this, it seems a bit redundant. I can avoid virDomainSetMaxVcpus by
virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a domain, with a flag to state whether the change is persistent (inactive domains or affecting next boot of active domain) or live (active domains only).
using these flags:

VIR_SET_VCPU_MAXIMUM = 1
VIR_SET_VCPU_PERSISTENT = 2

such that:

virDomainSetVcpusFlags(dom,1,0) - same as existing virDomainSetVcpus
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM) - error; can't change max on active domain
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM|VIR_SET_VCPU_PERSISTENT) - sets <vcpu> xml element for next boot
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_PERSISTENT) - sets <currentVcpu> xml element for next boot

-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org
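Spelled out as client code, the table above might look like this sketch; note that the VIR_SET_VCPU_* flags and the virDomainSetVcpusFlags prototype are only proposals at this point, so they are declared locally rather than taken from libvirt.h:

#include <libvirt/libvirt.h>

/* Proposed flags and prototype; not yet in libvirt.h. */
enum {
    VIR_SET_VCPU_MAXIMUM    = 1,
    VIR_SET_VCPU_PERSISTENT = 2,
};
int virDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
                           unsigned int flags);

static void demo(virDomainPtr dom)
{
    /* same as existing virDomainSetVcpus(dom, 1): live change only */
    virDomainSetVcpusFlags(dom, 1, 0);
    /* would fail: cannot change the maximum of an active domain */
    virDomainSetVcpusFlags(dom, 1, VIR_SET_VCPU_MAXIMUM);
    /* set the <vcpu> element for the next boot */
    virDomainSetVcpusFlags(dom, 1,
                           VIR_SET_VCPU_MAXIMUM | VIR_SET_VCPU_PERSISTENT);
    /* set the <currentVcpu> element for the next boot */
    virDomainSetVcpusFlags(dom, 1, VIR_SET_VCPU_PERSISTENT);
}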

On Fri, Sep 24, 2010 at 02:25:30PM -0600, Eric Blake wrote:
On 09/23/2010 04:38 PM, Eric Blake wrote:
At the library API layer, I plan on adding:
virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for next boot; only affects persistent state
As I start to code this, it seems a bit redundant. I can avoid virDomainSetMaxVcpus by
virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a domain, with a flag to state whether the change is persistent (inactive domains or affecting next boot of active domain) or live (active domains only).
using these flags:
VIR_SET_VCPU_MAXIMUM = 1
VIR_SET_VCPU_PERSISTENT = 2
such that
virDomainSetVcpusFlags(dom,1,0) - same as existing virDomainSetVcpus
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM) - error; can't change max on active domain
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM|VIR_SET_VCPU_PERSISTENT) - sets <vcpu> xml element for next boot
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_PERSISTENT) - sets <currentVcpu> xml element for next boot
Yes, I suggest adding two functions, one for set and one for get, allowing the full set of operations through the use of flags.

Another question I had: is there a way in QEmu to specify a different cpu count from -smp, indicating the startup count?

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/27/2010 10:25 AM, Daniel Veillard wrote:
using these flags:
VIR_SET_VCPU_MAXIMUM = 1
VIR_SET_VCPU_PERSISTENT = 2
such that
virDomainSetVcpusFlags(dom,1,0) - same as existing virDomainSetVcpus
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM) - error; can't change max on active domain
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM|VIR_SET_VCPU_PERSISTENT) - sets <vcpu> xml element for next boot
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_PERSISTENT) - sets <currentVcpu> xml element for next boot
Yes, I suggest adding two functions, one for set and one for get, allowing the full set of operations through the use of flags.
OK, given your feedback, the proposal is now:

XML layer - still debating on <currentVcpu> vs. <vcpu current=n> (see other email), but that is relatively trivial to switch between styles

API layer - given your desire to make changes to an active domain also affect persistent state in one call, we need three flags instead of two. My current thoughts: add one new enum and two new functions:

// flags for both virDomainSetVcpusFlags and virDomainGetVcpusFlags
enum virDomainVcpuFlags {
    // whether to affect active state or next boot state
    VIR_DOMAIN_VCPU_ACTIVE = 1,
    VIR_DOMAIN_VCPU_PERSISTENT = 2,
    // whether to affect maximum rather than current
    VIR_DOMAIN_VCPU_MAXIMUM = 4,
};

At least one of VIR_DOMAIN_VCPU_ACTIVE and VIR_DOMAIN_VCPU_PERSISTENT must be set. Using VIR_DOMAIN_VCPU_ACTIVE requires an active domain, while VIR_DOMAIN_VCPU_PERSISTENT works for active and inactive domains. For setting the count, both flags may be set (although setting both + VIR_DOMAIN_VCPU_MAXIMUM will fail); for getting, exactly one must be set. For setting, VIR_DOMAIN_VCPU_MAXIMUM must be paired with VIR_DOMAIN_VCPU_PERSISTENT; for getting, it can be paired with either flag.

// returns -1 on failure, 0 on success
// virDomainSetVcpus maps to virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
int virDomainSetVcpusFlags(virDomainPtr, unsigned int nvcpus, unsigned int flags);

// returns -1 on failure, count on success
// virDomainGetVcpus remains more complex regarding pinning info
// virDomainGetMaxVcpus maps to virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_MAXIMUM)
int virDomainGetVcpusFlags(virDomainPtr, unsigned int flags);

No change to existing API semantics, although the implementation can wrap old APIs to call the new ones with appropriate flags where appropriate to minimize code duplication.
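As a usage sketch of the rules above (the enum and both prototypes are still proposals, so this assumes they exist exactly as declared):

static void demo(virDomainPtr dom)
{
    /* live-only change; equivalent to virDomainSetVcpus(dom, 2) */
    virDomainSetVcpusFlags(dom, 2, VIR_DOMAIN_VCPU_ACTIVE);

    /* change the stored current count for the next boot only */
    virDomainSetVcpusFlags(dom, 2, VIR_DOMAIN_VCPU_PERSISTENT);

    /* change the maximum; valid only when paired with PERSISTENT */
    virDomainSetVcpusFlags(dom, 4,
        VIR_DOMAIN_VCPU_PERSISTENT | VIR_DOMAIN_VCPU_MAXIMUM);

    /* query: exactly one of ACTIVE/PERSISTENT may be set when getting */
    int max = virDomainGetVcpusFlags(dom,
        VIR_DOMAIN_VCPU_PERSISTENT | VIR_DOMAIN_VCPU_MAXIMUM);
    (void)max;
}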
virDomainSetVcpusFlags could be used to set the maximum vcpus of the persistent domain definition with a 3rd flag. Maybe we can find a better name for that function, though the Flags suffix is in line with other API function extensions. What we really want is to have convenient functions to get:

- max vcpus on stopped guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc(,VIR_DOMAIN_XML_INACTIVE) + XML parsing
- max vcpus on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetMaxVcpus()
virDomainGetXMLDesc(,0) + XML parsing
- current vcpu on stopped guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT)
[virDomainGetXMLDesc + parsing, if the XML update goes in]
- current vcpu on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainGetVcpus() + parsing pinning info
and set:

- max vcpus on stopped guests

virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc + XML mod + virDomainDefineXML
- max vcpu persistent on running guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc + XML mod + virDomainDefineXML
- current vcpu on stopped guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT)
[virDomainGetXMLDesc + XML mod + virDomainDefineXML, if the XML update goes in]
- current vcpu on running guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainSetVcpus()
Another thing is that when setting the current vcpu count on a running guest, we should also save this to the persistent data so that on domain restart one gets the expected state.
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_PERSISTENT)
[combination of virDomainSetVcpus() and virDomainGetXMLDesc + XML mod + virDomainDefineXML, if the XML update goes in]

So I think my latest proposal with three enum flags fits all these needs.

virsh layer:

vcpuinfo unchanged, tracks pinning info

setvcpus learns --max, --persistent, and --active flags, mapping quite nicely to the three enum values at the API; omitting both --persistent and --active calls the old API (which in turn implies --active)

new vcpucount command; I'm debating whether it is easier to provide all possible information without needing boolean options, or whether to provide --max, --persistent, and --active to make the user more closely match the API
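For the flag-based variant of vcpucount, each of the four quadrants would map to one call; a sketch, again assuming the proposed flags and virDomainGetVcpusFlags prototype from above:

#include <stdio.h>
#include <libvirt/libvirt.h>

static void print_vcpu_counts(virDomainPtr dom)
{
    static const struct {
        const char *label;
        unsigned int flags;
    } q[] = {
        { "maximum persistent", VIR_DOMAIN_VCPU_PERSISTENT | VIR_DOMAIN_VCPU_MAXIMUM },
        { "maximum active",     VIR_DOMAIN_VCPU_ACTIVE     | VIR_DOMAIN_VCPU_MAXIMUM },
        { "current persistent", VIR_DOMAIN_VCPU_PERSISTENT },
        { "current active",     VIR_DOMAIN_VCPU_ACTIVE },
    };
    size_t i;

    for (i = 0; i < sizeof(q) / sizeof(q[0]); i++) {
        int n = virDomainGetVcpusFlags(dom, q[i].flags);
        if (n >= 0)    /* ACTIVE queries fail for inactive domains */
            printf("%-20s %d\n", q[i].label, n);
    }
}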
Another question I had: is there a way in QEmu to specify a different cpu count from -smp, indicating the startup count?
I wish I knew off-hand, as it would make it easier for me to implement when I get to that part of the patch series :) But even if there isn't, I think that starting with the maximum via -smp and immediately hot-unplugging down to the current count is better than nothing.

-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Mon, Sep 27, 2010 at 11:20:42AM -0600, Eric Blake wrote:
On 09/27/2010 10:25 AM, Daniel Veillard wrote:
using these flags:
VIR_SET_VCPU_MAXIMUM = 1
VIR_SET_VCPU_PERSISTENT = 2
such that
virDomainSetVcpusFlags(dom,1,0) - same as existing virDomainSetVcpus
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM) - error; can't change max on active domain
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_MAXIMUM|VIR_SET_VCPU_PERSISTENT) - sets <vcpu> xml element for next boot
virDomainSetVcpusFlags(dom,1,VIR_SET_VCPU_PERSISTENT) - sets <currentVcpu> xml element for next boot
Yes, I suggest adding two functions, one for set and one for get, allowing the full set of operations through the use of flags.
OK, given your feedback, the proposal is now:
XML layer - still debating on <currentVcpu> vs. <vcpu current=n> (see other email), but that is relatively trivial to switch between styles
API layer - given your desire to make changes to an active domain also affect persistent state in one call, we need three flags instead of two. My current thoughts:
add one new enum and two new functions:
// flags for both virDomainSetVcpusFlags and virDomainGetVcpusFlags
enum virDomainVcpuFlags {
    // whether to affect active state or next boot state
    VIR_DOMAIN_VCPU_ACTIVE = 1,
    VIR_DOMAIN_VCPU_PERSISTENT = 2,
    // whether to affect maximum rather than current
    VIR_DOMAIN_VCPU_MAXIMUM = 4,
};
At least one of VIR_DOMAIN_VCPU_ACTIVE and VIR_DOMAIN_VCPU_PERSISTENT must be set. Using VIR_DOMAIN_VCPU_ACTIVE requires an active domain, while VIR_DOMAIN_VCPU_PERSISTENT works for active and inactive domains. For setting the count, both flags may be set (although setting both + VIR_DOMAIN_VCPU_MAXIMUM will fail); for getting, exactly one must be set. For setting, VIR_DOMAIN_VCPU_MAXIMUM must be paired with VIR_DOMAIN_VCPU_PERSISTENT; for getting, it can be paired with either flag.
yup looks fine to me :-)
// returns -1 on failure, 0 on success
// virDomainSetVcpus maps to virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
int virDomainSetVcpusFlags(virDomainPtr, unsigned int nvcpus, unsigned int flags);
// returns -1 on failure, count on success
// virDomainGetVcpus remains more complex regarding pinning info
// virDomainGetMaxVcpus maps to virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_MAXIMUM)
int virDomainGetVcpusFlags(virDomainPtr, unsigned int flags);
No change to existing API semantics, although the implementation can wrap old APIs to call the new ones with appropriate flags where appropriate to minimize code duplication.
right
virDomainSetVcpusFlags could be used to set the maximum vcpus of the persistent domain definition with a 3rd flag. Maybe we can find a better name for that function, though the Flags suffix is in line with other API function extensions. What we really want is to have convenient functions to get:

- max vcpus on stopped guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc(,VIR_DOMAIN_XML_INACTIVE) + XML parsing
- max vcpus on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetMaxVcpus()
virDomainGetXMLDesc(,0) + XML parsing
- current vcpu on stopped guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT)
[virDomainGetXMLDesc + parsing, if the XML update goes in]
- current vcpu on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainGetVcpus() + parsing pinning info
and set:

- max vcpus on stopped guests

virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc + XML mod + virDomainDefineXML
- max vcpu persistent on running guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT|VIR_DOMAIN_VCPU_MAXIMUM)
virDomainGetXMLDesc + XML mod + virDomainDefineXML
- current vcpu on stopped guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_PERSISTENT)
[virDomainGetXMLDesc + XML mod + virDomainDefineXML, if the XML update goes in]
- current vcpu on running guests
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainSetVcpus()
Another thing is that when setting the current vcpu count on a running guest, we should also save this to the persistent data so that on domain restart one gets the expected state.
virDomainSetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE|VIR_DOMAIN_VCPU_PERSISTENT)
[combination of virDomainSetVcpus() and virDomainGetXMLDesc + XML mod + virDomainDefineXML, if the XML update goes in]
So I think my latest proposal with three enum flags fits all these needs.
Yes, that looks like a complete set, and we still have some room at the flag level!
virsh layer:
vcpuinfo unchanged, tracks pinning info
setvcpus learns --max, --persistent, and --active flags, mapping quite nicely to the three enum values at the API; omitting both --persistent and --active calls the old API (which in turn implies --active)
yes
new vcpucount command; I'm debating whether it is easier to provide all possible information without needing boolean options, or whether to provide --max, --persistent, and --active to make the user more closely match the API
In general, virsh commands follow the APIs really closely, except in cases where we know an API isn't really used in isolation. We need to keep in mind that the output may be reused in further scripting, which is why I would tend to favor distinct flags.
Another question I had: is there a way in QEmu to specify a different cpu count from -smp, indicating the startup count?
I wish I knew off-hand, as it would make it easier for me to implement when I get to that part of the patch series :) But even if there isn't, I think that starting with the maximum via -smp and immediately hot-unplugging down to the current count is better than nothing.
right :-)

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/27/2010 11:20 AM, Eric Blake wrote:
No change to existing API semantics, although the implementation can wrap old APIs to call the new ones with appropriate flags where appropriate to minimize code duplication.
One more API to think about: virDomainGetInfo fills in a virDomainInfo struct, which includes an unsigned short nrVirtCpu member. I'm assuming that since this is a public struct involved in the on-the-wire RPC protocol, we can't change it to add a new member (and it also implicitly means that we are limited to 64k vcpus, even though the unsigned int argument of virDomainSetVcpus could otherwise go larger). Given my testing, it looks like this field tracks live changes from virsh setvcpus, so it now needs to be explicitly documented as the current vcpu count rather than the maximum, when the two differ. Which means we have another synonym:
- current vcpu on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainGetVcpus() + parsing pinning info
virDomainGetInfo()
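For reference, the virDomainGetInfo route is an existing, stable API; nrVirtCpu is the unsigned short member discussed above, hence the 64k ceiling:

#include <libvirt/libvirt.h>

static int current_vcpus(virDomainPtr dom)
{
    virDomainInfo info;

    if (virDomainGetInfo(dom, &info) < 0)
        return -1;
    /* For a running domain this tracks live setvcpus changes, i.e. the
     * current count rather than the maximum. */
    return info.nrVirtCpu;
}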
-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Tue, Sep 28, 2010 at 01:36:13PM -0600, Eric Blake wrote:
On 09/27/2010 11:20 AM, Eric Blake wrote:
No change to existing API semantics, although the implementation can wrap old APIs to call the new ones with appropriate flags where appropriate to minimize code duplication.
One more API to think about:
virDomainGetInfo fills in a virDomainInfo struct, which includes an unsigned short nrVirtCpu member. I'm assuming that since this is a public struct involved in the on-the-wire RPC protocol, we can't change it to add a new member (and it also implicitly means that we are limited to 64k vcpus, even though the unsigned int argument of virDomainSetVcpus could otherwise go larger). Given my
it's a public struct so immutable now, right
testing, it looks like this field tracks live changes from virsh setvcpus, so this now needs to be explicitly documented as the current vcpu rather than the maximum, when the two differ. Which
yes
means we have another synonym:
- current vcpu on running guests
virDomainGetVcpusFlags(,VIR_DOMAIN_VCPU_ACTIVE)
virDomainGetVcpus() + parsing pinning info
virDomainGetInfo()
agreed,

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/27/2010 11:20 AM, Eric Blake wrote:
Another question I had: is there a way in QEmu to specify a different cpu count from -smp, indicating the startup count?
I wish I knew off-hand, as it would make it easier for me to implement when I get to that part of the patch series :) But even if there isn't, I think that starting with the maximum via -smp and immediately hot-unplugging down to the current count is better than nothing.
Answering my own question: with qemu 0.12.5, 'qemu -help' lists:

-smp n[,maxcpus=cpus][,cores=cores][,threads=threads][,sockets=sockets]
        set the number of CPUs to 'n' [default=1]
        maxcpus= maximum number of total cpus, including offline CPUs for hotplug etc.

So it looks like '-smp 1,maxcpus=2' maps nicely to <vcpu current='1'>2</vcpu>. But per tests/qemuhelpdata, qemu 0.11.0 lacks this, so I also have to code up a qemu feature test.

-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org
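A sketch of the command-line logic this suggests; the strstr() probe of the -help text stands in for whatever feature test the qemu driver actually grows, and the helper name and signature are hypothetical:

#include <stdio.h>
#include <string.h>

/* Format the -smp argument.  If the help output advertises maxcpus= and a
 * reduced current count was requested, use "-smp current,maxcpus=max";
 * otherwise fall back to "-smp max" and hot-unplug after boot. */
static void format_smp(char *buf, size_t len, const char *qemu_help,
                       unsigned int current, unsigned int max)
{
    if (current < max && strstr(qemu_help, "maxcpus="))
        snprintf(buf, len, "-smp %u,maxcpus=%u", current, max);
    else
        snprintf(buf, len, "-smp %u", max);
}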

On Tue, Sep 28, 2010 at 04:36:51PM -0600, Eric Blake wrote:
On 09/27/2010 11:20 AM, Eric Blake wrote:
Another question I had: is there a way in QEmu to specify a different cpu count from -smp, indicating the startup count?
I wish I knew off-hand, as it would make it easier for me to implement when I get to that part of the patch series :) But even if there isn't, I think that starting with the maximum via -smp and immediately hot-unplugging down to the current count is better than nothing.
Answering my own question: with qemu 0.12.5, 'qemu -help' lists:
-smp n[,maxcpus=cpus][,cores=cores][,threads=threads][,sockets=sockets]
        set the number of CPUs to 'n' [default=1]
        maxcpus= maximum number of total cpus, including offline CPUs for hotplug etc.
So it looks like '-smp 1,maxcpus=2' maps nicely to <vcpu current='1'>2</vcpu>.
But per tests/qemuhelpdata, qemu 0.11.0 lacks this, so I also have to code up a qemu feature test.
Right, but that's good, we can really fully implement this :-)

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On Thu, Sep 23, 2010 at 04:38:45PM -0600, Eric Blake wrote:
Some hypervisors have the ability to hot-plug VCPUs exposed to the guest. Right now, libvirt XML only has the ability to describe the total number of vcpus assigned to a domain (the <vcpu> element under <domain>). It has the following APIs:
virConnectGetMaxVcpus - provide maximum that host can assign to guests
virDomainGetMaxVcpus - if domain is active, then max it was booted with; if inactive, then same as virConnectGetMaxVcpus
virDomainSetVcpus - change current vcpus assigned to domain; active domain only
virDomainPinVcpu - control how vcpus are pinned; active domain only
virDomainGetVcpus - detailed map of how vcpus are mapped to host cpus
And virsh has these commands:
setvcpus - maps to virDomainSetVcpus
vcpuinfo - maps to virDomainGetVcpus
vcpupin - maps to virDomainPinVcpu
https://bugzilla.redhat.com/show_bug.cgi?id=545118 describes the use case of booting a Xen HV with one value set for the maximum vcpu count, but another value for the current count. Technically, this can already be approximated by calling virDomainSetVcpus immediately after the guest is booted, but that can be resource-intensive, compared to the alternative of using xen's command line options to boot with a smaller current value than the maximum, and only later hot-plugging additional vcpus when needed (up to the maximum set at boot time). And it is not persistent, so the extra vcpus must be manually unplugged every boot.
At the XML layer, I'm proposing the addition of a new element <currentVcpu>:
<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>1</currentVcpu>
  ...

Hum, we already have a cpuset attribute for <vcpu> which is used to specify the semantics; I would rather keep an attribute here:

<vcpu current="2">4</vcpu>

instead.
If absent, then we keep the status quo of starting the domain with the same number of vcpus as the maximum. If present, it must be between 1 and <vcpu> inclusive (where supported, and exactly <vcpu> for hypervisors that lack vcpu hot-plug support), and dumping the xml of a domain will update the element to match virDomainSetVcpus; this provides the persistence aspect, and allows domain startup to take advantage of any command line options to start with a reduced current vcpu count rather than having to unplug vcpus after the fact.
okay
At the library API layer, I plan on adding:
virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for next boot; only affects persistent state
the fact that this can't change the limit for a running domain will have to be made clear in the doc, yes
virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a domain, with a flag to state whether the change is persistent (inactive domains or affecting next boot of active domain) or live (active domains only).
That really overlaps with virDomainSetVcpus. If the domain isn't running, a redefine with <vcpu current="...">..</vcpu> is equivalent, but less convenient. Currently the API has
and altering:
virDomainSetVcpus - can additionally be used on inactive domains to affect next boot; no change to active semantics, basically now a wrapper for virDomainSetVcpusFlags(,0)
Sounds reasonable.
virDomainGetMaxVcpus - on inactive domains, this value now matches the <vcpu> setting rather than blindly matching virConnectGetMaxVcpus
Hum... there we are really changing semantics, documented semantics. That, I think, we can't do!
I think that the existing virDomainGetVcpus is adequate for determining the number of current vcpus in an active domain. Any other API changes that you think might be necessary?
virDomainSetVcpusFlags could be used to set the maximum vcpus of the persistent domain definition with a 3rd flag. Maybe we can find a better name for that function, though the Flags suffix is in line with other API function extensions. What we really want is to have convenient functions to get:

- max vcpus on stopped guests
- max vcpus on running guests
- current vcpu on stopped guests
- current vcpu on running guests

and set:

- max vcpus on stopped guests
- max vcpu persistent on running guests
- current vcpu on stopped guests
- current vcpu on running guests

A priori, the only thing we can't change is setting a max vcpu on running guests, because if it were feasible this would just mean the notion of max vcpu doesn't exist on that hypervisor (though there is always at least a realistic limit provided by virConnectGetMaxVcpus()). Maybe we ought to just make 2 functions allowing the extended Set and Get, using flags for those, and not touch the other entry point semantics, since it's already defined. Another thing is that when setting the current vcpu count on a running guest, we should also save this to the persistent data so that on domain restart one gets the expected state.
Finally, at the virsh layer, I plan on:
vcpuinfo: add --count flag; if flag is present, then inactive domains show current and max vcpus rather than erroring out, and active domains add current and max vcpu information to the overall output
Not sure; vcpuinfo is really about the pinning, which is a rather complex operation that may not be available on all hypervisors, so I would not tie something as simple as providing the count to the pinning. A separate command, vcpucount [--max] domain, would be more orthogonal and convenient I think.
setvcpus: add --max and --persistent flags; without flags, this still maps to virDomainSetVcpus and only affects active domains; with --max, it maps to virDomainSetMaxVcpus, with --persistent, it maps to virDomainSetVcpusFlags
Yes, that sounds fine, thanks!

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/27/2010 10:21 AM, Daniel Veillard wrote:
At the XML layer, I'm proposing the addition of a new element <currentVcpu>:

<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>1</currentVcpu>
  ...
Hum, we already have a cpuset attribute for <vcpu> which is used to specify the semantics; I would rather keep an attribute here
<vcpu current="2">4</vcpu>
instead
Possible, but consider that we have:

<domain ...>
  <memory>256</memory>
  <currentMemory>128</currentMemory>
  ...
</domain>

So I was modeling after <memory>/<currentMemory> for consistency. Preferences on whether the parallel element or the attribute approach is better?

-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Mon, Sep 27, 2010 at 10:33:42AM -0600, Eric Blake wrote:
On 09/27/2010 10:21 AM, Daniel Veillard wrote:
At the XML layer, I'm proposing the addition of a new element <currentVcpu>:

<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>1</currentVcpu>
  ...
Hum, we already have a cpuset attribute for <vcpu> which is used to specify the semantics; I would rather keep an attribute here
<vcpu current="2">4</vcpu>
instead
Possible, but consider that we have:
<domain ...>
  <memory>256</memory>
  <currentMemory>128</currentMemory>
  ...
</domain>
So I was modeling after <memory>/<currentMemory> for consistency. Preferences on whether the parallel element or the attribute approach is better?
Well, the cpuset attribute possible on <vcpu>, which indicates on which physical CPUs the virtual CPUs may be mapped, would make as much sense if not more on currentVcpu instead. Also, both attributes are indications used for domain startup. So I think it's a bit more coherent in the end to have current on the vcpu as an attribute. It's not a big deal though, and I assume the change for the patch is trivial, just a change of XPath expression.

Daniel

-- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/27/2010 02:26 PM, Daniel Veillard wrote:
<vcpu current="2">4</vcpu>
instead
Possible, but consider that we have:
<domain ...>
  <memory>256</memory>
  <currentMemory>128</currentMemory>
  ...
</domain>

So I was modeling after <memory>/<currentMemory> for consistency. Preferences on whether the parallel element or the attribute approach is better?

Well, the cpuset attribute possible on <vcpu>, which indicates on which physical CPUs the virtual CPUs may be mapped, would make as much sense if not more on currentVcpu instead. Also, both attributes are indications used for domain startup. So I think it's a bit more coherent in the end to have current on the vcpu as an attribute. It's not a big deal though, and I assume the change for the patch is trivial, just a change of XPath expression.
Fair enough. With <memory>/<currentMemory>, there are no attributes; but since <vcpu> already has an attribute, adding another attribute instead of a parallel element makes the use of <vcpu cpuset="...">n</vcpu> less confusing. I'm adjusting my current work accordingly (and yes, it is a pretty trivial switch).

-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org