[libvirt] [PATCH 00/12] vcpus: initial implementation of <vcpu current=x>.

As promised, here's my first round of patches adding the ability for <vcpu> to persistently remember whether a domain should boot with fewer vcpus than the maximum allowed for hot-plugging. Patches 1-10 are pretty well tested, 11 may need some tweaking to get the semantics right, and 12 is mainly an RFC, since I need help finishing off the series.

Eric Blake (12):
  vcpu: improve cpuset attribute
  vcpu: add current attribute to <vcpu> element
  vcpu: add new public API
  vcpu: define internal driver API
  vcpu: implement the public APIs
  vcpu: implement the remote protocol
  vcpu: make old API trivially wrap to new API
  vcpu: support maxvcpu in domain_conf
  vcpu: add virsh support
  vcpu: improve vcpu support in qemu command line
  vcpu: complete vcpu support in qemu driver
  food for thought

 daemon/remote.c | 53 +++++++
 daemon/remote_dispatch_args.h | 2 +
 daemon/remote_dispatch_prototypes.h | 16 ++
 daemon/remote_dispatch_ret.h | 1 +
 daemon/remote_dispatch_table.h | 10 ++
 docs/formatdomain.html.in | 17 ++-
 docs/schemas/domain.rng | 14 ++-
 include/libvirt/libvirt.h.in | 14 ++
 src/conf/domain_conf.c | 41 ++++--
 src/conf/domain_conf.h | 3 +-
 src/driver.h | 9 +
 src/esx/esx_driver.c | 30 ++++-
 src/esx/esx_vmx.c | 24 ++-
 src/libvirt.c | 125 +++++++++++++++-
 src/libvirt_public.syms | 6 +
 src/lxc/lxc_driver.c | 2 +
 src/opennebula/one_conf.c | 11 +-
 src/opennebula/one_driver.c | 2 +
 src/openvz/openvz_conf.c | 7 +-
 src/openvz/openvz_driver.c | 47 +++++-
 src/phyp/phyp_driver.c | 32 ++++-
 src/qemu/qemu_conf.c | 19 ++-
 src/qemu/qemu_driver.c | 119 +++++++++++++--
 src/remote/remote_driver.c | 55 +++++++
 src/remote/remote_protocol.c | 33 ++++
 src/remote/remote_protocol.h | 26 +++
 src/remote/remote_protocol.x | 19 +++-
 src/remote_protocol-structs | 12 ++
 src/test/test_driver.c | 34 ++++-
 src/uml/uml_driver.c | 2 +
 src/vbox/vbox_tmpl.c | 46 +++++-
 src/xen/xen_driver.c | 32 ++++-
 src/xen/xend_internal.c | 15 ++-
 src/xen/xm_internal.c | 15 ++-
 src/xenapi/xenapi_driver.c | 60 +++++++-
 src/xenapi/xenapi_utils.c | 6 +-
 tests/qemuargv2xmltest.c | 2 +
 tests/qemuxml2argvdata/qemuxml2argv-smp.args | 1 +
 tests/qemuxml2argvdata/qemuxml2argv-smp.xml | 28 ++++
 tests/qemuxml2argvtest.c | 2 +
 tests/qemuxml2xmltest.c | 2 +
 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml | 22 +++
 tools/virsh.c | 211 +++++++++++++++++++++++---
 tools/virsh.pod | 28 ++++-
 44 files changed, 1140 insertions(+), 115 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.xml
 create mode 100644 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml

-- 
1.7.2.3

The <vcpu cpuset=...> attribute has been available since commit e193b5dd, but without documentation or RNG validation.

* docs/schemas/domain.rng (vcpu): Further validate cpuset.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.c: Fix typos.
---
 docs/formatdomain.html.in | 12 ++++++++++--
 docs/schemas/domain.rng | 9 ++++++++-
 src/conf/domain_conf.c | 6 +++---
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 5e56dae..9f077eb 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -194,7 +194,7 @@
   <memoryBacking>
     <hugepages/>
   </memoryBacking>
-  <vcpu>1</vcpu>
+  <vcpu cpuset="1-4,^3,6">2</vcpu>
   ...</pre>

 <dl>
@@ -213,7 +213,15 @@
       hugepages instead of the normal native page size.</dd>
     <dt><code>vcpu</code></dt>
     <dd>The content of this element defines the number of virtual
-      CPUs allocated for the guest OS.</dd>
+      CPUs allocated for the guest OS, which must be between 1 and
+      the maximum supported by the hypervisor. <span class="since">Since
+      0.4.4</span>, this element can contain an optional
+      <code>cpuset</code> attribute, which is a comma-separated
+      list of physical CPU numbers that virtual CPUs can be pinned
+      to. Each element in that list is either a single CPU number,
+      a range of CPU numbers, or a caret followed by a CPU number to
+      be excluded from a previous range.
+    </dd>
 </dl>

 <h3><a name="elementsCPU">CPU model and topology</a></h3>

diff --git a/docs/schemas/domain.rng b/docs/schemas/domain.rng
index ccb8cf3..2e0457b 100644
--- a/docs/schemas/domain.rng
+++ b/docs/schemas/domain.rng
@@ -302,7 +302,9 @@
     <optional>
       <element name="vcpu">
         <optional>
-          <attribute name="cpuset"/>
+          <attribute name="cpuset">
+            <ref name="cpuset"/>
+          </attribute>
         </optional>
         <ref name="countCPU"/>
       </element>
@@ -1785,6 +1787,11 @@
       <param name="pattern">[0-9]+</param>
     </data>
   </define>
+  <define name="cpuset">
+    <data type="string">
+      <param name="pattern">([0-9]+(-[0-9]+)?|\^[0-9]+)(,([0-9]+(-[0-9]+)?|\^[0-9]+))*</param>
+    </data>
+  </define>
   <define name="countCPU">
     <data type="unsignedShort">
       <param name="pattern">[0-9]+</param>

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e05d5d7..afb06e8 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -5270,7 +5270,7 @@ virDomainCpuNumberParse(const char **str, int maxcpu)
  *
  * Serialize the cpuset to a string
  *
- * Returns the new string NULL in case of error. The string need to be
+ * Returns the new string NULL in case of error. The string needs to be
  * freed by the caller.
  */
 char *
@@ -5329,8 +5329,8 @@ virDomainCpuSetFormat(char *cpuset, int maxcpu)
  * @maxcpu: number of elements available in @cpuset
  *
  * Parse the cpu set, it will set the value for enabled CPUs in the @cpuset
- * to 1, and 0 otherwise. The syntax allows coma separated entries each
- * can be either a CPU number, ^N to unset that CPU or N-M for ranges.
+ * to 1, and 0 otherwise. The syntax allows comma separated entries; each
+ * can be either a CPU number, ^N to unset that CPU, or N-M for ranges.
  *
  * Returns the number of CPU found in that set, or -1 in case of error.
  * @cpuset is modified accordingly to the value parsed.
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:05PM -0600, Eric Blake wrote:
The <vcpu cpuset=...> attribute has been available since commit e193b5dd, but without documentation or RNG validation.
* docs/schemas/domain.rng (vcpu): Further validate cpuset.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.c: Fix typos.
ACK, actually independent from the other patches,

Daniel

-- 
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/

On 10/01/2010 08:19 AM, Daniel Veillard wrote:
On Wed, Sep 29, 2010 at 06:02:05PM -0600, Eric Blake wrote:
The <vcpu cpuset=...> attribute has been available since commit e193b5dd, but without documentation or RNG validation.
* docs/schemas/domain.rng (vcpu): Further validate cpuset.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.c: Fix typos.
ACK, actually independent from the other patches,
Pushed this one since it is independent, like you noticed. I'm probably going to do a v2 for the other patches, even though they've been ACKed, to show how I've folded in the nit fixups and to include a Xen implementation.

-- 
Eric Blake eblake@redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org

Syntax agreed on in https://www.redhat.com/archives/libvir-list/2010-September/msg00476.html

* docs/schemas/domain.rng: Add new attribute.
* docs/formatdomain.html.in: Document it.
* tests/qemuxml2argvdata/qemuxml2argv-smp.xml: Add to domainschematest.
* tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml: Likewise.
---
 docs/formatdomain.html.in | 9 +++++--
 docs/schemas/domain.rng | 5 ++++
 tests/qemuxml2argvdata/qemuxml2argv-smp.xml | 28 +++++++++++++++++++++++++++
 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml | 22 +++++++++++++++++++++
 4 files changed, 61 insertions(+), 3 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.xml
 create mode 100644 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 9f077eb..b3d6001 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -194,7 +194,7 @@
   <memoryBacking>
     <hugepages/>
   </memoryBacking>
-  <vcpu cpuset="1-4,^3,6">2</vcpu>
+  <vcpu cpuset="1-4,^3,6" current="1">2</vcpu>
   ...</pre>

 <dl>
@@ -212,7 +212,7 @@
       hypervisor that the guest should have its memory allocated using
       hugepages instead of the normal native page size.</dd>
     <dt><code>vcpu</code></dt>
-    <dd>The content of this element defines the number of virtual
+    <dd>The content of this element defines the maximum number of virtual
       CPUs allocated for the guest OS, which must be between 1 and
       the maximum supported by the hypervisor. <span class="since">Since
       0.4.4</span>, this element can contain an optional
@@ -220,7 +220,10 @@
       list of physical CPU numbers that virtual CPUs can be pinned
       to. Each element in that list is either a single CPU number,
       a range of CPU numbers, or a caret followed by a CPU number to
-      be excluded from a previous range.
+      be excluded from a previous range. <span class="since">Since
+      0.8.5</span>, the optional attribute <code>current</code> can
+      be used to specify that fewer than the maximum number of
+      virtual CPUs should be enabled.
     </dd>
 </dl>

diff --git a/docs/schemas/domain.rng b/docs/schemas/domain.rng
index 2e0457b..b3ae597 100644
--- a/docs/schemas/domain.rng
+++ b/docs/schemas/domain.rng
@@ -306,6 +306,11 @@
             <ref name="cpuset"/>
           </attribute>
         </optional>
+        <optional>
+          <attribute name="current">
+            <ref name="countCPU"/>
+          </attribute>
+        </optional>
         <ref name="countCPU"/>
       </element>
     </optional>

diff --git a/tests/qemuxml2argvdata/qemuxml2argv-smp.xml b/tests/qemuxml2argvdata/qemuxml2argv-smp.xml
new file mode 100644
index 0000000..975f873
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-smp.xml
@@ -0,0 +1,28 @@
+<domain type='qemu'>
+  <name>QEMUGuest1</name>
+  <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+  <memory>219200</memory>
+  <currentMemory>219200</currentMemory>
+  <vcpu current='1'>2</vcpu>
+  <os>
+    <type arch='i686' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <cpu>
+    <topology sockets='2' cores='1' threads='1'/>
+  </cpu>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu</emulator>
+    <disk type='block' device='disk'>
+      <source dev='/dev/HostVG/QEMUGuest1'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' unit='0'/>
+    </disk>
+    <controller type='ide' index='0'/>
+    <memballoon model='virtio'/>
+  </devices>
+</domain>

diff --git a/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
new file mode 100644
index 0000000..a11f713
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
@@ -0,0 +1,22 @@
+<domain type='xen' id='15'>
+  <name>pvtest</name>
+  <uuid>596a5d2171f48fb2e068e2386a5c413e</uuid>
+  <os>
+    <type>linux</type>
+    <kernel>/var/lib/xen/vmlinuz.2Dn2YT</kernel>
+    <initrd>/var/lib/xen/initrd.img.0u-Vhq</initrd>
+    <cmdline> method=http://download.fedora.devel.redhat.com/pub/fedora/linux/core/test/5.91/x86_... </cmdline>
+  </os>
+  <memory>430080</memory>
+  <vcpu current='1'>2</vcpu>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>destroy</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <disk type='file' device='disk'>
+      <source file='/root/some.img'/>
+      <target dev='xvda'/>
+    </disk>
+    <console tty='/dev/pts/4'/>
+  </devices>
+</domain>
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:06PM -0600, Eric Blake wrote:
Syntax agreed on in https://www.redhat.com/archives/libvir-list/2010-September/msg00476.html
* docs/schemas/domain.rng: Add new attribute.
* docs/formatdomain.html.in: Document it.
* tests/qemuxml2argvdata/qemuxml2argv-smp.xml: Add to domainschematest.
* tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml: Likewise.
---
 docs/formatdomain.html.in | 9 +++++--
 docs/schemas/domain.rng | 5 ++++
 tests/qemuxml2argvdata/qemuxml2argv-smp.xml | 28 +++++++++++++++++++++++++++
 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml | 22 +++++++++++++++++++++
 4 files changed, 61 insertions(+), 3 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.xml
 create mode 100644 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
@@ -220,7 +220,10 @@
       list of physical CPU numbers that virtual CPUs can be pinned
       to. Each element in that list is either a single CPU number,
       a range of CPU numbers, or a caret followed by a CPU number to
-      be excluded from a previous range.
+      be excluded from a previous range. <span class="since">Since
+      0.8.5</span>, the optional attribute <code>current</code> can
+      be used to specify that fewer than the maximum number of
"that fewer" doesn't sound so nice; "when fewer" would work, or "that less", but it's a nitpick, and I'm not even a native English speaker, so my opinion is not worth much :-)
+      virtual CPUs should be enabled.
     </dd>
 </dl>
ACK,

Daniel

-- 
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/

API agreed on in https://www.redhat.com/archives/libvir-list/2010-September/msg00456.html

* include/libvirt/libvirt.h.in (virDomainVcpuFlags)
(virDomainSetVcpusFlags, virDomainGetVcpusFlags): New declarations.
* src/libvirt_public.syms: Export new symbols.
---
However, in implementing things, I'm wondering if I should use the names:

VIR_DOMAIN_VCPU_CONFIG (instead of VIR_DOMAIN_VCPU_PERSISTENT)
VIR_DOMAIN_VCPU_LIVE (instead of VIR_DOMAIN_VCPU_ACTIVE)

to match virDomainDeviceModifyFlags, where _CONFIG and _LIVE have the same semantics of setting one or both aspects of a domain.

 include/libvirt/libvirt.h.in | 14 ++++++++++++++
 src/libvirt_public.syms | 6 ++++++
 2 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index b45f7ec..6bac8f1 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -826,8 +826,22 @@ struct _virVcpuInfo {
 };
 typedef virVcpuInfo *virVcpuInfoPtr;

+typedef enum {
+    /* Must choose at least one of these two bits; SetVcpus can choose both */
+    VIR_DOMAIN_VCPU_ACTIVE = (1 << 0), /* Affect active domain */
+    VIR_DOMAIN_VCPU_PERSISTENT = (1 << 1), /* Affect next boot */
+
+    /* Additional flags to be bit-wise OR'd in */
+    VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* Max rather than current count */
+} virDomainVcpuFlags;
+
 int virDomainSetVcpus (virDomainPtr domain,
                        unsigned int nvcpus);
+int virDomainSetVcpusFlags (virDomainPtr domain,
+                            unsigned int nvcpus,
+                            unsigned int flags);
+int virDomainGetVcpusFlags (virDomainPtr domain,
+                            unsigned int flags);

 int virDomainPinVcpu (virDomainPtr domain,
                       unsigned int vcpu,

diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 849c163..4e89723 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -405,4 +405,10 @@ LIBVIRT_0.8.2 {
     virDomainCreateWithFlags;
 } LIBVIRT_0.8.1;

+LIBVIRT_0.8.5 {
+    global:
+        virDomainGetVcpusFlags;
+        virDomainSetVcpusFlags;
+} LIBVIRT_0.8.2;
+
 # .... define new API here using predicted next version number ....
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:07PM -0600, Eric Blake wrote:
API agreed on in https://www.redhat.com/archives/libvir-list/2010-September/msg00456.html
* include/libvirt/libvirt.h.in (virDomainVcpuFlags)
(virDomainSetVcpusFlags, virDomainGetVcpusFlags): New declarations.
* src/libvirt_public.syms: Export new symbols.
---
However, in implementing things, I'm wondering if I should use the names:
VIR_DOMAIN_VCPU_CONFIG (instead of VIR_DOMAIN_VCPU_PERSISTENT)
VIR_DOMAIN_VCPU_LIVE (instead of VIR_DOMAIN_VCPU_ACTIVE)
to match virDomainDeviceModifyFlags, where _CONFIG and _LIVE have the same semantics of setting one or both aspects of a domain.
 include/libvirt/libvirt.h.in | 14 ++++++++++++++
 src/libvirt_public.syms | 6 ++++++
 2 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index b45f7ec..6bac8f1 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -826,8 +826,22 @@ struct _virVcpuInfo {
 };
 typedef virVcpuInfo *virVcpuInfoPtr;

+typedef enum {
+    /* Must choose at least one of these two bits; SetVcpus can choose both */
+    VIR_DOMAIN_VCPU_ACTIVE = (1 << 0), /* Affect active domain */
+    VIR_DOMAIN_VCPU_PERSISTENT = (1 << 1), /* Affect next boot */
+
+    /* Additional flags to be bit-wise OR'd in */
+    VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* Max rather than current count */
+} virDomainVcpuFlags;
+
 int virDomainSetVcpus (virDomainPtr domain,
                        unsigned int nvcpus);
+int virDomainSetVcpusFlags (virDomainPtr domain,
+                            unsigned int nvcpus,
+                            unsigned int flags);
+int virDomainGetVcpusFlags (virDomainPtr domain,
+                            unsigned int flags);

 int virDomainPinVcpu (virDomainPtr domain,
                       unsigned int vcpu,

diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 849c163..4e89723 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -405,4 +405,10 @@ LIBVIRT_0.8.2 {
     virDomainCreateWithFlags;
 } LIBVIRT_0.8.1;

+LIBVIRT_0.8.5 {
+    global:
+        virDomainGetVcpusFlags;
+        virDomainSetVcpusFlags;
+} LIBVIRT_0.8.2;
+
 # .... define new API here using predicted next version number ....
ACK,

Daniel

-- 
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/

2010/9/30 Eric Blake <eblake@redhat.com>:
API agreed on in https://www.redhat.com/archives/libvir-list/2010-September/msg00456.html
* include/libvirt/libvirt.h.in (virDomainVcpuFlags)
(virDomainSetVcpusFlags, virDomainGetVcpusFlags): New declarations.
* src/libvirt_public.syms: Export new symbols.
---
However, in implementing things, I'm wondering if I should use the names:
VIR_DOMAIN_VCPU_CONFIG (instead of VIR_DOMAIN_VCPU_PERSISTENT)
VIR_DOMAIN_VCPU_LIVE (instead of VIR_DOMAIN_VCPU_ACTIVE)
to match virDomainDeviceModifyFlags, where _CONFIG and _LIVE have the same semantics of setting one or both aspects of a domain.
If you want to match the semantics of _CONFIG and _LIVE from virDomainDeviceModifyFlags, then you need to change the ESX driver differently in patch 7/12.

If I understand the semantics of virDomainDeviceModifyFlags correctly, _LIVE only affects the runtime state of the domain, and changes made with the _LIVE flag alone are lost on domain restart. In the same way, _CONFIG only affects the persistent config and not the runtime state.

With ESX there is no such thing as a distinct runtime state. Changes to the number of vCPUs affect the runtime state and the persistent config at the same time. Currently you altered the ESX driver in patch 7/12 to

esxDomainSetVcpus = esxDomainSetVcpusFlags(VIR_DOMAIN_VCPU_ACTIVE)

But this should actually use _LIVE | _CONFIG as flags to match what ESX can do.

Also, in 5/12 you state that trying to change the vCPU count of an inactive domain using the _LIVE flag should fail. The implementation in the ESX driver in 7/12 doesn't match this either. esxDomainSetVcpus needs to call esxDomainSetVcpusFlags with _CONFIG and add _LIVE to the set of flags when the domain is active. Likewise, esxDomainSetVcpusFlags needs to enforce that _CONFIG is always set and that _LIVE is set iff the domain is active.

The same might apply to other drivers; I haven't checked this yet.

Matthias

On 10/01/2010 10:13 AM, Matthias Bolte wrote:
However, in implementing things, I'm wondering if I should use the names:
VIR_DOMAIN_VCPU_CONFIG (instead of VIR_DOMAIN_VCPU_PERSISTENT)
VIR_DOMAIN_VCPU_LIVE (instead of VIR_DOMAIN_VCPU_ACTIVE)
to match virDomainDeviceModifyFlags, where _CONFIG and _LIVE have the same semantics of setting one or both aspects of a domain.
If you want to match the semantics of _CONFIG and _LIVE from virDomainDeviceModifyFlags then you need to change the ESX driver differently in patch 7/12.
If I understand the semantics of virDomainDeviceModifyFlags correctly, _LIVE only affects the runtime state of the domain, and changes made with the _LIVE flag alone are lost on domain restart. In the same way, _CONFIG only affects the persistent config and not the runtime state.
Yes, that was the usage pattern I had envisioned, to allow for domains like qemu where you can make a live-only change.
With ESX there is no such thing as a distinct runtime state. Changes to the number of vCPUs affect the runtime state and the persistent config at the same time.
And, in playing with things a bit more, this is what I see pre-patch on Fedora 14 (# for host, $ for guest):

# virsh edit fedora_14
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>2</vcpu>
# virsh start fedora_14
Domain fedora_14 started
# virsh setvcpus fedora_14 3
[hmm, trying to exceed the max killed my guest - not very nice]
# virsh setvcpus fedora_14 1
error: Requested operation is not valid: domain is not running
[it currently requires a live domain]
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>3</vcpu>
[but it changed the configured value]
# virsh start fedora_14
Domain fedora_14 started
# virsh setvcpus fedora_14 2
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>2</vcpu>
$ nproc
3
[hmm - the live value is still 3]
$ shutdown -h now
# virsh start fedora_14
Domain fedora_14 started
$ nproc
2
[yep - definitely a persistent setting, and I couldn't figure how it is supposed to be live, since I could not see qemuMonitorSetCPU having a visible effect on nproc(1) in the guest?]
Currently you altered the ESX driver in patch 7/12 to
esxDomainSetVcpus = esxDomainSetVcpusFlags(VIR_DOMAIN_VCPU_ACTIVE)
But this should actually use _LIVE | _CONFIG as flags to match what ESX can do.
I'm starting to think that should be the case in general: virDomainSetVcpus should map to virDomainSetVcpusFlags(VIR_DOMAIN_VCPU_LIVE|VIR_DOMAIN_VCPU_CONFIG), and the new behavior becomes that of specifying live-only or config-only. There may yet be some logic bugs for some drivers, and certainly this argues for additions to the TCK on hot-plugged vcpu behavior. At any rate, I really need to post a v2 series that tries to be more careful with my handling of live vs. configuration.
Also in 5/12 you state that trying to change the vCPU count of an inactive domain using the _LIVE flag should fail. The implementation in the ESX driver in 7/12 doesn't match this either.
If the domain is inactive, then _LIVE can't possibly change anything. I could probably agree that it makes more sense to have _CONFIG|_LIVE silently ignore _LIVE when a domain is inactive rather than erroring out, especially given the argument that the existing setVcpus command without flags should probably be using _CONFIG|_LIVE; except that the qemu example above proved that the existing setVcpus errors out if the domain is not live. And I still think that _LIVE in isolation on an inactive domain should error out (the _CONFIG|_LIVE combination is different, because _CONFIG can still do useful work).
esxDomainSetVcpus needs to call esxDomainSetVcpusFlags with _CONFIG and add _LIVE to the set of flags when the domain is active. Likewise esxDomainSetVcpusFlags needs to enforce that _CONFIG is always set and that _LIVE is set iff the domain is active.
The same might apply to other drivers, I didn't check this yet.
Matthias
-- 
Eric Blake eblake@redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org

* src/driver.h (virDrvDomainSetVcpusFlags)
(virDrvDomainGetVcpusFlags): New typedefs.
(_virDriver): New callback members.
* src/esx/esx_driver.c (esxDriver): Add stub for driver.
* src/lxc/lxc_driver.c (lxcDriver): Likewise.
* src/opennebula/one_driver.c (oneDriver): Likewise.
* src/openvz/openvz_driver.c (openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemuDriver): Likewise.
* src/remote/remote_driver.c (remote_driver): Likewise.
* src/test/test_driver.c (testDriver): Likewise.
* src/uml/uml_driver.c (umlDriver): Likewise.
* src/vbox/vbox_tmpl.c (Driver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDriver): Likewise.
---
Pretty mechanical.

 src/driver.h | 9 +++++++++
 src/esx/esx_driver.c | 2 ++
 src/lxc/lxc_driver.c | 2 ++
 src/opennebula/one_driver.c | 2 ++
 src/openvz/openvz_driver.c | 2 ++
 src/phyp/phyp_driver.c | 2 ++
 src/qemu/qemu_driver.c | 2 ++
 src/remote/remote_driver.c | 2 ++
 src/test/test_driver.c | 2 ++
 src/uml/uml_driver.c | 2 ++
 src/vbox/vbox_tmpl.c | 2 ++
 src/xen/xen_driver.c | 2 ++
 src/xenapi/xenapi_driver.c | 2 ++
 13 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/src/driver.h b/src/driver.h
index e443c1c..e157cee 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -173,6 +173,13 @@ typedef int
     (*virDrvDomainSetVcpus)     (virDomainPtr domain,
                                  unsigned int nvcpus);
 typedef int
+    (*virDrvDomainSetVcpusFlags) (virDomainPtr domain,
+                                  unsigned int nvcpus,
+                                  unsigned int flags);
+typedef int
+    (*virDrvDomainGetVcpusFlags) (virDomainPtr domain,
+                                  unsigned int flags);
+typedef int
     (*virDrvDomainPinVcpu)      (virDomainPtr domain,
                                  unsigned int vcpu,
                                  unsigned char *cpumap,
@@ -508,6 +515,8 @@ struct _virDriver {
     virDrvDomainRestore domainRestore;
     virDrvDomainCoreDump domainCoreDump;
     virDrvDomainSetVcpus domainSetVcpus;
+    virDrvDomainSetVcpusFlags domainSetVcpusFlags;
+    virDrvDomainGetVcpusFlags domainGetVcpusFlags;
     virDrvDomainPinVcpu domainPinVcpu;
     virDrvDomainGetVcpus domainGetVcpus;
     virDrvDomainGetMaxVcpus domainGetMaxVcpus;

diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index e382950..1db3a90 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -4150,6 +4150,8 @@ static virDriver esxDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     esxDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     esxDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index 326fee6..935fe49 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -2553,6 +2553,8 @@ static virDriver lxcDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     NULL, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     NULL, /* domainGetMaxVcpus */

diff --git a/src/opennebula/one_driver.c b/src/opennebula/one_driver.c
index e70f17b..2b36380 100644
--- a/src/opennebula/one_driver.c
+++ b/src/opennebula/one_driver.c
@@ -751,6 +751,8 @@ static virDriver oneDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     NULL, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     NULL, /* domainGetMaxVcpus */

diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 1ad93d9..8df9e7f 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1590,6 +1590,8 @@ static virDriver openvzDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     openvzDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     openvzDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index ab12392..56b0f30 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3936,6 +3936,8 @@ static virDriver phypDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     phypDomainSetCPU, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     phypGetLparCPUMAX, /* domainGetMaxVcpus */

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 25695df..9d21f35 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12643,6 +12643,8 @@ static virDriver qemuDriver = {
     qemudDomainRestore, /* domainRestore */
     qemudDomainCoreDump, /* domainCoreDump */
     qemudDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     qemudDomainPinVcpu, /* domainPinVcpu */
     qemudDomainGetVcpus, /* domainGetVcpus */
     qemudDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index acba01e..274c1fe 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -10299,6 +10299,8 @@ static virDriver remote_driver = {
     remoteDomainRestore, /* domainRestore */
     remoteDomainCoreDump, /* domainCoreDump */
     remoteDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     remoteDomainPinVcpu, /* domainPinVcpu */
     remoteDomainGetVcpus, /* domainGetVcpus */
     remoteDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 9d22339..f30de50 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -5260,6 +5260,8 @@ static virDriver testDriver = {
     testDomainRestore, /* domainRestore */
     testDomainCoreDump, /* domainCoreDump */
     testSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     testDomainPinVcpu, /* domainPinVcpu */
     testDomainGetVcpus, /* domainGetVcpus */
     testDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/uml/uml_driver.c b/src/uml/uml_driver.c
index 9101928..654b45c 100644
--- a/src/uml/uml_driver.c
+++ b/src/uml/uml_driver.c
@@ -2129,6 +2129,8 @@ static virDriver umlDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     NULL, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     NULL, /* domainGetMaxVcpus */

diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index b5cd2e4..a14b06b 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -8267,6 +8267,8 @@ virDriver NAME(Driver) = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     vboxDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     vboxDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index 56ba41b..4bbca94 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -1939,6 +1939,8 @@ static virDriver xenUnifiedDriver = {
     xenUnifiedDomainRestore, /* domainRestore */
     xenUnifiedDomainCoreDump, /* domainCoreDump */
     xenUnifiedDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     xenUnifiedDomainPinVcpu, /* domainPinVcpu */
     xenUnifiedDomainGetVcpus, /* domainGetVcpus */
     xenUnifiedDomainGetMaxVcpus, /* domainGetMaxVcpus */

diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 730859b..f57759a 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1754,6 +1754,8 @@ static virDriver xenapiDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     xenapiDomainSetVcpus, /* domainSetVcpus */
+    NULL, /* domainSetVcpusFlags */
+    NULL, /* domainGetVcpusFlags */
     xenapiDomainPinVcpu, /* domainPinVcpu */
     xenapiDomainGetVcpus, /* domainGetVcpus */
     xenapiDomainGetMaxVcpus, /* domainGetMaxVcpus */
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:08PM -0600, Eric Blake wrote:
* src/driver.h (virDrvDomainSetVcpusFlags)
(virDrvDomainGetVcpusFlags): New typedefs.
(_virDriver): New callback members.
* src/esx/esx_driver.c (esxDriver): Add stub for driver.
* src/lxc/lxc_driver.c (lxcDriver): Likewise.
* src/opennebula/one_driver.c (oneDriver): Likewise.
* src/openvz/openvz_driver.c (openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemuDriver): Likewise.
* src/remote/remote_driver.c (remote_driver): Likewise.
* src/test/test_driver.c (testDriver): Likewise.
* src/uml/uml_driver.c (umlDriver): Likewise.
* src/vbox/vbox_tmpl.c (Driver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDriver): Likewise.
---
Pretty mechanical.
Yup, ACK ! Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

* src/libvirt.c (virDomainSetVcpusFlags, virDomainGetVcpusFlags): New
functions.
---
Pretty mechanical.

 src/libvirt.c |  125 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index ca383ba..3a29d70 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -5111,13 +5111,132 @@ error:
 }
 
 /**
+ * virDomainSetVcpusFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @nvcpus: the new number of virtual CPUs for this domain, must be at least 1
+ * @flags: an OR'ed set of virDomainVcpuFlags
+ *
+ * Dynamically change the number of virtual CPUs used by the domain.
+ * Note that this call may fail if the underlying virtualization hypervisor
+ * does not support it or if growing the number is arbitrarily limited.
+ * This function requires privileged access to the hypervisor.
+ *
+ * @flags must include VIR_DOMAIN_VCPU_ACTIVE to affect a running
+ * domain (which will fail if domain is not active), or
+ * VIR_DOMAIN_VCPU_PERSISTENT to affect the next boot via the XML
+ * description of the domain.  Both flags may be set.
+ *
+ * If @flags includes VIR_DOMAIN_VCPU_MAXIMUM, then
+ * VIR_DOMAIN_VCPU_ACTIVE must be clear, and only the maximum virtual
+ * CPU limit is altered; generally, this value must be less than or
+ * equal to virConnectGetMaxVcpus().  Otherwise, this call affects the
+ * current virtual CPU limit, which must be less than or equal to the
+ * maximum limit.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int
+virDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
+                       unsigned int flags)
+{
+    virConnectPtr conn;
+    DEBUG("domain=%p, nvcpus=%u, flags=%u", domain, nvcpus, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(NULL, VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return (-1);
+    }
+    if (domain->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(domain, VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if (nvcpus < 1) {
+        virLibDomainError(domain, VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+    conn = domain->conn;
+
+    if (conn->driver->domainSetVcpusFlags) {
+        int ret;
+        ret = conn->driver->domainSetVcpusFlags (domain, nvcpus, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
+ * virDomainGetVcpusFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @flags: an OR'ed set of virDomainVcpuFlags
+ *
+ * Query the number of virtual CPUs used by the domain.  Note that
+ * this call may fail if the underlying virtualization hypervisor does
+ * not support it.  This function requires privileged access to the
+ * hypervisor.
+ *
+ * @flags must include either VIR_DOMAIN_VCPU_ACTIVE to query a
+ * running domain (which will fail if domain is not active), or
+ * VIR_DOMAIN_VCPU_PERSISTENT to query the XML description of the
+ * domain.  It is an error to set both flags.
+ *
+ * If @flags includes VIR_DOMAIN_VCPU_MAXIMUM, then the maximum
+ * virtual CPU limit is queried.  Otherwise, this call queries the
+ * current virtual CPU limit.
+ *
+ * Returns the number of virtual CPUs in case of success, -1 in case
+ * of failure.
+ */
+
+int
+virDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
+{
+    virConnectPtr conn;
+    DEBUG("domain=%p, flags=%u", domain, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(NULL, VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return (-1);
+    }
+
+    conn = domain->conn;
+
+    if (conn->driver->domainGetVcpusFlags) {
+        int ret;
+        ret = conn->driver->domainGetVcpusFlags (domain, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
  * virDomainPinVcpu:
  * @domain: pointer to domain object, or NULL for Domain0
  * @vcpu: virtual CPU number
  * @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
- * 	Each bit set to 1 means that corresponding CPU is usable.
- * 	Bytes are stored in little-endian order: CPU0-7, 8-15...
- * 	In each byte, lowest CPU number is least significant bit.
+ *	Each bit set to 1 means that corresponding CPU is usable.
+ *	Bytes are stored in little-endian order: CPU0-7, 8-15...
+ *	In each byte, lowest CPU number is least significant bit.
  * @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
  *	underlying virtualization system (Xen...).
  *	If maplen < size, missing bytes are set to zero.
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:09PM -0600, Eric Blake wrote:
* src/libvirt.c (virDomainSetVcpusFlags, virDomainGetVcpusFlags): New functions. ---
Pretty mechanical.
[...]
Well, one could argue that the flags could be checked up front here instead of leaving it to the individual drivers (for example, you would error out immediately instead of doing a round-trip when talking to libvirtd), but the check has to be done in the drivers anyway, so that's fine.

ACK

Daniel

* daemon/remote.c (remoteDispatchDomainSetVcpusFlags)
(remoteDispatchDomainGetVcpusFlags): New functions.
* src/remote/remote_driver.c (remoteDomainSetVcpusFlags)
(remoteDomainGetVcpusFlags, remote_driver): Client side
serialization.
* src/remote/remote_protocol.x (remote_domain_set_vcpus_flags_args)
(remote_domain_get_vcpus_flags_args)
(remote_domain_get_vcpus_flags_ret)
(REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS)
(REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS): Define wire format.
* daemon/remote_dispatch_args.h: Regenerate.
* daemon/remote_dispatch_prototypes.h: Likewise.
* daemon/remote_dispatch_table.h: Likewise.
* src/remote/remote_protocol.c: Likewise.
* src/remote/remote_protocol.h: Likewise.
* src/remote_protocol-structs: Likewise.
---
Pretty mechanical.

 daemon/remote.c                     |   53 ++++++++++++++++++++++++++++++++
 daemon/remote_dispatch_args.h       |    2 +
 daemon/remote_dispatch_prototypes.h |   16 ++++++++++
 daemon/remote_dispatch_ret.h        |    1 +
 daemon/remote_dispatch_table.h      |   10 ++++++
 src/remote/remote_driver.c          |   57 +++++++++++++++++++++++++++++++++-
 src/remote/remote_protocol.c        |   33 ++++++++++++++++++++
 src/remote/remote_protocol.h        |   26 ++++++++++++++++
 src/remote/remote_protocol.x        |   19 +++++++++++-
 src/remote_protocol-structs         |   12 +++++++
 10 files changed, 226 insertions(+), 3 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index 6b67678..a3193b9 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1751,6 +1751,33 @@ oom:
 }
 
 static int
+remoteDispatchDomainGetVcpusFlags (struct qemud_server *server ATTRIBUTE_UNUSED,
+                                   struct qemud_client *client ATTRIBUTE_UNUSED,
+                                   virConnectPtr conn,
+                                   remote_message_header *hdr ATTRIBUTE_UNUSED,
+                                   remote_error *rerr,
+                                   remote_domain_get_vcpus_flags_args *args,
+                                   remote_domain_get_vcpus_flags_ret *ret)
+{
+    virDomainPtr dom;
+
+    dom = get_nonnull_domain (conn, args->dom);
+    if (dom == NULL) {
+        remoteDispatchConnError(rerr, conn);
+        return -1;
+    }
+
+    ret->num = virDomainGetVcpusFlags (dom, args->flags);
+    if (ret->num == -1) {
+        virDomainFree(dom);
+        remoteDispatchConnError(rerr, conn);
+        return -1;
+    }
+    virDomainFree(dom);
+    return 0;
+}
+
+static int
 remoteDispatchDomainMigratePrepare (struct qemud_server *server ATTRIBUTE_UNUSED,
                                     struct qemud_client *client ATTRIBUTE_UNUSED,
                                     virConnectPtr conn,
@@ -2358,6 +2385,32 @@ remoteDispatchDomainSetVcpus (struct qemud_server *server ATTRIBUTE_UNUSED,
 }
 
 static int
+remoteDispatchDomainSetVcpusFlags (struct qemud_server *server ATTRIBUTE_UNUSED,
+                                   struct qemud_client *client ATTRIBUTE_UNUSED,
+                                   virConnectPtr conn,
+                                   remote_message_header *hdr ATTRIBUTE_UNUSED,
+                                   remote_error *rerr,
+                                   remote_domain_set_vcpus_flags_args *args,
+                                   void *ret ATTRIBUTE_UNUSED)
+{
+    virDomainPtr dom;
+
+    dom = get_nonnull_domain (conn, args->dom);
+    if (dom == NULL) {
+        remoteDispatchConnError(rerr, conn);
+        return -1;
+    }
+
+    if (virDomainSetVcpusFlags (dom, args->nvcpus, args->flags) == -1) {
+        virDomainFree(dom);
+        remoteDispatchConnError(rerr, conn);
+        return -1;
+    }
+    virDomainFree(dom);
+    return 0;
+}
+
+static int
 remoteDispatchDomainShutdown (struct qemud_server *server ATTRIBUTE_UNUSED,
                               struct qemud_client *client ATTRIBUTE_UNUSED,
                               virConnectPtr conn,
diff --git a/daemon/remote_dispatch_args.h b/daemon/remote_dispatch_args.h
index ee95043..18cab30 100644
--- a/daemon/remote_dispatch_args.h
+++ b/daemon/remote_dispatch_args.h
@@ -165,3 +165,5 @@
     remote_domain_snapshot_delete_args val_remote_domain_snapshot_delete_args;
     remote_domain_get_block_info_args val_remote_domain_get_block_info_args;
     remote_domain_create_with_flags_args val_remote_domain_create_with_flags_args;
+    remote_domain_set_vcpus_flags_args val_remote_domain_set_vcpus_flags_args;
+    remote_domain_get_vcpus_flags_args val_remote_domain_get_vcpus_flags_args;
diff --git a/daemon/remote_dispatch_prototypes.h b/daemon/remote_dispatch_prototypes.h
index cf1a0f9..2ba7907 100644
--- a/daemon/remote_dispatch_prototypes.h
+++ b/daemon/remote_dispatch_prototypes.h
@@ -298,6 +298,14 @@ static int remoteDispatchDomainGetVcpus(
     remote_error *err,
     remote_domain_get_vcpus_args *args,
     remote_domain_get_vcpus_ret *ret);
+static int remoteDispatchDomainGetVcpusFlags(
+    struct qemud_server *server,
+    struct qemud_client *client,
+    virConnectPtr conn,
+    remote_message_header *hdr,
+    remote_error *err,
+    remote_domain_get_vcpus_flags_args *args,
+    remote_domain_get_vcpus_flags_ret *ret);
 static int remoteDispatchDomainHasCurrentSnapshot(
     struct qemud_server *server,
     struct qemud_client *client,
@@ -538,6 +546,14 @@ static int remoteDispatchDomainSetVcpus(
     remote_error *err,
     remote_domain_set_vcpus_args *args,
     void *ret);
+static int remoteDispatchDomainSetVcpusFlags(
+    struct qemud_server *server,
+    struct qemud_client *client,
+    virConnectPtr conn,
+    remote_message_header *hdr,
+    remote_error *err,
+    remote_domain_set_vcpus_flags_args *args,
+    void *ret);
 static int remoteDispatchDomainShutdown(
     struct qemud_server *server,
     struct qemud_client *client,
diff --git a/daemon/remote_dispatch_ret.h b/daemon/remote_dispatch_ret.h
index 75ac0b2..5ecbeff 100644
--- a/daemon/remote_dispatch_ret.h
+++ b/daemon/remote_dispatch_ret.h
@@ -135,3 +135,4 @@
     remote_domain_snapshot_current_ret val_remote_domain_snapshot_current_ret;
     remote_domain_get_block_info_ret val_remote_domain_get_block_info_ret;
     remote_domain_create_with_flags_ret val_remote_domain_create_with_flags_ret;
+    remote_domain_get_vcpus_flags_ret val_remote_domain_get_vcpus_flags_ret;
diff --git a/daemon/remote_dispatch_table.h b/daemon/remote_dispatch_table.h
index ef00edd..64191f3 100644
--- a/daemon/remote_dispatch_table.h
+++ b/daemon/remote_dispatch_table.h
@@ -987,3 +987,13 @@
     .args_filter = (xdrproc_t) xdr_remote_domain_create_with_flags_args,
     .ret_filter = (xdrproc_t) xdr_remote_domain_create_with_flags_ret,
 },
+{   /* DomainSetVcpusFlags => 197 */
+    .fn = (dispatch_fn) remoteDispatchDomainSetVcpusFlags,
+    .args_filter = (xdrproc_t) xdr_remote_domain_set_vcpus_flags_args,
+    .ret_filter = (xdrproc_t) xdr_void,
+},
+{   /* DomainGetVcpusFlags => 198 */
+    .fn = (dispatch_fn) remoteDispatchDomainGetVcpusFlags,
+    .args_filter = (xdrproc_t) xdr_remote_domain_get_vcpus_flags_args,
+    .ret_filter = (xdrproc_t) xdr_remote_domain_get_vcpus_flags_ret,
+},
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 274c1fe..3bf3336 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -2411,6 +2411,59 @@ done:
 }
 
 static int
+remoteDomainSetVcpusFlags (virDomainPtr domain, unsigned int nvcpus,
+                           unsigned int flags)
+{
+    int rv = -1;
+    remote_domain_set_vcpus_flags_args args;
+    struct private_data *priv = domain->conn->privateData;
+
+    remoteDriverLock(priv);
+
+    make_nonnull_domain (&args.dom, domain);
+    args.nvcpus = nvcpus;
+    args.flags = flags;
+
+    if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS,
+              (xdrproc_t) xdr_remote_domain_set_vcpus_flags_args,
+              (char *) &args,
+              (xdrproc_t) xdr_void, (char *) NULL) == -1)
+        goto done;
+
+    rv = 0;
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+static int
+remoteDomainGetVcpusFlags (virDomainPtr domain, unsigned int flags)
+{
+    int rv = -1;
+    remote_domain_get_vcpus_flags_args args;
+    remote_domain_get_vcpus_flags_ret ret;
+    struct private_data *priv = domain->conn->privateData;
+
+    remoteDriverLock(priv);
+
+    make_nonnull_domain (&args.dom, domain);
+    args.flags = flags;
+
+    memset (&ret, 0, sizeof ret);
+    if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS,
+              (xdrproc_t) xdr_remote_domain_get_vcpus_flags_args, (char *) &args,
+              (xdrproc_t) xdr_remote_domain_get_vcpus_flags_ret, (char *) &ret) == -1)
+        goto done;
+
+    rv = ret.num;
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+static int
 remoteDomainPinVcpu (virDomainPtr domain,
                      unsigned int vcpu,
                      unsigned char *cpumap,
@@ -10299,8 +10352,8 @@ static virDriver remote_driver = {
     remoteDomainRestore, /* domainRestore */
     remoteDomainCoreDump, /* domainCoreDump */
     remoteDomainSetVcpus, /* domainSetVcpus */
-    NULL, /* domainSetVcpusFlags */
-    NULL, /* domainGetVcpusFlags */
+    remoteDomainSetVcpusFlags, /* domainSetVcpusFlags */
+    remoteDomainGetVcpusFlags, /* domainGetVcpusFlags */
     remoteDomainPinVcpu, /* domainPinVcpu */
     remoteDomainGetVcpus, /* domainGetVcpus */
     remoteDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/remote/remote_protocol.c b/src/remote/remote_protocol.c
index 2483004..7f13412 100644
--- a/src/remote/remote_protocol.c
+++ b/src/remote/remote_protocol.c
@@ -1267,6 +1267,39 @@ xdr_remote_domain_set_vcpus_args (XDR *xdrs, remote_domain_set_vcpus_args *objp)
 }
 
 bool_t
+xdr_remote_domain_set_vcpus_flags_args (XDR *xdrs, remote_domain_set_vcpus_flags_args *objp)
+{
+
+    if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+        return FALSE;
+    if (!xdr_u_int (xdrs, &objp->nvcpus))
+        return FALSE;
+    if (!xdr_u_int (xdrs, &objp->flags))
+        return FALSE;
+    return TRUE;
+}
+
+bool_t
+xdr_remote_domain_get_vcpus_flags_args (XDR *xdrs, remote_domain_get_vcpus_flags_args *objp)
+{
+
+    if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+        return FALSE;
+    if (!xdr_u_int (xdrs, &objp->flags))
+        return FALSE;
+    return TRUE;
+}
+
+bool_t
+xdr_remote_domain_get_vcpus_flags_ret (XDR *xdrs, remote_domain_get_vcpus_flags_ret *objp)
+{
+
+    if (!xdr_int (xdrs, &objp->num))
+        return FALSE;
+    return TRUE;
+}
+
+bool_t
 xdr_remote_domain_pin_vcpu_args (XDR *xdrs, remote_domain_pin_vcpu_args *objp)
 {
     char **objp_cpp0 = (char **) (void *) &objp->cpumap.cpumap_val;
diff --git a/src/remote/remote_protocol.h b/src/remote/remote_protocol.h
index afe9287..43bf853 100644
--- a/src/remote/remote_protocol.h
+++ b/src/remote/remote_protocol.h
@@ -704,6 +704,24 @@ struct remote_domain_set_vcpus_args {
 };
 typedef struct remote_domain_set_vcpus_args remote_domain_set_vcpus_args;
 
+struct remote_domain_set_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    u_int nvcpus;
+    u_int flags;
+};
+typedef struct remote_domain_set_vcpus_flags_args remote_domain_set_vcpus_flags_args;
+
+struct remote_domain_get_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    u_int flags;
+};
+typedef struct remote_domain_get_vcpus_flags_args remote_domain_get_vcpus_flags_args;
+
+struct remote_domain_get_vcpus_flags_ret {
+    int num;
+};
+typedef struct remote_domain_get_vcpus_flags_ret remote_domain_get_vcpus_flags_ret;
+
 struct remote_domain_pin_vcpu_args {
     remote_nonnull_domain dom;
     int vcpu;
@@ -2233,6 +2251,8 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_GET_BLOCK_INFO = 194,
     REMOTE_PROC_DOMAIN_EVENT_IO_ERROR_REASON = 195,
     REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196,
+    REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 197,
+    REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 198,
 };
 typedef enum remote_procedure remote_procedure;
@@ -2369,6 +2389,9 @@ extern bool_t xdr_remote_domain_define_xml_args (XDR *, remote_domain_define_xml_args*);
 extern bool_t xdr_remote_domain_define_xml_ret (XDR *, remote_domain_define_xml_ret*);
 extern bool_t xdr_remote_domain_undefine_args (XDR *, remote_domain_undefine_args*);
 extern bool_t xdr_remote_domain_set_vcpus_args (XDR *, remote_domain_set_vcpus_args*);
+extern bool_t xdr_remote_domain_set_vcpus_flags_args (XDR *, remote_domain_set_vcpus_flags_args*);
+extern bool_t xdr_remote_domain_get_vcpus_flags_args (XDR *, remote_domain_get_vcpus_flags_args*);
+extern bool_t xdr_remote_domain_get_vcpus_flags_ret (XDR *, remote_domain_get_vcpus_flags_ret*);
 extern bool_t xdr_remote_domain_pin_vcpu_args (XDR *, remote_domain_pin_vcpu_args*);
 extern bool_t xdr_remote_domain_get_vcpus_args (XDR *, remote_domain_get_vcpus_args*);
 extern bool_t xdr_remote_domain_get_vcpus_ret (XDR *, remote_domain_get_vcpus_ret*);
@@ -2704,6 +2727,9 @@ extern bool_t xdr_remote_domain_define_xml_args ();
 extern bool_t xdr_remote_domain_define_xml_ret ();
 extern bool_t xdr_remote_domain_undefine_args ();
 extern bool_t xdr_remote_domain_set_vcpus_args ();
+extern bool_t xdr_remote_domain_set_vcpus_flags_args ();
+extern bool_t xdr_remote_domain_get_vcpus_flags_args ();
+extern bool_t xdr_remote_domain_get_vcpus_flags_ret ();
 extern bool_t xdr_remote_domain_pin_vcpu_args ();
 extern bool_t xdr_remote_domain_get_vcpus_args ();
 extern bool_t xdr_remote_domain_get_vcpus_ret ();
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 8af469c..b5400af 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -728,6 +728,21 @@ struct remote_domain_set_vcpus_args {
     int nvcpus;
 };
 
+struct remote_domain_set_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int nvcpus;
+    unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_ret {
+    int num;
+};
+
 struct remote_domain_pin_vcpu_args {
     remote_nonnull_domain dom;
     int vcpu;
@@ -2020,7 +2035,9 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_SNAPSHOT_DELETE = 193,
     REMOTE_PROC_DOMAIN_GET_BLOCK_INFO = 194,
     REMOTE_PROC_DOMAIN_EVENT_IO_ERROR_REASON = 195,
-    REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196
+    REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196,
+    REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 197,
+    REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 198
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index a5fc6aa..fd056fe 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -426,6 +426,18 @@ struct remote_domain_set_vcpus_args {
     remote_nonnull_domain dom;
     int nvcpus;
 };
+struct remote_domain_set_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    u_int nvcpus;
+    u_int flags;
+};
+struct remote_domain_get_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    u_int flags;
+};
+struct remote_domain_get_vcpus_flags_ret {
+    int num;
+};
 struct remote_domain_pin_vcpu_args {
     remote_nonnull_domain dom;
     int vcpu;
-- 
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:10PM -0600, Eric Blake wrote:
* daemon/remote.c (remoteDispatchDomainSetVcpusFlags)
(remoteDispatchDomainGetVcpusFlags): New functions.
* src/remote/remote_driver.c (remoteDomainSetVcpusFlags)
(remoteDomainGetVcpusFlags, remote_driver): Client side
serialization.
* src/remote/remote_protocol.x (remote_domain_set_vcpus_flags_args)
(remote_domain_get_vcpus_flags_args)
(remote_domain_get_vcpus_flags_ret)
(REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS)
(REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS): Define wire format.
* daemon/remote_dispatch_args.h: Regenerate.
* daemon/remote_dispatch_prototypes.h: Likewise.
* daemon/remote_dispatch_table.h: Likewise.
* src/remote/remote_protocol.c: Likewise.
* src/remote/remote_protocol.h: Likewise.
* src/remote_protocol-structs: Likewise.

[...]

--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -728,6 +728,21 @@ struct remote_domain_set_vcpus_args {
     int nvcpus;
 };
 
+struct remote_domain_set_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int nvcpus;
+    unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_ret {
+    int num;
+};
+
 struct remote_domain_pin_vcpu_args {
     remote_nonnull_domain dom;
     int vcpu;
@@ -2020,7 +2035,9 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_SNAPSHOT_DELETE = 193,
     REMOTE_PROC_DOMAIN_GET_BLOCK_INFO = 194,
     REMOTE_PROC_DOMAIN_EVENT_IO_ERROR_REASON = 195,
-    REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196
+    REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196,
+    REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 197,
+    REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 198
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
That's the core part, and it looks fine. ACK.

Daniel

* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus):
Move guts...
(esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new
functions.
(esxDriver): Trivially support the new API.
* src/openvz/openvz_driver.c (openvzDomainSetVcpus)
(openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus)
(openvzDomainGetVcpusFlags, openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDomainSetCPU)
(phypDomainSetVcpusFlags, phypGetLparCPUMAX)
(phypDomainGetVcpusFlags, phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemudDomainSetVcpus)
(qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus)
(qemudDomainGetVcpusFlags, qemuDriver): Likewise.
* src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags)
(testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainSetVcpus)
(vboxDomainSetVcpusFlags, virDomainGetMaxVcpus)
(virDomainGetVcpusFlags, virDriver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpus)
(xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus)
(xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus)
(xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus)
(xenapiDomainGetVcpusFlags, xenapiDriver): Likewise.
(xenapiError): New helper macro.
---
Long, but consistent: anywhere the old API exists, this implements the
new API, makes the old one a wrapper around the new, and makes the new
API fail for any flag combinations not yet implemented.
 src/esx/esx_driver.c       |   32 +++++++++++++++++++---
 src/openvz/openvz_driver.c |   34 +++++++++++++++++++++---
 src/phyp/phyp_driver.c     |   32 ++++++++++++++++++++---
 src/qemu/qemu_driver.c     |   38 +++++++++++++++++++++++++---
 src/test/test_driver.c     |   36 ++++++++++++++++++++++---
 src/vbox/vbox_tmpl.c       |   36 +++++++++++++++++++++++---
 src/xen/xen_driver.c       |   34 ++++++++++++++++++++++---
 src/xenapi/xenapi_driver.c |   60 ++++++++++++++++++++++++++++++++++------
 8 files changed, 263 insertions(+), 39 deletions(-)

diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 1db3a90..3d13d74 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -2382,7 +2382,8 @@ esxDomainGetInfo(virDomainPtr domain, virDomainInfoPtr info)
 
 static int
-esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
+esxDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
+                       unsigned int flags)
 {
     int result = -1;
     esxPrivate *priv = domain->conn->privateData;
@@ -2392,6 +2393,11 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
     esxVI_ManagedObjectReference *task = NULL;
     esxVI_TaskInfoState taskInfoState;
 
+    if (flags != VIR_DOMAIN_VCPU_ACTIVE) {
+        ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+        return -1;
+    }
+
     if (nvcpus < 1) {
         ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
                   _("Requested number of virtual CPUs must at least be 1"));
@@ -2451,15 +2457,26 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
 }
 
+static int
+esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
+{
+    return esxDomainSetVcpusFlags(domain, nvcpus, VIR_DOMAIN_VCPU_ACTIVE);
+}
+
 static int
-esxDomainGetMaxVcpus(virDomainPtr domain)
+esxDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
 {
     esxPrivate *priv = domain->conn->privateData;
     esxVI_String *propertyNameList = NULL;
     esxVI_ObjectContent *hostSystem = NULL;
     esxVI_DynamicProperty *dynamicProperty = NULL;
 
+    if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+        ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+        return -1;
+    }
+
     if (priv->maxVcpus > 0) {
         return priv->maxVcpus;
     }
@@ -2505,7 +2522,12 @@ esxDomainGetMaxVcpus(virDomainPtr domain)
     return priv->maxVcpus;
 }
 
-
+static int
+esxDomainGetMaxVcpus(virDomainPtr domain)
+{
+    return esxDomainGetVcpusFlags(domain, (VIR_DOMAIN_VCPU_ACTIVE |
+                                           VIR_DOMAIN_VCPU_MAXIMUM));
+}
 
 static char *
 esxDomainDumpXML(virDomainPtr domain, int flags)
@@ -4150,8 +4172,8 @@ static virDriver esxDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     esxDomainSetVcpus, /* domainSetVcpus */
-    NULL, /* domainSetVcpusFlags */
-    NULL, /* domainGetVcpusFlags */
+    esxDomainSetVcpusFlags, /* domainSetVcpusFlags */
+    esxDomainGetVcpusFlags, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     esxDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 8df9e7f..547646d 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -67,7 +67,6 @@ static int openvzGetProcessInfo(unsigned long long *cpuTime, int vpsid);
 static int openvzGetMaxVCPUs(virConnectPtr conn, const char *type);
 static int openvzDomainGetMaxVcpus(virDomainPtr dom);
-static int openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus);
 static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
                                         unsigned int nvcpus);
 static int openvzDomainSetMemoryInternal(virDomainObjPtr vm,
@@ -1211,11 +1210,24 @@ static int openvzGetMaxVCPUs(virConnectPtr conn ATTRIBUTE_UNUSED,
     return -1;
 }
 
+static int
+openvzDomainGetVcpusFlags(virDomainPtr dom ATTRIBUTE_UNUSED,
+                          unsigned int flags)
+{
+    if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+        openvzError(VIR_ERR_INVALID_ARG, _("unsupported flags (0x%x)"), flags);
+        return -1;
+    }
 
-static int openvzDomainGetMaxVcpus(virDomainPtr dom ATTRIBUTE_UNUSED) {
     return openvzGetMaxVCPUs(NULL, "openvz");
 }
 
+static int openvzDomainGetMaxVcpus(virDomainPtr dom)
+{
+    return openvzDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE |
+                                           VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
 static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
                                         unsigned int nvcpus)
 {
@@ -1241,12 +1253,18 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
     return 0;
 }
 
-static int openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+static int openvzDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+                                     unsigned int flags)
 {
     virDomainObjPtr vm;
     struct openvz_driver *driver = dom->conn->privateData;
     int ret = -1;
 
+    if (flags != VIR_DOMAIN_VCPU_ACTIVE) {
+        openvzError(VIR_ERR_INVALID_ARG, _("unsupported flags (0x%x)"), flags);
+        return -1;
+    }
+
     openvzDriverLock(driver);
     vm = virDomainFindByUUID(&driver->domains, dom->uuid);
     openvzDriverUnlock(driver);
@@ -1272,6 +1290,12 @@ cleanup:
     return ret;
 }
 
+static int
+openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+{
+    return openvzDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE);
+}
+
 static virDrvOpenStatus openvzOpen(virConnectPtr conn,
                                    virConnectAuthPtr auth ATTRIBUTE_UNUSED,
                                    int flags ATTRIBUTE_UNUSED)
@@ -1590,8 +1614,8 @@ static virDriver openvzDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     openvzDomainSetVcpus, /* domainSetVcpus */
-    NULL, /* domainSetVcpusFlags */
-    NULL, /* domainGetVcpusFlags */
+    openvzDomainSetVcpusFlags, /* domainSetVcpusFlags */
+    openvzDomainGetVcpusFlags, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     openvzDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 56b0f30..74c22bb 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -1497,15 +1497,27 @@ phypGetLparCPU(virConnectPtr conn, const char *managed_system, int lpar_id)
 }
 
 static int
-phypGetLparCPUMAX(virDomainPtr dom)
+phypDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
 {
     phyp_driverPtr phyp_driver = dom->conn->privateData;
     char *managed_system = phyp_driver->managed_system;
 
+    if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+        PHYP_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+        return -1;
+    }
+
     return phypGetLparCPUGeneric(dom->conn, managed_system, dom->id, 1);
 }
 
 static int
+phypGetLparCPUMAX(virDomainPtr dom)
+{
+    return phypDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE |
+                                         VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
+static int
 phypGetRemoteSlot(virConnectPtr conn, const char *managed_system,
                   const char *lpar_name)
 {
@@ -3826,7 +3838,8 @@ phypConnectGetCapabilities(virConnectPtr conn)
 }
 
 static int
-phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
+phypDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+                        unsigned int flags)
 {
     ConnectionData *connection_data = dom->conn->networkPrivateData;
     phyp_driverPtr phyp_driver = dom->conn->privateData;
@@ -3841,6 +3854,11 @@ phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
     unsigned int amount = 0;
     virBuffer buf = VIR_BUFFER_INITIALIZER;
 
+    if (flags != VIR_DOMAIN_VCPU_ACTIVE) {
+        PHYP_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+        return -1;
+    }
+
     if ((ncpus = phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0)
         return 0;
 
@@ -3886,6 +3904,12 @@ phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
 }
 
+static int
+phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
+{
+    return phypDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE);
+}
+
 static virDrvOpenStatus
 phypVIOSDriverOpen(virConnectPtr conn,
                    virConnectAuthPtr auth ATTRIBUTE_UNUSED,
@@ -3936,8 +3960,8 @@ static virDriver phypDriver = {
     NULL, /* domainRestore */
     NULL, /* domainCoreDump */
     phypDomainSetCPU, /* domainSetVcpus */
-    NULL, /* domainSetVcpusFlags */
-    NULL, /* domainGetVcpusFlags */
+    phypDomainSetVcpusFlags, /* domainSetVcpusFlags */
+    phypDomainGetVcpusFlags, /* domainGetVcpusFlags */
     NULL, /* domainPinVcpu */
     NULL, /* domainGetVcpus */
     phypGetLparCPUMAX, /* domainGetMaxVcpus */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9d21f35..943cad4 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5897,13 +5897,22 @@ unsupported: } -static int qemudDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) { +static int +qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, + unsigned int flags) +{ struct qemud_driver *driver = dom->conn->privateData; virDomainObjPtr vm; const char * type; int max; int ret = -1; + if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + qemuDriverLock(driver); vm = virDomainFindByUUID(&driver->domains, dom->uuid); qemuDriverUnlock(driver); @@ -5957,6 +5966,12 @@ cleanup: return ret; } +static int +qemudDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) +{ + return qemudDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE); +} + static int qemudDomainPinVcpu(virDomainPtr dom, @@ -6113,12 +6128,20 @@ cleanup: } -static int qemudDomainGetMaxVcpus(virDomainPtr dom) { +static int +qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) +{ struct qemud_driver *driver = dom->conn->privateData; virDomainObjPtr vm; const char *type; int ret = -1; + if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + qemuDriverLock(driver); vm = virDomainFindByUUID(&driver->domains, dom->uuid); qemuDriverUnlock(driver); @@ -6146,6 +6169,13 @@ cleanup: return ret; } +static int +qemudDomainGetMaxVcpus(virDomainPtr dom) +{ + return qemudDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE | + VIR_DOMAIN_VCPU_MAXIMUM)); +} + static int qemudDomainGetSecurityLabel(virDomainPtr dom, virSecurityLabelPtr seclabel) { struct qemud_driver *driver = (struct qemud_driver *)dom->conn->privateData; @@ -12643,8 +12673,8 @@ static virDriver qemuDriver = { qemudDomainRestore, /* domainRestore */ qemudDomainCoreDump, /* domainCoreDump */ qemudDomainSetVcpus, /* 
domainSetVcpus */ - NULL, /* domainSetVcpusFlags */ - NULL, /* domainGetVcpusFlags */ + qemudDomainSetVcpusFlags, /* domainSetVcpusFlags */ + qemudDomainGetVcpusFlags, /* domainGetVcpusFlags */ qemudDomainPinVcpu, /* domainPinVcpu */ qemudDomainGetVcpus, /* domainGetVcpus */ qemudDomainGetMaxVcpus, /* domainGetMaxVcpus */ diff --git a/src/test/test_driver.c b/src/test/test_driver.c index f30de50..1643c81 100644 --- a/src/test/test_driver.c +++ b/src/test/test_driver.c @@ -2029,17 +2029,37 @@ cleanup: return ret; } -static int testDomainGetMaxVcpus(virDomainPtr domain) +static int +testDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags) { + if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } + return testGetMaxVCPUs(domain->conn, "test"); } -static int testSetVcpus(virDomainPtr domain, - unsigned int nrCpus) { +static int +testDomainGetMaxVcpus(virDomainPtr domain) +{ + return testDomainGetVcpusFlags(domain, (VIR_DOMAIN_VCPU_ACTIVE | + VIR_DOMAIN_VCPU_MAXIMUM)); +} + +static int +testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus, + unsigned int flags) +{ testConnPtr privconn = domain->conn->privateData; virDomainObjPtr privdom = NULL; int ret = -1, maxvcpus; + if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } + /* Do this first before locking */ maxvcpus = testDomainGetMaxVcpus(domain); if (maxvcpus < 0) @@ -2082,6 +2102,12 @@ cleanup: return ret; } +static int +testSetVcpus(virDomainPtr domain, unsigned int nrCpus) +{ + return testDomainSetVcpusFlags(domain, nrCpus, VIR_DOMAIN_VCPU_ACTIVE); +} + static int testDomainGetVcpus(virDomainPtr domain, virVcpuInfoPtr info, int maxinfo, @@ -5260,8 +5286,8 @@ static virDriver testDriver = { testDomainRestore, /* domainRestore */ testDomainCoreDump, /* domainCoreDump */ testSetVcpus, /* domainSetVcpus */ - NULL, /* 
domainSetVcpusFlags */ - NULL, /* domainGetVcpusFlags */ + testDomainSetVcpusFlags, /* domainSetVcpusFlags */ + testDomainGetVcpusFlags, /* domainGetVcpusFlags */ testDomainPinVcpu, /* domainPinVcpu */ testDomainGetVcpus, /* domainGetVcpus */ testDomainGetMaxVcpus, /* domainGetMaxVcpus */ diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c index a14b06b..960bcb3 100644 --- a/src/vbox/vbox_tmpl.c +++ b/src/vbox/vbox_tmpl.c @@ -1839,13 +1839,21 @@ cleanup: return ret; } -static int vboxDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) { +static int +vboxDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, + unsigned int flags) +{ VBOX_OBJECT_CHECK(dom->conn, int, -1); IMachine *machine = NULL; vboxIID *iid = NULL; PRUint32 CPUCount = nvcpus; nsresult rc; + if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + vboxError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } + #if VBOX_API_VERSION == 2002 if (VIR_ALLOC(iid) < 0) { virReportOOMError(); @@ -1887,11 +1895,24 @@ cleanup: return ret; } -static int vboxDomainGetMaxVcpus(virDomainPtr dom) { +static int +vboxDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) +{ + return vboxDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE); +} + +static int +vboxDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) +{ VBOX_OBJECT_CHECK(dom->conn, int, -1); ISystemProperties *systemProperties = NULL; PRUint32 maxCPUCount = 0; + if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + vboxError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } + /* Currently every domain supports the same number of max cpus * as that supported by vbox and thus take it directly from * the systemproperties. 
@@ -1909,6 +1930,13 @@ static int vboxDomainGetMaxVcpus(virDomainPtr dom) { return ret; } +static int +vboxDomainGetMaxVcpus(virDomainPtr dom) +{ + return vboxDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE | + VIR_DOMAIN_VCPU_MAXIMUM)); +} + static char *vboxDomainDumpXML(virDomainPtr dom, int flags) { VBOX_OBJECT_CHECK(dom->conn, char *, NULL); virDomainDefPtr def = NULL; @@ -8267,8 +8295,8 @@ virDriver NAME(Driver) = { NULL, /* domainRestore */ NULL, /* domainCoreDump */ vboxDomainSetVcpus, /* domainSetVcpus */ - NULL, /* domainSetVcpusFlags */ - NULL, /* domainGetVcpusFlags */ + vboxDomainSetVcpusFlags, /* domainSetVcpusFlags */ + vboxDomainGetVcpusFlags, /* domainGetVcpusFlags */ NULL, /* domainPinVcpu */ NULL, /* domainGetVcpus */ vboxDomainGetMaxVcpus, /* domainGetMaxVcpus */ diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c index 4bbca94..845cebb 100644 --- a/src/xen/xen_driver.c +++ b/src/xen/xen_driver.c @@ -1069,11 +1069,18 @@ xenUnifiedDomainCoreDump (virDomainPtr dom, const char *to, int flags) } static int -xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) +xenUnifiedDomainSetVcpusFlags (virDomainPtr dom, unsigned int nvcpus, + unsigned int flags) { GET_PRIVATE(dom->conn); int i; + if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + /* Try non-hypervisor methods first, then hypervisor direct method * as a last resort. 
*/ @@ -1093,6 +1100,12 @@ xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) } static int +xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) +{ + return xenUnifiedDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE); +} + +static int xenUnifiedDomainPinVcpu (virDomainPtr dom, unsigned int vcpu, unsigned char *cpumap, int maplen) { @@ -1126,11 +1139,17 @@ xenUnifiedDomainGetVcpus (virDomainPtr dom, } static int -xenUnifiedDomainGetMaxVcpus (virDomainPtr dom) +xenUnifiedDomainGetVcpusFlags (virDomainPtr dom, unsigned int flags) { GET_PRIVATE(dom->conn); int i, ret; + if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + for (i = 0; i < XEN_UNIFIED_NR_DRIVERS; ++i) if (priv->opened[i] && drivers[i]->domainGetMaxVcpus) { ret = drivers[i]->domainGetMaxVcpus (dom); @@ -1140,6 +1159,13 @@ xenUnifiedDomainGetMaxVcpus (virDomainPtr dom) return -1; } +static int +xenUnifiedDomainGetMaxVcpus (virDomainPtr dom) +{ + return xenUnifiedDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE | + VIR_DOMAIN_VCPU_MAXIMUM)); +} + static char * xenUnifiedDomainDumpXML (virDomainPtr dom, int flags) { @@ -1939,8 +1965,8 @@ static virDriver xenUnifiedDriver = { xenUnifiedDomainRestore, /* domainRestore */ xenUnifiedDomainCoreDump, /* domainCoreDump */ xenUnifiedDomainSetVcpus, /* domainSetVcpus */ - NULL, /* domainSetVcpusFlags */ - NULL, /* domainGetVcpusFlags */ + xenUnifiedDomainSetVcpusFlags, /* domainSetVcpusFlags */ + xenUnifiedDomainGetVcpusFlags, /* domainGetVcpusFlags */ xenUnifiedDomainPinVcpu, /* domainPinVcpu */ xenUnifiedDomainGetVcpus, /* domainGetVcpus */ xenUnifiedDomainGetMaxVcpus, /* domainGetMaxVcpus */ diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c index f57759a..a5690a0 100644 --- a/src/xenapi/xenapi_driver.c +++ b/src/xenapi/xenapi_driver.c @@ -40,6 +40,11 @@ #include "xenapi_driver_private.h" #include 
"xenapi_utils.h" +#define VIR_FROM_THIS VIR_FROM_XENAPI + +#define xenapiError(code, ...) \ + virReportErrorHelper(NULL, VIR_FROM_THIS, code, __FILE__, \ + __FUNCTION__, __LINE__, __VA_ARGS__) /* * getCapsObject @@ -987,19 +992,26 @@ xenapiDomainGetInfo (virDomainPtr dom, virDomainInfoPtr info) /* - * xenapiDomainSetVcpus + * xenapiDomainSetVcpusFlags * * Sets the VCPUs on the domain * Return 0 on success or -1 in case of error */ static int -xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) +xenapiDomainSetVcpusFlags (virDomainPtr dom, unsigned int nvcpus, + unsigned int flags) { - /* vm.set_vcpus_max */ xen_vm vm; xen_vm_set *vms; xen_session *session = ((struct _xenapiPrivate *)(dom->conn->privateData))->session; + + if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + xenapiError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + if (xen_vm_get_by_name_label(session, &vms, dom->name) && vms->size > 0) { if (vms->size != 1) { xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR, @@ -1019,6 +1031,18 @@ xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) } /* + * xenapiDomainSetVcpus + * + * Sets the VCPUs on the domain + * Return 0 on success or -1 in case of error + */ +static int +xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus) +{ + return xenapiDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_ACTIVE); +} + +/* * xenapiDomainPinVcpu * * Dynamically change the real CPUs which can be allocated to a virtual CPU @@ -1140,19 +1164,26 @@ xenapiDomainGetVcpus (virDomainPtr dom, } /* - * xenapiDomainGetMaxVcpus + * xenapiDomainGetVcpusFlags * * - * Returns maximum number of Vcpus on success or -1 in case of error + * Returns Vcpus count on success or -1 in case of error */ static int -xenapiDomainGetMaxVcpus (virDomainPtr dom) +xenapiDomainGetVcpusFlags (virDomainPtr dom, unsigned int flags) { xen_vm vm; xen_vm_set *vms; int64_t maxvcpu = 0; enum xen_vm_power_state state; xen_session *session = ((struct 
_xenapiPrivate *)(dom->conn->privateData))->session; + + if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + xenapiError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), + flags); + return -1; + } + if (xen_vm_get_by_name_label(session, &vms, dom->name) && vms->size > 0) { if (vms->size != 1) { xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR, @@ -1176,6 +1207,19 @@ xenapiDomainGetMaxVcpus (virDomainPtr dom) } /* + * xenapiDomainGetMaxVcpus + * + * + * Returns maximum number of Vcpus on success or -1 in case of error + */ +static int +xenapiDomainGetMaxVcpus (virDomainPtr dom) +{ + return xenapiDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_ACTIVE | + VIR_DOMAIN_VCPU_MAXIMUM)); +} + +/* * xenapiDomainDumpXML * * @@ -1754,8 +1798,8 @@ static virDriver xenapiDriver = { NULL, /* domainRestore */ NULL, /* domainCoreDump */ xenapiDomainSetVcpus, /* domainSetVcpus */ - NULL, /* domainSetVcpusFlags */ - NULL, /* domainGetVcpusFlags */ + xenapiDomainSetVcpusFlags, /* domainSetVcpusFlags */ + xenapiDomainGetVcpusFlags, /* domainGetVcpusFlags */ xenapiDomainPinVcpu, /* domainPinVcpu */ xenapiDomainGetVcpus, /* domainGetVcpus */ xenapiDomainGetMaxVcpus, /* domainGetMaxVcpus */ -- 1.7.2.3

On Wed, Sep 29, 2010 at 06:02:11PM -0600, Eric Blake wrote:
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus): Move guts... (esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new functions. (esxDriver): Trivially support the new API. * src/openvz/openvz_driver.c (openvzDomainSetVcpus) (openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus) (openvzDomainGetVcpusFlags, openvzDriver): Likewise. * src/phyp/phyp_driver.c (phypDomainSetCPU) (phypDomainSetVcpusFlags, phypGetLparCPUMAX) (phypDomainGetVcpusFlags, phypDriver): Likewise. * src/qemu/qemu_driver.c (qemudDomainSetVcpus) (qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus) (qemudDomainGetVcpusFlags, qemuDriver): Likewise. * src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags) (testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver): Likewise. * src/vbox/vbox_tmpl.c (vboxDomainSetVcpus) (vboxDomainSetVcpusFlags, vboxDomainGetMaxVcpus) (vboxDomainGetVcpusFlags, virDriver): Likewise. * src/xen/xen_driver.c (xenUnifiedDomainSetVcpus) (xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus) (xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise. * src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus) (xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus) (xenapiDomainGetVcpusFlags, xenapiDriver): Likewise. (xenapiError): New helper macro. ---
Long, but consistent - anywhere the old API exists, this implements the new API, makes the old one a wrapper around the new, and makes the new API fail for any flag combinations not yet implemented.
src/esx/esx_driver.c | 32 +++++++++++++++++++--- src/openvz/openvz_driver.c | 34 +++++++++++++++++++++--- src/phyp/phyp_driver.c | 32 ++++++++++++++++++++--- src/qemu/qemu_driver.c | 38 +++++++++++++++++++++++++--- src/test/test_driver.c | 36 ++++++++++++++++++++++--- src/vbox/vbox_tmpl.c | 36 +++++++++++++++++++++++--- src/xen/xen_driver.c | 34 ++++++++++++++++++++++--- src/xenapi/xenapi_driver.c | 60 ++++++++++++++++++++++++++++++++++++++------ 8 files changed, 263 insertions(+), 39 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c index 1db3a90..3d13d74 100644 --- a/src/esx/esx_driver.c +++ b/src/esx/esx_driver.c @@ -2382,7 +2382,8 @@ esxDomainGetInfo(virDomainPtr domain, virDomainInfoPtr info)
static int -esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) +esxDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus, + unsigned int flags) { int result = -1; esxPrivate *priv = domain->conn->privateData; @@ -2392,6 +2393,11 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) esxVI_ManagedObjectReference *task = NULL; esxVI_TaskInfoState taskInfoState;
+ if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } + if (nvcpus < 1) { ESX_ERROR(VIR_ERR_INVALID_ARG, "%s", _("Requested number of virtual CPUs must at least be 1")); @@ -2451,15 +2457,26 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) }
That error checking will end up being preempted at the main entry point, but that's fine [...]
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c index f57759a..a5690a0 100644 --- a/src/xenapi/xenapi_driver.c +++ b/src/xenapi/xenapi_driver.c @@ -40,6 +40,11 @@ #include "xenapi_driver_private.h" #include "xenapi_utils.h"
+#define VIR_FROM_THIS VIR_FROM_XENAPI + +#define xenapiError(code, ...) \ + virReportErrorHelper(NULL, VIR_FROM_THIS, code, __FILE__, \ + __FUNCTION__, __LINE__, __VA_ARGS__)
ah, maybe we should check that module for other uses of this, but it's independent from this patch. Looks fine; actual implementation for QEMU coming in a following patch, ACK Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

2010/9/30 Eric Blake <eblake@redhat.com>:
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus): Move guts... (esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new functions. (esxDriver): Trivially support the new API. * src/openvz/openvz_driver.c (openvzDomainSetVcpus) (openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus) (openvzDomainGetVcpusFlags, openvzDriver): Likewise. * src/phyp/phyp_driver.c (phypDomainSetCPU) (phypDomainSetVcpusFlags, phypGetLparCPUMAX) (phypDomainGetVcpusFlags, phypDriver): Likewise. * src/qemu/qemu_driver.c (qemudDomainSetVcpus) (qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus) (qemudDomainGetVcpusFlags, qemuDriver): Likewise. * src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags) (testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver): Likewise. * src/vbox/vbox_tmpl.c (vboxDomainSetVcpus) (vboxDomainSetVcpusFlags, vboxDomainGetMaxVcpus) (vboxDomainGetVcpusFlags, virDriver): Likewise. * src/xen/xen_driver.c (xenUnifiedDomainSetVcpus) (xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus) (xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise. * src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus) (xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus) (xenapiDomainGetVcpusFlags, xenapiDriver): Likewise. (xenapiError): New helper macro. ---
Long, but consistent - anywhere the old API exists, this implements the new API, makes the old one a wrapper around the new, and makes the new API fail for any flag combinations not yet implemented.
src/esx/esx_driver.c | 32 +++++++++++++++++++--- src/openvz/openvz_driver.c | 34 +++++++++++++++++++++--- src/phyp/phyp_driver.c | 32 ++++++++++++++++++++--- src/qemu/qemu_driver.c | 38 +++++++++++++++++++++++++--- src/test/test_driver.c | 36 ++++++++++++++++++++++--- src/vbox/vbox_tmpl.c | 36 +++++++++++++++++++++++--- src/xen/xen_driver.c | 34 ++++++++++++++++++++++--- src/xenapi/xenapi_driver.c | 60 ++++++++++++++++++++++++++++++++++++++------ 8 files changed, 263 insertions(+), 39 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c index 1db3a90..3d13d74 100644 --- a/src/esx/esx_driver.c +++ b/src/esx/esx_driver.c @@ -2382,7 +2382,8 @@ esxDomainGetInfo(virDomainPtr domain, virDomainInfoPtr info)
 static int -esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) +esxDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus, +            unsigned int flags)  {   int result = -1;   esxPrivate *priv = domain->conn->privateData; @@ -2392,6 +2393,11 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)   esxVI_ManagedObjectReference *task = NULL;   esxVI_TaskInfoState taskInfoState;
+ if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } +
Why not use virCheckFlags here?
  if (nvcpus < 1) {     ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",          _("Requested number of virtual CPUs must at least be 1")); @@ -2451,15 +2457,26 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)  }
+static int +esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) +{ + return esxDomainSetVcpusFlags(domain, nvcpus, VIR_DOMAIN_VCPU_ACTIVE); +} +
 static int -esxDomainGetMaxVcpus(virDomainPtr domain) +esxDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)  {   esxPrivate *priv = domain->conn->privateData;   esxVI_String *propertyNameList = NULL;   esxVI_ObjectContent *hostSystem = NULL;   esxVI_DynamicProperty *dynamicProperty = NULL;
+ if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) { + ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } +
virCheckFlags? This pattern recurs through the rest of the patch. Matthias

On 10/01/2010 09:26 AM, Matthias Bolte wrote:
@@ -2392,6 +2393,11 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus) esxVI_ManagedObjectReference *task = NULL; esxVI_TaskInfoState taskInfoState;
+ if (flags != VIR_DOMAIN_VCPU_ACTIVE) { + ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags); + return -1; + } +
Why not use virCheckFlags here?
virCheckFlags is useful for validating against the maximal set of permitted flags, regardless of whether each one is set or cleared. This instance was going for zero semantic change, so we expect one exact flag combination; checking for that exact value is simpler than checking the supported mask and then the exact value on top of it. See patch 11/12 for an example of how qemu replaces an exact flag check with virCheckFlags once we finally add real support for all flag combinations. The same can be done for esx when (and if) we figure out how to support them all. -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned short, to match virDomainGetInfo limit. Add maxvcpus member. * src/conf/domain_conf.c (virDomainDefParseXML) (virDomainDefFormat): parse and print out vcpu details. * src/xen/xend_internal.c (xenDaemonParseSxpr) (xenDaemonFormatSxpr): Manage both vcpu numbers, and require them to be equal for now. * src/xen/xm_internal.c (xenXMDomainConfigParse) (xenXMDomainConfigFormat): Likewise. * src/phyp/phyp_driver.c (phypDomainDumpXML): Likewise. * src/openvz/openvz_conf.c (openvzLoadDomains): Likewise. * src/openvz/openvz_driver.c (openvzDomainDefineXML) (openvzDomainCreateXML, openvzDomainSetVcpusInternal): Likewise. * src/vbox/vbox_tmpl.c (vboxDomainDumpXML, vboxDomainDefineXML): Likewise. * src/xenapi/xenapi_driver.c (xenapiDomainDumpXML): Likewise. * src/xenapi/xenapi_utils.c (createVMRecordFromXml): Likewise. * src/esx/esx_vmx.c (esxVMX_ParseConfig, esxVMX_FormatConfig): Likewise. * src/qemu/qemu_conf.c (qemuBuildSmpArgStr) (qemuParseCommandLineSmp, qemuParseCommandLine): Likewise. * src/qemu/qemu_driver.c (qemudDomainHotplugVcpus): Likewise. * src/opennebula/one_conf.c (xmlOneTemplate): Likewise. --- Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code. Two: add a new maxvcpus member, but for now, ensure that all domains treat vcpus == maxvcpus at all times (domains that support hot-unplugging vcpus will be changed in later patches). 
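For reference, the XML this enables looks like the following (an illustrative fragment with made-up values; when current is omitted, the parser above sets vcpus equal to maxvcpus):

```xml
<!-- boot with 2 of a maximum 4 vcpus, pinned to host CPUs 0-3 -->
<vcpu cpuset='0-3' current='2'>4</vcpu>
```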
src/conf/domain_conf.c | 35 ++++++++++++++++++++++++++++------- src/conf/domain_conf.h | 3 ++- src/esx/esx_vmx.c | 24 +++++++++++++++--------- src/opennebula/one_conf.c | 11 ++++++----- src/openvz/openvz_conf.c | 7 ++++--- src/openvz/openvz_driver.c | 15 ++++++++++----- src/phyp/phyp_driver.c | 2 +- src/qemu/qemu_conf.c | 14 ++++++++++++-- src/qemu/qemu_driver.c | 5 +++-- src/vbox/vbox_tmpl.c | 12 ++++++++---- src/xen/xend_internal.c | 9 +++++---- src/xen/xm_internal.c | 9 ++++++--- src/xenapi/xenapi_driver.c | 2 +- src/xenapi/xenapi_utils.c | 4 ++-- 14 files changed, 103 insertions(+), 49 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index afb06e8..540767f 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -4179,6 +4179,7 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, int i, n; long id = -1; virDomainDefPtr def; + unsigned long count; if (VIR_ALLOC(def) < 0) { virReportOOMError(); @@ -4244,8 +4245,27 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, if (node) def->hugepage_backed = 1; - if (virXPathULong("string(./vcpu[1])", ctxt, &def->vcpus) < 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt, &count) < 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count); + goto error; + } + } + + if (virXPathULong("string(./vcpu[1]/@current)", ctxt, &count) < 0) + def->vcpus = def->maxvcpus; + else { + def->vcpus = count; + if (def->vcpus != count || count == 0 || def->maxvcpus < count) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid current vcpus %lu"), count); + goto error; + } + } tmp = virXPathString("string(./vcpu[1]/@cpuset)", ctxt); if (tmp) { @@ -6394,17 +6414,18 @@ char *virDomainDefFormat(virDomainDefPtr def, if (def->cpumask[n] != 1) allones = 0; - if (allones) { - virBufferVSprintf(&buf, " <vcpu>%lu</vcpu>\n", def->vcpus); - } 
else { + virBufferAddLit(&buf, " <vcpu"); + if (!allones) { char *cpumask = NULL; if ((cpumask = virDomainCpuSetFormat(def->cpumask, def->cpumasklen)) == NULL) goto cleanup; - virBufferVSprintf(&buf, " <vcpu cpuset='%s'>%lu</vcpu>\n", - cpumask, def->vcpus); + virBufferVSprintf(&buf, " cpuset='%s'", cpumask); VIR_FREE(cpumask); } + if (def->vcpus != def->maxvcpus) + virBufferVSprintf(&buf, " current='%u'", def->vcpus); + virBufferVSprintf(&buf, ">%u</vcpu>\n", def->maxvcpus); if (def->os.bootloader) { virBufferEscapeString(&buf, " <bootloader>%s</bootloader>\n", diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 7195c04..d3d3b5c 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -867,7 +867,8 @@ struct _virDomainDef { unsigned long memory; unsigned long maxmem; unsigned char hugepage_backed; - unsigned long vcpus; + unsigned short vcpus; + unsigned short maxvcpus; int cpumasklen; char *cpumask; diff --git a/src/esx/esx_vmx.c b/src/esx/esx_vmx.c index 59eb3b2..cb0044d 100644 --- a/src/esx/esx_vmx.c +++ b/src/esx/esx_vmx.c @@ -50,7 +50,7 @@ def->uuid = <value> <=> uuid.bios = "<value>" def->name = <value> <=> displayName = "<value>" def->maxmem = <value kilobyte> <=> memsize = "<value megabyte>" # must be a multiple of 4, defaults to 32 def->memory = <value kilobyte> <=> sched.mem.max = "<value megabyte>" # defaults to "unlimited" -> def->memory = def->maxmem -def->vcpus = <value> <=> numvcpus = "<value>" # must be 1 or a multiple of 2, defaults to 1 +def->maxvcpus = <value> <=> numvcpus = "<value>" # must be 1 or a multiple of 2, defaults to 1 def->cpumask = <uint list> <=> sched.cpu.affinity = "<uint list>" @@ -1021,7 +1021,7 @@ esxVMX_ParseConfig(esxVMX_Context *ctx, virCapsPtr caps, const char *vmx, goto cleanup; } - def->vcpus = numvcpus; + def->maxvcpus = def->vcpus = numvcpus; /* vmx:sched.cpu.affinity -> def:cpumask */ // VirtualMachine:config.cpuAffinity.affinitySet @@ -2497,16 +2497,22 @@ esxVMX_FormatConfig(esxVMX_Context 
*ctx, virCapsPtr caps, virDomainDefPtr def, (int)(def->memory / 1024)); } - /* def:vcpus -> vmx:numvcpus */ - if (def->vcpus <= 0 || (def->vcpus % 2 != 0 && def->vcpus != 1)) { + /* def:maxvcpus -> vmx:numvcpus */ + if (def->vcpus != def->maxvcpus) { + ESX_ERROR(VIR_ERR_CONFIG_UNSUPPORTED, + _("No support for domain XML entry 'vcpu' attribute " + "'current'")); + goto cleanup; + } + if (def->maxvcpus <= 0 || (def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { ESX_ERROR(VIR_ERR_INTERNAL_ERROR, _("Expecting domain XML entry 'vcpu' to be an unsigned " "integer (1 or a multiple of 2) but found %d"), - (int)def->vcpus); + def->maxvcpus); goto cleanup; } - virBufferVSprintf(&buffer, "numvcpus = \"%d\"\n", (int)def->vcpus); + virBufferVSprintf(&buffer, "numvcpus = \"%d\"\n", def->maxvcpus); /* def:cpumask -> vmx:sched.cpu.affinity */ if (def->cpumasklen > 0) { @@ -2520,11 +2526,11 @@ esxVMX_FormatConfig(esxVMX_Context *ctx, virCapsPtr caps, virDomainDefPtr def, } } - if (sched_cpu_affinity_length < def->vcpus) { + if (sched_cpu_affinity_length < def->maxvcpus) { ESX_ERROR(VIR_ERR_INTERNAL_ERROR, _("Expecting domain XML attribute 'cpuset' of entry " - "'vcpu' to contains at least %d CPU(s)"), - (int)def->vcpus); + "'vcpu' to contain at least %d CPU(s)"), + def->maxvcpus); goto cleanup; } diff --git a/src/opennebula/one_conf.c b/src/opennebula/one_conf.c index 029d475..f674f9b 100644 --- a/src/opennebula/one_conf.c +++ b/src/opennebula/one_conf.c @@ -1,5 +1,7 @@ /*----------------------------------------------------------------------------------*/ -/* Copyright 2002-2009, Distributed Systems Architecture Group, Universidad +/* + * Copyright (C) 2010 Red Hat, Inc. 
+ * Copyright 2002-2009, Distributed Systems Architecture Group, Universidad * Complutense de Madrid (dsa-research.org) * * This library is free software; you can redistribute it and/or @@ -169,10 +171,9 @@ char* xmlOneTemplate(virDomainDefPtr def) { int i; virBuffer buf= VIR_BUFFER_INITIALIZER; - virBufferVSprintf(&buf,"#OpenNebula Template automatically generated by libvirt\nNAME = %s\nCPU = %ld\nMEMORY = %ld\n", - def->name, - def->vcpus, - (def->maxmem)/1024); + virBufferVSprintf(&buf, "#OpenNebula Template automatically generated " + "by libvirt\nNAME = %s\nCPU = %d\nMEMORY = %ld\n", + def->name, def->maxvcpus, (def->maxmem)/1024); /*Optional Booting OpenNebula Information:*/ if (def->os.kernel) { diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c index ec11bbc..c84a6f3 100644 --- a/src/openvz/openvz_conf.c +++ b/src/openvz/openvz_conf.c @@ -507,11 +507,12 @@ int openvzLoadDomains(struct openvz_driver *driver) { veid); goto cleanup; } else if (ret > 0) { - dom->def->vcpus = strtoI(temp); + dom->def->maxvcpus = strtoI(temp); } - if (ret == 0 || dom->def->vcpus == 0) - dom->def->vcpus = openvzGetNodeCPUs(); + if (ret == 0 || dom->def->maxvcpus == 0) + dom->def->maxvcpus = openvzGetNodeCPUs(); + dom->def->vcpus = dom->def->maxvcpus; /* XXX load rest of VM config data .... 
*/ diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 547646d..ebeb81d 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -925,8 +925,13 @@ openvzDomainDefineXML(virConnectPtr conn, const char *xml) if (openvzDomainSetNetworkConfig(conn, vm->def) < 0) goto cleanup; - if (vm->def->vcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->vcpus) < 0) { + if (vm->def->vcpus != vm->def->maxvcpus) { + openvzError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("current vcpu count must equal maximum")); + goto cleanup; + } + if (vm->def->maxvcpus > 0) { + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { openvzError(VIR_ERR_INTERNAL_ERROR, "%s", _("Could not set number of virtual cpu")); goto cleanup; @@ -1019,8 +1024,8 @@ openvzDomainCreateXML(virConnectPtr conn, const char *xml, vm->def->id = vm->pid; vm->state = VIR_DOMAIN_RUNNING; - if (vm->def->vcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->vcpus) < 0) { + if (vm->def->maxvcpus > 0) { + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { openvzError(VIR_ERR_INTERNAL_ERROR, "%s", _("Could not set number of virtual cpu")); goto cleanup; @@ -1249,7 +1254,7 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm, return -1; } - vm->def->vcpus = nvcpus; + vm->def->maxvcpus = vm->def->vcpus = nvcpus; return 0; } diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c index 74c22bb..bf1b45c 100644 --- a/src/phyp/phyp_driver.c +++ b/src/phyp/phyp_driver.c @@ -3540,7 +3540,7 @@ phypDomainDumpXML(virDomainPtr dom, int flags) goto err; } - if ((def.vcpus = + if ((def.maxvcpus = def.vcpus = phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0) { VIR_ERROR0(_("Unable to determine domain's CPU.")); goto err; diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 7a37c70..5169e3c 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -3613,7 +3613,7 @@ qemuBuildSmpArgStr(const virDomainDefPtr def, { virBuffer buf = 
VIR_BUFFER_INITIALIZER; - virBufferVSprintf(&buf, "%lu", def->vcpus); + virBufferVSprintf(&buf, "%u", def->vcpus); if ((qemuCmdFlags & QEMUD_CMD_FLAG_SMP_TOPOLOGY)) { /* sockets, cores, and threads are either all zero @@ -3624,11 +3624,18 @@ qemuBuildSmpArgStr(const virDomainDefPtr def, virBufferVSprintf(&buf, ",threads=%u", def->cpu->threads); } else { - virBufferVSprintf(&buf, ",sockets=%lu", def->vcpus); + virBufferVSprintf(&buf, ",sockets=%u", def->maxvcpus); virBufferVSprintf(&buf, ",cores=%u", 1); virBufferVSprintf(&buf, ",threads=%u", 1); } } + if (def->vcpus != def->maxvcpus) { + virBufferFreeAndReset(&buf); + qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("setting current vcpu count less than maximum is " + "not supported yet")); + return NULL; + } if (virBufferError(&buf)) { virBufferFreeAndReset(&buf); @@ -6051,6 +6058,8 @@ qemuParseCommandLineSmp(virDomainDefPtr dom, } } + dom->maxvcpus = dom->vcpus; + if (sockets && cores && threads) { virCPUDefPtr cpu; @@ -6120,6 +6129,7 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps, def->id = -1; def->memory = def->maxmem = 64 * 1024; + def->maxvcpus = 1; def->vcpus = 1; def->clock.offset = VIR_DOMAIN_CLOCK_OFFSET_UTC; def->features = (1 << VIR_DOMAIN_FEATURE_ACPI) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 943cad4..c31eeb9 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2422,8 +2422,9 @@ qemuDetectVcpuPIDs(struct qemud_driver *driver, if (ncpupids != vm->def->vcpus) { qemuReportError(VIR_ERR_INTERNAL_ERROR, - _("got wrong number of vCPU pids from QEMU monitor. got %d, wanted %d"), - ncpupids, (int)vm->def->vcpus); + _("got wrong number of vCPU pids from QEMU monitor. 
" + "got %d, wanted %d"), + ncpupids, vm->def->vcpus); VIR_FREE(cpupids); return -1; } diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c index 960bcb3..690cbd0 100644 --- a/src/vbox/vbox_tmpl.c +++ b/src/vbox/vbox_tmpl.c @@ -2028,7 +2028,7 @@ static char *vboxDomainDumpXML(virDomainPtr dom, int flags) { def->maxmem = memorySize * 1024; machine->vtbl->GetCPUCount(machine, &CPUCount); - def->vcpus = CPUCount; + def->maxvcpus = def->vcpus = CPUCount; /* Skip cpumasklen, cpumask, onReboot, onPoweroff, onCrash */ @@ -4598,11 +4598,15 @@ static virDomainPtr vboxDomainDefineXML(virConnectPtr conn, const char *xml) { def->memory, (unsigned)rc); } - rc = machine->vtbl->SetCPUCount(machine, def->vcpus); + if (def->vcpus != def->maxvcpus) { + vboxError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("current vcpu count must equal maximum")); + } + rc = machine->vtbl->SetCPUCount(machine, def->maxvcpus); if (NS_FAILED(rc)) { vboxError(VIR_ERR_INTERNAL_ERROR, - _("could not set the number of virtual CPUs to: %lu, rc=%08x"), - def->vcpus, (unsigned)rc); + _("could not set the number of virtual CPUs to: %u, rc=%08x"), + def->maxvcpus, (unsigned)rc); } #if VBOX_API_VERSION < 3001 diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c index fce0233..16140d1 100644 --- a/src/xen/xend_internal.c +++ b/src/xen/xend_internal.c @@ -2188,7 +2188,8 @@ xenDaemonParseSxpr(virConnectPtr conn, } } - def->vcpus = sexpr_int(root, "domain/vcpus"); + def->maxvcpus = sexpr_int(root, "domain/vcpus"); + def->vcpus = def->maxvcpus; tmp = sexpr_node(root, "domain/on_poweroff"); if (tmp != NULL) { @@ -5644,7 +5645,7 @@ xenDaemonFormatSxprInput(virDomainInputDefPtr input, * * Generate an SEXPR representing the domain configuration. * - * Returns the 0 terminatedi S-Expr string or NULL in case of error. + * Returns the 0 terminated S-Expr string or NULL in case of error. * the caller must free() the returned value. 
*/ char * @@ -5661,7 +5662,7 @@ xenDaemonFormatSxpr(virConnectPtr conn, virBufferVSprintf(&buf, "(name '%s')", def->name); virBufferVSprintf(&buf, "(memory %lu)(maxmem %lu)", def->memory/1024, def->maxmem/1024); - virBufferVSprintf(&buf, "(vcpus %lu)", def->vcpus); + virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus); if (def->cpumask) { char *ranges = virDomainCpuSetFormat(def->cpumask, def->cpumasklen); @@ -5756,7 +5757,7 @@ xenDaemonFormatSxpr(virConnectPtr conn, else virBufferVSprintf(&buf, "(kernel '%s')", def->os.loader); - virBufferVSprintf(&buf, "(vcpus %lu)", def->vcpus); + virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus); for (i = 0 ; i < def->os.nBootDevs ; i++) { switch (def->os.bootDevs[i]) { diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c index b9cb4c3..e658c19 100644 --- a/src/xen/xm_internal.c +++ b/src/xen/xm_internal.c @@ -678,6 +678,7 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) { int i; const char *defaultArch, *defaultMachine; int vmlocaltime = 0; + unsigned long count; if (VIR_ALLOC(def) < 0) { virReportOOMError(); @@ -768,9 +769,11 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) { def->memory *= 1024; def->maxmem *= 1024; - - if (xenXMConfigGetULong(conf, "vcpus", &def->vcpus, 1) < 0) + if (xenXMConfigGetULong(conf, "vcpus", &count, 1) < 0 || + (unsigned short) count != count) goto cleanup; + def->maxvcpus = count; + def->vcpus = def->maxvcpus; if (xenXMConfigGetString(conf, "cpus", &str, NULL) < 0) goto cleanup; @@ -2239,7 +2242,7 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn, if (xenXMConfigSetInt(conf, "memory", def->memory / 1024) < 0) goto no_memory; - if (xenXMConfigSetInt(conf, "vcpus", def->vcpus) < 0) + if (xenXMConfigSetInt(conf, "vcpus", def->maxvcpus) < 0) goto no_memory; if ((def->cpumask != NULL) && diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c index a5690a0..1e40a25 100644 --- a/src/xenapi/xenapi_driver.c +++ b/src/xenapi/xenapi_driver.c @@ 
-1335,7 +1335,7 @@ xenapiDomainDumpXML (virDomainPtr dom, int flags ATTRIBUTE_UNUSED)
     } else {
         defPtr->memory = memory;
     }
-    defPtr->vcpus = xenapiDomainGetMaxVcpus(dom);
+    defPtr->maxvcpus = defPtr->vcpus = xenapiDomainGetMaxVcpus(dom);
     enum xen_on_normal_exit action;
     if (xen_vm_get_actions_after_shutdown(session, &action, vm)) {
         defPtr->onPoweroff = xenapiNormalExitEnum2virDomainLifecycle(action);
diff --git a/src/xenapi/xenapi_utils.c b/src/xenapi/xenapi_utils.c
index 252073c..423f83a 100644
--- a/src/xenapi/xenapi_utils.c
+++ b/src/xenapi/xenapi_utils.c
@@ -510,8 +510,8 @@ createVMRecordFromXml (virConnectPtr conn, virDomainDefPtr def,
     else
         (*record)->memory_dynamic_max = (*record)->memory_static_max;

-    if (def->vcpus) {
-        (*record)->vcpus_max = (int64_t) def->vcpus;
+    if (def->maxvcpus) {
+        (*record)->vcpus_max = (int64_t) def->maxvcpus;
         (*record)->vcpus_at_startup = (int64_t) def->vcpus;
     }
     if (def->onPoweroff)
-- 
1.7.2.3

[self-review] On 09/29/2010 06:02 PM, Eric Blake wrote:
Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code. Two: add a new maxvcpus member, but for now, ensure that all domains treat vcpus == maxvcpus at all times (domains that support hot-unplugging vcpus will be changed in later patches).
@@ -4244,8 +4245,27 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, if (node) def->hugepage_backed = 1;
- if (virXPathULong("string(./vcpu[1])", ctxt,&def->vcpus)< 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt,&count)< 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count);
Should this be VIR_ERR_XML_ERROR instead? -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org
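The interim invariant from the commit message above (every driver keeps vcpus == maxvcpus until hot-unplug support lands in later patches) shows up across the drivers as the same guard. A minimal sketch of that pattern, with invented names rather than the real driver entry points:

```c
#include <stdio.h>

/* Hedged sketch of the guard several drivers gain in this series:
 * reject a config whose current vcpu count differs from the maximum.
 * struct vcpu_def and check_vcpu_invariant are invented names. */
struct vcpu_def {
    unsigned short vcpus;     /* current count */
    unsigned short maxvcpus;  /* boot-time maximum */
};

static int
check_vcpu_invariant(const struct vcpu_def *def)
{
    if (def->vcpus != def->maxvcpus) {
        /* a real driver would raise VIR_ERR_CONFIG_UNSUPPORTED here */
        fprintf(stderr, "current vcpu count must equal maximum\n");
        return -1;
    }
    return 0;
}
```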

On Thu, Sep 30, 2010 at 01:27:27PM -0600, Eric Blake wrote:
[self-review]
On 09/29/2010 06:02 PM, Eric Blake wrote:
Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code. Two: add a new maxvcpus member, but for now, ensure that all domains treat vcpus == maxvcpus at all times (domains that support hot-unplugging vcpus will be changed in later patches).
@@ -4244,8 +4245,27 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, if (node) def->hugepage_backed = 1;
- if (virXPathULong("string(./vcpu[1])", ctxt,&def->vcpus)< 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt,&count)< 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count);
Should this be VIR_ERR_XML_ERROR instead?
Ah, yes :-) Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On Wed, Sep 29, 2010 at 06:02:12PM -0600, Eric Blake wrote:
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned short, to match virDomainGetInfo limit. Add maxvcpus member. [...]
Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code.
well, yes and no; in a few years we may look ridiculous, but it would be good to have the new APIs free of that limitation.
Two: add a new maxvcpus member, but for now, ensure that all domains treat vcpus == maxvcpus at all times (domains that support hot-unplugging vcpus will be changed in later patches). [...] --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -4179,6 +4179,7 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, int i, n; long id = -1; virDomainDefPtr def; + unsigned long count;
if (VIR_ALLOC(def) < 0) { virReportOOMError(); @@ -4244,8 +4245,27 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, if (node) def->hugepage_backed = 1;
- if (virXPathULong("string(./vcpu[1])", ctxt, &def->vcpus) < 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt, &count) < 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count); + goto error; + } + }
Hum, virXPathULong will return -2 for a non-ULong format, and we discard the error by just setting maxvcpus = 1 silently, but on the other hand we make a fuss about 0 being provided :-) If we start raising an error on invalid values, maybe it should be done for both (-2 needs to be checked), but it's not a big deal.
+ if (virXPathULong("string(./vcpu[1]/@current)", ctxt, &count) < 0) + def->vcpus = def->maxvcpus; + else { + def->vcpus = count; + if (def->vcpus != count || count == 0 || def->maxvcpus < count) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid current vcpus %lu"), count); + goto error; + } + }
same here ACK, Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 10/01/2010 09:01 AM, Daniel Veillard wrote:
On Wed, Sep 29, 2010 at 06:02:12PM -0600, Eric Blake wrote:
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned short, to match virDomainGetInfo limit. Add maxvcpus member. [...]
Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code.
well, yes and no; in a few years we may look ridiculous, but it would be good to have the new APIs free of that limitation.
So, should I keep the change to 16-bit in my v2 series, or should I keep things at 32-bit and add code that makes virDomainGetInfo fail on 16-bit overflow? Right now, it's easier to keep things at 16-bit throughout. Also, note that the new APIs (virDomainGetVcpusFlags and virDomainSetVcpusFlags) can go up to 31 bits. Get must return -1 on failure, so a full 32 bits cannot be returned; thus even though Set passes an unsigned value, we should not accept 0x80000000, even if we decide someday that we want to support 0x10000 vcpus. -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Mon, Oct 04, 2010 at 12:18:00PM -0600, Eric Blake wrote:
On 10/01/2010 09:01 AM, Daniel Veillard wrote:
On Wed, Sep 29, 2010 at 06:02:12PM -0600, Eric Blake wrote:
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned short, to match virDomainGetInfo limit. Add maxvcpus member. [...]
Two tightly-related changes. One: virDomainGetInfo implicitly limits vcpus to a 16-bit number; so there's no need to pretend otherwise through the rest of the code.
well, yes and no; in a few years we may look ridiculous, but it would be good to have the new APIs free of that limitation.
So, should I keep the change to 16-bit in my v2 series, or should I keep things at 32-bit and add code that makes virDomainGetInfo fail on 16-bit overflow? Right now, it's easier to keep things at 16-bit throughout. Also, note that the new APIs (virDomainGetVcpusFlags and virDomainSetVcpusFlags) can go up to 31 bits. Get must return -1 on failure, so a full 32 bits cannot be returned; thus even though Set passes an unsigned value, we should not accept 0x80000000, even if we decide someday that we want to support 0x10000 vcpus.
No need for it now. What you're adding goes into an internal data structure, which is not part of any API, so it's just fine (i.e. we will just be able to bump up the type if the need arises). The problem is the API structure using a short. All the other new APIs return ints, so I think we should be safe there; no need to change the patches IMHO. Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/
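The 31-bit ceiling discussed here (a Get API that returns -1 on error can never report a count with the high bit set) reduces to one range check. A hedged sketch; vcpu_count_fits_api is an invented name, not part of the series:

```c
#include <limits.h>

/* An invented helper illustrating the constraint: a Get-style API
 * returns int with -1 reserved for errors, so a Set call must reject
 * any unsigned count that a non-negative int could not represent
 * (and 0, which is never a valid vcpu count). */
static int
vcpu_count_fits_api(unsigned int nvcpus)
{
    return nvcpus > 0 && nvcpus <= (unsigned int) INT_MAX;
}
```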

On 10/01/2010 09:01 AM, Daniel Veillard wrote:
- if (virXPathULong("string(./vcpu[1])", ctxt,&def->vcpus)< 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt,&count)< 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count); + goto error; + } + }
Hum, virXPathULong will return -2 for a non-ULong format, and we discard the error by just setting maxvcpus = 1 silently, but on the other hand we make a fuss about 0 being provided :-) If we start raising an error on invalid values, maybe it should be done for both (-2 needs to be checked)
Which is better? Relying on the .rng file for error checking (in which case, XML has already been validated, so not only do we know virXPathULong would never return -2, but we also know that current='0' would fail validation, so v2 should drop the redundant check for 0), or repeat all error checking in the C code (in which case adding a check for virXPathULong == -2 is a good thing for v2)? -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Mon, Oct 04, 2010 at 12:34:16PM -0600, Eric Blake wrote:
On 10/01/2010 09:01 AM, Daniel Veillard wrote:
- if (virXPathULong("string(./vcpu[1])", ctxt,&def->vcpus)< 0) - def->vcpus = 1; + if (virXPathULong("string(./vcpu[1])", ctxt,&count)< 0) + def->maxvcpus = 1; + else { + def->maxvcpus = count; + if (def->maxvcpus != count || count == 0) { + virDomainReportError(VIR_ERR_INTERNAL_ERROR, + _("invalid maxvcpus %lu"), count); + goto error; + } + }
Hum, virXPathULong will return -2 for a non-ULong format, and we discard the error by just setting maxvcpus = 1 silently, but on the other hand we make a fuss about 0 being provided :-) If we start raising an error on invalid values, maybe it should be done for both (-2 needs to be checked)
Which is better? Relying on the .rng file for error checking (in which case, XML has already been validated, so not only do we know virXPathULong would never return -2, but we also know that current='0' would fail validation, so v2 should drop the redundant check for 0), or repeat all error checking in the C code (in which case adding a check for virXPathULong == -2 is a good thing for v2)?
We don't rely on RNG validation at runtime (the RNG fails on old libxml2 versions for example), so best to do the checking at parsing time, but I don't consider this urgent, it's just an improvement over the current code :-) Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/
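The parsing policy Daniel suggests (default only when the node is absent, error out when it is present but malformed) can be sketched with a stand-in that mimics virXPathULong's return convention of 0 on success, -1 when the node is absent, and -2 when it is unparseable. parse_ulong and parse_vcpus are invented names, not the real helpers:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Stand-in with virXPathULong-like semantics: 0 on success, -1 when
 * the value is absent, -2 when present but not a number. */
static int
parse_ulong(const char *s, unsigned long *out)
{
    char *end;
    if (s == NULL)
        return -1;                /* absent: caller applies a default */
    errno = 0;
    *out = strtoul(s, &end, 10);
    if (end == s || *end != '\0' || errno == ERANGE)
        return -2;                /* present but malformed: hard error */
    return 0;
}

/* Policy from the thread: default only on absence; malformed, zero,
 * or out-of-range values are reported instead of silently defaulted. */
static int
parse_vcpus(const char *s, unsigned long *count)
{
    int rc = parse_ulong(s, count);
    if (rc == -1) {
        *count = 1;               /* <vcpu> omitted: default to 1 */
        return 0;
    }
    if (rc == -2 || *count == 0 || *count > USHRT_MAX)
        return -1;                /* caller would raise VIR_ERR_XML_ERROR */
    return 0;
}
```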

* tools/virsh.c (cmdSetvcpus): Add new flags. Let invalid commands through to driver, to ease testing of hypervisor argument validation. (cmdVcpucount): New command. (commands): Add new command. * tools/virsh.pod (setvcpus, vcpucount): Document new behavior. --- I know - the typical API addition sequence adds driver support first and then virsh support. I can rearrange the patch order if desired. tools/virsh.c | 211 +++++++++++++++++++++++++++++++++++++++++++++++++------ tools/virsh.pod | 28 +++++++- 2 files changed, 217 insertions(+), 22 deletions(-) diff --git a/tools/virsh.c b/tools/virsh.c index 85014f2..0fcfdb7 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -2220,10 +2220,181 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd) } /* + * "vcpucount" command + */ +static const vshCmdInfo info_vcpucount[] = { + {"help", N_("domain vcpu counts")}, + {"desc", N_("Returns the number of domain virtual CPUs.")}, + {NULL, NULL} +}; + +static const vshCmdOptDef opts_vcpucount[] = { + {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {"maximum", VSH_OT_BOOL, 0, N_("get maximum cap on vcpus")}, + {"current", VSH_OT_BOOL, 0, N_("get current vcpu usage")}, + {"persistent", VSH_OT_BOOL, 0, N_("get value to be used on next boot")}, + {"active", VSH_OT_BOOL, 0, N_("get value from running domain")}, + {NULL, 0, 0, NULL} +}; + +static int +cmdVcpucount(vshControl *ctl, const vshCmd *cmd) +{ + virDomainPtr dom; + int ret = TRUE; + int maximum = vshCommandOptBool(cmd, "maximum"); + int current = vshCommandOptBool(cmd, "current"); + int persistent = vshCommandOptBool(cmd, "persistent"); + int active = vshCommandOptBool(cmd, "active"); + bool all = maximum + current + persistent + active == 0; + int count; + + if (maximum && current) { + vshError(ctl, "%s", + _("--maximum and --current cannot both be specified")); + return FALSE; + } + if (persistent && active) { + vshError(ctl, "%s", + _("--persistent and --active cannot both be specified")); + return FALSE; 
+ } + if (maximum + current + persistent + active == 1) { + vshError(ctl, + _("when using --%s, either --%s or --%s must be specified"), + maximum ? "maximum" : current ? "current" + : persistent ? "persistent" : "active", + maximum + current ? "persistent" : "maximum", + maximum + current ? "active" : "current"); + return FALSE; + } + + if (!vshConnectionUsability(ctl, ctl->conn)) + return FALSE; + + if (!(dom = vshCommandOptDomain(ctl, cmd, NULL))) + return FALSE; + + /* In all cases, try the new API first; if it fails because we are + * talking to an older client, try a fallback API before giving + * up. */ + if (all || (maximum && persistent)) { + count = virDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_MAXIMUM | + VIR_DOMAIN_VCPU_PERSISTENT)); + if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT + || last_error->code == VIR_ERR_INVALID_ARG)) { + char *tmp; + char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_INACTIVE); + if (xml && (tmp = strstr(xml, "<vcpu"))) { + tmp = strchr(tmp, '>'); + if (!tmp || virStrToLong_i(tmp + 1, &tmp, 10, &count) < 0) + count = -1; + } + VIR_FREE(xml); + } + + if (count < 0) { + virshReportError(ctl); + ret = FALSE; + } else if (all) { + vshPrint(ctl, "%-12s %-12s %3d\n", _("maximum"), _("persistent"), + count); + } else { + vshPrint(ctl, "%d\n", count); + } + virFreeError(last_error); + last_error = NULL; + } + + if (all || (maximum && active)) { + count = virDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_MAXIMUM | + VIR_DOMAIN_VCPU_ACTIVE)); + if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT + || last_error->code == VIR_ERR_INVALID_ARG)) { + count = virDomainGetMaxVcpus(dom); + } + + if (count < 0) { + virshReportError(ctl); + ret = FALSE; + } else if (all) { + vshPrint(ctl, "%-12s %-12s %3d\n", _("maximum"), _("active"), + count); + } else { + vshPrint(ctl, "%d\n", count); + } + virFreeError(last_error); + last_error = NULL; + } + + if (all || (current && persistent)) { + count = virDomainGetVcpusFlags(dom, 
VIR_DOMAIN_VCPU_PERSISTENT); + if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT + || last_error->code == VIR_ERR_INVALID_ARG)) { + char *tmp, *end; + char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_INACTIVE); + if (xml && (tmp = strstr(xml, "<vcpu"))) { + end = strchr(tmp, '>'); + if (end) { + *end = '\0'; + tmp = strstr(tmp, "current="); + if (!tmp) + tmp = end + 1; + else { + tmp += strlen("current="); + tmp += *tmp == '\'' || *tmp == '"'; + } + } + if (!tmp || virStrToLong_i(tmp, &tmp, 10, &count) < 0) + count = -1; + } + VIR_FREE(xml); + } + + if (count < 0) { + virshReportError(ctl); + ret = FALSE; + } else if (all) { + vshPrint(ctl, "%-12s %-12s %3d\n", _("current"), _("persistent"), + count); + } else { + vshPrint(ctl, "%d\n", count); + } + virFreeError(last_error); + last_error = NULL; + } + + if (all || (current && active)) { + count = virDomainGetVcpusFlags(dom, VIR_DOMAIN_VCPU_ACTIVE); + if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT + || last_error->code == VIR_ERR_INVALID_ARG)) { + virDomainInfo info; + if (virDomainGetInfo(dom, &info) == 0) + count = info.nrVirtCpu; + } + + if (count < 0) { + virshReportError(ctl); + ret = FALSE; + } else if (all) { + vshPrint(ctl, "%-12s %-12s %3d\n", _("current"), _("active"), + count); + } else { + vshPrint(ctl, "%d\n", count); + } + virFreeError(last_error); + last_error = NULL; + } + + virDomainFree(dom); + return ret; +} + +/* * "vcpuinfo" command */ static const vshCmdInfo info_vcpuinfo[] = { - {"help", N_("domain vcpu information")}, + {"help", N_("detailed domain vcpu information")}, {"desc", N_("Returns basic information about the domain virtual CPUs.")}, {NULL, NULL} }; @@ -2453,6 +2624,9 @@ static const vshCmdInfo info_setvcpus[] = { static const vshCmdOptDef opts_setvcpus[] = { {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, {"count", VSH_OT_DATA, VSH_OFLAG_REQ, N_("number of virtual CPUs")}, + {"maximum", VSH_OT_BOOL, 0, N_("set maximum limit on next boot")}, 
+ {"persistent", VSH_OT_BOOL, 0, N_("affect next boot")}, + {"active", VSH_OT_BOOL, 0, N_("affect running domain")}, {NULL, 0, 0, NULL} }; @@ -2461,8 +2635,13 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd) { virDomainPtr dom; int count; - int maxcpu; int ret = TRUE; + int maximum = vshCommandOptBool(cmd, "maximum"); + int persistent = vshCommandOptBool(cmd, "persistent"); + int active = vshCommandOptBool(cmd, "active"); + int flags = ((maximum ? VIR_DOMAIN_VCPU_MAXIMUM : 0) | + (persistent ? VIR_DOMAIN_VCPU_PERSISTENT : 0) | + (active ? VIR_DOMAIN_VCPU_ACTIVE : 0)); if (!vshConnectionUsability(ctl, ctl->conn)) return FALSE; @@ -2471,26 +2650,15 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd) return FALSE; count = vshCommandOptInt(cmd, "count", &count); - if (count <= 0) { - vshError(ctl, "%s", _("Invalid number of virtual CPUs.")); - virDomainFree(dom); - return FALSE; - } - - maxcpu = virDomainGetMaxVcpus(dom); - if (maxcpu <= 0) { - virDomainFree(dom); - return FALSE; - } - if (count > maxcpu) { - vshError(ctl, "%s", _("Too many virtual CPUs.")); - virDomainFree(dom); - return FALSE; - } - - if (virDomainSetVcpus(dom, count) != 0) { - ret = FALSE; + if (!flags) { + if (virDomainSetVcpus(dom, count) != 0) { + ret = FALSE; + } + } else { + if (virDomainSetVcpusFlags(dom, count, flags) < 0) { + ret = FALSE; + } } virDomainFree(dom); @@ -9452,6 +9620,7 @@ static const vshCmdDef commands[] = { {"vol-name", cmdVolName, opts_vol_name, info_vol_name}, {"vol-key", cmdVolKey, opts_vol_key, info_vol_key}, + {"vcpucount", cmdVcpucount, opts_vcpucount, info_vcpucount}, {"vcpuinfo", cmdVcpuinfo, opts_vcpuinfo, info_vcpuinfo}, {"vcpupin", cmdVcpupin, opts_vcpupin, info_vcpupin}, {"version", cmdVersion, NULL, info_version}, diff --git a/tools/virsh.pod b/tools/virsh.pod index e0471b1..87fecaa 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -456,7 +456,8 @@ Change the maximum memory allocation limit in the guest domain. This should not change the current memory use. 
The memory limit is specified in kilobytes. -=item B<setvcpus> I<domain-id> I<count> +=item B<setvcpus> I<domain-id> I<count> optional I<--maximum> I<--persistent> +I<--active> Change the number of virtual CPUs active in the guest domain. Note that I<count> may be limited by host, hypervisor or limit coming from the @@ -465,6 +466,16 @@ original description of domain. For Xen, you can only adjust the virtual CPUs of a running domain if the domain is paravirtualized. +If I<--persistent> is specified, the change will only affect the next +boot of a domain. If I<--active> is specified, the domain must be +running, and the change takes place immediately. Both flags may be +specified, if supported by the hypervisor; and I<--active> is implied +when neither flag is given. + +If I<--maximum> is specified, then you must use I<--persistent> and +avoid I<--active>; this flag controls the maximum limit of vcpus that +can be hot-plugged the next time the domain is booted. + =item B<shutdown> I<domain-id> Gracefully shuts down a domain. This coordinates with the domain OS @@ -503,6 +514,21 @@ is not available the processes will provide an exit code of 1. Undefine the configuration for an inactive domain. Since it's not running the domain name or UUID must be used as the I<domain-id>. +=item B<vcpucount> I<domain-id> optional I<--maximum> I<--current> +I<--persistent> I<--active> + +Print information about the virtual cpu counts of the given +I<domain-id>. If no flags are specified, all possible counts are +listed in a table; otherwise, the output is limited to just the +numeric value requested. + +I<--maximum> requests information on the maximum cap of vcpus that a +domain can add via B<setvcpus>, while I<--current> shows the current +usage; these two flags cannot both be specified. 
I<--persistent> +requests information regarding the next time the domain will be +booted, while I<--active> requires a running domain and lists current +values; these two flags cannot both be specified. + =item B<vcpuinfo> I<domain-id> Returns basic information about the domain virtual CPUs, like the number of -- 1.7.2.3
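The mutual-exclusion rules the virsh.pod text above spells out can be reduced to a single predicate. This is a hedged sketch with invented HYP_* flag names; the real enum lives in libvirt.h.in and its values may differ:

```c
/* Assumed flag bits mirroring the series' new virDomainSetVcpusFlags
 * enum; the invented HYP_* names avoid claiming the real values. */
enum {
    HYP_VCPU_ACTIVE     = 1 << 0,
    HYP_VCPU_PERSISTENT = 1 << 1,
    HYP_VCPU_MAXIMUM    = 1 << 2,
};

/* Encode the virsh.pod rule: --maximum requires --persistent and
 * forbids --active; an empty mask implies --active and is fine. */
static int
setvcpus_flags_valid(unsigned int flags)
{
    if (flags & HYP_VCPU_MAXIMUM)
        return (flags & HYP_VCPU_PERSISTENT) && !(flags & HYP_VCPU_ACTIVE);
    return 1;
}
```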

On Wed, Sep 29, 2010 at 06:02:13PM -0600, Eric Blake wrote:
* tools/virsh.c (cmdSetvcpus): Add new flags. Let invalid commands through to driver, to ease testing of hypervisor argument validation. (cmdVcpucount): New command. (commands): Add new command. * tools/virsh.pod (setvcpus, vcpucount): Document new behavior. ---
I know - the typical API addition sequence adds driver support first and then virsh support. I can rearrange the patch order if desired.
Nahh, fine, actually the entry points are there, so really there is no problem! [...]
/* + * "vcpucount" command + */ +static const vshCmdInfo info_vcpucount[] = { + {"help", N_("domain vcpu counts")}, + {"desc", N_("Returns the number of domain virtual CPUs.")},
Ouch. "Returns the number of virtual CPUs used by the domain." would be clearer, I think. [...]
+ if (maximum + current + persistent + active == 1) { + vshError(ctl, + _("when using --%s, either --%s or --%s must be specified"), + maximum ? "maximum" : current ? "current" + : persistent ? "persistent" : "active", + maximum + current ? "persistent" : "maximum", + maximum + current ? "active" : "current");
Ouch, headache :-) but it looks right. Okay, the code ended up being a bit more complex than I expected, but that's due to the various options and the fallback to old APIs. Fine, ACK, Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 09/29/2010 06:02 PM, Eric Blake wrote:
* tools/virsh.c (cmdSetvcpus): Add new flags. Let invalid commands through to driver, to ease testing of hypervisor argument validation. (cmdVcpucount): New command. (commands): Add new command. * tools/virsh.pod (setvcpus, vcpucount): Document new behavior. ---
In testing this, I found it useful to add one more command. Previously, virsh had no way to get at virConnectGetMaxVcpus/virDomainGetMaxVcpus (the latter is used inside setvcpus, but not exposed to the user). And now that virDomainGetVcpusFlags is smarter about reading the maximum limit associated with the XML of a domain, it is harder to tell whether the maximum associated with a domain comes from the domain's XML or from the XML omitting the <vcpus> element and inheriting the hypervisor's maximum. That is, with more flexibility in vcpu management, it is also more important to know, for example, that the max vcpus permitted by xen is 32, and that the max vcpus permitted by qemu depends on the number of cores in the host, whether or not a given domain is using that many vcpus.

I debated between two interfaces:

1. Make vcpucount smarter:

   vcpucount => virConnectGetMaxVcpus, plus table of all vcpu stats on all domains
   vcpucount domain => table of all vcpu stats on domain
   vcpucount {--live|--config} {--current|--maximum} domain => single stat
   vcpucount domain... => table of all vcpu stats on listed domains

2. Add a second command, leaving vcpucount unchanged from v1:

   maxvcpus [--type=string] => virConnectGetMaxVcpus

I then decided to go with option 2 in my v2 posting of the vcpu series, unless anyone has a reason why overloading a single command makes more sense than keeping the separate API calls under separate commands. -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

* src/qemu/qemu_conf.c (qemuParseCommandLineSmp): Distinguish between
vcpus and maxvcpus, for new enough qemu.
* tests/qemuargv2xmltest.c (mymain): Add new test.
* tests/qemuxml2argvtest.c (mymain): Likewise.
* tests/qemuxml2xmltest.c (mymain): Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-smp.args: New file.
---

This manages just the command-line aspects needed for qemu 0.12+; so
far, I haven't done anything about 0.11 or earlier (on those versions
of qemu, maxvcpus and vcpus must always be equal).

 src/qemu/qemu_conf.c                         | 13 +++++++++----
 tests/qemuargv2xmltest.c                     |  2 ++
 tests/qemuxml2argvdata/qemuxml2argv-smp.args |  1 +
 tests/qemuxml2argvtest.c                     |  2 ++
 tests/qemuxml2xmltest.c                      |  2 ++
 5 files changed, 16 insertions(+), 4 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.args

diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 5169e3c..f387a5d 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -3616,6 +3616,8 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
     virBufferVSprintf(&buf, "%u", def->vcpus);

     if ((qemuCmdFlags & QEMUD_CMD_FLAG_SMP_TOPOLOGY)) {
+        if (def->vcpus != def->maxvcpus)
+            virBufferVSprintf(&buf, ",maxcpus=%u", def->maxvcpus);
         /* sockets, cores, and threads are either all zero
          * or all non-zero, thus checking one of them is enough */
         if (def->cpu && def->cpu->sockets) {
@@ -3628,12 +3630,12 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
             virBufferVSprintf(&buf, ",cores=%u", 1);
             virBufferVSprintf(&buf, ",threads=%u", 1);
         }
-    }
-    if (def->vcpus != def->maxvcpus) {
+    } else if (def->vcpus != def->maxvcpus) {
         virBufferFreeAndReset(&buf);
+        // FIXME - consider hot-unplugging cpus after boot
         qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                         _("setting current vcpu count less than maximum is "
-                          "not supported yet"));
+                          "not supported with this QEMU binary"));
         return NULL;
     }

@@ -6026,6 +6028,7 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
     unsigned int sockets = 0;
     unsigned int cores = 0;
     unsigned int threads = 0;
+    unsigned int maxcpus = 0;
     int i;
     int nkws;
     char **kws;
@@ -6053,12 +6056,14 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
                 cores = n;
             else if (STREQ(kws[i], "threads"))
                 threads = n;
+            else if (STREQ(kws[i], "maxcpus"))
+                maxcpus = n;
             else
                 goto syntax;
         }
     }

-    dom->maxvcpus = dom->vcpus;
+    dom->maxvcpus = maxcpus ? maxcpus : dom->vcpus;

     if (sockets && cores && threads) {
         virCPUDefPtr cpu;
diff --git a/tests/qemuargv2xmltest.c b/tests/qemuargv2xmltest.c
index 4f9ec84..d941b0b 100644
--- a/tests/qemuargv2xmltest.c
+++ b/tests/qemuargv2xmltest.c
@@ -221,6 +221,8 @@ mymain(int argc, char **argv)

     DO_TEST("hostdev-pci-address");

+    DO_TEST("smp");
+
     DO_TEST_FULL("restore-v1", 0, "stdio");
     DO_TEST_FULL("restore-v2", 0, "stdio");
     DO_TEST_FULL("restore-v2", 0, "exec:cat");
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-smp.args b/tests/qemuxml2argvdata/qemuxml2argv-smp.args
new file mode 100644
index 0000000..3ec8f15
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-smp.args
@@ -0,0 +1 @@
+LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1,maxcpus=2,sockets=2,cores=1,threads=1 -nographic -monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c -hda /dev/HostVG/QEMUGuest1 -net none -serial none -parallel none -usb
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 7b9df09..5f1e55a 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -383,6 +383,8 @@ mymain(int argc, char **argv)

     DO_TEST("qemu-ns", 0);

+    DO_TEST("smp", QEMUD_CMD_FLAG_SMP_TOPOLOGY);
+
     free(driver.stateDir);
     virCapabilitiesFree(driver.caps);
diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c
index 904183c..48d9b7a 100644
--- a/tests/qemuxml2xmltest.c
+++ b/tests/qemuxml2xmltest.c
@@ -179,6 +179,8 @@ mymain(int argc, char **argv)

     DO_TEST("encrypted-disk");

+    DO_TEST("smp");
+
     /* These tests generate different XML */
     DO_TEST_DIFFERENT("balloon-device-auto");
     DO_TEST_DIFFERENT("channel-virtio-auto");
--
1.7.2.3

On Wed, Sep 29, 2010 at 06:02:14PM -0600, Eric Blake wrote:
> * src/qemu/qemu_conf.c (qemuParseCommandLineSmp): Distinguish between
> vcpus and maxvcpus, for new enough qemu.
> * tests/qemuargv2xmltest.c (mymain): Add new test.
> * tests/qemuxml2argvtest.c (mymain): Likewise.
> * tests/qemuxml2xmltest.c (mymain): Likewise.
> * tests/qemuxml2argvdata/qemuxml2argv-smp.args: New file.
> ---
> This manages just the command-line aspects needed for qemu 0.12+; so
> far, I haven't done anything about 0.11 or earlier (on those versions
> of qemu, maxvcpus and vcpus must always be equal).
>
>  src/qemu/qemu_conf.c                         | 13 +++++++++----
>  tests/qemuargv2xmltest.c                     |  2 ++
>  tests/qemuxml2argvdata/qemuxml2argv-smp.args |  1 +
>  tests/qemuxml2argvtest.c                     |  2 ++
>  tests/qemuxml2xmltest.c                      |  2 ++
>  5 files changed, 16 insertions(+), 4 deletions(-)
>  create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.args
>
> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
> index 5169e3c..f387a5d 100644
> --- a/src/qemu/qemu_conf.c
> +++ b/src/qemu/qemu_conf.c
> @@ -3616,6 +3616,8 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
>      virBufferVSprintf(&buf, "%u", def->vcpus);
>
>      if ((qemuCmdFlags & QEMUD_CMD_FLAG_SMP_TOPOLOGY)) {

ohhh, we were already looking for this option at run time ...

> +        if (def->vcpus != def->maxvcpus)
> +            virBufferVSprintf(&buf, ",maxcpus=%u", def->maxvcpus);
>          /* sockets, cores, and threads are either all zero
>           * or all non-zero, thus checking one of them is enough */
>          if (def->cpu && def->cpu->sockets) {
> @@ -3628,12 +3630,12 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
>              virBufferVSprintf(&buf, ",cores=%u", 1);
>              virBufferVSprintf(&buf, ",threads=%u", 1);
>          }
> -    }
> -    if (def->vcpus != def->maxvcpus) {
> +    } else if (def->vcpus != def->maxvcpus) {
>          virBufferFreeAndReset(&buf);
> +        // FIXME - consider hot-unplugging cpus after boot

Please no C++ like comments :-)

>          qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
>                          _("setting current vcpu count less than maximum is "
> -                          "not supported yet"));
> +                          "not supported with this QEMU binary"));
>          return NULL;
>      }

ACK,

Daniel

--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

* src/qemu/qemu_driver.c (qemudDomainSetVcpusFlags)
(qemudDomainGetVcpusFlags): Support all feasible flag combinations.
---

Aargh - my ISP SMTP server said that 11 emails is the limit for one
session, so I have to resend the last two of my series (and figure out
how to config git to use multiple sessions to avoid that problem in
the future).

Anyways, this is the meat of the patch series, in converting qemu
0.12+ to remember whether vcpus have been hot-unplugged below the
maximum, in order to boot that way the next time around.  I'm a bit
fuzzy on whether I used all the driver internals correctly, especially
on how and when vm->newDef should be created for managing persistent
state separately from live state, so a careful review is appreciated,
and I'll probably need a v2.

 src/qemu/qemu_driver.c |   90 ++++++++++++++++++++++++++++++++++++++--------
 1 files changed, 75 insertions(+), 15 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c31eeb9..4b1519d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5908,9 +5908,22 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     int max;
     int ret = -1;

-    if (flags != VIR_DOMAIN_VCPU_ACTIVE) {
-        qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
-                        flags);
+    virCheckFlags(VIR_DOMAIN_VCPU_ACTIVE |
+                  VIR_DOMAIN_VCPU_PERSISTENT |
+                  VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+    /* At least one of ACTIVE or PERSISTENT must be set.  MAXIMUM
+     * requires exactly PERSISTENT.  */
+    if ((flags & (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_PERSISTENT)) == 0 ||
+        (flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_ACTIVE)) ==
+        (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_ACTIVE)) {
+        qemuReportError(VIR_ERR_INVALID_ARG,
+                        _("invalid flag combination: (0x%x)"), flags);
+        return -1;
+    }
+    if (!nvcpus || (unsigned short) nvcpus != nvcpus) {
+        qemuReportError(VIR_ERR_INVALID_ARG,
+                        _("argument out of range: %d"), nvcpus);
         return -1;
     }

@@ -5929,7 +5942,7 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     if (qemuDomainObjBeginJob(vm) < 0)
         goto cleanup;

-    if (!virDomainObjIsActive(vm)) {
+    if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_VCPU_ACTIVE)) {
         qemuReportError(VIR_ERR_OPERATION_INVALID,
                         "%s", _("domain is not running"));
         goto endjob;
@@ -5948,6 +5961,11 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
         goto endjob;
     }

+    if ((flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_ACTIVE)) ==
+        VIR_DOMAIN_VCPU_ACTIVE && vm->def->maxvcpus < max) {
+        max = vm->def->maxvcpus;
+    }
+
     if (nvcpus > max) {
         qemuReportError(VIR_ERR_INVALID_ARG,
                         _("requested vcpus is greater than max allowable"
@@ -5955,7 +5973,39 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
         goto endjob;
     }

-    ret = qemudDomainHotplugVcpus(vm, nvcpus);
+    switch (flags) {
+    case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_PERSISTENT:
+        if (!vm->newDef)
+            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                            _("no persistent state"));
+        else {
+            vm->newDef->maxvcpus = nvcpus;
+            if (nvcpus < vm->newDef->vcpus)
+                vm->newDef->vcpus = nvcpus;
+            ret = 0;
+        }
+        break;
+
+    case VIR_DOMAIN_VCPU_PERSISTENT:
+        if (!vm->newDef)
+            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                            _("no persistent state"));
+        else {
+            vm->newDef->vcpus = nvcpus;
+            ret = 0;
+        }
+        break;
+
+    case VIR_DOMAIN_VCPU_ACTIVE:
+        ret = qemudDomainHotplugVcpus(vm, nvcpus);
+        break;
+
+    case VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_PERSISTENT:
+        ret = qemudDomainHotplugVcpus(vm, nvcpus);
+        if (ret == 0 && vm->newDef)
+            vm->newDef->vcpus = nvcpus;
+        break;
+    }

 endjob:
     if (qemuDomainObjEndJob(vm) == 0)
@@ -6134,12 +6184,18 @@ qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
 {
     struct qemud_driver *driver = dom->conn->privateData;
     virDomainObjPtr vm;
-    const char *type;
+    virDomainDefPtr def;
     int ret = -1;

-    if (flags != (VIR_DOMAIN_VCPU_ACTIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
-        qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
-                        flags);
+    virCheckFlags(VIR_DOMAIN_VCPU_ACTIVE |
+                  VIR_DOMAIN_VCPU_PERSISTENT |
+                  VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+    /* Exactly one of ACTIVE or PERSISTENT must be set.  */
+    if (!(flags & VIR_DOMAIN_VCPU_ACTIVE) ==
+        !(flags & VIR_DOMAIN_VCPU_PERSISTENT)) {
+        qemuReportError(VIR_ERR_INVALID_ARG,
+                        _("invalid flag combination: (0x%x)"), flags);
         return -1;
     }

@@ -6155,14 +6211,18 @@ qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
         goto cleanup;
     }

-    if (!(type = virDomainVirtTypeToString(vm->def->virtType))) {
-        qemuReportError(VIR_ERR_INTERNAL_ERROR,
-                        _("unknown virt type in domain definition '%d'"),
-                        vm->def->virtType);
-        goto cleanup;
+    if (flags & VIR_DOMAIN_VCPU_ACTIVE) {
+        if (!virDomainObjIsActive(vm)) {
+            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                            _("domain not active"));
+            goto cleanup;
+        }
+        def = vm->def;
+    } else {
+        def = vm->newDef ? vm->newDef : vm->def;
     }

-    ret = qemudGetMaxVCPUs(NULL, type);
+    ret = (flags & VIR_DOMAIN_VCPU_MAXIMUM) ? def->maxvcpus : def->vcpus;

 cleanup:
     if (vm)
--
1.7.2.3

On Wed, Sep 29, 2010 at 06:09:21PM -0600, Eric Blake wrote:
> * src/qemu/qemu_driver.c (qemudDomainSetVcpusFlags)
> (qemudDomainGetVcpusFlags): Support all feasible flag combinations.
> ---
> Aargh - my ISP SMTP server said that 11 emails is the limit for one
> session, so I have to resend the last two of my series (and figure
> out how to config git to use multiple sessions to avoid that problem
> in the future).

worked fine, I saw them in a batch, properly aligned,

> Anyways, this is the meat of the patch series, in converting qemu
> 0.12+ to remember whether vcpus have been hot-unplugged below the
> maximum, in order to boot that way the next time around.  I'm a bit
> fuzzy on whether I used all the driver internals correctly,
> especially on how and when vm->newDef should be created for managing
> persistent state separately from live state, so a careful review is
> appreciated, and I'll probably need a v2.
[...]
> -    ret = qemudDomainHotplugVcpus(vm, nvcpus);
> +    switch (flags) {
> +    case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_PERSISTENT:

hum I usually indent the cases 4 spaces from the switch, minor nit

ACK,

Daniel

On 10/01/2010 09:31 AM, Daniel Veillard wrote:
> > -    ret = qemudDomainHotplugVcpus(vm, nvcpus);
> > +    switch (flags) {
> > +    case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_PERSISTENT:
>
> hum I usually indent the cases 4 spaces from the switch, minor nit

I just used what emacs gave by default, after applying the suggested
~/.emacs settings from HACKING.  In that mode, emacs puts the labels
flush with the switch.  The libvirt code base has a decent mix of both
styles, but favors my style over yours (don't ask me how much time I
wasted trying to figure out that sed script):

$ git grep -A1 switch src \
  | sed -n 's/:/-/;/switch/{N;/^\(.*\)switch.*\n\1case/p}' | wc -l
350
$ git grep -A1 switch src \
  | sed -n 's/:/-/;/switch/{N;/^\(.*\)switch.*\n\1 *case/p}' | wc -l
96

Speaking of indentation nits, are we any closer to Jim's idea of using
an automated indent-like tool for settling on a consistent formatting
style?

--
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org

On Fri, Oct 01, 2010 at 02:21:05PM -0600, Eric Blake wrote:
> On 10/01/2010 09:31 AM, Daniel Veillard wrote:
> > > -    ret = qemudDomainHotplugVcpus(vm, nvcpus);
> > > +    switch (flags) {
> > > +    case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_PERSISTENT:
> >
> > hum I usually indent the cases 4 spaces from the switch, minor nit
>
> I just used what emacs gave by default, after applying the suggested
> ~/.emacs settings from HACKING.  In that mode, emacs puts the labels
> flush with the switch.  The libvirt code base has a decent mix of
> both styles, but favors my style over yours (don't ask me how much
> time I wasted trying to figure out that sed script):
>
> $ git grep -A1 switch src \
>   | sed -n 's/:/-/;/switch/{N;/^\(.*\)switch.*\n\1case/p}' | wc -l
> 350
> $ git grep -A1 switch src \
>   | sed -n 's/:/-/;/switch/{N;/^\(.*\)switch.*\n\1 *case/p}' | wc -l
> 96
>
> Speaking of indentation nits, are we any closer to Jim's idea of
> using an automated indent-like tool for settling on a consistent
> formatting style?

Well if the editor/typing habits and the SCM checking rules start to
conflict it can get really messy I'm afraid, but maybe I don't
remember the proposal well.  What I usually do when I want to reformat
a piece of code is run it through indent:

  indent -bad -bap -bbb -bli4 -br -ce -brs -cs -i4 -l75 -lc75 -nut \
         -sbi4 -psl -saf -sai -saw -sbi4 -ss -sc -cdw -cli4 -npcs -nbc

and in general it does what I'm used to, but I often have to clean the
thing up a bit still.  Automating it may be a bit painful, what you
commit is not what you have in the patch, I'm a bit wary of that...

Daniel

Here's where I ran out of time for the day.  I'm much less familiar
with xen than with qemu, so I have no idea how to tell if xen's
documented domain/vcpu_avail (which is what we want for current vcpus)
is usable in contrast to domain/vcpus (the maximum amount).  For that
matter, I'm not even sure if modifying the Sxpr parsing/generating
code is enough to make xen use the new attribute, or what else might
be involved.  Hints on what I need to do from here are greatly
appreciated.
---

 src/xen/xend_internal.c   |    6 ++++++
 src/xen/xm_internal.c     |    6 ++++++
 src/xenapi/xenapi_utils.c |    2 +-
 3 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 16140d1..9f1c741 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -2189,6 +2189,7 @@ xenDaemonParseSxpr(virConnectPtr conn,
     }

     def->maxvcpus = sexpr_int(root, "domain/vcpus");
+//    def->vcpus = sexpr_int(root, "domain/vcpu_avail");
     def->vcpus = def->maxvcpus;

     tmp = sexpr_node(root, "domain/on_poweroff");
@@ -2462,6 +2463,7 @@ sexpr_to_xend_domain_info(virDomainPtr domain, const struct sexpr *root,
         info->state = VIR_DOMAIN_NOSTATE;
     }
     info->cpuTime = sexpr_float(root, "domain/cpu_time") * 1000000000;
+//    info->nrVirtCpu = sexpr_int(root, "domain/vcpu_avail");
     info->nrVirtCpu = sexpr_int(root, "domain/vcpus");
     return (0);
 }
@@ -5662,6 +5664,8 @@ xenDaemonFormatSxpr(virConnectPtr conn,
     virBufferVSprintf(&buf, "(name '%s')", def->name);
     virBufferVSprintf(&buf, "(memory %lu)(maxmem %lu)",
                       def->memory/1024, def->maxmem/1024);
+//    virBufferVSprintf(&buf, "(vcpus %u)(vcpu_avail %u)", def->maxvcpus,
+//                      def->vcpus);
     virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);

     if (def->cpumask) {
@@ -5757,6 +5761,8 @@ xenDaemonFormatSxpr(virConnectPtr conn,
             else
                 virBufferVSprintf(&buf, "(kernel '%s')", def->os.loader);

+//            virBufferVSprintf(&buf, "(vcpus %u)(vcpu_avail %u)",
+//                              def->maxvcpus, def->vcpus);
             virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);

             for (i = 0 ; i < def->os.nBootDevs ; i++) {
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index e658c19..dd768e1 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -773,6 +773,10 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
         (unsigned short) count != count)
         goto cleanup;
     def->maxvcpus = count;
+//    if (xenXMConfigGetULong(conf, "vcpu_avail", &count, 1) < 0 ||
+//        (unsigned short) count != count)
+//        goto cleanup;
+//    def->vcpus = count;
     def->vcpus = def->maxvcpus;

     if (xenXMConfigGetString(conf, "cpus", &str, NULL) < 0)
@@ -2244,6 +2248,8 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn,
     if (xenXMConfigSetInt(conf, "vcpus", def->maxvcpus) < 0)
         goto no_memory;
+//    if (xenXMConfigSetInt(conf, "vcpu_avail", def->vcpus) < 0)
+//        goto no_memory;

     if ((def->cpumask != NULL) &&
         ((cpus = virDomainCpuSetFormat(def->cpumask,
diff --git a/src/xenapi/xenapi_utils.c b/src/xenapi/xenapi_utils.c
index 423f83a..5bf9138 100644
--- a/src/xenapi/xenapi_utils.c
+++ b/src/xenapi/xenapi_utils.c
@@ -512,7 +512,7 @@ createVMRecordFromXml (virConnectPtr conn, virDomainDefPtr def,

     if (def->maxvcpus) {
         (*record)->vcpus_max = (int64_t) def->maxvcpus;
-        (*record)->vcpus_at_startup = (int64_t) def->vcpus;
+        (*record)->vcpus_at_startup = (int64_t) def->maxvcpus; // FIXME
     }
     if (def->onPoweroff)
         (*record)->actions_after_shutdown = actionShutdownLibvirt2XenapiEnum(def->onPoweroff);
--
1.7.2.3

On Wed, Sep 29, 2010 at 06:09:22PM -0600, Eric Blake wrote:
> Here's where I ran out of time for the day.  I'm much less familiar
> with xen than with qemu, so I have no idea how to tell if xen's
> documented domain/vcpu_avail (which is what we want for current
> vcpus) is usable in contrast to domain/vcpus (the maximum amount).
> For that matter, I'm not even sure if modifying the Sxpr
> parsing/generating code is enough to make xen use the new attribute,
> or what else might be involved.  Hints on what I need to do from here
> are greatly appreciated.

I would think augmenting the sexpr should be sufficient for this, but
the problem is really to find out when the feature is available, and I
don't know how to do this reliably either (except trying, and if there
is an identifiable error, keeping it disabled).

Daniel

On 10/01/2010 04:18 PM, Daniel Veillard wrote:
> I would think augmenting the sexpr should be sufficient for this, but
> the problem is really to find out when the feature is available, and
> I don't know how to do this reliably either (except trying, and if
> there is an identifiable error, keeping it disabled).

It seems to date back to upstream change set 7222, which is old enough
not to worry.

Paolo

On 09/29/2010 06:09 PM, Eric Blake wrote:
> Here's where I ran out of time for the day.  I'm much less familiar
> with xen than with qemu, so I have no idea how to tell if xen's
> documented domain/vcpu_avail (which is what we want for current
> vcpus) is usable in contrast to domain/vcpus (the maximum amount).
> For that matter, I'm not even sure if modifying the Sxpr
> parsing/generating code is enough to make xen use the new attribute,
> or what else might be involved.  Hints on what I need to do from here
> are greatly appreciated.
>
> ---
>  src/xen/xend_internal.c   |    6 ++++++
>  src/xen/xm_internal.c     |    6 ++++++
>  src/xenapi/xenapi_utils.c |    2 +-
>  3 files changed, 13 insertions(+), 1 deletions(-)

Xen is more complex than I first thought.  I'm debating whether to
support xendConfigVersion <= 2 (xm_internal.h) or just
xendConfigVersion >= 3 (xend_internal.h).

Additional things I've discovered:

http://lists.xensource.com/archives/html/xen-devel/2009-11/msg01061.html
documents that the config file (xm_internal.h) has two parameters:
vcpus is the max, and vcpu_avail is a bitmask of which vcpus to enable
at boot.  I'm assuming that xen cannot support more than 32 vcpus?

http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/interface/interface.h...
documents that each /domain sexpr contains vcpus (maximum),
online_vcpus (current), and vcpu_avail (bitmask of online vcpus).

The xm vcpu-set command can only change the current vcpus of a running
domain, so I'm not sure how that interacts with persistent storage.  I
can't figure out whether I have to modify xm_internal.h to set the
vcpu_avail bitmask to get this to happen, or whether changing the
/domain sexpr also affects the persistent configuration.

Eric

On 10/06/2010 06:18 PM, Eric Blake wrote:
> Xen is more complex than I first thought.  I'm debating whether to
> support xendConfigVersion <= 2 (xm_internal.h) or just
> xendConfigVersion >= 3 (xend_internal.h).

RHEL 5.5 uses xendConfigVersion==2, so that answers the question
(although not in the way I was expecting - more work before I'm ready
to post).

> Additional things I've discovered:
>
> http://lists.xensource.com/archives/html/xen-devel/2009-11/msg01061.html
> documents that the config file (xm_internal.h) has two parameters:
> vcpus is the max, and vcpu_avail is a bitmask of which vcpus to
> enable at boot.  I'm assuming that xen cannot support more than 32
> vcpus?

I confirmed in the xen source that there is a cap of 32 vcpus for
paravirt machines, at least per the MAX_VIRT_CPUS macro as used in
libvirt.

Eric

On 09/29/2010 08:02 PM, Eric Blake wrote:
> As promised, here's my first round of patches to make <vcpu> support
> add the ability to persistently remember if a domain should have
> fewer vcpus at boot than the maximum allowed for hot-plugging.
> Patches 1-10 are pretty well tested, 11 may need some tweaking to get
> the semantics right, and 12 is mainly for RFC, since I need help
> finishing off the series.
>
> Eric Blake (12):
>   vcpu: improve cpuset attribute
>   vcpu: add current attribute to <vcpu> element
>   vcpu: add new public API
>   vcpu: define internal driver API
>   vcpu: implement the public APIs
>   vcpu: implement the remote protocol
>   vcpu: make old API trivially wrap to new API
>   vcpu: support maxvcpu in domain_conf
>   vcpu: add virsh support
>   vcpu: improve vcpu support in qemu command line
>   vcpu: complete vcpu support in qemu driver

I looked at them and they all look fine to me. -> ACK

Stefan