On 10/01/2010 10:13 AM, Matthias Bolte wrote:
>> However, in implementing things, I'm wondering if I should use the names:
>>
>> VIR_DOMAIN_VCPU_CONFIG (instead of VIR_DOMAIN_VCPU_PERSISTENT)
>> VIR_DOMAIN_VCPU_LIVE (instead of VIR_DOMAIN_VCPU_ACTIVE)
>>
>> to match virDomainDeviceModifyFlags, where _CONFIG and _LIVE have
>> the same semantics of setting one or both aspects of a domain.
>
> If you want to match the semantics of _CONFIG and _LIVE from
> virDomainDeviceModifyFlags then you need to change the ESX driver
> differently in patch 7/12.
>
> If I understand the semantics of virDomainDeviceModifyFlags correctly,
> then _LIVE only affects the runtime state of the domain, and changes
> made with only the _LIVE flag are lost on domain restart. In the same
> way, _CONFIG only affects the persistent config and not the runtime
> state.
Yes, that was the usage pattern I had envisioned, to allow for
hypervisors like qemu where you can make a live-only change.
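To make the intended semantics concrete, here is a minimal sketch of the
call pattern I have in mind, assuming the _LIVE/_CONFIG names win out
(error checking omitted for brevity):

  #include <libvirt/libvirt.h>

  static void
  demoSetVcpus(virDomainPtr dom)
  {
      /* runtime-only change: lost when the domain is restarted */
      virDomainSetVcpusFlags(dom, 3, VIR_DOMAIN_VCPU_LIVE);

      /* persistent-only change: takes effect on the next boot */
      virDomainSetVcpusFlags(dom, 3, VIR_DOMAIN_VCPU_CONFIG);

      /* both at once */
      virDomainSetVcpusFlags(dom, 3,
                             VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG);
  }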
> With ESX there is no such thing as a distinct runtime state. Changes
> to the number of vCPUs affect the runtime state and the persistent
> config at the same time.
And, in playing with things a bit more, this is what I see pre-patch on
Fedora 14 (# for host, $ for guest):
# virsh edit fedora_14
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>2</vcpu>
# virsh start fedora_14
Domain fedora_14 started
# virsh setvcpus fedora_14 3
[hmm, trying to exceed the max killed my guest - not very nice]
# virsh setvcpus fedora_14 1
error: Requested operation is not valid: domain is not running
[it currently requires a live domain]
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>3</vcpu>
[but it changed the configured value]
# virsh start fedora_14
Domain fedora_14 started
# virsh setvcpus fedora_14 2
# virsh dumpxml fedora_14 | grep vcpu
<vcpu>2</vcpu>
$ nproc
3
[hmm - the live value is still 3]
$ shutdown -h now
# virsh start fedora_14
Domain fedora_14 started
$ nproc
2
[yep - definitely a persistent setting, and I couldn't figure out how it
is supposed to be live, since I could not see qemuMonitorSetCPU having a
visible effect on nproc(1) in the guest]
> Currently you altered the ESX driver in patch 7/12 to
>
>   esxDomainSetVcpus = esxDomainSetVcpusFlags(VIR_DOMAIN_VCPU_ACTIVE)
>
> but this should actually use _LIVE | _CONFIG as flags, to match the
> only thing ESX can do.
I'm starting to think that should be the case in general:
virDomainSetVcpus should map to
virDomainSetVcpusFlags(VIR_DOMAIN_VCPU_LIVE|VIR_DOMAIN_VCPU_CONFIG), and
the new behavior of the flags API becomes the ability to request a
live-only or config-only change. There may yet be some logic bugs for
some drivers, and certainly this argues for additions to the TCK
covering hot-plugged vcpu behavior. At any rate, I really need to post a
v2 series that is more careful in its handling of live vs. config state.
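In driver terms, the general mapping I have in mind would look roughly
like the sketch below; fooDomainSetVcpus* are hypothetical stand-ins for
an arbitrary driver's entry points, not code from the series:

  #include <libvirt/libvirt.h>

  static int fooDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
                                    unsigned int flags);

  static int
  fooDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
  {
      /* the flag-less API keeps its historical meaning: change as much
       * as the hypervisor allows, both live state and persistent config */
      return fooDomainSetVcpusFlags(dom, nvcpus,
                                    VIR_DOMAIN_VCPU_LIVE |
                                    VIR_DOMAIN_VCPU_CONFIG);
  }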
> Also in 5/12 you state that trying to change the vCPU count of an
> inactive domain using the _LIVE flag should fail. The implementation
> in the ESX driver in 7/12 doesn't match this either.
If the domain is inactive, then _LIVE can't possibly change anything. I
could probably agree that it makes more sense to have _CONFIG|_LIVE
silently ignore _LIVE when a domain is inactive rather than erroring
out, especially given the argument that the existing setVcpus command
without flags should probably be using _CONFIG|_LIVE; except that the
qemu example above proved that the existing setVcpus errors out if the
domain is not running. And I still think that _LIVE in isolation on an
inactive domain should error out (the _CONFIG|_LIVE combination is
different, because _CONFIG can still do useful work).
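In rough code, the behavior I'm arguing for would be something like the
following hypothetical per-driver check (illustrative only, not code
from the series):

  #include <libvirt/libvirt.h>

  /* Hypothetical validation helper; 'active' is whether the domain is
   * currently running. Returns 0 if the requested flags can proceed. */
  static int
  checkVcpuFlags(unsigned int flags, int active)
  {
      if (!active && (flags & VIR_DOMAIN_VCPU_LIVE)) {
          if (!(flags & VIR_DOMAIN_VCPU_CONFIG))
              return -1;  /* _LIVE alone can't do anything useful here */
          /* _CONFIG|_LIVE on an inactive domain: either error out, or
           * silently drop _LIVE and apply only the config change */
      }
      return 0;
  }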
> esxDomainSetVcpus needs to call esxDomainSetVcpusFlags with _CONFIG,
> and add _LIVE to the set of flags when the domain is active. Likewise,
> esxDomainSetVcpusFlags needs to enforce that _CONFIG is always set and
> that _LIVE is set iff the domain is active.
>
> The same might apply to other drivers; I didn't check this yet.
>
> Matthias
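In rough code, the enforcement you describe would look something like
this sketch (not the actual ESX driver code; esxDomainIsActive() is a
stand-in for however the driver determines whether the domain is
running, and the actual vSphere plumbing is elided):

  #include <libvirt/libvirt.h>

  /* stand-in: returns 1 if the domain is running, 0 otherwise */
  int esxDomainIsActive(virDomainPtr domain);

  static int
  esxDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
                         unsigned int flags)
  {
      int active = esxDomainIsActive(domain);

      /* ESX always touches the persistent config, and any change to a
       * running domain is necessarily live as well */
      if (!(flags & VIR_DOMAIN_VCPU_CONFIG) ||
          ((flags & VIR_DOMAIN_VCPU_LIVE) != 0) != active)
          return -1;  /* report an "unsupported flags" error */

      /* ... perform the actual vCPU change via the vSphere API ... */
      return 0;
  }

  static int
  esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
  {
      unsigned int flags = VIR_DOMAIN_VCPU_CONFIG;

      if (esxDomainIsActive(domain))
          flags |= VIR_DOMAIN_VCPU_LIVE;

      return esxDomainSetVcpusFlags(domain, nvcpus, flags);
  }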
--
Eric Blake   eblake@redhat.com   +1-801-349-2682
Libvirt virtualization library
http://libvirt.org