[libvirt] [PATCH V2 0/4] qemu: report actual vcpu state in virVcpuInfo

Currently, the virVcpuInfo returned by virDomainGetVcpus() will always
report a state of VIR_VCPU_RUNNING for each defined domain vcpu, even if
the vcpu is currently in the halted state. As the monitor interface is in
fact reporting the accurate state, it is rather easy to transport this
information with the existing API. This is done by:
- adding a new state of VIR_VCPU_HALTED
- extending the monitor to pass back the halted state for the vcpus
- adding a new field to the private domain vcpu object reflecting the
  halted state for the vcpu
- modifying the driver code to report the vcpu state based on the halted
  indicator
- extending virsh vcpuinfo to also display the halted state

The vcpu state is however not recorded in the internal XML format, since
the state can change asynchronously (without notification).

V2 is a rebase on top of Peter Krempa's CPU hotplug modernization.

Viktor Mihajlovski (4):
  domain: Add new VCPU state "halted"
  qemu: Add monitor support for CPU halted state
  qemu: Add domain support for VCPU halted state
  qemu: Ensure reported VCPU state is current in driver API

 include/libvirt/libvirt-domain.h |  1 +
 src/qemu/qemu_domain.c           | 69 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h           |  5 +++
 src/qemu/qemu_driver.c           | 23 ++++++++++++--
 src/qemu/qemu_monitor.c          |  6 +++-
 src/qemu/qemu_monitor.h          |  5 +++
 src/qemu/qemu_monitor_json.c     |  3 ++
 src/qemu/qemu_monitor_text.c     |  8 ++++-
 tests/qemumonitorjsontest.c      |  8 ++---
 tools/virsh-domain.c             |  3 +-
 10 files changed, 122 insertions(+), 9 deletions(-)

-- 
1.9.1

QEMU virtual CPUs can assume a halted state, which - depending on the
architecture - can indicate whether a CPU is in use by the guest
operating system. A new VCPU state "halted" is introduced in preparation
of the support in the QEMU driver (and optionally others where deemed
appropriate).

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
---
 include/libvirt/libvirt-domain.h | 1 +
 tools/virsh-domain.c             | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index f1c35d9..0da2003 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1682,6 +1682,7 @@ typedef enum {
     VIR_VCPU_OFFLINE = 0,  /* the virtual CPU is offline */
     VIR_VCPU_RUNNING = 1,  /* the virtual CPU is running */
     VIR_VCPU_BLOCKED = 2,  /* the virtual CPU is blocked on resource */
+    VIR_VCPU_HALTED = 3,   /* the virtual CPU is halted */

 # ifdef VIR_ENUM_SENTINELS
     VIR_VCPU_LAST

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 3829b17..a6b239b 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -181,7 +181,8 @@ VIR_ENUM_IMPL(virshDomainVcpuState, VIR_VCPU_LAST,
               N_("offline"),
               N_("running"),
-              N_("blocked"))
+              N_("blocked"),
+              N_("halted"))

 static const char *
 virshDomainVcpuStateToString(int state)
-- 
1.9.1

On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
> QEMU virtual CPUs can assume a halted state, which - depending on the
> architecture - can indicate whether a CPU is in use by the guest
> operating system.
> A new VCPU state halted is introduced in preparation of the support in
> the QEMU driver (and optionally others where deemed appropriate).
> Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
> Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
> Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
> ---
>  include/libvirt/libvirt-domain.h | 1 +
>  tools/virsh-domain.c             | 3 ++-
>  2 files changed, 3 insertions(+), 1 deletion(-)
FWIW: This could be the last patch, but that's just an ordering thing. Considering points raised in review of other patches in this series, extra credit for modifying virsh.pod for 'vcpuinfo' (and 'domstats') to describe the states ;-) (especially if they differ per architecture!). Maybe there just needs to be a "common" place to describe the VCPU states, much like there is a section describing the Domain States.
> diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
> index f1c35d9..0da2003 100644
> --- a/include/libvirt/libvirt-domain.h
> +++ b/include/libvirt/libvirt-domain.h
> @@ -1682,6 +1682,7 @@ typedef enum {
>      VIR_VCPU_OFFLINE = 0,  /* the virtual CPU is offline */
>      VIR_VCPU_RUNNING = 1,  /* the virtual CPU is running */
>      VIR_VCPU_BLOCKED = 2,  /* the virtual CPU is blocked on resource */
> +    VIR_VCPU_HALTED = 3,   /* the virtual CPU is halted */
>
>  # ifdef VIR_ENUM_SENTINELS
>      VIR_VCPU_LAST
>
> diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
> index 3829b17..a6b239b 100644
> --- a/tools/virsh-domain.c
> +++ b/tools/virsh-domain.c
> @@ -181,7 +181,8 @@ VIR_ENUM_IMPL(virshDomainVcpuState, VIR_VCPU_LAST,
>                N_("offline"),
>                N_("running"),
> -              N_("blocked"))
> +              N_("blocked"),
> +              N_("halted"))
I've got some concerns over using "halted" here, considering my findings...
>  static const char *
>  virshDomainVcpuStateToString(int state)

Extended the qemuMonitorCPUInfo with a halted flag. Extract the halted
flag for both text and JSON monitor.

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
---
 src/qemu/qemu_monitor.c      | 6 +++++-
 src/qemu/qemu_monitor.h      | 5 +++++
 src/qemu/qemu_monitor_json.c | 3 +++
 src/qemu/qemu_monitor_text.c | 8 +++++++-
 tests/qemumonitorjsontest.c  | 8 ++++----
 5 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index b9e2910..c61fac7 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1677,6 +1677,7 @@ qemuMonitorCPUInfoClear(qemuMonitorCPUInfoPtr cpus,
         cpus[i].thread_id = -1;
         cpus[i].vcpus = 0;
         cpus[i].tid = 0;
+        cpus[i].halted = true;

         VIR_FREE(cpus[i].qom_path);
         VIR_FREE(cpus[i].alias);
@@ -1725,8 +1726,10 @@ qemuMonitorGetCPUInfoLegacy(struct qemuMonitorQueryCpusEntry *cpuentries,
     size_t i;

     for (i = 0; i < maxvcpus; i++) {
-        if (i < ncpuentries)
+        if (i < ncpuentries) {
             vcpus[i].tid = cpuentries[i].tid;
+            vcpus[i].halted = cpuentries[i].halted;
+        }

         /* for legacy hotplug to work we need to fake the vcpu count added by
          * enabling a given vcpu */
@@ -1864,6 +1867,7 @@ qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotpl
         }

         vcpus[anyvcpu].tid = cpuentries[j].tid;
+        vcpus[anyvcpu].halted = cpuentries[j].halted;
     }

     return 0;
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 615ab3e..8d0aa01 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -394,6 +394,8 @@ int qemuMonitorSystemPowerdown(qemuMonitorPtr mon);
 struct qemuMonitorQueryCpusEntry {
     pid_t tid;
     char *qom_path;
+    /* halted indicator */
+    bool halted;
 };
 void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries,
                               size_t nentries);
@@ -441,6 +443,9 @@ struct _qemuMonitorCPUInfo {

     /* internal for use in the matching code */
     char *qom_path;
+
+    /* halted indicator */
+    bool halted;
 };
 typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo;
 typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr;
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index de746f1..64f9d01 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1349,6 +1349,7 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data,
     for (i = 0; i < ncpus; i++) {
         virJSONValuePtr entry = virJSONValueArrayGet(data, i);
         int thread = 0;
+        bool halted = true;
         const char *qom_path;
         if (!entry) {
             ret = -2;
@@ -1358,9 +1359,11 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data,
         /* Some older qemu versions don't report the thread_id so treat this as
          * non-fatal, simply returning no data */
         ignore_value(virJSONValueObjectGetNumberInt(entry, "thread_id", &thread));
+        ignore_value(virJSONValueObjectGetBoolean(entry, "halted", &halted));
         qom_path = virJSONValueObjectGetString(entry, "qom_path");

         cpus[i].tid = thread;
+        cpus[i].halted = halted;
         if (VIR_STRDUP(cpus[i].qom_path, qom_path) < 0)
             goto cleanup;
     }
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index ff7cc79..f975347 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -521,7 +521,7 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon,
      * (qemu) info cpus
      * * CPU #0: pc=0x00000000000f0c4a thread_id=30019
      *   CPU #1: pc=0x00000000fffffff0 thread_id=30020
-     *   CPU #2: pc=0x00000000fffffff0 thread_id=30021
+     *   CPU #2: pc=0x00000000fffffff0 (halted) thread_id=30021
      */
     line = qemucpus;
@@ -541,6 +541,12 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon,

         cpu.tid = tid;

+        /* Extract halted indicator */
+        if ((offset = strstr(line, "(halted)")) != NULL)
+            cpu.halted = true;
+        else
+            cpu.halted = false;
+
         if (VIR_APPEND_ELEMENT_COPY(cpus, ncpus, cpu) < 0) {
             ret = -1;
             goto cleanup;
diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
index 9e195d7..8749ea1 100644
--- a/tests/qemumonitorjsontest.c
+++ b/tests/qemumonitorjsontest.c
@@ -1221,10 +1221,10 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data)
     int ret = -1;
     struct qemuMonitorQueryCpusEntry *cpudata = NULL;
     struct qemuMonitorQueryCpusEntry expect[] = {
-        {17622, (char *) "/machine/unattached/device[0]"},
-        {17624, (char *) "/machine/unattached/device[1]"},
-        {17626, (char *) "/machine/unattached/device[2]"},
-        {17628, NULL},
+        {17622, (char *) "/machine/unattached/device[0]", true},
+        {17624, (char *) "/machine/unattached/device[1]", true},
+        {17626, (char *) "/machine/unattached/device[2]", true},
+        {17628, NULL, true},
     };
     size_t ncpudata = 0;
     size_t i;
-- 
1.9.1

On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
> Extended the qemuMonitorCPUInfo with a halted flag. Extract the halted
> flag for both text and JSON monitor.
> Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
> Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
> ---
>  src/qemu/qemu_monitor.c      | 6 +++++-
>  src/qemu/qemu_monitor.h      | 5 +++++
>  src/qemu/qemu_monitor_json.c | 3 +++
>  src/qemu/qemu_monitor_text.c | 8 +++++++-
>  tests/qemumonitorjsontest.c  | 8 ++++----
>  5 files changed, 24 insertions(+), 6 deletions(-)
> diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
> index b9e2910..c61fac7 100644
> --- a/src/qemu/qemu_monitor.c
> +++ b/src/qemu/qemu_monitor.c
> @@ -1677,6 +1677,7 @@ qemuMonitorCPUInfoClear(qemuMonitorCPUInfoPtr cpus,
>          cpus[i].thread_id = -1;
>          cpus[i].vcpus = 0;
>          cpus[i].tid = 0;
> +        cpus[i].halted = true;
>
>          VIR_FREE(cpus[i].qom_path);
>          VIR_FREE(cpus[i].alias);
> @@ -1725,8 +1726,10 @@ qemuMonitorGetCPUInfoLegacy(struct qemuMonitorQueryCpusEntry *cpuentries,
>      size_t i;
>
>      for (i = 0; i < maxvcpus; i++) {
> -        if (i < ncpuentries)
> +        if (i < ncpuentries) {
>              vcpus[i].tid = cpuentries[i].tid;
> +            vcpus[i].halted = cpuentries[i].halted;
> +        }
>
>          /* for legacy hotplug to work we need to fake the vcpu count added by
>           * enabling a given vcpu */
> @@ -1864,6 +1867,7 @@ qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotpl
>          }
>
>          vcpus[anyvcpu].tid = cpuentries[j].tid;
> +        vcpus[anyvcpu].halted = cpuentries[j].halted;
>      }
>
>      return 0;
> diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
> index 615ab3e..8d0aa01 100644
> --- a/src/qemu/qemu_monitor.h
> +++ b/src/qemu/qemu_monitor.h
> @@ -394,6 +394,8 @@ int qemuMonitorSystemPowerdown(qemuMonitorPtr mon);
>  struct qemuMonitorQueryCpusEntry {
>      pid_t tid;
>      char *qom_path;
> +    /* halted indicator */
I can't believe I'm typing this - I'll blame pkrempa's influence ;-) - but the comment is unnecessary, since the name of the variable makes it really obvious.
> +    bool halted;
>  };
>  void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries,
>                                size_t nentries);
> @@ -441,6 +443,9 @@ struct _qemuMonitorCPUInfo {
>
>      /* internal for use in the matching code */
>      char *qom_path;
> +
> +    /* halted indicator */
Ditto.

John
> +    bool halted;
>  };
>  typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo;
>  typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr;
> diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
> index de746f1..64f9d01 100644
> --- a/src/qemu/qemu_monitor_json.c
> +++ b/src/qemu/qemu_monitor_json.c
> @@ -1349,6 +1349,7 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data,
>      for (i = 0; i < ncpus; i++) {
>          virJSONValuePtr entry = virJSONValueArrayGet(data, i);
>          int thread = 0;
> +        bool halted = true;
>          const char *qom_path;
>          if (!entry) {
>              ret = -2;
> @@ -1358,9 +1359,11 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data,
>          /* Some older qemu versions don't report the thread_id so treat this as
>           * non-fatal, simply returning no data */
>          ignore_value(virJSONValueObjectGetNumberInt(entry, "thread_id", &thread));
> +        ignore_value(virJSONValueObjectGetBoolean(entry, "halted", &halted));
>          qom_path = virJSONValueObjectGetString(entry, "qom_path");
>
>          cpus[i].tid = thread;
> +        cpus[i].halted = halted;
>          if (VIR_STRDUP(cpus[i].qom_path, qom_path) < 0)
>              goto cleanup;
>      }
> diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
> index ff7cc79..f975347 100644
> --- a/src/qemu/qemu_monitor_text.c
> +++ b/src/qemu/qemu_monitor_text.c
> @@ -521,7 +521,7 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon,
>       * (qemu) info cpus
>       * * CPU #0: pc=0x00000000000f0c4a thread_id=30019
>       *   CPU #1: pc=0x00000000fffffff0 thread_id=30020
> -     *   CPU #2: pc=0x00000000fffffff0 thread_id=30021
> +     *   CPU #2: pc=0x00000000fffffff0 (halted) thread_id=30021
>       */
>      line = qemucpus;
> @@ -541,6 +541,12 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon,
>
>          cpu.tid = tid;
>
> +        /* Extract halted indicator */
> +        if ((offset = strstr(line, "(halted)")) != NULL)
> +            cpu.halted = true;
> +        else
> +            cpu.halted = false;
> +
>          if (VIR_APPEND_ELEMENT_COPY(cpus, ncpus, cpu) < 0) {
>              ret = -1;
>              goto cleanup;
> diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
> index 9e195d7..8749ea1 100644
> --- a/tests/qemumonitorjsontest.c
> +++ b/tests/qemumonitorjsontest.c
> @@ -1221,10 +1221,10 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data)
>      int ret = -1;
>      struct qemuMonitorQueryCpusEntry *cpudata = NULL;
>      struct qemuMonitorQueryCpusEntry expect[] = {
> -        {17622, (char *) "/machine/unattached/device[0]"},
> -        {17624, (char *) "/machine/unattached/device[1]"},
> -        {17626, (char *) "/machine/unattached/device[2]"},
> -        {17628, NULL},
> +        {17622, (char *) "/machine/unattached/device[0]", true},
> +        {17624, (char *) "/machine/unattached/device[1]", true},
> +        {17626, (char *) "/machine/unattached/device[2]", true},
> +        {17628, NULL, true},
>      };
>      size_t ncpudata = 0;
>      size_t i;

Adding a field to the domain's private vcpu object to hold the halted
state information. Adding two functions in support of the halted state:
- qemuDomainGetVcpuHalted: retrieve the halted state from a private
  vcpu object
- qemuDomainRefreshVcpuHalted: obtain the per-vcpu halted states via
  qemu monitor and store the results in the private vcpu objects

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Hao QingFeng <haoqf@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
---
 src/qemu/qemu_domain.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h |  5 ++++
 2 files changed, 74 insertions(+)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 3f16dbe..3fb9b4f 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5956,6 +5956,75 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver,
     return ret;
 }

+/**
+ * qemuDomainGetVcpuHalted:
+ * @vm: domain object
+ * @vcpu: cpu id
+ *
+ * Returns the vCPU halted state.
+ */
+bool
+qemuDomainGetVcpuHalted(virDomainObjPtr vm,
+                        unsigned int vcpuid)
+{
+    virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, vcpuid);
+    return QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->halted;
+}
+
+/**
+ * qemuDomainRefreshVcpuHalted:
+ * @driver: qemu driver data
+ * @vm: domain object
+ * @asyncJob: current asynchronous job type
+ *
+ * Updates vCPU halted state in the private data of @vm.
+ *
+ * Returns number of detected vCPUs on success, -1 on error and reports
+ * an appropriate error, -2 if the domain doesn't exist any more.
+ */
+int
+qemuDomainRefreshVcpuHalted(virQEMUDriverPtr driver,
+                            virDomainObjPtr vm,
+                            int asyncJob)
+{
+    virDomainVcpuDefPtr vcpu;
+    qemuMonitorCPUInfoPtr info = NULL;
+    size_t maxvcpus = virDomainDefGetVcpusMax(vm->def);
+    size_t i;
+    bool hotplug;
+    int rc;
+    int ret = -1;
+
+    /* Not supported currently for TCG, see qemuDomainRefreshVcpuInfo */
+    if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU)
+        return 0;
+
+    hotplug = qemuDomainSupportsNewVcpuHotplug(vm);
+
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+        return -1;
+
+    rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, hotplug);
+
+    if (qemuDomainObjExitMonitor(driver, vm) < 0) {
+        ret = -2;
+        goto cleanup;
+    }
+
+    if (rc < 0)
+        goto cleanup;
+
+    for (i = 0; i < maxvcpus; i++) {
+        vcpu = virDomainDefGetVcpu(vm->def, i);
+        QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->halted = info[i].halted;
+    }
+
+    ret = 0;
+
+ cleanup:
+    qemuMonitorCPUInfoFree(info, maxvcpus);
+    return ret;
+}

 bool
 qemuDomainSupportsNicdev(virDomainDefPtr def,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index a1404d0..03e58c5 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -317,6 +317,7 @@ struct _qemuDomainVcpuPrivate {
     pid_t tid; /* vcpu thread id */
     int enable_id; /* order in which the vcpus were enabled in qemu */
     char *alias;
+    bool halted; /* vcpu halted state */

     /* information for hotpluggable cpus */
     char *type;
@@ -662,6 +663,10 @@ int qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               int asyncJob,
                               bool state);
+bool qemuDomainGetVcpuHalted(virDomainObjPtr vm, unsigned int vcpu);
+int qemuDomainRefreshVcpuHalted(virQEMUDriverPtr driver,
+                                virDomainObjPtr vm,
+                                int asyncJob);

 bool qemuDomainSupportsNicdev(virDomainDefPtr def,
                               virDomainNetDefPtr net);
-- 
1.9.1

On 09/20/2016 04:11 AM, Viktor Mihajlovski wrote:
> Adding a field to the domain's private vcpu object to hold the halted
> state information. Adding two functions in support of the halted state:
> - qemuDomainGetVcpuHalted: retrieve the halted state from a private
>   vcpu object
> - qemuDomainRefreshVcpuHalted: obtain the per-vcpu halted states via
>   qemu monitor and store the results in the private vcpu objects
> Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
> Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
> Reviewed-by: Hao QingFeng <haoqf@linux.vnet.ibm.com>
> Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
> ---
>  src/qemu/qemu_domain.c | 69 ++++++++++++++++++++++++++++++++++++++++++
>  src/qemu/qemu_domain.h |  5 ++++
>  2 files changed, 74 insertions(+)
>
> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
> index 3f16dbe..3fb9b4f 100644
> --- a/src/qemu/qemu_domain.c
> +++ b/src/qemu/qemu_domain.c
> @@ -5956,6 +5956,75 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver,
>      return ret;
>  }
>
> +/**
> + * qemuDomainGetVcpuHalted:
> + * @vm: domain object
> + * @vcpu: cpu id
> + *
> + * Returns the vCPU halted state.
> + */
> +bool
> +qemuDomainGetVcpuHalted(virDomainObjPtr vm,
> +                        unsigned int vcpuid)
> +{
> +    virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, vcpuid);
> +    return QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->halted;
> +}
> +
> +/**
> + * qemuDomainRefreshVcpuHalted:
> + * @driver: qemu driver data
> + * @vm: domain object
> + * @asyncJob: current asynchronous job type
> + *
> + * Updates vCPU halted state in the private data of @vm.
> + *
> + * Returns number of detected vCPUs on success, -1 on error and reports
> + * an appropriate error, -2 if the domain doesn't exist any more.
Neither of the callers checks -1 or -2, just < 0, so is this really necessary?
> + */
> +int
> +qemuDomainRefreshVcpuHalted(virQEMUDriverPtr driver,
> +                            virDomainObjPtr vm,
> +                            int asyncJob)
> +{
> +    virDomainVcpuDefPtr vcpu;
> +    qemuMonitorCPUInfoPtr info = NULL;
> +    size_t maxvcpus = virDomainDefGetVcpusMax(vm->def);
> +    size_t i;
> +    bool hotplug;
> +    int rc;
> +    int ret = -1;
> +
> +    /* Not supported currently for TCG, see qemuDomainRefreshVcpuInfo */
> +    if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU)
> +        return 0;
Since the "default" is halted = true, could we run into a situation where "halted" (or "running (inactive)") is always set for TCG...
> +
> +    hotplug = qemuDomainSupportsNewVcpuHotplug(vm);
> +
> +    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> +        return -1;
> +
> +    rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, hotplug);
> +
> +    if (qemuDomainObjExitMonitor(driver, vm) < 0) {
> +        ret = -2;
I see no need to ret = -2;
> +        goto cleanup;
> +    }
> +
> +    if (rc < 0)
> +        goto cleanup;
> +
> +    for (i = 0; i < maxvcpus; i++) {
> +        vcpu = virDomainDefGetVcpu(vm->def, i);
> +        QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->halted = info[i].halted;
> +    }
> +
> +    ret = 0;
> +
> + cleanup:
> +    qemuMonitorCPUInfoFree(info, maxvcpus);
> +    return ret;
> +}
>
>  bool
>  qemuDomainSupportsNicdev(virDomainDefPtr def,
> diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
> index a1404d0..03e58c5 100644
> --- a/src/qemu/qemu_domain.h
> +++ b/src/qemu/qemu_domain.h
> @@ -317,6 +317,7 @@ struct _qemuDomainVcpuPrivate {
>      pid_t tid; /* vcpu thread id */
>      int enable_id; /* order in which the vcpus were enabled in qemu */
>      char *alias;
> +    bool halted; /* vcpu halted state */
Another less than necessary comment ;-)

John
>      /* information for hotpluggable cpus */
>      char *type;
> @@ -662,6 +663,10 @@ int qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver,
>                                virDomainObjPtr vm,
>                                int asyncJob,
>                                bool state);
> +bool qemuDomainGetVcpuHalted(virDomainObjPtr vm, unsigned int vcpu);
> +int qemuDomainRefreshVcpuHalted(virQEMUDriverPtr driver,
> +                                virDomainObjPtr vm,
> +                                int asyncJob);
>
>  bool qemuDomainSupportsNicdev(virDomainDefPtr def,
>                                virDomainNetDefPtr net);

On 29.09.2016 16:34, John Ferlan wrote: [...]
>> --- a/src/qemu/qemu_domain.c
>> +++ b/src/qemu/qemu_domain.c
>> @@ -5956,6 +5956,75 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver,
>>      return ret;
>>  }
>> +/**
>> + * qemuDomainGetVcpuHalted:
>> + * @vm: domain object
>> + * @vcpu: cpu id
>> + *
>> + * Returns the vCPU halted state.
>> + */
>> +bool
>> +qemuDomainGetVcpuHalted(virDomainObjPtr vm,
>> +                        unsigned int vcpuid)
>> +{
>> +    virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, vcpuid);
>> +    return QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->halted;
>> +}
>> +
>> +/**
>> + * qemuDomainRefreshVcpuHalted:
>> + * @driver: qemu driver data
>> + * @vm: domain object
>> + * @asyncJob: current asynchronous job type
>> + *
>> + * Updates vCPU halted state in the private data of @vm.
>> + *
>> + * Returns number of detected vCPUs on success, -1 on error and reports
>> + * an appropriate error, -2 if the domain doesn't exist any more.
> Neither of the callers checks -1 or -2, just < 0, so is this really
> necessary?
Yes, this is nonsense. IIRC this was needed in the first version of the series to differentiate the two cases.
>> + */
>> +int
>> +qemuDomainRefreshVcpuHalted(virQEMUDriverPtr driver,
>> +                            virDomainObjPtr vm,
>> +                            int asyncJob)
>> +{
>> +    virDomainVcpuDefPtr vcpu;
>> +    qemuMonitorCPUInfoPtr info = NULL;
>> +    size_t maxvcpus = virDomainDefGetVcpusMax(vm->def);
>> +    size_t i;
>> +    bool hotplug;
>> +    int rc;
>> +    int ret = -1;
>> +
>> +    /* Not supported currently for TCG, see qemuDomainRefreshVcpuInfo */
>> +    if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU)
>> +        return 0;
> Since the "default" is halted = true, could we run into a situation
> where "halted" (or "running (inactive)") is always set for TCG...
Good point: "halted", as the new kid in town, should actually be false by default. As you have observed, the issue is introduced in patch 2/4 (monitors).

[...]
-- 
Mit freundlichen Grüßen/Kind Regards
   Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Köderitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294

Refresh the VCPU halted states in API functions returning domain VCPU
state information to make sure it's current. This affects
qemuDomainGetVcpus and qemuDomainGetStatsVcpu.

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Hao QingFeng <haoqf@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
---
 src/qemu/qemu_driver.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e29180d..7105d26 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1478,13 +1478,17 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm,
         virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, i);
         pid_t vcpupid = qemuDomainGetVcpuPid(vm, i);
         virVcpuInfoPtr vcpuinfo = info + ncpuinfo;
+        bool vcpuhalted = qemuDomainGetVcpuHalted(vm, i);

         if (!vcpu->online)
             continue;

         if (info) {
             vcpuinfo->number = i;
-            vcpuinfo->state = VIR_VCPU_RUNNING;
+            if (vcpuhalted)
+                vcpuinfo->state = VIR_VCPU_HALTED;
+            else
+                vcpuinfo->state = VIR_VCPU_RUNNING;

             if (qemuGetProcessInfo(&vcpuinfo->cpuTime,
                                    &vcpuinfo->cpu, NULL,
@@ -5370,6 +5374,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
                    unsigned char *cpumaps,
                    int maplen)
 {
+    virQEMUDriverPtr driver = dom->conn->privateData;
     virDomainObjPtr vm;
     int ret = -1;

@@ -5385,6 +5390,13 @@ qemuDomainGetVcpus(virDomainPtr dom,
         goto cleanup;
     }

+    if (qemuDomainRefreshVcpuHalted(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s",
+                       _("could not refresh CPU states"));
+        goto cleanup;
+    }
+
     ret = qemuDomainHelperGetVcpus(vm, info, NULL, maxinfo, cpumaps, maplen);

  cleanup:
@@ -18863,7 +18875,7 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,


 static int
-qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
                        virDomainObjPtr dom,
                        virDomainStatsRecordPtr record,
                        int *maxparams,
@@ -18893,6 +18905,13 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
         VIR_ALLOC_N(cpuwait, virDomainDefGetVcpus(dom->def)) < 0)
         goto cleanup;

+    if (qemuDomainRefreshVcpuHalted(driver, dom, QEMU_ASYNC_JOB_NONE) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s",
+                       _("could not refresh CPU states"));
+        goto cleanup;
+    }
+
     if (qemuDomainHelperGetVcpus(dom, cpuinfo, cpuwait,
                                  virDomainDefGetVcpus(dom->def),
                                  NULL, 0) < 0) {
-- 
1.9.1

On 09/20/2016 04:11 AM, Viktor Mihajlovski wrote:
> Refresh the VCPU halted states in API functions returning domain VCPU
> state information to make sure it's current. This affects
> qemuDomainGetVcpus and qemuDomainGetStatsVcpu
> Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
> Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
> Reviewed-by: Hao QingFeng <haoqf@linux.vnet.ibm.com>
> Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
> ---
>  src/qemu/qemu_driver.c | 23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index e29180d..7105d26 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -1478,13 +1478,17 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm,
>          virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, i);
>          pid_t vcpupid = qemuDomainGetVcpuPid(vm, i);
>          virVcpuInfoPtr vcpuinfo = info + ncpuinfo;
> +        bool vcpuhalted = qemuDomainGetVcpuHalted(vm, i);
>
>          if (!vcpu->online)
>              continue;
>
>          if (info) {
>              vcpuinfo->number = i;
> -            vcpuinfo->state = VIR_VCPU_RUNNING;
> +            if (vcpuhalted)
> +                vcpuinfo->state = VIR_VCPU_HALTED;
And this causes the client to see "halted" even though the vcpu may be running, but just not busy. Also if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU), then we'll always be halted since qemuDomainRefreshVcpuHalted will avoid the refetch of data.
> +            else
> +                vcpuinfo->state = VIR_VCPU_RUNNING;
>              if (qemuGetProcessInfo(&vcpuinfo->cpuTime,
>                                     &vcpuinfo->cpu, NULL,
> @@ -5370,6 +5374,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
>                     unsigned char *cpumaps,
>                     int maplen)
>  {
The opposite end of virDomainGetVcpus a/k/a 'virsh vcpuinfo'
> +    virQEMUDriverPtr driver = dom->conn->privateData;
>      virDomainObjPtr vm;
>      int ret = -1;
>
> @@ -5385,6 +5390,13 @@ qemuDomainGetVcpus(virDomainPtr dom,
>          goto cleanup;
>      }
>
> +    if (qemuDomainRefreshVcpuHalted(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR,
> +                       "%s",
> +                       _("could not refresh CPU states"));
This overwrites whatever message qemuDomainRefreshVcpuHalted should have generated. Besides, the "%s" could be on the previous line...
> +        goto cleanup;
> +    }
> +
>      ret = qemuDomainHelperGetVcpus(vm, info, NULL, maxinfo, cpumaps, maplen);
>
>   cleanup:
> @@ -18863,7 +18875,7 @@ qemuDomainGetStatsBalloon(virQEMUDriverPtr driver,
>
>  static int
> -qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
> +qemuDomainGetStatsVcpu(virQEMUDriverPtr driver,
>                         virDomainObjPtr dom,
>                         virDomainStatsRecordPtr record,
>                         int *maxparams,
> @@ -18893,6 +18905,13 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
>          VIR_ALLOC_N(cpuwait, virDomainDefGetVcpus(dom->def)) < 0)
>          goto cleanup;
>
> +    if (qemuDomainRefreshVcpuHalted(driver, dom, QEMU_ASYNC_JOB_NONE) < 0) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR,
> +                       "%s",
> +                       _("could not refresh CPU states"));
Same comment.

John
> +        goto cleanup;
> +    }
> +
>      if (qemuDomainHelperGetVcpus(dom, cpuinfo, cpuwait,
>                                   virDomainDefGetVcpus(dom->def),
>                                   NULL, 0) < 0) {

On 29.09.2016 16:35, John Ferlan wrote: [...]
>> --- a/src/qemu/qemu_driver.c
>> +++ b/src/qemu/qemu_driver.c
>> @@ -1478,13 +1478,17 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm,
>>          virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(vm->def, i);
>>          pid_t vcpupid = qemuDomainGetVcpuPid(vm, i);
>>          virVcpuInfoPtr vcpuinfo = info + ncpuinfo;
>> +        bool vcpuhalted = qemuDomainGetVcpuHalted(vm, i);
>>
>>          if (!vcpu->online)
>>              continue;
>>
>>          if (info) {
>>              vcpuinfo->number = i;
>> -            vcpuinfo->state = VIR_VCPU_RUNNING;
>> +            if (vcpuhalted)
>> +                vcpuinfo->state = VIR_VCPU_HALTED;
> And this causes the client to see "halted" even though the vcpu may be
> running, but just not busy.
> Also if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU), then we'll always
> be halted since qemuDomainRefreshVcpuHalted will avoid the refetch of
> data.
Agree: as discussed in 3/4, this is the wrong default for vcpuhalted.
>> +            else
>> +                vcpuinfo->state = VIR_VCPU_RUNNING;
>>              if (qemuGetProcessInfo(&vcpuinfo->cpuTime,
>>                                     &vcpuinfo->cpu, NULL,
>> @@ -5370,6 +5374,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
>>                     unsigned char *cpumaps,
>>                     int maplen)
>>  {
> The opposite end of virDomainGetVcpus a/k/a 'virsh vcpuinfo'
>> +    virQEMUDriverPtr driver = dom->conn->privateData;
>>      virDomainObjPtr vm;
>>      int ret = -1;
>>
>> @@ -5385,6 +5390,13 @@ qemuDomainGetVcpus(virDomainPtr dom,
>>          goto cleanup;
>>      }
>>
>> +    if (qemuDomainRefreshVcpuHalted(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) {
>> +        virReportError(VIR_ERR_INTERNAL_ERROR,
>> +                       "%s",
>> +                       _("could not refresh CPU states"));
> This overwrites what message qemuDomainRefreshVcpuHalted should have
> generated. Besides the "%s", could be on the previous line...
Yeah, rebase damage (similar to rc = 2).

[...]

Now, the comments are rather easy to incorporate in a V3 of this series,
but the main question for me is how to address your concerns about
exposing the idleness of x86 vcpus?

On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
> Currently, the virVcpuInfo returned by virDomainGetVcpus() will always
> report a state of VIR_VCPU_RUNNING for each defined domain vcpu even if
> the vcpu is currently in the halted state.
> As the monitor interface is in fact reporting the accurate state, it is
> rather easy to transport this information with the existing API.
> This is done by
> - adding a new state of VIR_VCPU_HALTED
> - extending the monitor to pass back the halted state for the vcpus
> - adding a new field to the private domain vcpu object reflecting the
>   halted state for the vcpu
> - modifying the driver code to report the vcpu state based on the
>   halted indicator
> - extending virsh vcpuinfo to also display the halted state
> The vcpu state is however not recorded in the internal XML format,
> since the state can change asynchronously (without notification).
> V2 is a rebase on top of Peter Krempa's CPU hotplug modernization.
So, I have a question based on a little bit of testing I did with one of
my guests and reading up on the qemu qapi-schema.json, which states:

  # Notes: @halted is a transient state that changes frequently.  By the
  #        time the data is sent to the client, the guest may no longer
  #        be halted.

It seems "halted" is returned whenever a vCPU is not actively processing
anything (or not scheduled), which I suppose is "expected" for idle
guests; however, if I used the vcpuinfo command and saw:

  # virsh vcpuinfo $dom
  VCPU:           0
  CPU:            7
  State:          halted
  CPU time:       84.8s
  CPU Affinity:   yyyyyyyy
  ...

I might be concerned that it's not "running", or "running (active)" vs.
"running (inactive)" (or paused, or waiting, or something else to
indicate not actively processing/scheduled). The halted state could be
the "norm". So is this more of a "stats" type value vs. purely an "info"
type value?

Also, considering hotplug differences w/ CPUs for x86, ppc, s390, and
arm - could this be some sort of architecture difference too (I'm using
x86)?

Primarily I'm concerned that the transient nature of the field, based on
whether something is scheduled for the thread, could lead to some
"erroneous" bug reports, especially since "running" may not be the
dominant state any more.

John
Viktor Mihajlovski (4):
  domain: Add new VCPU state "halted"
  qemu: Add monitor support for CPU halted state
  qemu: Add domain support for VCPU halted state
  qemu: Ensure reported VCPU state is current in driver API
 include/libvirt/libvirt-domain.h |  1 +
 src/qemu/qemu_domain.c           | 69 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h           |  5 +++
 src/qemu/qemu_driver.c           | 23 ++++++++++++--
 src/qemu/qemu_monitor.c          |  6 +++-
 src/qemu/qemu_monitor.h          |  5 +++
 src/qemu/qemu_monitor_json.c     |  3 ++
 src/qemu/qemu_monitor_text.c     |  8 ++++-
 tests/qemumonitorjsontest.c      |  8 ++---
 tools/virsh-domain.c             |  3 +-
 10 files changed, 122 insertions(+), 9 deletions(-)

On Thu, Sep 29, 2016 at 10:30:07 -0400, John Ferlan wrote:
On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
Currently, the virVcpuInfo returned by virDomainGetVcpus() will always report a state of VIR_VCPU_RUNNING for each defined domain vcpu even if the vcpu is currently in the halted state.
As the monitor interface is in fact reporting the accurate state, it is rather easy to transport this information with the existing API.
This is done by
- adding a new state of VIR_VCPU_HALTED
- extending the monitor to pass back the halted state for the vcpus
- adding a new field to the private domain vcpu object reflecting the halted state for the vcpu
- modifying the driver code to report the vcpu state based on the halted indicator
- extending virsh vcpuinfo to also display the halted state
The vcpu state is however not recorded in the internal XML format, since the state can change asynchronously (without notification).
V2 is a rebase on top of Peter Krempa's CPU hotplug modernization.
So, I have a question based on a little bit of testing I did with one of my guests and reading up on the qemu qapi-schema.json which states:
# Notes: @halted is a transient state that changes frequently. By the time the
# data is sent to the client, the guest may no longer be halted.
It seems "halted" is returned whenever a vCPU is not actively processing anything (or not scheduled), which I suppose is "expected" for idle guests; however, if I used the vcpuinfo command and saw:
As was pointed out in the previous posting, the "halted" state has different meanings on certain arches. On Intel it indicates that the CPU is idle and waiting. On s390 it is supposed to mean that the guest has not started using it yet.
# virsh vcpuinfo $dom
VCPU:           0
CPU:            7
State:          halted
CPU time:       84.8s
CPU Affinity:   yyyyyyyy
I was worried that this exact change would happen at least for x86. I don't think we should do this. This would become misleading to many users.
...
I might be concerned that it's not "running" or "running (active)" vs. "running (inactive)" (or paused or waiting or something to indicate not actively processing/scheduled). The halted state could be the "norm".
So is this more of a "stats" type value vs. purely an "info" type value? Also, considering hotplug differences w/ CPUs for x86, ppc, s390, and arm - could this be some sort of architecture difference too (I'm using x86)?
See above
Primarily I'm concerned that the transient nature of the field, based on whether something is scheduled for the thread, could lead to some "erroneous" bug reports, especially since "running" may not be the dominant state any more.
I fully agree. This does not seem to be a good thing to do mainly because of the x86 implications. I don't think that the benefit for s390 hotplug would outweigh the semantics change for the running state in any way.

Peter

On 09/29/2016 11:04 AM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 10:30:07 -0400, John Ferlan wrote:
On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
Currently, the virVcpuInfo returned by virDomainGetVcpus() will always report a state of VIR_VCPU_RUNNING for each defined domain vcpu even if the vcpu is currently in the halted state.
As the monitor interface is in fact reporting the accurate state, it is rather easy to transport this information with the existing API.
This is done by
- adding a new state of VIR_VCPU_HALTED
- extending the monitor to pass back the halted state for the vcpus
- adding a new field to the private domain vcpu object reflecting the halted state for the vcpu
- modifying the driver code to report the vcpu state based on the halted indicator
- extending virsh vcpuinfo to also display the halted state
The vcpu state is however not recorded in the internal XML format, since the state can change asynchronously (without notification).
V2 is a rebase on top of Peter Krempa's CPU hotplug modernization.
So, I have a question based on a little bit of testing I did with one of my guests and reading up on the qemu qapi-schema.json which states:
# Notes: @halted is a transient state that changes frequently. By the time the
# data is sent to the client, the guest may no longer be halted.
It seems "halted" is returned whenever a vCPU is not actively processing anything (or not scheduled), which I suppose is "expected" for idle guests; however, if I used the vcpuinfo command and saw:
As was pointed out in the previous posting, the "halted" state has different meanings on certain arches. On Intel it indicates that the CPU is idle and waiting. On s390 it is supposed to mean that the guest has not started using it yet.
Hence the sly comment to document the various differences! It seems this may be more of an "idle" or "active" detector for x86. If it were to be added, then a new "stats" value would be created rather than sharing/using the existing State, which is really a detector of whether a vcpu is running or offline for most hypervisors (Xen added BLOCKED).
# virsh vcpuinfo $dom
VCPU:           0
CPU:            7
State:          halted
CPU time:       84.8s
CPU Affinity:   yyyyyyyy
I was worried that this exact change would happen at least for x86. I don't think we should do this. This would become misleading to many users.
...
I might be concerned that it's not "running" or "running (active)" vs. "running (inactive)" (or paused or waiting or something to indicate not actively processing/scheduled). The halted state could be the "norm".
So is this more of a "stats" type value vs. purely an "info" type value? Also, considering hotplug differences w/ CPUs for x86, ppc, s390, and arm - could this be some sort of architecture difference too (I'm using x86)?
See above
Primarily I'm concerned that the transient nature of the field, based on whether something is scheduled for the thread, could lead to some "erroneous" bug reports, especially since "running" may not be the dominant state any more.
I fully agree. This does not seem to be a good thing to do mainly because of the x86 implications. I don't think that the benefit for s390 hotplug would outweigh the semantics change for the running state in any way.
Again - if something were to be added, then it's a stats-only value. That value would need to be well described for the various CPU architectures (and it'd be QEMU-biased). We cannot extend _virVcpuInfo, since it's allocated as a contiguous array and there's no way to tell which side has which version of the struct.

John

On 29.09.2016 21:44, John Ferlan wrote:
On 09/29/2016 11:04 AM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 10:30:07 -0400, John Ferlan wrote:
On 09/20/2016 04:10 AM, Viktor Mihajlovski wrote:
[...]
So, I have a question based on a little bit of testing I did with one of my guests and reading up on the qemu qapi-schema.json which states:
# Notes: @halted is a transient state that changes frequently. By the time the
# data is sent to the client, the guest may no longer be halted.
It seems "halted" is returned whenever a vCPU is not actively processing anything (or not scheduled), which I suppose is "expected" for idle guests; however, if I used the vcpuinfo command and saw:
As was pointed out in the previous posting, the "halted" state has different meanings on certain arches. On Intel it indicates that the CPU is idle and waiting. On s390 it is supposed to mean that the guest has not started using it yet.
Hence the sly comment to document the various differences!
It seems this may be more of an "idle" or "active" detector for x86. If it were to be added, then a new "stats" value would be created rather than sharing/using the existing State, which is really a detector of whether a vcpu is running or offline for most hypervisors (Xen added BLOCKED).
# virsh vcpuinfo $dom
VCPU:           0
CPU:            7
State:          halted
CPU time:       84.8s
CPU Affinity:   yyyyyyyy
I was worried that this exact change would happen at least for x86. I don't think we should do this. This would become misleading to many users.
...
I might be concerned that it's not "running" or "running (active)" vs. "running (inactive)" (or paused or waiting or something to indicate not actively processing/scheduled). The halted state could be the "norm".
Would your concerns be somewhat relieved by using a different term like "idle" or "running (idle)", which is what the state means on x86 for all practical purposes?

"halted" as reported by QAPI is definitely a state indication and as such does qualify IMO to be represented in virVcpuInfo. And on s390x it basically means offline, in the sense that it will not be used by the OS. A state should not be re-interpreted as a metric value just because of its volatility (which is platform dependent). I partly understand the concerns about potential confusion, which could be remedied by proper documentation or better terminology.

--
Mit freundlichen Grüßen/Kind Regards
   Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Martina Köderitz
Management: Dirk Wittkopp
Registered office: Böblingen
Registration court: Amtsgericht Stuttgart, HRB 243294