[PATCH 0/4] Implement virsh hypervisor-cpu-models

Allows querying hypervisor-known CPU models via the simple command
"virsh hypervisor-cpu-models". For the QEMU driver, the models are
queried via the capabilities file. Each model is printed to the terminal
on its own line, similar to the cpu-models command; the listing is
unordered.

There is a related libvirt-python merge request at
https://gitlab.com/libvirt/libvirt-python/-/merge_requests/159

David Judkovics (4):
  libvirt: Introduce virConnectGetHypervisorCPUModelNames public API
  remote: Implement virConnectGetHypervisorCPUModelNames
  qemu_driver: Implement qemuConnectGetHypervisorCPUModelNames
  virsh: Introduce new hypervisor-cpu-models command

 docs/manpages/virsh.rst             | 20 ++++++++
 include/libvirt/libvirt-host.h      |  7 +++
 src/driver-hypervisor.h             | 10 ++++
 src/libvirt-host.c                  | 65 +++++++++++++++++++++++++
 src/libvirt_public.syms             |  5 ++
 src/qemu/qemu_driver.c              | 59 +++++++++++++++++++++++
 src/remote/remote_daemon_dispatch.c | 62 ++++++++++++++++++++++++
 src/remote/remote_driver.c          | 52 ++++++++++++++++++++
 src/remote/remote_protocol.x        | 26 +++++++++-
 src/remote_protocol-structs         | 16 +++++++
 tools/virsh-host.c                  | 74 +++++++++++++++++++++++++++++
 11 files changed, 395 insertions(+), 1 deletion(-)

-- 
2.47.0

From: David Judkovics <djudkovi@linux.ibm.com>

The new API collects a list of CPU model names supported by the
specified hypervisor. This is a more useful version of
virConnectGetCPUModelNames, which does not consider any hypervisor
capabilities when querying model names.

Signed-off-by: David Judkovics <djudkovi@linux.ibm.com>
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
 include/libvirt/libvirt-host.h |  7 ++++
 src/driver-hypervisor.h        | 10 ++++++
 src/libvirt-host.c             | 65 ++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms        |  5 +++
 4 files changed, 87 insertions(+)

diff --git a/include/libvirt/libvirt-host.h b/include/libvirt/libvirt-host.h
index 3112f2b676..67f57cdd16 100644
--- a/include/libvirt/libvirt-host.h
+++ b/include/libvirt/libvirt-host.h
@@ -961,6 +961,13 @@ int virConnectGetCPUModelNames(virConnectPtr conn,
                                const char *arch,
                                char ***models,
                                unsigned int flags);
+int virConnectGetHypervisorCPUModelNames(virConnectPtr conn,
+                                         const char *emulator,
+                                         const char *arch,
+                                         const char *machine,
+                                         const char *virttype,
+                                         char ***models,
+                                         unsigned int flags);
 
 /**
  * virConnectBaselineCPUFlags:
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 4ce8da078d..ed42e3553d 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -1453,6 +1453,15 @@ typedef int
                          unsigned int type,
                          unsigned int flags);
 
+typedef int
+(*virDrvConnectGetHypervisorCPUModelNames)(virConnectPtr conn,
+                                           const char *emulator,
+                                           const char *arch,
+                                           const char *machine,
+                                           const char *virttype,
+                                           char ***models,
+                                           unsigned int flags);
+
 typedef struct _virHypervisorDriver virHypervisorDriver;
 
 /**
@@ -1726,4 +1735,5 @@ struct _virHypervisorDriver {
     virDrvDomainStartDirtyRateCalc domainStartDirtyRateCalc;
     virDrvDomainFDAssociate domainFDAssociate;
     virDrvDomainGraphicsReload domainGraphicsReload;
+    virDrvConnectGetHypervisorCPUModelNames connectGetHypervisorCPUModelNames;
 };
diff --git a/src/libvirt-host.c b/src/libvirt-host.c
index b3a6421a7f..ba845a7126 100644
--- a/src/libvirt-host.c
+++ b/src/libvirt-host.c
@@ -1222,6 +1222,71 @@ virConnectGetCPUModelNames(virConnectPtr conn, const char *arch, char ***models,
 }
 
 
+/**
+ * virConnectGetHypervisorCPUModelNames:
+ *
+ * @conn: pointer to the hypervisor connection
+ * @emulator: path to the emulator binary
+ * @arch: CPU architecture
+ * @machine: machine type
+ * @virttype: virtualization type
+ * @models: Pointer to a variable to store the NULL-terminated array of the
+ *          CPU models supported by the hypervisor. Each element and the
+ *          array itself must be freed by the caller with free. Pass NULL
+ *          if only the list length is needed.
+ * @flags: extra flags; not used yet, so callers should always pass 0.
+ *
+ * Get the list of CPU models recognized by the hypervisor for a specific
+ * architecture. Note that even if the hypervisor reports a particular CPU
+ * model, hardware limitations may impose restrictions on which CPU models
+ * may be supported on the host (e.g. on s390 the hypervisor may report
+ * model gen15a, but this model will not run on an older machine such as
+ * z14).
+ *
+ * Returns -1 on error, number of elements in @models on success.
+ *
+ * Since: 11.1.0
+ */
+int
+virConnectGetHypervisorCPUModelNames(virConnectPtr conn,
+                                     const char *emulator,
+                                     const char *arch,
+                                     const char *machine,
+                                     const char *virttype,
+                                     char ***models,
+                                     unsigned int flags)
+{
+    VIR_DEBUG("conn=%p, emulator=%s, arch=%s, "
+              "machine=%s, virttype=%s, flags=0x%x",
+              conn, NULLSTR(emulator), NULLSTR(arch),
+              NULLSTR(machine), NULLSTR(virttype), flags);
+
+    virResetLastError();
+    virCheckConnectReturn(conn, -1);
+
+    if (conn->driver->connectGetHypervisorCPUModelNames) {
+        int nmodels;
+
+        nmodels = conn->driver->connectGetHypervisorCPUModelNames(conn,
+                                                                  emulator,
+                                                                  arch,
+                                                                  machine,
+                                                                  virttype,
+                                                                  models,
+                                                                  flags);
+        if (nmodels < 0)
+            goto error;
+
+        return nmodels;
+    }
+
+    virReportUnsupportedError();
+
+ error:
+    virDispatchError(conn);
+    return -1;
+}
+
+
 /**
  * virConnectBaselineCPU:
  *
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 7a3492d9d7..88f33427cc 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -948,4 +948,9 @@ LIBVIRT_10.2.0 {
     virDomainGraphicsReload;
 } LIBVIRT_10.1.0;
 
+LIBVIRT_11.1.0 {
+    global:
+        virConnectGetHypervisorCPUModelNames;
+} LIBVIRT_10.2.0;
+
 # .... define new API here using predicted next version number ....
-- 
2.47.0

From: David Judkovics <djudkovi@linux.ibm.com>

Add access to virConnectGetHypervisorCPUModelNames when running against
a remote machine.

Signed-off-by: David Judkovics <djudkovi@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
 src/remote/remote_daemon_dispatch.c | 62 +++++++++++++++++++++++++++++
 src/remote/remote_driver.c          | 52 ++++++++++++++++++++++++
 src/remote/remote_protocol.x        | 26 +++++++++++-
 src/remote_protocol-structs         | 16 ++++++++
 4 files changed, 155 insertions(+), 1 deletion(-)

diff --git a/src/remote/remote_daemon_dispatch.c b/src/remote/remote_daemon_dispatch.c
index e812f5c3e9..2d3db1c74e 100644
--- a/src/remote/remote_daemon_dispatch.c
+++ b/src/remote/remote_daemon_dispatch.c
@@ -5924,6 +5924,68 @@ remoteDispatchConnectGetCPUModelNames(virNetServer *server G_GNUC_UNUSED,
 }
 
 
+static int
+remoteDispatchConnectGetHypervisorCPUModelNames(virNetServer *server G_GNUC_UNUSED,
+                                                virNetServerClient *client,
+                                                virNetMessage *msg G_GNUC_UNUSED,
+                                                struct virNetMessageError *rerr,
+                                                remote_connect_get_hypervisor_cpu_model_names_args *args,
+                                                remote_connect_get_hypervisor_cpu_model_names_ret *ret)
+{
+    int len, rv = -1;
+    char *emulator;
+    char *arch;
+    char *machine;
+    char *virttype;
+    g_auto(GStrv) models = NULL;
+    virConnectPtr conn = remoteGetHypervisorConn(client);
+
+    if (!conn)
+        goto cleanup;
+
+    emulator = args->emulator ? *args->emulator : NULL;
+    arch = args->arch ? *args->arch : NULL;
+    machine = args->machine ? *args->machine : NULL;
+    virttype = args->virttype ? *args->virttype : NULL;
+
+    len = virConnectGetHypervisorCPUModelNames(conn,
+                                               emulator,
+                                               arch,
+                                               machine,
+                                               virttype,
+                                               args->need_results ? &models : NULL,
+                                               args->flags);
+    if (len < 0)
+        goto cleanup;
+
+    if (len > REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX) {
+        virReportError(VIR_ERR_RPC,
+                       _("Too many CPU models '%1$d' for limit '%2$d'"),
+                       len, REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX);
+        goto cleanup;
+    }
+
+    if (len && models) {
+        ret->models.models_val = g_steal_pointer(&models);
+        ret->models.models_len = len;
+    } else {
+        ret->models.models_val = NULL;
+        ret->models.models_len = 0;
+    }
+
+    ret->ret = len;
+    rv = 0;
+
+ cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    return rv;
+}
+
+
 static int
 remoteDispatchDomainCreateXMLWithFiles(virNetServer *server G_GNUC_UNUSED,
                                        virNetServerClient *client,
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 307f9ca945..f5e2c88723 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -5865,6 +5865,57 @@ remoteConnectGetCPUModelNames(virConnectPtr conn,
 }
 
 
+static int
+remoteConnectGetHypervisorCPUModelNames(virConnectPtr conn,
+                                        const char *emulator,
+                                        const char *arch,
+                                        const char *machine,
+                                        const char *virttype,
+                                        char ***models,
+                                        unsigned int flags)
+{
+    size_t i;
+    g_auto(GStrv) retmodels = NULL;
+    remote_connect_get_hypervisor_cpu_model_names_args args = {0};
+    g_auto(remote_connect_get_hypervisor_cpu_model_names_ret) ret = {0};
+    struct private_data *priv = conn->privateData;
+    VIR_LOCK_GUARD lock = remoteDriverLock(priv);
+
+    args.emulator = emulator ? (char **)&emulator : NULL;
+    args.arch = arch ? (char **)&arch : NULL;
+    args.machine = machine ? (char **)&machine : NULL;
+    args.virttype = virttype ? (char **)&virttype : NULL;
+    args.need_results = !!models;
+    args.flags = flags;
+
+    if (call(conn, priv, 0, REMOTE_PROC_CONNECT_GET_HYPERVISOR_CPU_MODEL_NAMES,
+             (xdrproc_t) xdr_remote_connect_get_hypervisor_cpu_model_names_args,
+             (char *) &args,
+             (xdrproc_t) xdr_remote_connect_get_hypervisor_cpu_model_names_ret,
+             (char *) &ret) < 0)
+        return -1;
+
+    /* Check the length of the returned list carefully. */
+    if (ret.models.models_len > REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX) {
+        virReportError(VIR_ERR_RPC,
+                       _("Too many model names '%1$d' for limit '%2$d'"),
+                       ret.models.models_len,
+                       REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX);
+        return -1;
+    }
+
+    if (models) {
+        retmodels = g_new0(char *, ret.models.models_len + 1);
+        for (i = 0; i < ret.models.models_len; i++)
+            retmodels[i] = g_steal_pointer(&ret.models.models_val[i]);
+        *models = g_steal_pointer(&retmodels);
+    }
+
+    return ret.ret;
+}
+
+
 static int
 remoteDomainOpenGraphics(virDomainPtr dom,
                          unsigned int idx,
@@ -7815,6 +7866,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainSetLifecycleAction = remoteDomainSetLifecycleAction, /* 3.9.0 */
     .connectCompareHypervisorCPU = remoteConnectCompareHypervisorCPU, /* 4.4.0 */
     .connectBaselineHypervisorCPU = remoteConnectBaselineHypervisorCPU, /* 4.4.0 */
+    .connectGetHypervisorCPUModelNames = remoteConnectGetHypervisorCPUModelNames, /* 11.1.0 */
    .nodeGetSEVInfo = remoteNodeGetSEVInfo, /* 4.5.0 */
     .domainGetLaunchSecurityInfo = remoteDomainGetLaunchSecurityInfo, /* 4.5.0 */
     .domainCheckpointCreateXML = remoteDomainCheckpointCreateXML, /* 5.6.0 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 41c045ff78..3daf991e2d 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -239,6 +239,9 @@ const REMOTE_DOMAIN_JOB_STATS_MAX = 64;
 /* Upper limit on number of CPU models */
 const REMOTE_CONNECT_CPU_MODELS_MAX = 8192;
 
+/* Upper limit on number of hypervisor CPU models */
+const REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX = 256;
+
 /* Upper limit on number of mountpoints to frozen */
 const REMOTE_DOMAIN_FSFREEZE_MOUNTPOINTS_MAX = 256;
 
@@ -3679,6 +3682,20 @@ struct remote_connect_baseline_hypervisor_cpu_ret {
     remote_nonnull_string cpu;
 };
 
+struct remote_connect_get_hypervisor_cpu_model_names_args {
+    remote_string emulator;
+    remote_string arch;
+    remote_string machine;
+    remote_string virttype;
+    int need_results;
+    unsigned int flags;
+};
+
+struct remote_connect_get_hypervisor_cpu_model_names_ret {
+    remote_nonnull_string models<REMOTE_CONNECT_HYPERVISOR_CPU_MODELS_MAX>;
+    int ret;
+};
+
 struct remote_node_get_sev_info_args {
     int nparams;
     unsigned int flags;
@@ -7048,5 +7065,12 @@ enum remote_procedure {
      * @generate: both
      * @acl: domain:write
      */
-    REMOTE_PROC_DOMAIN_GRAPHICS_RELOAD = 448
+    REMOTE_PROC_DOMAIN_GRAPHICS_RELOAD = 448,
+
+    /**
+     * @generate: none
+     * @priority: high
+     * @acl: connect:write
+     */
+    REMOTE_PROC_CONNECT_GET_HYPERVISOR_CPU_MODEL_NAMES = 449
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 4d3dc2d249..6d354ab6cc 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -3031,6 +3031,21 @@ struct remote_connect_baseline_hypervisor_cpu_args {
 struct remote_connect_baseline_hypervisor_cpu_ret {
     remote_nonnull_string cpu;
 };
+struct remote_connect_get_hypervisor_cpu_model_names_args {
+    remote_string emulator;
+    remote_string arch;
+    remote_string machine;
+    remote_string virttype;
+    int need_results;
+    u_int flags;
+};
+struct remote_connect_get_hypervisor_cpu_model_names_ret {
+    struct {
+        u_int models_len;
+        remote_nonnull_string * models_val;
+    } models;
+    int ret;
+};
 struct remote_node_get_sev_info_args {
     int nparams;
     u_int flags;
@@ -3755,4 +3770,5 @@ enum remote_procedure {
     REMOTE_PROC_NETWORK_EVENT_CALLBACK_METADATA_CHANGE = 446,
     REMOTE_PROC_NODE_DEVICE_UPDATE = 447,
     REMOTE_PROC_DOMAIN_GRAPHICS_RELOAD = 448,
+    REMOTE_PROC_CONNECT_GET_HYPERVISOR_CPU_MODEL_NAMES = 449,
 };
-- 
2.47.0

From: David Judkovics <djudkovi@linux.ibm.com>

This function is utilized by the new virsh hypervisor-cpu-models
command. The CPU models are read directly from the QEMU capabilities
file, which contains a list of all models queried from the hypervisor.

Signed-off-by: David Judkovics <djudkovi@linux.ibm.com>
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
 src/qemu/qemu_driver.c | 59 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d2eddbd9ae..1a795cebd0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -11948,6 +11948,64 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 }
 
 
+static int
+qemuConnectGetHypervisorCPUModelNames(virConnectPtr conn,
+                                      const char *emulator,
+                                      const char *archStr,
+                                      const char *machine,
+                                      const char *virttypeStr,
+                                      char ***models,
+                                      unsigned int flags)
+{
+    virQEMUDriver *driver = conn->privateData;
+    g_autoptr(virQEMUCaps) qemuCaps = NULL;
+    virArch arch;
+    virDomainVirtType virttype;
+    g_autoptr(virDomainCaps) domCaps = NULL;
+    size_t i = 0;
+    virDomainCapsCPUModel *customModels = NULL;
+    int customNumModels;
+
+    virCheckFlags(0, -1);
+
+    if (virConnectGetHypervisorCPUModelNamesEnsureACL(conn) < 0)
+        return -1;
+
+    qemuCaps = virQEMUCapsCacheLookupDefault(driver->qemuCapsCache,
+                                             emulator,
+                                             archStr,
+                                             virttypeStr,
+                                             machine,
+                                             &arch,
+                                             &virttype,
+                                             &machine);
+    if (!qemuCaps)
+        return -1;
+
+    if (!(domCaps = virQEMUDriverGetDomainCapabilities(driver,
+                                                       qemuCaps,
+                                                       machine,
+                                                       arch,
+                                                       virttype)))
+        return -1;
+
+    customModels = domCaps->cpu.custom->models;
+    customNumModels = domCaps->cpu.custom->nmodels;
+
+    if (models) {
+        /* one extra slot so the returned array is NULL-terminated, as
+         * documented for virConnectGetHypervisorCPUModelNames */
+        *models = g_new0(char *, customNumModels + 1);
+        for (i = 0; i < customNumModels; i++) {
+            (*models)[i] = g_strdup(customModels[i].name);
+            VIR_DEBUG("adding models[%zu] = name[%s]",
+                      i, customModels[i].name);
+        }
+    }
+
+    return customNumModels;
+}
+
+
 static int
 qemuDomainGetJobInfoMigrationStats(virDomainObj *vm,
                                    virDomainJobData *jobData)
@@ -20289,6 +20347,7 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainSetLifecycleAction = qemuDomainSetLifecycleAction, /* 3.9.0 */
     .connectCompareHypervisorCPU = qemuConnectCompareHypervisorCPU, /* 4.4.0 */
     .connectBaselineHypervisorCPU = qemuConnectBaselineHypervisorCPU, /* 4.4.0 */
+    .connectGetHypervisorCPUModelNames = qemuConnectGetHypervisorCPUModelNames, /* 11.1.0 */
     .nodeGetSEVInfo = qemuNodeGetSEVInfo, /* 4.5.0 */
     .domainGetLaunchSecurityInfo = qemuDomainGetLaunchSecurityInfo, /* 4.5.0 */
     .domainCheckpointCreateXML = qemuDomainCheckpointCreateXML, /* 5.6.0 */
-- 
2.47.0

From: David Judkovics <djudkovi@linux.ibm.com>

Add new virsh 'host' group command 'hypervisor-cpu-models' to virsh.

Signed-off-by: David Judkovics <djudkovi@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
 docs/manpages/virsh.rst | 20 +++++++++++
 tools/virsh-host.c      | 74 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 94 insertions(+)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index e801037c04..0f165944eb 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -1023,6 +1023,26 @@ listed in the XML description. If *--migratable* is specified, features that
 block migration will not be included in the resulting CPU.
 
 
+hypervisor-cpu-models
+---------------------
+
+**Syntax:**
+
+::
+
+   hypervisor-cpu-models [virttype] [emulator] [arch] [machine]
+
+Print the list of CPU models known by the hypervisor for the specified
+architecture. It is not guaranteed that a listed CPU will run on the host.
+To determine CPU model compatibility with the host, see
+``virsh hypervisor-cpu-baseline`` and ``virsh hypervisor-cpu-compare``.
+
+The *virttype* option specifies the virtualization type (usable in the
+'type' attribute of the <domain> top level element from the domain XML).
+*emulator* specifies the path to the emulator, *arch* specifies the CPU
+architecture, and *machine* specifies the machine type.
+
+
 DOMAIN COMMANDS
 ===============
 
diff --git a/tools/virsh-host.c b/tools/virsh-host.c
index f4e7324f42..7d52f22baa 100644
--- a/tools/virsh-host.c
+++ b/tools/virsh-host.c
@@ -1758,6 +1758,74 @@ cmdHypervisorCPUBaseline(vshControl *ctl,
 }
 
 
+/*
+ * "hypervisor-cpu-models" command
+ */
+static const vshCmdInfo info_hypervisor_cpu_models = {
+    .help = N_("Hypervisor reported CPU models"),
+    .desc = N_("Get the CPU models reported by the hypervisor."),
+};
+
+static const vshCmdOptDef opts_hypervisor_cpu_models[] = {
+    {.name = "virttype",
+     .type = VSH_OT_STRING,
+     .completer = virshDomainVirtTypeCompleter,
+     .help = N_("virtualization type (/domain/@type)"),
+    },
+    {.name = "emulator",
+     .type = VSH_OT_STRING,
+     .help = N_("path to emulator binary (/domain/devices/emulator)"),
+    },
+    {.name = "arch",
+     .type = VSH_OT_STRING,
+     .completer = virshArchCompleter,
+     .help = N_("CPU architecture (/domain/os/type/@arch)"),
+    },
+    {.name = "machine",
+     .type = VSH_OT_STRING,
+     .help = N_("machine type (/domain/os/type/@machine)"),
+    },
+    {.name = NULL}
+};
+
+static bool
+cmdHypervisorCPUModelNames(vshControl *ctl,
+                           const vshCmd *cmd)
+{
+    const char *virttype = NULL;
+    char **models = NULL;
+    const char *emulator = NULL;
+    const char *arch = NULL;
+    const char *machine = NULL;
+    virshControl *priv = ctl->privData;
+    size_t i;
+    int nmodels;
+
+    if (vshCommandOptString(ctl, cmd, "virttype", &virttype) < 0 ||
+        vshCommandOptString(ctl, cmd, "emulator", &emulator) < 0 ||
+        vshCommandOptString(ctl, cmd, "arch", &arch) < 0 ||
+        vshCommandOptString(ctl, cmd, "machine", &machine) < 0)
+        return false;
+
+    nmodels = virConnectGetHypervisorCPUModelNames(priv->conn, emulator, arch,
+                                                   machine, virttype, &models, 0);
+    if (nmodels < 0) {
+        vshError(ctl, "%s", _("failed to get Hypervisor CPU model names"));
+        return false;
+    }
+
+    for (i = 0; i < nmodels; i++) {
+        vshPrint(ctl, "%s\n", models[i]);
+        VIR_FREE(models[i]);
+    }
+    VIR_FREE(models);
+
+    return true;
+}
+
+
 const vshCmdDef hostAndHypervisorCmds[] = {
     {.name = "allocpages",
      .handler = cmdAllocpages,
@@ -1825,6 +1893,12 @@ const vshCmdDef hostAndHypervisorCmds[] = {
      .info = &info_hypervisor_cpu_compare,
      .flags = 0
     },
+    {.name = "hypervisor-cpu-models",
+     .handler = cmdHypervisorCPUModelNames,
+     .opts = opts_hypervisor_cpu_models,
+     .info = &info_hypervisor_cpu_models,
+     .flags = 0
+    },
     {.name = "maxvcpus",
      .handler = cmdMaxvcpus,
      .opts = opts_maxvcpus,
-- 
2.47.0

On Wed, Jan 15, 2025 at 11:34:05AM +0100, Boris Fiuczynski wrote:
Allows for the query of hypervisor-known CPU models via the simple command: virsh hypervisor-cpu-models. For the QEMU driver, the models are queried via the capabilities file. Each model is printed to the terminal on its own line similar to the cpu-models command, and there is no order to the listing.
This is just duplicating what we already expose to users in
virConnectGetDomainCapabilities / virsh domcapabilities.

eg

  # virsh domcapabilities --xpath '//cpu//model/text()'
  Snowridge
  qemu64
  qemu32
  phenom
  pentium3
  pentium2
  pentium
  n270
  kvm64
  kvm32
  coreduo
  core2duo
  athlon
  Westmere-IBRS
  Westmere
  Snowridge
  Skylake-Server-noTSX-IBRS
  Skylake-Server-IBRS
  Skylake-Server
  Skylake-Client-noTSX-IBRS
  Skylake-Client-IBRS
  Skylake-Client
  SapphireRapids
  ...snip...

Yes, we currently have a virConnectGetCPUModelNames(), but when we
introduced the new QEMU-detectable CPU support, we intended for the
virConnectGetDomainCapabilities API to replace the former use case, so
we didn't add a new CPUModelNames API.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On 15/01/2025 11.43, Daniel P. Berrangé wrote:
On Wed, Jan 15, 2025 at 11:34:05AM +0100, Boris Fiuczynski wrote:
Allows for the query of hypervisor-known CPU models via the simple command: virsh hypervisor-cpu-models. For the QEMU driver, the models are queried via the capabilities file. Each model is printed to the terminal on its own line similar to the cpu-models command, and there is no order to the listing.
This is just duplicating what we already expose to users in virConnectGetDomainCapabilities / virsh domcapabilities.
eg
# virsh domcapabilities --xpath '//cpu//model/text()'
Hi Daniel!

From a plain end-user's perspective (which is where I belong in this
case), I think it would be very helpful to have a "virsh
hypervisor-cpu-models" command! I already ran into this problem in the
past: I wanted to get a list of CPU models supported by the hypervisor,
I had to discover that "virsh cpu-models" is completely useless on
s390x, I saw that there are already "hypervisor-cpu-compare" and
"hypervisor-cpu-baseline", and then started wondering why there is
nothing similar for easily getting the list of CPU models here. As an
average user, it can be quite hard to figure out that you have to use
"virsh domcapabilities" for this instead.

Last time this was discussed, you also mentioned that you wouldn't
object to such a command if it's based on GetDomainCapabilities (see
https://marc.info/?i=ZHCZeXbywKCR5jzw@redhat.com), which now seems to be
the case, if I got patch 3/4 right? And considering that other people
have run into this problem already, too (see
https://web.archive.org/web/20230817214310/https://listman.redhat.com/archiv... ),
I'd appreciate it if we could give this a chance, please?

 Thanks,
  Thomas

On Wed, Jan 15, 2025 at 04:07:38PM +0100, Thomas Huth wrote:
On 15/01/2025 11.43, Daniel P. Berrangé wrote:
On Wed, Jan 15, 2025 at 11:34:05AM +0100, Boris Fiuczynski wrote:
Allows for the query of hypervisor-known CPU models via the simple command: virsh hypervisor-cpu-models. For the QEMU driver, the models are queried via the capabilities file. Each model is printed to the terminal on its own line similar to the cpu-models command, and there is no order to the listing.
This is just duplicating what we already expose to users in virConnectGetDomainCapabilities / virsh domcapabilities.
eg
# virsh domcapabilities --xpath '//cpu//model/text()'
Hi Daniel!
From a plain end-users perspective (which I belong to here in this case), I think it would be very helpful to have a "virsh hypervisor-cpu-models" command! I already ran into this problem in the past: I wanted to get a list of CPU models that are supported by the hypervisor, I had to discover that "virsh cpu-models" is completely useless on s390x, I saw that there are already "hypervisor-cpu-compare" and "hypervisor-cpu-baseline" and then started wondering why there is nothing similar for easily getting the list of CPU models here. As an average user, it can be quite hard to figure out that you have to use "virsh domcapabilities" for this instead.
Last time this has been discussed, you also mentioned that you wouldn't object such a command if it's based on GetDomainCapabilities (see https://marc.info/?i=ZHCZeXbywKCR5jzw@redhat.com), which seems now to be the case if I got patch 3/4 right? And considering that other people ran into this problem already, too (see https://web.archive.org/web/20230817214310/https://listman.redhat.com/archiv... ), I'd appreciate if we could give this a chance, please?
Ah, I knew I had deja vu here. Indeed I did say that a virsh command
based on GetDomainCapabilities was OK, but that is not what this series
is doing. This is essentially the same as the old series, adding a new
public virConnectGetHypervisorCPUModelNames API, which virsh then calls.

virsh needs to call virConnectGetDomainCapabilities directly if we want
a 'hypervisor-cpu-models' command.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Indeed I did say that a virsh command based on GetDomainCapabilities was OK, but that is not what this series is doing.
This is essentially the same as the old series, adding a new public virConnectGetHypervisorCPUModelNames API, which virsh then calls.
virsh needs to call virConnectGetDomainCapabilities directly if we want a 'hypervisor-cpu-models' command.
Thanks for your feedback. It seems we have a disconnect with respect to
how this should be designed.

The first iteration I posted some time back pulled the CPU models
directly from the qemuCaps. This iteration instead pulls from the
domCaps, which I thought met the requirements of the feedback, since
both the proposed API and the existing virConnectGetDomainCapabilities
interface extract data from this structure. From my perspective, it
makes sense to extract the CPU models directly from the data structure
and format the output as desired, versus adding an extra step to parse
the XML.

This is my understanding of the operations of both approaches if the
steps were to be unfurled:

1. proposed API operations: retrieve hypervisor caps -> extract domain
   caps -> extract CPU models -> print to stdout

2. virConnectGetDomainCapabilities operations: retrieve hypervisor caps
   -> extract domain caps -> format into XML -> parse CPU models ->
   print to stdout

My goal is to work together to clear up any misunderstandings by laying
out the design as much as possible, so we can agree on the approach for
the next iteration.

-- 
Regards,
Collin

On Fri, Jan 17, 2025 at 04:08:40PM -0000, walling@linux.ibm.com wrote:
Indeed I did say that a virsh command based on GetDomainCapabilities was OK, but that is not what this series is doing.
This is essentially the same as the old series, adding a new public virConnectGetHypervisorCPUModelNames API, which virsh then calls.
virsh needs to call virConnectGetDomainCapabilities directly if we want a 'hypervisor-cpu-models' command.
Thanks for your feedback. It seems we have a disconnect with respect to how this should be designed.
The first iteration I posted some time back pulled the CPU models directly from the qemuCaps. This iteration instead pulls from the domCaps, which I thought met the requirements of the feedback since both the proposed API and existing virConnectGetDomainCapabilities interface extract data from this structure. From my perspective, it makes sense to extract the CPU models directly from the data structure and format the output as desired versus adding an extra step to parse the XML.
Adding new public APIs is an expensive task, especially as it ripples
out through many language bindings and/or 3rd party protocol
implementations. We try to design APIs to be extensible, to minimize
the number of new APIs we must add in future. The
virConnectGetDomainCapabilities API used an XML format because we knew
we had a huge amount of info to return, growing all the time, and we
didn't want an API for each different bit of information. Thus the idea
of adding a virConnectGetHypervisorCPUModelNames API is contrary to our
desired design approach and not acceptable.
This is my understanding of the operations of both approaches if the steps were to be unfurled:
1. proposed API operations: Retrieve hypervisor caps -> extract domain caps -> extract CPU models -> print to stdout
2. virConnectGetDomainCapabilities operations: Retrieve hypervisor caps -> extract domain caps -> format into XML -> parse CPU models -> print to stdout
My goal is to work together to clear up any misunderstandings by laying out the design as much as possible so we can agree on the approach for the next iteration.
To be clear, if you want such a command in virsh, the expectation is
that virsh calls virConnectGetDomainCapabilities and parses the XML it
gets back to extract the CPU model names. This shouldn't actually be
that difficult, as you don't really need to parse the XML manually:
just invoke an XPath expression to directly extract the CPU model
names, letting the XPath library do the hard work.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On 1/17/25 11:15 AM, Daniel P. Berrangé wrote:
On Fri, Jan 17, 2025 at 04:08:40PM -0000, walling@linux.ibm.com wrote:
Indeed I did say that a virsh command based on GetDomainCapabilities was OK, but that is not what this series is doing.
This is essentially the same as the old series, adding a new public virConnectGetHypervisorCPUModelNames API, which virsh then calls.
virsh needs to call virConnectGetDomainCapabilities directly if we want a 'hypervisor-cpu-models' command.
Thanks for your feedback. It seems we have a disconnect with respect to how this should be designed.
The first iteration I posted some time back pulled the CPU models directly from the qemuCaps. This iteration instead pulls from the domCaps, which I thought met the requirements of the feedback since both the proposed API and existing virConnectGetDomainCapabilities interface extract data from this structure. From my perspective, it makes sense to extract the CPU models directly from the data structure and format the output as desired versus adding an extra step to parse the XML.
Adding new public APIs is an expensive task, especially as it ripples out through many language bindings and/or 3rd party protocol impls.
We try to design APIs to be extensible to minimize the number of new APIs we must add in future. The virConnectGetDomainCapabilities API used XML format because we knew we have a huge amount of info to return that is growing all the time and we didn't want APIs for each different bit of information.
Thus the idea of adding a virConnectGetHypervisorCPUModelNames API is contrary to our desired design approach and not acceptable.
Thanks for explaining this. It helps to gain insight into libvirt's approach to these things. I never considered the 3rd party implementations (even though the remote/* code alludes to this).
This is my understanding of the operations of both approaches if the steps were to be unfurled:
1. proposed API operations: Retrieve hypervisor caps -> extract domain caps -> extract CPU models -> print to stdout
2. virConnectGetDomainCapabilities operations: Retrieve hypervisor caps -> extract domain caps -> format into XML -> parse CPU models -> print to stdout
My goal is to work together to clear up any misunderstandings by laying out the design as much as possible so we can agree on the approach for the next iteration.
To be clear if you want such a command in virsh, the expectation is that virsh calls virConnectGetDomainCapabilities, and parses the XML it gets back to extract the CPU model names. This shouldn't actually be that difficult, as you don't really need to parse the XML manually, just invoke an XPath expression to directly extract the CPU model names, letting the xpath library do the hard work.
We'll do this for v2.
With regards, Daniel
-- Regards, Collin
participants (5):
- Boris Fiuczynski
- Collin Walling
- Daniel P. Berrangé
- Thomas Huth
- walling@linux.ibm.com