On Tue, Feb 21, 2017 at 12:24:19 -0500, John Ferlan wrote:
On 02/15/2017 11:44 AM, Jiri Denemark wrote:
> Querying "host" CPU model expansion only makes sense for KVM. QEMU 2.9.0
> introduces a new "max" CPU model which can be used to ask QEMU what the
> best CPU it can provide to a TCG domain is.
But how do we know "max" is available to be used for TCG? It seems to
me that virQEMUCapsProbeQMPHostCPU is now changed such that there's an
assumption that "max" is available, but the above makes it appear as if
it's a new parameter/attribute [1].
We don't know whether "max" is available; we just try to probe it. If
the probe succeeds, we know it's available and we can use the returned
CPU model. If the "max" model is not available, the code does the same
thing it did before this patch series.
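
To make it concrete, the probing boils down to something like this (a
simplified sketch, not the verbatim patch code):

    /* Try the expansion; a missing "max" model is not an error, it
     * just leaves the info pointer NULL. */
    if (qemuMonitorGetCPUModelExpansion(mon, "static", "max",
                                        &qemuCaps->tcgCPUModelInfo) < 0)
        return -1;

    if (!qemuCaps->tcgCPUModelInfo)
        VIR_DEBUG("'max' CPU model expansion is not supported");
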
Every time I see TCG I think "Trusted Computing Group" and I certainly
don't think VIR_DOMAIN_VIRT_QEMU. What gets "confusing" eventually is
that virQEMUCapsInitHostCPUModel is called with VIR_DOMAIN_VIRT_KVM and
VIR_DOMAIN_VIRT_QEMU, but qemuCaps->tcgCPUModel is filled in for
VIR_DOMAIN_VIRT_QEMU.
"tcg" and "kvm" is the QEMU's terminology while in libvirt we use
VIR_DOMAIN_TYPE_QEMU and VIR_DOMAIN_VIRT_KVM to distinguish between the
two types of domains our QEMU driver supports. Both are clear, but
VIR_DOMAIN_TYPE_QEMU is pretty long and shortening it to "qemu" and
"kvm" would be very confusing. That's why I use the QEMU's terminology
when referring to the types of domains.
It's not overtly obvious to me from just reading the commit message,
but it seems TCG now becomes a fallback position of sorts, which I
think is fine; it's just not described.
It's not described because it doesn't happen. Both the supported CPU
models and the host CPU model depend on the accelerator (tcg/kvm), so
we need to probe QEMU for both. We only probe kvm if it's available,
but tcg is always checked.
...
> @@ -2822,14 +2838,24 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr qemuCaps,
>
>  static int
>  virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
> -                           qemuMonitorPtr mon)
> +                           qemuMonitorPtr mon,
> +                           bool tcg)
>  {
> -    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION) ||
> -        !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
> +    qemuMonitorCPUModelInfoPtr *modelInfo;
> +    const char *model;
> +
> +    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION))
>          return 0;
So if there's no model expansion, then neither one is filled in, but
there are later checks that seem to assume one or the other is filled
in. I didn't chase all the callers to check whether they would
expect/require something to be filled in (too many patches, too little
time).
Nope, none of them needs to be set. The code will fall back to using
the host CPU model from host capabilities, probed directly by libvirt.
The only users of {kvm,tcg}CPUModelInfo are the virQEMUCapsInitCPUModel*
functions, which transform the raw data from QEMU into virCPUDefPtr
{kvm,tcg}CPUModel.
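
To illustrate the dataflow (a rough sketch with the error handling and
feature list trimmed; variable names are approximate):

    /* virQEMUCapsInitCPUModel*: raw QMP data -> virCPUDefPtr */
    qemuMonitorCPUModelInfoPtr info = tcg ? qemuCaps->tcgCPUModelInfo
                                          : qemuCaps->kvmCPUModelInfo;

    if (!info)
        return 1;  /* nothing from QEMU; callers fall back to host caps */

    if (VIR_STRDUP(cpu->model, info->name) < 0)
        return -1;
    /* plus copying the features reported in info->props */
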
> -    return qemuMonitorGetCPUModelExpansion(mon, "static", "host",
> -                                           &qemuCaps->hostCPUModelInfo);
> +    if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
[1]
It's not clear to the casual reviewer when tcg and in particular the
"max" parameter became supported. IOW: the first thought I had was why
isn't there a capability being checked for "max" being available. The
commit message seems to indicate that it's new in 2.9.0, but I don't
see a libvirt capability check for it being available before it's used.
As already said, if max is not available, tcgCPUModelInfo will be NULL,
virQEMUCapsInitCPUModel will return 1, and virQEMUCapsInitHostCPUModel
will take the ``VIR_DEBUG("No host CPU model info from QEMU; using host capabilities");'' branch.
The logic changes now such that if !QEMU_CAPS_KVM, model expansion is
called with "max"; whereas previously 0 was returned.
If "max" or query-cpu-model-expansion are not available,
qemuMonitorJSONGetCPUModelExpansion will return 0, which is exactly what
happened before. If both are available, we'll get the description of the
"max" CPU model in tcgCPUModelInfo.
Thus if !tcg and !QEMU_CAPS_KVM, then "max" is used and ModelExpansion
is called. If TCG becomes the "de facto" default...

virQEMUCapsProbeQMPHostCPU may be called twice. The first time it is
called with tcg == false and it uses the QEMU_CAPS_KVM capability to
see whether it is probing QEMU using the tcg or the kvm accelerator.
Now if kvm was not available, we have tcgCPUModelInfo filled in (if
"max" is available, of course) and we're done. If kvm was available, we
have kvmCPUModelInfo filled in (if model expansion is available) and
virQEMUCapsProbeQMPHostCPU will be called once more with tcg == true to
try to set tcgCPUModelInfo.
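
In other words, the caller does roughly this (a sketch; details such as
which monitor connection the second pass uses are omitted):

    /* First pass: "host" with kvm, or "max" with tcg if kvm is missing. */
    if (virQEMUCapsProbeQMPHostCPU(qemuCaps, mon, false) < 0)
        goto cleanup;

    /* Second pass: if the first one probed kvm, ask the tcg side too. */
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
        virQEMUCapsProbeQMPHostCPU(qemuCaps, mon, true) < 0)
        goto cleanup;
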
...
> @@ -3560,12 +3607,26 @@ virQEMUCapsLoadCache(virCapsPtr caps,
>
>  static void
>  virQEMUCapsFormatHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
> -                                  virBufferPtr buf)
> +                                  virBufferPtr buf,
> +                                  virDomainVirtType type)
>  {
> -    qemuMonitorCPUModelInfoPtr model = qemuCaps->hostCPUModelInfo;
> +    qemuMonitorCPUModelInfoPtr model;
> +    const char *typeStr;
>      size_t i;
>
> -    virBufferAsprintf(buf, "<hostCPU model='%s'>\n", model->name);
> +    if (type == VIR_DOMAIN_VIRT_KVM) {
> +        typeStr = "kvm";
> +        model = qemuCaps->kvmCPUModelInfo;
> +    } else {
> +        typeStr = "tcg";
> +        model = qemuCaps->tcgCPUModelInfo;
> +    }
> +
> +    if (!model)
> +        return;
If !model can happen, how does that affect other places that make a
similar check to decide which ModelInfo to use?
!model means we cannot use the "host" or "max" CPU model data from
QEMU, and thus the CPU model from the host capabilities XML will be
used instead.

Also this is "similar" to virQEMUCapsGetHostModel, at least for
'model'...
It's not. Are you confusing *CPUModelInfo with *CPUModel? The former is
the raw data from QEMU, while the latter is a virCPUDefPtr, and only
the latter is ever used anywhere in the code. The raw data is only
stored in the capabilities cache and used for creating the
virCPUDefPtr.
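
For reference, the relevant qemuCaps fields are (just a summary; the
real struct has many more members):

    qemuMonitorCPUModelInfoPtr kvmCPUModelInfo; /* raw QMP data; only
                                                   cached/serialized */
    qemuMonitorCPUModelInfoPtr tcgCPUModelInfo;
    virCPUDefPtr kvmCPUModel;   /* built from the raw data; this is
                                   what the driver code actually uses */
    virCPUDefPtr tcgCPUModel;
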
...
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index 92fa69b3c..b9df01da5 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -5147,13 +5147,14 @@ qemuProcessUpdateGuestCPU(virDomainDefPtr def,
>      /* custom CPUs in TCG mode don't need to be compared to host CPU */
>      if (def->virtType != VIR_DOMAIN_VIRT_QEMU ||
>          def->cpu->mode != VIR_CPU_MODE_CUSTOM) {
> -        if (virCPUCompare(caps->host.arch, virQEMUCapsGetHostModel(qemuCaps),
> +        if (virCPUCompare(caps->host.arch,
> +                          virQEMUCapsGetHostModel(qemuCaps, def->virtType),
This is about where I started wondering about a NULL return here and
the "default" of TCG now...
There's no "default" of TCG. According to the requested virtType we will
either use kvmCPUModel or tcgCPUModel. Either of them can either be just
a copy of the host CPU model from host capabilities or a model we got by
querying QEMU (i.e., asking for a model expansion of either "host" or
"max") or it can even be NULL. However, NULL means we cannot detect host
CPU model and thus host-model CPU mode is not supported. In any case
virCPUCompare will report an appropriate error if it founds host CPU
model is not set but required to do the job.
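
With this series, virQEMUCapsGetHostModel is essentially just (modulo
details):

    virCPUDefPtr
    virQEMUCapsGetHostModel(virQEMUCapsPtr qemuCaps,
                            virDomainVirtType type)
    {
        if (type == VIR_DOMAIN_VIRT_KVM)
            return qemuCaps->kvmCPUModel;
        return qemuCaps->tcgCPUModel;
    }
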
Although looking at the code around it, this would seemingly only be
called for passthrough or host-model, true?
Nope. It's never called for host-passthrough. It can only be called for
either host-model or custom CPUs.
...
> diff --git a/tests/qemucapabilitiesdata/caps_2.8.0.s390x.replies b/tests/qemucapabilitiesdata/caps_2.8.0.s390x.replies
> index 0405d5d7b..c3cbeee0a 100644
> --- a/tests/qemucapabilitiesdata/caps_2.8.0.s390x.replies
> +++ b/tests/qemucapabilitiesdata/caps_2.8.0.s390x.replies
> @@ -13378,3 +13378,11 @@
>    ],
>    "id": "libvirt-2"
>  }
> +
> +{
> +  "id": "libvirt-3",
> +  "error": {
> +    "class": "GenericError",
> +    "desc": "The CPU definition 'max' is unknown."
> +  }
> +}
And this seems to match my earlier comments in
virQEMUCapsProbeQMPHostCPU: if "max" isn't available, but tcg is true
or !QEMU_CAPS_KVM, then "max" will be tried, which doesn't seem right.
Yes, it will be tried and the error will be ignored.
Also, does something similar need to be done in caps_2.8.0.x86_64, or
does it not really matter since, if it doesn't exist, it wouldn't be
defined or usable?
QEMU 2.8.0 on x86_64 does not have QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION
set and thus we will never ask it for any model expansion.
Jirka