[libvirt] inconsistent handling of "qemu64" CPU model

Hi,

I'm not sure where the problem lies, hence the CC to both lists. Please copy me on the reply.

I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a Celeron 2961Y CPU. (libvirt detects it as a Nehalem with a bunch of extra features.) QEMU gives version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.7~cloud2).

If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU, and /proc/cpuinfo in the guest instance looks something like this:

  processor       : 0
  vendor_id       : GenuineIntel
  cpu family      : 6
  model           : 6
  model name      : QEMU Virtual CPU version 2.2.0
  stepping        : 3
  microcode       : 0x1
  flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt hypervisor lahf_lm abm vnmi ept

However, if I explicitly specify a custom CPU model of "qemu64" the instance refuses to boot and I get a log saying:

  libvirtError: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: svm

When this happens, some of the XML for the domain looks like this:

  <os>
    <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
  ....
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>

Of course "svm" is an AMD flag and I'm running an Intel CPU. But why does it work when I just rely on the default virtual CPU? Is kvm_default_unset_features handled differently when it's implicit vs explicit?

If I explicitly specify a custom CPU model of "kvm64" then it boots, but of course I get a different virtual CPU from what I get if I don't specify anything.

Following some old suggestions I tried turning off nested kvm, deleting /var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd. Didn't help.

So... anyone got any ideas what's going on? Is there no way to explicitly specify the model that you get by default?

Thanks,
Chris
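(For reference, a quick way to see which CPU models QEMU and libvirt each know about, and how libvirt has decoded the host CPU. This is only a sketch and assumes a stock Ubuntu install where the emulator binary is qemu-system-x86_64:)

  $ qemu-system-x86_64 -cpu help        # CPU models QEMU itself knows (qemu64, kvm64, Nehalem, ...)
  $ virsh cpu-models x86_64             # CPU models libvirt knows for x86_64
  $ virsh capabilities                  # includes the host CPU as libvirt decoded it (Nehalem + extra features)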

On Wed, May 25, 2016 at 11:13:24PM -0600, Chris Friesen wrote: [...]
However, if I explicitly specify a custom CPU model of "qemu64" the instance refuses to boot and I get a log saying:
[Not a direct answer to the exact issue you're facing, but a related issue that is being investigated at the moment...]

There is currently a related regression in upstream libvirt 1.3.4. The crux of that issue is that the custom libvirt 'gate64' model is not translated into a CPU definition that QEMU can recognize (the models QEMU accepts can be listed with `qemu-system-x86_64 -cpu \?`).

See this bug (it has a reproducer and discussion):

  https://bugzilla.redhat.com/show_bug.cgi?id=1339680 -- libvirt CPU driver fails to translate a custom CPU model into something that QEMU recognizes

The regression was bisected by Jiri Denemark to this commit: v1.2.9-31-g445a09b "qemu: Don't compare CPU against host for TCG".
libvirtError: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: svm
When this happens, some of the XML for the domain looks like this:

  <os>
    <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
  ....
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
Of course "svm" is an AMD flag and I'm running an Intel CPU. But why does it work when I just rely on the default virtual CPU? Is kvm_default_unset_features handled differently when it's implicit vs explicit?
If I explicitly specify a custom CPU model of "kvm64" then it boots, but of course I get a different virtual CPU from what I get if I don't specify anything.
Following some old suggestions I tried turning off nested kvm, deleting /var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd. Didn't help.
So...anyone got any ideas what's going on? Is there no way to explicitly specify the model that you get by default?
Thanks, Chris
-- /kashyap

On Wed, May 25, 2016 at 23:13:24 -0600, Chris Friesen wrote:
Hi,
If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU, and /proc/cpuinfo in the guest instance looks something like this:
  processor       : 0
  vendor_id       : GenuineIntel
  cpu family      : 6
  model           : 6
  model name      : QEMU Virtual CPU version 2.2.0
  stepping        : 3
  microcode       : 0x1
  flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt hypervisor lahf_lm abm vnmi ept
However, if I explicitly specify a custom CPU model of "qemu64" the instance refuses to boot and I get a log saying:
libvirtError: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: svm
The qemu64 CPU model contains svm, so libvirt will always consider it incompatible with any Intel CPU (which has vmx instead of svm). QEMU, on the other hand, by default ignores features that are missing in the host CPU and has no problem using the qemu64 model; the guest just won't see some of the features defined in it.

In your case, you should be able to use

  <cpu mode='custom' match='exact'>
    <model>qemu64</model>
    <feature name='svm' policy='disable'/>
  </cpu>

to get the same CPU model you'd get by default (if not, you may also need to add <feature name='vmx' policy='require'/>).

Alternatively,

  <cpu mode='custom' match='exact'>
    <model>qemu64</model>
    <feature name='svm' policy='force'/>
  </cpu>

should work too (and it would be the better option in case you use it on an AMD host).

But why do you even want to use the qemu64 CPU in a domain XML explicitly? If you're fine with that CPU, just let QEMU use its default one. If not, use a CPU model that fits your host/needs better.

BTW, using qemu64 with TCG (i.e., domain type='qemu' as opposed to type='kvm') is fine because libvirt won't check it against the host CPU, and QEMU will emulate all features, so you'd even get the features that the host CPU does not support.

Jirka

P.S. Kashyap is right, the issue he mentioned is not related to your case at all.
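(To see libvirt's compatibility check in isolation, outside of a full domain start, something like the following should reproduce the same complaint about svm. This is a sketch: the file name is made up and the exact output wording may differ:)

  $ cat > /tmp/cpu-qemu64.xml <<'EOF'
  <cpu match='exact'>
    <model fallback='allow'>qemu64</model>
  </cpu>
  EOF
  $ virsh cpu-compare /tmp/cpu-qemu64.xml
  # expected: an "incompatible" result citing the missing svm feature,
  # i.e. the same check that produces the libvirtError quoted above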

On 05/26/2016 04:41 AM, Jiri Denemark wrote:
The qemu64 CPU model contains svm, so libvirt will always consider it incompatible with any Intel CPU (which has vmx instead of svm). QEMU, on the other hand, by default ignores features that are missing in the host CPU and has no problem using the qemu64 model; the guest just won't see some of the features defined in it.
In your case, you should be able to use
  <cpu mode='custom' match='exact'>
    <model>qemu64</model>
    <feature name='svm' policy='disable'/>
  </cpu>
to get the same CPU model you'd get by default (if not, you may need to also add <feature name='vmx' policy='require'/>).
Alternatively
  <cpu mode='custom' match='exact'>
    <model>qemu64</model>
    <feature name='svm' policy='force'/>
  </cpu>
should work too (and it would be better in case you use it on an AMD host).
It's actually OpenStack that is setting up the XML, not me, so I'd have to special-case the "qemu64" model and it'd get ugly. :)

The question remains: why is "qemu64" okay when used implicitly but not explicitly? I would have expected them to behave the same.
But why do you even want to use the qemu64 CPU in a domain XML explicitly? If you're fine with that CPU, just let QEMU use its default one. If not, use a CPU model that fits your host/needs better.
Working around another issue would be simpler/cleaner if I could just explicitly set the model to qemu64.

Chris
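(For what it's worth, the knob OpenStack exposes for this is nova's [libvirt] cpu_mode / cpu_model pair, which is what ends up in the <cpu> element shown earlier. A sketch, assuming a devstack-style /etc/nova/nova.conf and using crudini purely as one convenient way to edit it:)

  $ crudini --set /etc/nova/nova.conf libvirt cpu_mode custom
  $ crudini --set /etc/nova/nova.conf libvirt cpu_model kvm64   # kvm64 boots; qemu64 trips the svm check above
  $ sudo service nova-compute restart                           # in devstack, restart the n-cpu service instead
  # with cpu_mode = none, nova omits the <cpu> element entirely and QEMU
  # falls back to its built-in default model (qemu64 on this setup)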
participants (3): Chris Friesen, Jiri Denemark, Kashyap Chamarthy