On 05/26/2016 04:41 AM, Jiri Denemark wrote:
The qemu64 CPU model contains svm, and thus libvirt will always consider
it incompatible with Intel CPUs (which have vmx instead of svm). On the
other hand, QEMU by default ignores features that are missing in the host
CPU and has no problem using the qemu64 CPU; the guest just won't see
some of the features defined in the qemu64 model.
In your case, you should be able to use
<cpu mode='custom' match='exact'>
<model>qemu64</model>
<feature name='svm' policy='disable'/>
</cpu>
to get the same CPU model you'd get by default (if not, you may need to
also add <feature name='vmx' policy='require'/>).
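That is, if both tweaks turn out to be needed, combining them should give
something like:

<cpu mode='custom' match='exact'>
  <model>qemu64</model>
  <feature name='svm' policy='disable'/>
  <feature name='vmx' policy='require'/>
</cpu>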
Alternatively
<cpu mode='custom' match='exact'>
<model>qemu64</model>
<feature name='svm' policy='force'/>
</cpu>
should work too (and it would be better if you ever use it on an AMD
host).
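Either way, you can check what libvirt thinks of a given definition before
starting the guest: save just the <cpu> element to a file (say cpu.xml, the
name is arbitrary) and run

  virsh cpu-compare cpu.xml

which should report whether the described CPU is identical to, a subset of,
or incompatible with the host CPU.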
It's actually OpenStack that is setting up the XML, not me, so I'd have to
special-case the "qemu64" model and it'd get ugly. :)
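(For context, nova is told which model to use via nova.conf, roughly like
this; exact option names from memory:

[libvirt]
cpu_mode = custom
cpu_model = qemu64

and as far as I can tell there's no knob there for per-feature policies,
hence the ugliness.)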
The question remains: why is "qemu64" okay when used implicitly but not
explicitly? I would have expected the two cases to behave the same.
But why would you even want to use the qemu64 CPU in a domain XML
explicitly? If you're fine with that CPU, just let QEMU use the default
one. If not, use a CPU model that fits your host/needs better.
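For instance, on an Intel host something like

<cpu mode='host-model'/>

or an explicit Intel model such as

<cpu mode='custom' match='exact'>
  <model>Nehalem</model>
</cpu>

shouldn't trip over svm at all (which named models are available depends on
the libvirt/QEMU versions installed).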
Working around another issue would be simpler/cleaner if I could just explicitly
set the model to qemu64.
Chris