Hi.
I'm having a weird problem where libvirt/qemu/kvm won't let me use the CPU
model I have defined in my domain's config file. Instead, I get this error
message in libvirtd.log:
warning : x86Decode:1346 : Preferred CPU model Nehalem not allowed by
hypervisor; closest supported model will be used
If I review the qemu log for that particular domain, I see that my CPU has
been changed to this:
-cpu kvm64,+lahf_lm,+popcnt,+sse4.2,+sse4.1,+ssse3
(in other places, I see it set to core2duo rather than kvm64)
However, it *should* be Nehalem. For some background, I'm running kvm on a
Westmere processor, which is the successor to Nehalem. I'm specifying Nehalem
as the target platform to make it easier to migrate to another server if
necessary. I have the same problem if I set the model to Westmere, though, so
it's not unique to Nehalem.
As for support and capabilities, libvirt correctly detects my host CPU as
Westmere:
$ virsh capabilities | grep model
<model>Westmere</model>
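I gather virsh cpu-compare can ask libvirt directly whether the host CPU can
run a given model, so I was planning to try something like the following
(nehalem.xml is just a scratch file I'd create for the test):

$ cat > nehalem.xml <<EOF
<cpu match='exact'>
  <model>Nehalem</model>
</cpu>
EOF
$ virsh cpu-compare nehalem.xml

I haven't dug into whether that's actually the right check, though.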
qemu-kvm does (as far as I can tell) support Nehalem:
$ qemu-kvm -cpu ?model | grep Nehalem
x86 Nehalem Intel Core i7 9xx (Nehalem Class Core i7)
Nehalem is defined in qemu's target-x86_64.conf:
$ grep Nehalem /etc/qemu/target-x86_64.conf
name = "Nehalem"
model_id = "Intel Core i7 9xx (Nehalem Class Core i7)"
And if I run a CPU check on that model, it seems to work fine:
$ qemu-kvm -cpu Nehalem,check
VNC server running on `127.0.0.1:5900'
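I understand -cpu also takes an 'enforce' option that should fail hard if the
host is missing a required feature; I haven't tried it yet (and I'm not sure
my qemu-kvm version supports it), but presumably it would be:

$ qemu-kvm -cpu Nehalem,enforce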
So, I'm creating the new domain with virt-install using the qemu-kvm backend
as follows:
virt-install --name=test --ram=1024 --arch=x86_64 --vcpus=2 --cpu=Nehalem
--virt-type=kvm <SNIP>
This results in the following cpu configuration:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Nehalem</model>
</cpu>
But then, when I start this domain, I get the error message posted above.
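One workaround I was considering is switching to host-model mode instead of
naming a specific model, e.g. via virsh edit test:

<cpu mode='host-model'/>

but I'd rather understand why the explicit Nehalem model is rejected, since
the whole point was to pin a stable model for migration.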
Any ideas what's going on here? I'm at a loss, and unfortunately I don't have
much experience with kvm/libvirt yet, so I'm not sure what to focus on for
troubleshooting. I'd appreciate any suggestions or guidance.
Thanks.
--
Jared