
On Sat, Mar 10, 2012 at 12:24 PM, Anthony Liguori <anthony@codemonkey.ws> wrote:
On 03/10/2012 09:58 AM, Eduardo Habkost wrote:
On Sat, Mar 10, 2012 at 12:42:46PM +0000, Daniel P. Berrange wrote:
I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults.
But looking at the code in QEMU, it doesn't seem that we ever implemented this?
Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs but, contrary to our normal RHEL development practice, it was not based on a cherry-pick of an upstream patch :-(
For the sake of reference, I'm attaching the two patches from the RHEL6 source RPM that do what I'm describing.
NB, I'm not necessarily advocating these patches for upstream. I still maintain that libvirt should write out a config file containing the exact CPU model description it desires and specify that with -readconfig. The end result would be identical from QEMU's POV and it would avoid playing games with QEMU's config loading code.
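For readers unfamiliar with the file being discussed: a cpudef stanza in QEMU's config-file syntax looks roughly like the sketch below. The feature lists here are illustrative, not a vetted Westmere definition; getting these bit combinations right is exactly the testing burden Eduardo describes.

```
[cpudef]
   name = "Westmere"
   level = "11"
   vendor = "GenuineIntel"
   family = "6"
   model = "44"
   stepping = "1"
   feature_edx = "sse2 sse fxsr mmx clflush pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc pse de fpu"
   feature_ecx = "aes popcnt sse4.2 sse4.1 cx16 ssse3 sse3"
   extfeature_edx = "i64 syscall xd"
   extfeature_ecx = "lahf_lm"
   model_id = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
```

A file like this could be handed to QEMU via -readconfig, which is Daniel's point: the same result without touching the default-config loading logic.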
I agree that libvirt should just write the config somewhere. The problem here is to define: 1) what information should be mandatory in that config data; 2) who should be responsible for testing and maintaining sane defaults (and where they should be maintained).
The current cpudef definitions are simply too low-level to require them to be written from scratch. Lots of testing has to be done to make sure we have working combinations of CPUID bits defined, so they can be used as defaults or templates. Not facilitating reuse of those tested defaults/templates by libvirt is duplication of effort.
Really, if we expect libvirt to define all the CPU bits from scratch in a config file, we could just as well expect libvirt to open /dev/kvm itself and call all the CPUID setup ioctl()s itself. That's how low-level some of the cpudef bits are.
Let's step back here.
Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager that no human has ever done that, or that if they have, they did so by accident because they read documentation and thought they had to.
Humans probably do one of two things: 1) no cpu option or 2) -cpu host.
So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters, and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such has levied a requirement down the stack to QEMU.
But there's no intrinsic reason why it uses CPU model names. VMware doesn't do this. It has a concept of compatibility groups[1].
oVirt could just as well define compatibility groups like GroupA, GroupB, GroupC, etc. and then the -cpu option we would be discussing would be -cpu GroupA.
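The group idea above boils down to set arithmetic: a compatibility group is the set of CPU features every host in the group provides, and a guest pinned to the group can migrate to any host whose features are a superset. A toy sketch (the host names and feature lists are invented for illustration; this is not oVirt's or libvirt's actual algorithm):

```python
# Hypothetical per-host feature flags, as a management layer might
# collect them from each node.
HOSTS = {
    "node1": {"sse2", "ssse3", "sse4.1", "sse4.2", "aes", "popcnt"},
    "node2": {"sse2", "ssse3", "sse4.1", "sse4.2", "popcnt"},
    "node3": {"sse2", "ssse3", "sse4.1", "popcnt"},
}

def group_baseline(hosts):
    """Features common to all hosts: the group's guest-visible CPU."""
    feature_sets = list(hosts.values())
    baseline = set(feature_sets[0])
    for features in feature_sets[1:]:
        baseline &= features
    return baseline

def can_migrate(guest_features, host_features):
    """A guest can run anywhere its features are a subset of the host's."""
    return guest_features <= host_features

baseline = group_baseline(HOSTS)
print(sorted(baseline))
print(can_migrate(baseline, HOSTS["node1"]))
```

Whether the group is called "GroupA" or "Westmere" is then purely a labeling decision at the management layer, which is Anthony's point.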
This is why it's a configuration option and not builtin to QEMU. It's a user interface and, as such, should be defined at a higher level.
Perhaps it really should be VDSM that is providing the model info to libvirt? Then they can add whatever groups they want, whenever they want, as long as we have the appropriate feature bits.
P.S. I spent 30 minutes the other day helping a user who was attempting to figure out whether his processor was a Conroe, Penryn, etc. Making this determination is fairly difficult, and it makes me wonder whether having CPU code names is even the best interface for oVirt.
[1] http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
Regards,
Anthony Liguori
FWIW, as a user this would be a good improvement. As it stands right now, when a cluster of machines is established as being redundant, migratable machines for each other, I must do the following for each machine:

  virsh -c qemu://machine/system capabilities | xpath /capabilities/host/cpu > machine-cpu.xml

Once I have that data, I combine the results together and use virsh cpu-baseline, which is a handy addition compared to the old days of doing it manually, but still not optimal. This gives me a model which is mostly meaningless and uninteresting to me, but from which I know all the guests must use Penryn, for example.

If oVirt, and by extension libvirt, let me know that guest X is running on CPU-A, I'd know I could migrate it to any other machine supporting CPU-A or CPU-B (assuming B is a superset of A).

--
Doug Goldstein
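The extraction step in that workflow can be sketched in a few lines of Python. The XML below is a hypothetical, heavily trimmed capabilities document; real output carries many more elements.

```python
import xml.etree.ElementTree as ET

# Hypothetical, abbreviated "virsh capabilities" output.
CAPS_XML = """\
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>Penryn</model>
      <feature name='xtpr'/>
      <feature name='dca'/>
    </cpu>
  </host>
</capabilities>
"""

def host_cpu(caps_xml):
    """Pull the host <cpu> element out of a capabilities document,
    as the 'xpath /capabilities/host/cpu' pipeline step does."""
    root = ET.fromstring(caps_xml)
    cpu = root.find("./host/cpu")
    model = cpu.findtext("model")
    features = [f.get("name") for f in cpu.findall("feature")]
    return model, features

model, features = host_cpu(CAPS_XML)
print(model, features)
```

Collecting this tuple from every node and intersecting the feature lists is essentially what virsh cpu-baseline automates.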