On 25.10.2017 17:09, Boris Fiuczynski wrote:
> On 10/25/2017 12:23 PM, David Hildenbrand wrote:
>> On 25.10.2017 12:18, Christian Borntraeger wrote:
>>> Ping, I plan to submit the patch below for 2.11. We can then still
>>> look into a libvirt<->qemu interface for limiting host-model
>>> depending on machine versions (or not).
>>
>> I think this would be sufficient for now.
>>
>> Having different host models, depending on the machine type, sounds
>> wrong. But maybe we'll need it in the future.
>>
> David, I disagree if your proposal is to generally tolerate new cpu
> features in old machine types. This *might* work for gs, but how do you
> guarantee that guests do not behave differently/wrongly when new cpu
> features are suddenly made available to a guest when it is (re-)started?
> That is my feedback for the qemu side of this matter.
My point would be that it seems to work for all existing architectures
(so far as I am aware) and this one problem is easily fixed (and stems
from old CPU feature compatibility handling). So my question would be:
are there any potential CPU features that would make such handling
necessary right now or in the near future?
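
To make the scenario we are discussing concrete: with host-model,
libvirt pins the expansion at start time. A rough sketch (model name
and feature list are only illustrative, "gs" being the guarded storage
feature from the problem above, not output from a real host):

  <!-- what the user defines in the persistent domain XML -->
  <cpu mode='host-model'/>

  <!-- roughly what libvirt copies into the live XML when the domain is
       (re-)started on a host whose hw/kernel/QEMU support gs -->
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>z13-base</model>
    <feature policy='require' name='gs'/>
    <!-- ... further features reported by the host ... -->
  </cpu>

So a restart on a newer stack is exactly the point where a feature like
gs can newly show up in the guest, independent of the machine type.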
> Regarding the libvirt side of this:
> When looking at https://libvirt.org/formatdomain.html#elementsCPU I
> found the following sentence:
>
>     Since the CPU definition is copied just before starting a domain,
>     exactly the same XML can be used on different hosts while still
>     providing the best guest CPU each host supports.
>
> My interpretation of "the best guest CPU each host supports" is that,
> besides limiting factors like hardware, kernel and qemu capabilities,
> the requested machine type for the guest is a limiting factor as well.
I understand "what the host supports" as combination of hw+kernel+qemu.
But the definition can be interpreted differently. I don't think that
the requested machine has to be taken into account at this point.
(Again, do you have any real examples where this would be applicable?)
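
For reference, the host-side view I mean can be queried independently
of any defined domain, e.g. via "virsh domcapabilities" (excerpt only,
model/feature values are again just illustrative):

  <cpu>
    <mode name='host-model' supported='yes'>
      <model fallback='forbid'>z13-base</model>
      <feature policy='require' name='gs'/>
    </mode>
  </cpu>

That expansion is what I mean by the combination of hw+kernel+qemu.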
> Nevertheless, if my interpretation is found to be incorrect, then we
> should think about another new cpu mode that includes the machine type
> in the "best guest CPU" detection.
Which use case? I just want to understand how the current solution could
be problematic (besides the problem we had, which is easily fixed).
> My assumption is that we must not require users to know which cpu
> model they need to define manually to match a specific machine type,
> AND we want to guarantee that guests run without risking any side
> effects from tolerating any additional cpu features.
That's why I think CPU models should be independent of the QEMU machine
that is used. It just overcomplicates things, as we have seen.
Especially suddenly having multiple "host" CPU models depending on the
machine type would be confusing. If we can, we should keep it simple.
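
And if someone really needs a guest CPU that stays compatible with an
older machine type, that is already expressible today by pinning an
explicit model instead of host-model, along these lines (machine and
model names purely as an illustration):

  <os>
    <type arch='s390x' machine='s390-ccw-virtio-2.9'>hvm</type>
  </os>
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>z12</model>
  </cpu>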
--
Thanks,
David