I'm failing to see what real-world technical problems QEMU faces
with a parameter being set to '1' by a mgmt app, when QEMU itself
treats all omitted values as being '1' anyway.
If we're trying to faithfully model the real world, though, then
restricting the topology based on machine types still looks inherently
wrong.
The valid topology ought to be constrained based on the named CPU model.
e.g. it doesn't make sense to allow 'dies=4' with a Skylake CPU model,
only with an EPYC CPU model, especially if we want to model cache info
in a way that matches the real-world silicon better.
Thanks for figuring this out. This issue is related to the Intel CPU
cache model: currently the Intel code defaults to sharing L3 at the die
level.
This could be resolved by defining an accurate default cache topology
level per CPU model and making Intel CPU models share L3 at the package
level, with Cascadelake as the only exception.
Then the user could define other topology levels (die/module) for
Icelake without changing the cache topology, unless they add more
sockets or further customize the cache topology in another way [1].
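For instance (a hypothetical invocation; the CPU model name and the
package-level L3 default are assumptions of this proposal, not current
QEMU behaviour):

# With L3 defaulting to the package level, splitting the socket into
# dies would leave the guest-visible cache topology unchanged:
qemu-system-x86_64 -cpu Icelake-Server \
    -smp 16,sockets=1,dies=2,cores=4,threads=2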
Do you agree with this solution?
[1]:
https://lore.kernel.org/qemu-devel/20240220092504.726064-1-zhao1.liu@linu...
[snip]
As above, I think that restrictions based on machine type, while nice
and simple, are incorrect long term. If we did impose restrictions based on
CPU model, then we could trivially expose this info to mgmt apps via the
existing mechanism for querying supported CPU models. Limiting based on
CPU model, however, has potentially greater back compat issues, though
it would be strictly more faithful to hardware.
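(For illustration only: the existing mechanism would be something like
the -cpu help listing, or the QMP query-cpu-definitions command; showing
per-model topology limits there would be a new, hypothetical extension.)

# Lists the CPU models supported for this target; a per-model set of
# permitted topology levels would be a hypothetical addition to this
# output (or to the query-cpu-definitions reply).
qemu-system-x86_64 -cpu help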
I think as long as the default cache topology model is clearly defined,
users can further customize the CPU topology and adjust the cache
topology based on it. After all, topology is architectural, not CPU
model-specific (Linux's topology support does not take specific CPU
models into account).
For x86, for example, can we simply assume that all x86 CPU models
support all x86 topology levels (thread/core/module/die/package),
without making distinctions based on specific CPU models?
That way, as long as the user doesn't change the default topology, the
guest's cache and other topology information won't be "corrupted".
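Concretely (the model name is only an example, and the comment assumes
the package-level default discussed above):

# Omitted levels (dies, modules, ...) default to 1, so the guest keeps
# the CPU model's default cache layout:
qemu-system-x86_64 -cpu Icelake-Server \
    -smp 8,sockets=2,cores=2,threads=2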
And there's one more question: does this rollback mean that -smp's
parameters must have compatible default values for all architectures?
This is related to my SMP cache proposal above [1]: should I provide
default entries (e.g. "default") to be compatible with all
architectures, even if they don't support custom cache topology? Like
the following:
-smp 32,sockets=2,dies=2,modules=2,cores=2,threads=2,maxcpus=32,\
l1d-cache=default,l1i-cache=default,l2-cache=default,l3-cache=default
Thanks,
Zhao