On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> > Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
> > binary is a proper superset of the qemu-system-i386 binary. With the
> > 32-bit host support being deprecated, it is now also possible to
> > deprecate the qemu-system-i386 binary.
> >
> > With regards to 32-bit KVM support in the x86 Linux kernel,
> > the developers confirmed that they do not need a recent
> > qemu-system-i386 binary here:
> >
> >
> >    https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/
> >
> > Reviewed-by: Daniel P. Berrangé <berrange(a)redhat.com>
> > Reviewed-by: Wilfred Mallawa <wilfred.mallawa(a)wdc.com>
> > Signed-off-by: Thomas Huth <thuth(a)redhat.com>
> > ---
> > docs/about/deprecated.rst | 14 ++++++++++++++
> > 1 file changed, 14 insertions(+)
> >
> > diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
> > index 1ca9dc33d6..c4fcc6b33c 100644
> > --- a/docs/about/deprecated.rst
> > +++ b/docs/about/deprecated.rst
> > @@ -34,6 +34,20 @@ deprecating the build option and no longer defend it in CI. The
> >   ``--enable-gcov`` build option remains for analysis test case
> >   coverage.
> >
> > +``qemu-system-i386`` binary (since 8.0)
> > +'''''''''''''''''''''''''''''''''''''''
> > +
> > +The ``qemu-system-i386`` binary was mainly useful for running with KVM
> > +on 32-bit x86 hosts, but most Linux distributions already removed their
> > +support for 32-bit x86 kernels, so hardly anybody still needs this. The
> > +``qemu-system-x86_64`` binary is a proper superset and can be used to
> > +run 32-bit guests by selecting a 32-bit CPU model, including KVM support
> > +on x86_64 hosts. Thus users are recommended to reconfigure their systems
> > +to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
> > +environment should be enforced, you can switch off the "long mode" CPU
> > +flag, e.g. with ``-cpu max,lm=off``.
>
> I had the idea to check this today and this is not quite sufficient,
> because we have code that changes the family/model/stepping for
> 'max' which is target dependent:
>
>    #ifdef TARGET_X86_64
>        object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
>        object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
>        object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
>    #else
>        object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
>        object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
>        object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
>    #endif
>
> The former is a 64-bit AMD model and the latter is a 32-bit model.
>
> Seems LLVM was sensitive to this distinction to some extent:
>
>
>    https://gitlab.com/qemu-project/qemu/-/issues/191
>
> A further difference is that qemu-system-i386 does not appear to enable
> the 'syscall' flag, but I've not figured out where that difference is
> coming from in the code.
Ugh, ok. I gave it a quick try with a patch like this:
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4344,15 +4344,15 @@ static void max_x86_cpu_initfn(Object *obj)
*/
object_property_set_str(OBJECT(cpu), "vendor", CPUID_VENDOR_AMD,
&error_abort);
-#ifdef TARGET_X86_64
- object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
- object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
- object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
-#else
- object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
- object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
- object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
-#endif
+ if (object_property_get_bool(obj, "lm", &error_abort)) {
+ object_property_set_int(obj, "family", 15, &error_abort);
+ object_property_set_int(obj, "model", 107, &error_abort);
+ object_property_set_int(obj, "stepping", 1, &error_abort);
+ } else {
+ object_property_set_int(obj, "family", 6, &error_abort);
+ object_property_set_int(obj, "model", 6, &error_abort);
+ object_property_set_int(obj, "stepping", 3, &error_abort);
+ }
object_property_set_str(OBJECT(cpu), "model-id",
"QEMU TCG CPU version " QEMU_HW_VERSION,
&error_abort);
... but it seems like the "lm" property is not initialized
there yet, so this does not work... :-/
Given that we have soft-freeze tomorrow, let's ignore this patch
for now and revisit this topic during the 8.1 cycle. But I'll
queue the other 4 patches to get some pressure out of our CI
during the freeze time.
Yep, makes sense.
More generally the whole impl of the 'max' CPU feels somewhat
questionable even for qemu-system-i386. It exposes all features
that TCG supports. A large set of these features never existed
on *any* 32-bit silicon. Hands up who has seen 32-bit silicon
with AVX2 support ? From a correctness POV we should have
capped CPU features in some manner. Given the lack of interest
in 32-bit though, we've ignored the problem and it likely does
not affect apps anyway as they're not likely to be looking for
newish features.
With regards,
Daniel