On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
binary is a proper superset of the qemu-system-i386 binary. With the
32-bit host support being deprecated, it is now also possible to
deprecate the qemu-system-i386 binary.
With regards to 32-bit KVM support in the x86 Linux kernel,
the developers confirmed that they do not need a recent
qemu-system-i386 binary here:
https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
docs/about/deprecated.rst | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 1ca9dc33d6..c4fcc6b33c 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -34,6 +34,20 @@ deprecating the build option and no longer defend it in CI. The
``--enable-gcov`` build option remains for analysis test case
coverage.
+``qemu-system-i386`` binary (since 8.0)
+'''''''''''''''''''''''''''''''''''''''
+
+The ``qemu-system-i386`` binary was mainly useful for running with KVM
+on 32-bit x86 hosts, but most Linux distributions already removed their
+support for 32-bit x86 kernels, so hardly anybody still needs this. The
+``qemu-system-x86_64`` binary is a proper superset and can be used to
+run 32-bit guests by selecting a 32-bit CPU model, including KVM support
+on x86_64 hosts. Thus users are recommended to reconfigure their systems
+to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
+environment should be enforced, you can switch off the "long mode" CPU
+flag, e.g. with ``-cpu max,lm=off``.
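
As an aside, it is easy to verify from inside the guest whether lm=off
actually took effect. A minimal sketch of such a check (my own, not part
of the patch; it assumes the guest has GCC or Clang with <cpuid.h>):

/* Print whether long mode (CPUID.80000001H:EDX bit 29) is advertised,
 * i.e. whether "-cpu max,lm=off" really hid it from the guest. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 0x80000001 not available\n");
        return 1;
    }
    printf("lm: %s\n", (edx & (1u << 29)) ? "present" : "absent");
    return 0;
}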
However, I had the idea to check this today, and lm=off alone is not
quite sufficient, because we have code that changes the
family/model/stepping for 'max' which is target dependent:
#ifdef TARGET_X86_64
    object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
    object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
    object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
#else
    object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
    object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
    object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
#endif
The former is a 64-bit AMD model and the latter is a 32-bit model.
It seems LLVM was sensitive to this distinction to some extent:
https://gitlab.com/qemu-project/qemu/-/issues/191
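
The difference is easy to observe from inside a guest; here is a rough
sketch (again my own code, assuming <cpuid.h>) that decodes
family/model/stepping from CPUID leaf 1:

/* Decode the guest-visible family/model/stepping from CPUID.01H:EAX,
 * using the usual extended family/model composition rules. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        return 1;
    }

    unsigned int stepping = eax & 0xf;
    unsigned int model = (eax >> 4) & 0xf;
    unsigned int base_family = (eax >> 8) & 0xf;
    unsigned int family = base_family;

    if (base_family == 0xf) {
        family += (eax >> 20) & 0xff;           /* extended family ID */
    }
    if (base_family == 0x6 || base_family == 0xf) {
        model |= ((eax >> 16) & 0xf) << 4;      /* extended model ID */
    }

    printf("family %u model %u stepping %u\n", family, model, stepping);
    return 0;
}

With the defaults above, I would expect this to print 15/107/1 under
qemu-system-x86_64 -cpu max and 6/6/3 under qemu-system-i386 -cpu max.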
A further difference is that qemu-system-i386 does not appear to enable
the 'syscall' flag, but I've not figured out where that difference is
coming from in the code.
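
The observation itself is at least easy to confirm with the same kind of
guest-side check, since 'syscall' is bit 11 of CPUID.80000001H:EDX
(again only a sketch, assuming <cpuid.h>):

/* Print whether the 'syscall' feature bit is advertised to the guest. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        return 1;
    }
    printf("syscall: %s\n", (edx & (1u << 11)) ? "present" : "absent");
    return 0;
}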
With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|