On Wed, Apr 05, 2023 at 03:19:07PM -0600, Jim Fehlig wrote:
On 3/16/23 11:56, Jim Fehlig wrote:
> I just did a quick check with libvirt 9.1.0 (qemu is a bit older, at 7.1.0):
>
> # cat test.xml
> <domain type='kvm'>
> <name>test</name>
> <memory unit='KiB'>2097152</memory>
> <vcpu placement='static'>1</vcpu>
> <os>
> <type>hvm</type>
> <loader readonly='yes' type='pflash'>/usr/share/qemu/aavmf-aarch64-code.bin</loader>
> <nvram template='/usr/share/qemu/aavmf-aarch64-vars.bin'/>
> <boot dev='hd'/>
> </os>
> <clock offset='utc'/>
> <on_poweroff>destroy</on_poweroff>
> <on_reboot>restart</on_reboot>
> <on_crash>destroy</on_crash>
> <devices>
> <emulator>/usr/bin/qemu-system-aarch64</emulator>
> <disk type='file' device='disk'>
> <driver name='qemu' type='qcow2' discard='unmap'/>
> <source file='/var/lib/libvirt/images/test.qcow2'/>
> <target dev='vda' bus='virtio'/>
> </disk>
> </devices>
> </domain>
> # virsh create test.xml
> error: Failed to create domain from test.xml
> error: internal error: Unexpected enum value 0 for virDomainDeviceAddressType
>
> I don't _think_ it's a downstream bug, nor fixed in git in the meantime.
> It appears running the old integratorcp machine with libvirt is not
> possible.
I trimmed the config to remove things like virtio devices that are not
supported by the default machine type, but still it does not work with
libvirt:
# cat test.xml
<domain type='kvm'>
<name>test</name>
<memory unit='KiB'>2097152</memory>
<vcpu placement='static'>1</vcpu>
<os>
<type>hvm</type>
</os>
<devices>
<emulator>/usr/bin/qemu-system-aarch64</emulator>
</devices>
</domain>
# virsh create test.xml
error: Failed to create domain from test.xml
error: internal error: process exited while connecting to monitor:
2023-04-05T20:36:19.564896Z qemu-system-aarch64: Property
'integratorcp-machine.acpi' not found
As explained downthread by Peter, this is no longer the case as of
9.2.0. However, even though an empty integratorcp machine used to be
able to boot in 9.1.0, as you've found out, adding *any kind of
device* to it would result in a failure.
That's still the case: if you have a barebones definition such as
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/path/to/some/disk.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
then on startup you'll get
Unexpected enum value 0 for virDomainDeviceAddressType
(raised by qemuBuildVirtioDevGetConfig:1013). If you add
<address type='virtio-mmio'/>
libvirt itself will not have a problem with the configuration, but
QEMU will exit with
-device
{"driver":"virtio-blk-device","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}:
No 'virtio-bus' bus found for device 'virtio-blk-device'
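For reference, the full disk element at this point, i.e. the original
barebones definition plus the explicit virtio-mmio address, is:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/some/disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='virtio-mmio'/>
</disk>
```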
Finally, if you try using
<address type='pci'/>
you'll be greeted by
Could not find PCI controller with index '0' required for device at
address '0000:00:00.0'
Even going out of your way and adding a
<controller type='pci' model='pci-root'/>
will not get you much further, because QEMU will then just report
-device
{"driver":"virtio-blk-pci","bus":"pci","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}:
Bus 'pci' not found
In conclusion, there currently doesn't seem to be a way to define
a useful integratorcp-based VM in libvirt, which IMO means we can
safely change the default machine type for Arm architectures without
any concerns about breaking existing VMs.
I will look into whether the same can be said for RISC-V
architectures. Hopefully that's the case.
For the second, do any board types other than virt support KVM
acceleration? It appears to be the only one:
https://www.qemu.org/docs/master/system/target-arm.html
I guess it's a slippery slope. The virt board also requires specifying a cpu
model, with the only reasonable values being host and max
https://www.qemu.org/docs/master/system/arm/cpu-features.html#a-note-abou...
In fact, that doc implies host is the only choice: "but mostly if KVM is
enabled the host CPU type must be used". The status quo may be fine for
domain type qemu, but it seems there's room for improvement for kvm domains.
Yeah, KVM VMs need host-passthrough while TCG VMs need a named CPU
model. There are plans to implement proper named CPU models that work
across accelerators, so hopefully this difference will be removed at
some point in the future.
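As a concrete illustration of the current split (cortex-a57 is just an
example named model here, not necessarily what a future cross-accelerator
implementation would settle on), the two cases look roughly like:

```xml
<!-- KVM domain: host passthrough is effectively required today -->
<cpu mode='host-passthrough'/>

<!-- TCG domain: a named model must be used instead -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>cortex-a57</model>
</cpu>
```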
--
Andrea Bolognani / Red Hat / Virtualization