virsh create fails with "Unable to find any firmware to satisfy 'efi'" for aarch64 guest on macOS

I am having trouble when I `virsh create test.xml` with an aarch64 guest on a macOS (Apple Silicon) host. I've wrestled with a variety of issues, but the one I simply haven't been able to get past is this one regarding the UEFI firmware:

error: Failed to create domain from test.xml
error: operation failed: Unable to find any firmware to satisfy 'efi'

This is with `virsh --version` of 7.9.0 and `qemu-system-aarch64 --version` of 6.1.0, both installed via `brew` from its main 'homebrew/core' tap. I have confirmed that the loader/nvram files referenced do exist.

Running `qemu-system-aarch64 -L help` outputs two lines:

/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu-firmware
/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu

The …/share/qemu-firmware folder does NOT exist, but symlinking it to the …/share/qemu/firmware folder, which does exist, does not improve the situation.

I have also tried renaming the …/share/qemu/firmware folder (e.g. `mv firmware zzz-firmware`), since some notes I found led me to believe that libvirt might ignore my loader/nvram settings entirely if QEMU shipped the "new" firmware JSON configuration files there. But again, no improvement.

How can I debug this further? What should I try next?

thanks,
-natevw

--- the test.xml domain configuration I am trying ---

<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>test1</name>
  <uuid>D45FCA3E-B873-4608-A0B8-3D8529E7CFB7</uuid>
  <memory unit="MiB">2048</memory>
  <cpu mode="host-model" check="partial"/>
  <vcpu>2</vcpu>
  <clock offset="utc"/>
  <qemu:commandline>
    <qemu:arg value='-accel'/>
    <qemu:arg value='hvf'/>
  </qemu:commandline>
  <os firmware="efi">
    <type arch="aarch64" machine="virt">hvm</type>
    <loader readonly="yes" secure="no" type="pflash">/opt/homebrew/share/qemu/edk2-aarch64-code.fd</loader>
    <nvram template="/opt/homebrew/share/qemu/edk2-arm-vars.fd">/Users/me/vm_testing/vm-test1.efi.fd</nvram>
    <bootmenu enable="no"/>
    <boot dev="hd"/>
  </os>
  <features>
    <acpi/>
    <gic version="3"/>
    <pae/>
  </features>
  <devices>
    <emulator>/opt/homebrew/bin/qemu-system-aarch64</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/Users/me/vm_testing/vm-test1.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="network">
      <source network="default"/>
      <model type="virtio"/>
    </interface>
    <console type="pty">
      <target type="serial" port="0"/>
    </console>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
  </devices>
</domain>

On 11/27/21 00:08, Nathan Vander Wilt wrote:
I am having trouble when I `virsh create test.xml` with an aarch64 guest on a macOS (Apple Silicon) host. I've wrestled with a variety of issues but the one I simply haven't been able to get past is this regarding the UEFI firmware:
error: Failed to create domain from test.xml
error: operation failed: Unable to find any firmware to satisfy 'efi'
This is with `virsh --version` of 7.9.0 and `qemu-system-aarch64 --version` of 6.1.0, both installed via `brew` from its main 'homebrew/core' tap. I have confirmed that the loader/nvram files referenced do exist.
Running `qemu-system-aarch64 -L help` outputs two lines:
/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu-firmware
/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu
The …/share/qemu-firmware folder does NOT exist, but symlinking it to a …/share/qemu/firmware folder which does exist does not improve the situation.
I have also tried renaming the …/share/qemu/firmware folder (e.g. `mv firmware zzz-firmware`), since some notes I found led me to believe that libvirt might ignore my loader/nvram settings entirely if QEMU shipped the "new" firmware JSON configuration files there. But again, no improvement.
How can I debug this further? What should I try next?
thanks, -natevw
--- the test.xml domain configuration I am trying ---
<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>test1</name>
  <uuid>D45FCA3E-B873-4608-A0B8-3D8529E7CFB7</uuid>
  <memory unit="MiB">2048</memory>
  <cpu mode="host-model" check="partial"/>
  <vcpu>2</vcpu>
  <clock offset="utc"/>
  <qemu:commandline>
    <qemu:arg value='-accel'/>
    <qemu:arg value='hvf'/>
  </qemu:commandline>
  <os firmware="efi">
Since you are providing the paths to both the UEFI image and the varstore, you can drop this firmware="efi" attribute. It's what's causing trouble here.

A short trip into the not-so-distant past: when UEFI support was introduced to QEMU, libvirt came up with the <loader type="pflash">/path/to/uefi</loader> and <nvram/> combo. This was suboptimal, because users had to guess which firmware to select (it depends on the guest arch, whether secure boot is enabled, SMM mode, ...). So QEMU started shipping small, machine-readable description files alongside each BIOS/UEFI image, which libvirt parses in order to pick the best one for a given domain XML. And this is what firmware='efi' controls.

IOW, using firmware='efi' is incompatible with specifying paths in <loader/> and <nvram/>, and if you define this XML you'll see that the paths are not formatted back (e.g. in virsh dumpxml).
    <type arch="aarch64" machine="virt">hvm</type>
    <loader readonly="yes" secure="no" type="pflash">/opt/homebrew/share/qemu/edk2-aarch64-code.fd</loader>
    <nvram template="/opt/homebrew/share/qemu/edk2-arm-vars.fd">/Users/me/vm_testing/vm-test1.efi.fd</nvram>
    <bootmenu enable="no"/>
    <boot dev="hd"/>
  </os>
  <features>
    <acpi/>
    <gic version="3"/>
    <pae/>
  </features>
  <devices>
    <emulator>/opt/homebrew/bin/qemu-system-aarch64</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/Users/me/vm_testing/vm-test1.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="network">
      <source network="default"/>
      <model type="virtio"/>
    </interface>
    <console type="pty">
      <target type="serial" port="0"/>
    </console>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
  </devices>
</domain>
Michal
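For instance, a sketch of the corrected `<os>` element based on this advice, reusing the paths from the original post (whether these particular firmware files suit your guest is for you to verify): the firmware attribute is gone, while the explicit loader/nvram paths stay.

```xml
<os>
  <type arch="aarch64" machine="virt">hvm</type>
  <loader readonly="yes" secure="no" type="pflash">/opt/homebrew/share/qemu/edk2-aarch64-code.fd</loader>
  <nvram template="/opt/homebrew/share/qemu/edk2-arm-vars.fd">/Users/me/vm_testing/vm-test1.efi.fd</nvram>
  <bootmenu enable="no"/>
  <boot dev="hd"/>
</os>
```

With this form libvirt passes the given pflash images straight to QEMU instead of consulting the firmware descriptor files.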

On Fri, Nov 26, 2021 at 11:08 PM Michal Prívozník <mprivozn@redhat.com> wrote:
Since you are providing the paths to both the UEFI image and the varstore, you can drop this firmware="efi" attribute. It's what's causing trouble here.
Thank you, yes!
A short trip into the not-so-distant past: when UEFI support was introduced to QEMU, libvirt came up with the <loader type="pflash">/path/to/uefi</loader> and <nvram/> combo. This was suboptimal, because users had to guess which firmware to select (it depends on the guest arch, whether secure boot is enabled, SMM mode, ...). So QEMU started shipping small, machine-readable description files alongside each BIOS/UEFI image, which libvirt parses in order to pick the best one for a given domain XML. And this is what firmware='efi' controls. IOW, using firmware='efi' is incompatible with specifying paths in <loader/> and <nvram/>, and if you define this XML you'll see that the paths are not formatted back (e.g. in virsh dumpxml).
Okay, yes, this was unclear. It's not in the documentation afaict, and I saw some threads making it sound like it was the mere presence of the JSON config files that made libvirt ignore any loader/nvram configuration. But dropping the firmware attribute on the os element was indeed the trick to get rid of the EFI error. Thanks for the clarification!

regards,
-natevw

On 11/30/21 06:33, Nathan Vander Wilt wrote:
On Fri, Nov 26, 2021 at 11:08 PM Michal Prívozník <mprivozn@redhat.com> wrote:
Since you are providing the paths to both the UEFI image and the varstore, you can drop this firmware="efi" attribute. It's what's causing trouble here.
Thank you, yes!
A short trip into the not-so-distant past: when UEFI support was introduced to QEMU, libvirt came up with the <loader type="pflash">/path/to/uefi</loader> and <nvram/> combo. This was suboptimal, because users had to guess which firmware to select (it depends on the guest arch, whether secure boot is enabled, SMM mode, ...). So QEMU started shipping small, machine-readable description files alongside each BIOS/UEFI image, which libvirt parses in order to pick the best one for a given domain XML. And this is what firmware='efi' controls. IOW, using firmware='efi' is incompatible with specifying paths in <loader/> and <nvram/>, and if you define this XML you'll see that the paths are not formatted back (e.g. in virsh dumpxml).
Okay, yes, this was unclear. It's not in the documentation afaict, and I saw some threads making it sound like it was the mere presence of the JSON config files that made libvirt ignore any loader/nvram configuration. But dropping the firmware attribute on the os element was indeed the trick to get rid of the EFI error. Thanks for the clarification!
Yeah, writing documentation is hard. When doing so I tend to put myself into the shoes of a user, but in fact I can never escape my developer mindset. What is obvious to me is not obvious to users, but at the same time I'm unable to realize that. Having said that, if you have any suggestions, I'm more than happy to work them in.

Michal

On a Friday in 2021, Nathan Vander Wilt wrote:
I am having trouble when I `virsh create test.xml` with an aarch64 guest on a macOS (Apple Silicon) host. I've wrestled with a variety of issues but the one I simply haven't been able to get past is this regarding the UEFI firmware:
error: Failed to create domain from test.xml error: operation failed: Unable to find any firmware to satisfy 'efi'
This is with `virsh --version` of 7.9.0 and `qemu-system-aarch64 --version` of 6.1.0, both installed via the common `brew` tool from its main 'homebrew/core' tap. I have confirmed that the loader/nvram files referenced do exist.
Running `qemu-system-aarch64 -L help` outputs two lines:
/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu-firmware
/opt/homebrew/Cellar/qemu/6.1.0_1/bin/../share/qemu
The …/share/qemu-firmware folder does NOT exist, but symlinking it to a …/share/qemu/firmware folder which does exist does not improve the situation.
I have also tried renaming the …/share/qemu/firmware folder (e.g. `mv firmware zzz-firmware`), since some notes I found led me to believe that libvirt might ignore my loader/nvram settings entirely if QEMU shipped the "new" firmware JSON configuration files there. But again, no improvement.
How can I debug this further? What should I try next?
thanks, -natevw
Can you try with the latest libvirt? 7.10.0-rc2 was just tagged today and should be out this week:
https://listman.redhat.com/archives/libvirt-announce/2021-November/msg00002....

Andrea did some fixes that are supposed to help with Apple Silicon:
https://gitlab.com/libvirt/libvirt/-/issues/168

Jano

On Mon, Nov 29, 2021 at 2:28 AM Ján Tomko <jtomko@redhat.com> wrote:
Can you try with the latest libvirt? 7.10.0-rc2 was just tagged today and should be out this week: https://listman.redhat.com/archives/libvirt-announce/2021-November/msg00002....
Ah, but it looks like the arm64 -> VIR_ARCH_AARCH64 patch (https://github.com/ihsakashi/libvirt/commit/0f062221ae23e6ea0ed5e6ba65d47395...) is still in limbo? AFAICT that's the main issue I'm hitting now.
Andrea did some fixes that are supposed to help with Apple Silicon: https://gitlab.com/libvirt/libvirt/-/issues/168
Yes, I happened to find that thread a little while after posting here, and it has lots of tips, but it didn't directly address my confusion regarding how to manually specify the files. But based on the "internal error: undefined hardware architecture" I'm now getting, I think I will need some of the patches.

For now I've just wrestled through direct usage of QEMU from the command line; ironically, this whole exercise was an attempt to keep random build chains off my main "shiny new" macOS install itself. Thanks for the pointers, and glad these issues are gradually getting worked out. I think I must just still be among the early adopters on the M1 platform and am hitting some growing pains!

thanks again,
-natevw

On 11/30/21 06:49, Nathan Vander Wilt wrote:
On Mon, Nov 29, 2021 at 2:28 AM Ján Tomko <jtomko@redhat.com> wrote:
Can you try with the latest libvirt? 7.10.0-rc2 was just tagged today and should be out this week: https://listman.redhat.com/archives/libvirt-announce/2021-November/msg00002....
Ah, but it looks like the arm64 -> VIR_ARCH_AARCH64 patch (https://github.com/ihsakashi/libvirt/commit/0f062221ae23e6ea0ed5e6ba65d47395...) is still in limbo? AFAICT that's the main issue I'm hitting now.
Looking into virArchFromHost() I can see uname() being called, with the result passed to virArchFromString(). There, the uname machine string (equivalent to `uname -m` in the shell) is compared against the virArchData array:

https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virarch.c#L42

So what you are saying is that `uname -m` reports arm64 and not aarch64? If that's the case then we should revisit the patch you mention.
Andrea did some fixes that are supposed to help with Apple Silicon: https://gitlab.com/libvirt/libvirt/-/issues/168
Yes, I happened to find that thread a little while after posting here and it has lots of tips but didn't directly address my confusion regarding how to manually specify the files. But based on the "internal error: undefined hardware architecture" I'm now getting I think I will need some of the patches.
For now I've just wrestled through direct usage of QEMU from the command line as ironically this whole exercise was to try keeping random build chains off my main "shiny new" macOS install itself. Thanks for the pointers and glad these issues are gradually getting worked out. I think I just must still be in the early adopters on the M1 platform and hit some growing pains!
Yeah, unfortunately I don't have access to an M1 yet, so all I can give is suggestions.

Michal
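The lookup discussed above, and the alias-based fix the thread converges on, can be sketched like this. This is a simplified, hypothetical illustration; the real table and enum values live in libvirt's src/util/virarch.c, and the names here are not libvirt's.

```c
#include <stddef.h>
#include <string.h>

/* Sketch of virArchFromString()-style lookup: the uname(2) machine
 * string is matched against a table of canonical architecture names.
 * macOS reports "arm64", which is absent from the table, so without
 * the alias below the lookup fails and libvirt later reports
 * "undefined hardware architecture". */
static const char *const known_arches[] = { "i686", "x86_64", "aarch64", "ppc64" };

static const char *arch_from_string(const char *machine)
{
    size_t i;

    /* The fix: treat Apple's "arm64" spelling as an alias for "aarch64". */
    if (strcmp(machine, "arm64") == 0)
        machine = "aarch64";

    for (i = 0; i < sizeof(known_arches) / sizeof(known_arches[0]); i++) {
        if (strcmp(machine, known_arches[i]) == 0)
            return known_arches[i];
    }
    return NULL; /* no match: the "undefined hardware architecture" case */
}
```

With the alias in place, a host reporting "arm64" resolves to the same architecture entry that a Linux host reporting "aarch64" would.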

On Tue, Nov 30, 2021 at 09:54:38AM +0100, Michal Prívozník wrote:
On 11/30/21 06:49, Nathan Vander Wilt wrote:
Ah, but it looks like the arm64 -> VIR_ARCH_AARCH64 patch is still in limbo? AFAICT that's the main issue I'm hitting now.
Unfortunately that's the case.
Looking into virArchFromHost() I can see uname() being called, with the result passed to virArchFromString(). There, the uname machine string (equivalent to `uname -m` in the shell) is compared against the virArchData array:
https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virarch.c#L42
So what you are saying is that 'uname -m' reports arm64 and not aarch64? If that's the case then we should revisit the patch you mention.
Correct: on Apple Silicon Macs the architecture name is reported as "arm64", but our code expects it to be "aarch64", because that's what we get on Linux.

Michal, have you actually looked at the patch mentioned earlier? If not, you can perhaps do a clean-room implementation of the fix based on the information provided above and get us out of this stalemate?

It's quite a simple change, but having seen the original patch I feel like I couldn't possibly submit it myself and still be in the clear.

--
Andrea Bolognani / Red Hat / Virtualization

On 11/30/21 11:40, Andrea Bolognani wrote:
On Tue, Nov 30, 2021 at 09:54:38AM +0100, Michal Prívozník wrote:
On 11/30/21 06:49, Nathan Vander Wilt wrote:
Ah, but it looks like the arm64 -> VIR_ARCH_AARCH64 patch is still in limbo? AFAICT that's the main issue I'm hitting now.
Unfortunately that's the case.
Looking into virArchFromHost() I can see uname() being called, with the result passed to virArchFromString(). There, the uname machine string (equivalent to `uname -m` in the shell) is compared against the virArchData array:
https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virarch.c#L42
So what you are saying is that 'uname -m' reports arm64 and not aarch64? If that's the case then we should revisit the patch you mention.
Correct: on Apple Silicon Macs the architecture name is reported as "arm64", but our code expects it to be "aarch64" because that's what we get on Linux.
Michal, have you actually looked at the patch mentioned earlier? If not, you can perhaps do a clean room implementation of the fix based on the information provided above and get us out of this stalemate?
It's quite a simple change, but having seen the original patch I feel like I couldn't possibly submit it myself and still be in the clear.
Unfortunately, I did. But I think the whole area can be reworked a bit so that we detect both arm64 and aarch64, though in a different way than the original patch.

Michal

On Tue, Nov 30, 2021 at 02:40:07AM -0800, Andrea Bolognani wrote:
On Tue, Nov 30, 2021 at 09:54:38AM +0100, Michal Prívozník wrote:
On 11/30/21 06:49, Nathan Vander Wilt wrote:
Ah, but it looks like the arm64 -> VIR_ARCH_AARCH64 patch is still in limbo? AFAICT that's the main issue I'm hitting now.
Unfortunately that's the case.
Looking into virArchFromHost() I can see uname() being called, with the result passed to virArchFromString(). There, the uname machine string (equivalent to `uname -m` in the shell) is compared against the virArchData array:
https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virarch.c#L42
So what you are saying is that 'uname -m' reports arm64 and not aarch64? If that's the case then we should revisit the patch you mention.
Correct: on Apple Silicon Macs the architecture name is reported as "arm64", but our code expects it to be "aarch64" because that's what we get on Linux.
Michal, have you actually looked at the patch mentioned earlier? If not, you can perhaps do a clean room implementation of the fix based on the information provided above and get us out of this stalemate?
It's quite a simple change, but having seen the original patch I feel like I couldn't possibly submit it myself and still be in the clear.
We're overthinking things here. The way this change is implemented is the only way anyone would write this code, and it is a simple cut-and-paste from the code pattern in the lines above, so it is arguably already a derived work. This is not a bit of code that meets the criteria to taint you from a copyright POV.

I just implemented & posted the obvious fix in virArch, and it is identical to the patch referenced earlier in this thread, rather proving my point.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Nov 30, 2021 at 11:06:27AM +0000, Daniel P. Berrangé wrote:
On Tue, Nov 30, 2021 at 02:40:07AM -0800, Andrea Bolognani wrote:
Michal, have you actually looked at the patch mentioned earlier? If not, you can perhaps do a clean room implementation of the fix based on the information provided above and get us out of this stalemate?
It's quite a simple change, but having seen the original patch I feel like I couldn't possibly submit it myself and still be in the clear.
We're overthinking things here. The way this change is implemented is the only way anyone would write this code, and it is a simple cut-and-paste from the code pattern in the lines above, so it is arguably already a derived work. This is not a bit of code that meets the criteria to taint you from a copyright POV.
I just implemented & posted the obvious fix in virArch and it is identical to the patch referenced earlier in this thread, rather proving my point.
I trust your judgement on this, hence my R-b :) Thanks!

--
Andrea Bolognani / Red Hat / Virtualization
participants (5): Andrea Bolognani, Daniel P. Berrangé, Ján Tomko, Michal Prívozník, Nathan Vander Wilt