On a Thursday in 2021, Laszlo Ersek wrote:
On 01/28/21 16:15, Ján Tomko wrote:
> Sadly, the .replies file changes for older QEMUs are synthetic.
> Separated for easier review.
>
> Also available on gitlab:
> git fetch https://gitlab.com/janotomko/libvirt/ virtiofs-bootindex
>
> https://gitlab.com/janotomko/libvirt/-/tree/virtiofs-bootindex
>
> And a broken pipeline:
>
> https://gitlab.com/janotomko/libvirt/-/pipelines/248162273
>
> Ján Tomko (7):
> tests: switch vhost-user-fs-hugepages to use boot order
> conf: add boot order to filesystem
> qemu: add QEMU_CAPS_VHOST_USER_FS_BOOTINDEX
> fixup: vhost-user-fs-device properties
> fixup: renumber
> Add validation for virtiofs boot order setting
> qemu: format bootindex for vhost-user-fs
>
> docs/schemas/domaincommon.rng | 3 +
> src/conf/domain_conf.c | 5 +-
> src/conf/domain_validate.c | 17 ++-
> src/qemu/qemu_capabilities.c | 8 +
> src/qemu/qemu_capabilities.h | 1 +
> src/qemu/qemu_command.c | 3 +
> src/qemu/qemu_validate.c | 6 +
> .../caps_4.2.0.aarch64.replies | 131 ++++++++++++----
> .../caps_4.2.0.s390x.replies | 119 ++++++++++++---
> .../caps_4.2.0.x86_64.replies | 131 ++++++++++++----
> .../caps_5.0.0.aarch64.replies | 136 +++++++++++++----
> .../caps_5.0.0.ppc64.replies | 124 +++++++++++++---
> .../caps_5.0.0.riscv64.replies | 120 ++++++++++++---
> .../caps_5.0.0.x86_64.replies | 136 +++++++++++++----
> .../caps_5.1.0.x86_64.replies | 136 +++++++++++++----
> .../caps_5.2.0.aarch64.replies | 136 +++++++++++++----
> .../caps_5.2.0.ppc64.replies | 124 +++++++++++++---
> .../caps_5.2.0.riscv64.replies | 120 ++++++++++++---
> .../caps_5.2.0.s390x.replies | 124 +++++++++++++---
> .../caps_5.2.0.x86_64.replies | 136 +++++++++++++----
> .../caps_6.0.0.x86_64.replies | 140 ++++++++++++++----
> .../caps_6.0.0.x86_64.xml | 1 +
> ...vhost-user-fs-hugepages.x86_64-latest.args | 3 +-
> .../vhost-user-fs-hugepages.xml | 3 +-
> 24 files changed, 1534 insertions(+), 329 deletions(-)
>
I've applied this series locally on top of e59bb226b7d9 ("docs: link to
PCI docs from the kbase page", 2021-01-28), and tested it as follows:
- Added <boot order='N'/> to the virtio-fs element I already had; virsh
edit completed fine.
- Booted the OVMF guest once with N=1 and then separately with N=3,
while the SCSI system disk of the guest had <boot order='2'/> in both
cases. Checked the firmware log to verify the behavior -- it was OK in
both cases. (A sketch of the XML is below.)
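For context, the filesystem element ended up looking roughly like this
-- a minimal sketch with a placeholder export directory and mount tag,
and with the shared-memory backing that virtiofs needs omitted:

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <source dir='/path/to/export'/>   <!-- placeholder export dir -->
    <target dir='fs0'/>               <!-- placeholder mount tag -->
    <boot order='1'/>                 <!-- the new element from this series -->
  </filesystem>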
So please add the following to whichever patch it applies to:
Tested-by: Laszlo Ersek <lersek@redhat.com>
Thanks,
(I didn't explicitly run any test suite, nor did I attempt to verify
behavior with an older QEMU, so I figure my T-b does not apply to every
patch in the series.)
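(For completeness, running the suite from a meson build should just be
the standard test target -- a sketch, not something exercised here:

  meson test -C build                                # whole suite
  VIR_TEST_DEBUG=1 ./build/tests/qemuxml2argvtest    # single test, verbose
)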
Thank you Jano for implementing this feature.
I would have done it sooner but I had trouble locating my script for
tuning the caps replies :(
--*--
Some general notes/questions on testing:
I used the documentation at <https://libvirt.org/compiling.html#building>.
- I think the example pathname "/home/to/your/checkout/src/libvirtd"
should include the "build" directory name now, after the meson conversion.
That's a straightforward change, patch sent.
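A rough sketch of that run-from-the-checkout sequence with the "build"
prefix added (the daemon path and unit names are assumptions based on a
standard meson build on a systemd host, not quoted from the docs):

  # stop the packaged daemon and its activation socket first
  sudo systemctl stop libvirtd.service libvirtd.socket
  # then run the freshly built binary straight from the build tree
  sudo ./build/src/libvirtd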
- I had to stop / start "virtlogd" separately, too; noting that in the
docs could help.
I think virtlogd from the system would work too, but libvirtd in the
builddir probably does not have the right SELinux label.
Running 'make rpm' was the easiest way for me to test stuff that
required SELinux changes. I'm not sure what the easiest equivalent is
after the Meson conversion.
Alternatively, copying context from the system binaries works,
but I'm not sure whether that's something we want to include in our
scripts/ repo.
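As a sketch of that context-copying idea (assuming the distro packages
are installed, so the reference binaries carry the expected labels;
whether the file label alone gives the right domain transitions is a
separate question):

  # give the build-tree daemons the same SELinux type as the packaged ones
  chcon --reference=/usr/sbin/libvirtd build/src/libvirtd   # installed paths assumed
  chcon --reference=/usr/sbin/virtlogd build/src/virtlogd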
- I had to set SELinux to Permissive temporarily, otherwise QEMU
wouldn't start. A note on SELinux could be helpful.
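(The temporary switch being the usual toggle, something like:

  sudo setenforce 0   # Permissive while testing with the build-tree daemons
  # ... run the guests ...
  sudo setenforce 1   # back to Enforcing afterwards
)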
- I manually "power cycled" the virtual networks on my laptop as well,
because the dnsmasq command lines refer to the "lease helper" binaries,
and the latter are also specific to the libvirtd build. I'm not sure
this was really necessary, but better safe than sorry?...
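That power cycle is essentially the following, assuming the stock
"default" network (the network name is setup-specific):

  virsh net-destroy default   # tears down the network and its dnsmasq process
  virsh net-start default     # should re-spawn dnsmasq pointing at the lease
                              # helper of the currently running libvirtd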
- After completing the tests, I shut down the one-off virtlogd and
libvirtd processes (Ctrl-C), and started the system-wide binaries again,
with systemctl. Systemctl reports both of those as "running OK" now;
however, when I try to net-start the virtual networks again, with
"virsh", I get the following error:
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
I don't know what that's about. I guess I'll just reboot my laptop now :)
I suspect it has something to do with socket activation.
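If it is socket activation, restarting the socket units (not just
libvirtd.service) should recreate /var/run/libvirt/libvirt-sock -- a
guess, with unit names assumed from a standard install:

  sudo systemctl stop libvirtd.service
  sudo systemctl restart libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
  sudo systemctl start libvirtd.service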
Jano
Thanks!
Laszlo