[PATCH 0/3] qemu: Replace usb-storage with usb-bot
by Akihiko Odaki
usb-storage is a compound device that automatically creates a USB mass
storage device and a SCSI device as its backend. Unfortunately, it lacks
some configuration options that are usually present with a SCSI device
and, in particular, cannot represent a CD-ROM.
Replace usb-storage with usb-bot, which can be combined with a manually
created SCSI device. libvirt will configure the SCSI device in a way
identical to how QEMU does for usb-storage, except that it now respects
the configuration option to represent a CD-ROM.
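As an illustration, a minimal sketch of the kind of configuration this
enables: a CD-ROM attached to the USB bus, which libvirt would now back
with usb-bot plus a SCSI CD-ROM device instead of usb-storage (the source
path is a placeholder):
  <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/install.iso'/>
    <target dev='sda' bus='usb'/>
    <readonly/>
  </disk>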
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/368
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
Akihiko Odaki (3):
qemu_capabilities: Introduce QEMU_CAPS_DEVICE_USB_BOT
qemu: Replace usb-storage with usb-bot
qemu_capabilities: Retire QEMU_CAPS_DEVICE_USB_STORAGE
src/qemu/qemu_alias.c | 3 +-
src/qemu/qemu_capabilities.c | 19 +-
src/qemu/qemu_capabilities.h | 3 +-
src/qemu/qemu_command.c | 55 +++++-
src/qemu/qemu_command.h | 5 +
src/qemu/qemu_hotplug.c | 34 +++-
src/qemu/qemu_validate.c | 4 +-
.../qemucapabilitiesdata/caps_10.0.0_s390x.replies | 186 ++++---------------
tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 2 +-
.../caps_10.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 2 +-
.../caps_5.2.0_aarch64.replies | 174 ++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_5.2.0_ppc64.replies | 178 +++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_ppc64.xml | 2 +-
.../caps_5.2.0_riscv64.replies | 170 ++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_5.2.0_x86_64.replies | 190 +++++--------------
tests/qemucapabilitiesdata/caps_5.2.0_x86_64.xml | 2 +-
.../caps_6.0.0_aarch64.replies | 172 ++++-------------
tests/qemucapabilitiesdata/caps_6.0.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.0.0_x86_64.replies | 188 +++++--------------
tests/qemucapabilitiesdata/caps_6.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.1.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_6.1.0_x86_64.xml | 2 +-
.../caps_6.2.0_aarch64.replies | 178 ++++--------------
tests/qemucapabilitiesdata/caps_6.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.2.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.2.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 2 +-
.../caps_7.0.0_aarch64+hvf.replies | 182 +++++-------------
.../caps_7.0.0_aarch64+hvf.xml | 2 +-
.../caps_7.0.0_aarch64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.0.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.0.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.0.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.1.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.1.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 2 +-
tests/qemucapabilitiesdata/caps_7.2.0_ppc.replies | 186 ++++---------------
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 2 +-
.../caps_7.2.0_x86_64+hvf.replies | 206 +++++----------------
.../qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 2 +-
.../qemucapabilitiesdata/caps_7.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 2 +-
.../caps_8.0.0_riscv64.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.1.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_8.1.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 2 +-
.../caps_8.2.0_aarch64.replies | 190 ++++---------------
tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_armv7l.replies | 190 ++++---------------
tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 2 +-
.../caps_8.2.0_loongarch64.replies | 186 ++++---------------
.../caps_8.2.0_loongarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 2 +-
.../caps_9.1.0_riscv64.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.1.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_9.1.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.2.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_9.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 2 +-
tests/qemuhotplugtest.c | 8 +-
.../qemuxmlconfdata/disk-cache.x86_64-latest.args | 3 +-
.../disk-cdrom-bus-other.x86_64-latest.args | 3 +-
.../disk-device-removable.x86_64-latest.args | 3 +-
.../disk-usb-device.x86_64-latest.args | 3 +-
84 files changed, 1689 insertions(+), 5346 deletions(-)
---
base-commit: e5299ddf86121d3c792ca271ffcb54900eb19dc3
change-id: 20250304-bot-5d84aa3a5cfe
Best regards,
--
Akihiko Odaki <akihiko.odaki@daynix.com>
[PATCH 0/2] network: support NAT networking for FreeBSD/pf
by Roman Bogorodskiy
This series implements NAT networks support for FreeBSD using the Packet
Filter (pf) firewall.
The commit messages provide high-level details and limitations of the
current implementation; I'll use this cover letter to provide some more
technical details and to describe the testing I have performed for this
change.
Libvirt FreeBSD/pf NAT testing
For two networks:
virsh # net-dumpxml default
<network>
<name>default</name>
<uuid>68cd5419-9fda-4cf0-9ac6-2eb9c1ba41ed</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:db:0e:e5'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
virsh # net-dumpxml natnet
<network>
<name>natnet</name>
<uuid>d3c59659-3ceb-4482-a625-1f839a54429c</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:0a:fc:1d'/>
<ip address='10.0.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.100.2' end='10.0.100.254'/>
</dhcp>
</ip>
</network>
virsh #
The following rules are generated:
$ sudo pfctl -a '*' -sn
nat-anchor "libvirt/*" all {
nat-anchor "default" all {
nat pass on re0 inet from 192.168.122.0/24 to <natdst> -> (re0) port
1024:65535 round-robin
}
nat-anchor "natnet" all {
nat pass on re0 inet from 10.0.100.0/24 to <natdst> -> (re0) port
1024:65535 round-robin
}
}
$
$ sudo pfctl -a 'libvirt/default' -t natdst -T show
0.0.0.0/0
!192.168.122.0/24
!224.0.0.0/24
!255.255.255.255
$ sudo pfctl -a 'libvirt/natnet' -t natdst -T show
0.0.0.0/0
!10.0.100.0/24
!224.0.0.0/24
!255.255.255.255
$
$ sudo pfctl -a '*' -sr
scrub all fragment reassemble
anchor "libvirt/*" all {
anchor "default" all {
pass quick on virbr0 inet from 192.168.122.0/24 to 192.168.122.0/24
flags S/SA keep state
pass quick on virbr0 inet from 192.168.122.0/24 to 224.0.0.0/24
flags S/SA keep state
pass quick on virbr0 inet from 192.168.122.0/24 to 255.255.255.255
flags S/SA keep state
block drop on virbr0 all
}
anchor "natnet" all {
pass quick on virbr1 inet from 10.0.100.0/24 to 10.0.100.0/24 flags
S/SA keep state
pass quick on virbr1 inet from 10.0.100.0/24 to 224.0.0.0/24 flags
S/SA keep state
pass quick on virbr1 inet from 10.0.100.0/24 to 255.255.255.255
flags S/SA keep state
block drop on virbr1 all
}
}
pass all flags S/SA keep state
$
Create two guests attached to the "default" network, vmA and vmB.
vmA $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:67:eb:de brd ff:ff:ff:ff:ff:ff
inet 192.168.122.92/24 brd 192.168.122.255 scope global dynamic noprefixroute enp0s4
valid_lft 1082sec preferred_lft 1082sec
inet6 fe80::5054:ff:fe67:ebde/64 scope link noprefixroute
valid_lft forever preferred_lft forever
vmA $
vmB $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:d2:8b:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.154/24 metric 100 brd 192.168.122.255 scope global dynamic enp0s4
valid_lft 1040sec preferred_lft 1040sec
inet6 fe80::5054:ff:fed2:8b41/64 scope link
valid_lft forever preferred_lft forever
vmB $
Test NAT rules:
vmA $ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=14.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=10.1 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 10.099/11.835/14.710/2.047 ms
vmA $
vmB $ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=11.0 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=10.4 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 10.434/12.198/15.113/2.075 ms
vmB $
vmA $ curl wttr.in/?0Q
Fog
_ - _ - _ - +4(1) °C
_ - _ - _ ↙ 11 km/h
_ - _ - _ - 0 km
0.0 mm
vmA $
vmB $ curl wttr.in/?0Q
Fog
_ - _ - _ - +4(1) °C
_ - _ - _ ↙ 11 km/h
_ - _ - _ - 0 km
0.0 mm
vmB $
Inter-VM connectivity:
vmA $ ping -c 3 192.168.122.154
PING 192.168.122.154 (192.168.122.154) 56(84) bytes of data.
64 bytes from 192.168.122.154: icmp_seq=1 ttl=64 time=0.253 ms
64 bytes from 192.168.122.154: icmp_seq=2 ttl=64 time=0.226 ms
64 bytes from 192.168.122.154: icmp_seq=3 ttl=64 time=0.269 ms
--- 192.168.122.154 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.226/0.249/0.269/0.017 ms
vmA $
vmA $ ssh 192.168.122.154 uname
novel@192.168.122.154's password:
Linux
vmA $
Multicast test:
vmA $ iperf -s -u -B 224.0.0.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Joining multicast group 224.0.0.1
Server set to single client traffic mode (per multicast receive)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 1] local 224.0.0.1 port 5001 connected with 192.168.122.154 port 36963
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 1] 0.00-1.00 sec 131 KBytes 1.07 Mbits/sec 0.030 ms 0/91 (0%)
[ 1] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 0.022 ms 0/89 (0%)
[ 1] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 0.021 ms 0/89 (0%)
[ 1] 0.00-3.02 sec 389 KBytes 1.06 Mbits/sec 0.026 ms 0/271 (0%)
vmB $ iperf -c 224.0.0.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.122.154 port 36963 connected with 224.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-1.0000 sec 131 KBytes 1.07 Mbits/sec
[ 1] 1.0000-2.0000 sec 128 KBytes 1.05 Mbits/sec
[ 1] 2.0000-3.0000 sec 128 KBytes 1.05 Mbits/sec
[ 1] 0.0000-3.0173 sec 389 KBytes 1.06 Mbits/sec
[ 1] Sent 272 datagrams
vmB $
Broadcast test:
vmA $ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
net.ipv4.icmp_echo_ignore_broadcasts = 0
vmA $
vmB $ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
net.ipv4.icmp_echo_ignore_broadcasts = 0
vmB $
host $ ping 192.168.122.255
PING 192.168.122.255 (192.168.122.255): 56 data bytes
64 bytes from 192.168.122.154: icmp_seq=0 ttl=64 time=0.199 ms
64 bytes from 192.168.122.92: icmp_seq=0 ttl=64 time=0.227 ms (DUP!)
64 bytes from 192.168.122.154: icmp_seq=1 ttl=64 time=0.209 ms
64 bytes from 192.168.122.92: icmp_seq=1 ttl=64 time=0.235 ms (DUP!)
^C
--- 192.168.122.255 ping statistics ---
2 packets transmitted, 2 packets received, +2 duplicates, 0.0% packet loss
round-trip min/avg/max/stddev = 0.199/0.218/0.235/0.014 ms
This testing does not cover any negative scenarios, which are probably
not that important at this point.
Roman Bogorodskiy (2):
network: bridge_driver: add BSD implementation
network: introduce Packet Filter firewall backend
meson.build | 2 +
po/POTFILES | 2 +
src/network/bridge_driver_bsd.c | 107 +++++++++
src/network/bridge_driver_conf.c | 8 +
src/network/bridge_driver_linux.c | 2 +
src/network/bridge_driver_platform.c | 2 +
src/network/meson.build | 1 +
src/network/network_pf.c | 327 +++++++++++++++++++++++++++
src/network/network_pf.h | 26 +++
src/util/virfirewall.c | 4 +-
src/util/virfirewall.h | 2 +
11 files changed, 482 insertions(+), 1 deletion(-)
create mode 100644 src/network/bridge_driver_bsd.c
create mode 100644 src/network/network_pf.c
create mode 100644 src/network/network_pf.h
--
2.49.0
[PATCH 0/4] USB hostdev: allow addressing by port
by Maximilian Martin
This resubmission splits up the previous patch into multiple patches and
incorporates review comments from Michal Prívozník.
Currently, only vendor/product and bus/device matching are supported for USB host
devices. Neither of these provides a stable and persistent way of assigning a
specific host device to a guest. Vendor/product can be ambiguous, and device
numbers change on every enumeration.
This patch series adds bus/port matching, which allows a specific port on the host
to be specified using the dotted notation found in Linux's "devpath" sysfs attribute.
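As a sketch of the intended usage (the attribute name and its placement are
inferred from the description above, so treat the exact syntax as an
assumption), a host device pinned to a specific port might look like:
  <hostdev mode='subsystem' type='usb'>
    <source>
      <address bus='1' port='1.5.3'/>
    </source>
  </hostdev>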
This patch is based on the previous work of Thomas Hebb: https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/7...
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/513
Signed-off-by: Maximilian Martin <maximilian_martin@gmx.de>
Maximilian Martin (4):
virusb test data: add devpath files for port addressing
domain_conf, virhostdev, virusb, virusb test: add bus/port matching
schema: add USB port attribute
docs: add description for USB port matching
docs/formatdomain.rst | 29 ++--
src/conf/domain_conf.c | 69 +++++++-
src/conf/domain_conf.h | 1 +
src/conf/schemas/domaincommon.rng | 11 +-
src/hypervisor/virhostdev.c | 131 +++++++++------
src/libvirt_private.syms | 2 -
src/util/virusb.c | 156 ++++++------------
src/util/virusb.h | 32 ++--
tests/virusbtest.c | 149 ++++++++++++-----
.../sys_bus_usb/devices/1-1.5.3.1/devpath | 1 +
.../sys_bus_usb/devices/1-1.5.3.3/devpath | 1 +
.../sys_bus_usb/devices/1-1.5.3/devpath | 1 +
.../sys_bus_usb/devices/1-1.5.4/devpath | 1 +
.../sys_bus_usb/devices/1-1.5.5/devpath | 1 +
.../sys_bus_usb/devices/1-1.5.6/devpath | 1 +
.../sys_bus_usb/devices/1-1.5/devpath | 1 +
.../sys_bus_usb/devices/1-1.6/devpath | 1 +
.../sys_bus_usb/devices/1-1/devpath | 1 +
.../sys_bus_usb/devices/2-1.2/devpath | 1 +
.../sys_bus_usb/devices/2-1/devpath | 1 +
.../sys_bus_usb/devices/usb1/devpath | 1 +
.../sys_bus_usb/devices/usb2/devpath | 1 +
.../sys_bus_usb/devices/usb3/devpath | 1 +
.../sys_bus_usb/devices/usb4/devpath | 1 +
24 files changed, 351 insertions(+), 244 deletions(-)
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.3.1/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.3.3/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.3/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.4/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.5/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5.6/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.5/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1.6/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/1-1/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/2-1.2/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/2-1/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/usb1/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/usb2/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/usb3/devpath
create mode 100644 tests/virusbtestdata/sys_bus_usb/devices/usb4/devpath
--
2.39.5
[PATCH v3 0/6] qemu: acpi-generic-initiator support
by Andrea Righi
= Overview =
This patch set introduces support for acpi-generic-initiator devices,
as supported by QEMU [1].
The acpi-generic-initiator object is required to support Multi-Instance GPU
(MIG) configurations on NVIDIA GPUs [2]. MIG enables partitioning of GPU
resources into multiple isolated instances, each requiring a dedicated NUMA
node definition.
= Implementation =
This patch set implements the libvirt counterpart to the QEMU feature,
enabling users to configure acpi-generic-initiator objects within libvirt
domain XML.
This includes:
- adding XML syntax to define acpi-generic-initiator objects,
- resolving the acpi-generic-initiator definitions into the proper QEMU
command-line arguments,
- ensuring compatibility with existing NUMA configuration.
= Example =
- Domain XML:
```
...
<cpu mode='host-passthrough' check='none'>
<numa>
<cell id='0' cpus='0-15' memory='8388608' unit='KiB'/>
<cell id='1' memory='0' unit='KiB'/>
<cell id='2' memory='0' unit='KiB'/>
<cell id='3' memory='0' unit='KiB'/>
<cell id='4' memory='0' unit='KiB'/>
<cell id='5' memory='0' unit='KiB'/>
<cell id='6' memory='0' unit='KiB'/>
<cell id='7' memory='0' unit='KiB'/>
<cell id='8' memory='0' unit='KiB'/>
</numa>
</cpu>
...
<devices>
...
<hostdev mode='subsystem' type='pci' managed='no'>
<source>
<address domain='0x0009' bus='0x01' slot='0x00' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</hostdev>
<acpi-generic-initiator>
<alias name="gi1"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>1</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi2"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>2</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi3"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>3</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi4"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>4</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi5"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>5</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi6"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>6</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi7"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>7</numa-node>
</acpi-generic-initiator>
<acpi-generic-initiator>
<alias name="gi8"/>
<pci-dev>hostdev0</pci-dev>
<numa-node>8</numa-node>
</acpi-generic-initiator>
</devices>
```
- Generated QEMU command line options:
```
... /usr/bin/qemu-system-aarch64 \
...
-object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":8589934592}' \
-numa node,nodeid=0,cpus=0-15,memdev=ram-node0 \
-numa node,nodeid=1 \
-numa node,nodeid=2 \
-numa node,nodeid=3 \
-numa node,nodeid=4 \
-numa node,nodeid=5 \
-numa node,nodeid=6 \
-numa node,nodeid=7 \
-numa node,nodeid=8 \
...
-device '{"driver":"vfio-pci","host":"0009:01:00.0","id":"hostdev0","bus":"pci.3","addr":"0x0"}'
...
-object acpi-generic-initiator,id=gi1,pci-dev=hostdev0,node=1 \
-object acpi-generic-initiator,id=gi2,pci-dev=hostdev0,node=2 \
-object acpi-generic-initiator,id=gi3,pci-dev=hostdev0,node=3 \
-object acpi-generic-initiator,id=gi4,pci-dev=hostdev0,node=4 \
-object acpi-generic-initiator,id=gi5,pci-dev=hostdev0,node=5 \
-object acpi-generic-initiator,id=gi6,pci-dev=hostdev0,node=6 \
-object acpi-generic-initiator,id=gi7,pci-dev=hostdev0,node=7 \
-object acpi-generic-initiator,id=gi8,pci-dev=hostdev0,node=8
```
= References =
[1] https://lore.kernel.org/all/20231225045603.7654-2-ankita@nvidia.com/
[2] https://www.nvidia.com/en-in/technologies/multi-instance-gpu/
ChangeLog v2 -> v3:
- replaced <text/> with proper types in the XML schema
- avoid mixing g_free() and VIR_FREE()
- use virXMLPropString() instead of looping all XML nodes
- report proper errors with virReportError()
- use virBufferEscapeString() to process strings passed by the user
- fix broken formatting of function headers
- misc coding style fixes
ChangeLog v1 -> v2:
- split parser and driver changes in separate patches
- introduce a new qemu capability flag
- introduce test in qemuxmlconftest
Andrea Righi (6):
schema: Introduce acpi-generic-initiator definition
conf: Introduce acpi-generic-initiator device
qemu: Allow to define NUMA nodes without memory or CPUs assigned
qemu: capabilities: Introduce QEMU_CAPS_ACPI_GENERIC_INITIATOR
qemu: support acpi-generic-initiator
qemu: Add test case for acpi-generic-initiator
src/ch/ch_domain.c | 1 +
src/conf/domain_conf.c | 159 +++++++++++++++++++++
src/conf/domain_conf.h | 14 ++
src/conf/domain_postparse.c | 1 +
src/conf/domain_validate.c | 37 +++++
src/conf/numa_conf.c | 3 +
src/conf/schemas/domaincommon.rng | 19 +++
src/conf/virconftypes.h | 2 +
src/libxl/libxl_driver.c | 6 +
src/lxc/lxc_driver.c | 6 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 49 ++++++-
src/qemu/qemu_domain.c | 2 +
src/qemu/qemu_domain_address.c | 4 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_hotplug.c | 5 +
src/qemu/qemu_postparse.c | 1 +
src/qemu/qemu_validate.c | 1 +
src/test/test_driver.c | 4 +
.../caps_10.0.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 +
.../caps_9.2.0_aarch64+hvf.xml | 1 +
.../caps_9.2.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 +
.../acpi-generic-initiator.x86_64-latest.args | 55 +++++++
.../acpi-generic-initiator.x86_64-latest.xml | 102 +++++++++++++
tests/qemuxmlconfdata/acpi-generic-initiator.xml | 102 +++++++++++++
tests/qemuxmlconftest.c | 1 +
32 files changed, 581 insertions(+), 7 deletions(-)
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.xml
Question about SEV* guests and automatic firmware selection
by Jim Fehlig
Hi All,
While investigating an internal bug report, we noticed that a minimal firmware
auto-selection configuration along with SEV* fails to find a match. E.g. the
following config
<domain type="kvm">
<os firmware="efi">
<type arch="x86_64" machine="q35">hvm</type>
<boot dev="hd"/>
</os>
<launchSecurity type="sev">
<policy>0x07</policy>
</launchSecurity>
...
</domain>
Fails with "Unable to find 'efi' firmware that is compatible with the current
configuration". A firmware that should match has the following json description
{
"description": "UEFI firmware for x86_64, with AMD SEV",
"interface-types": [
"uefi"
],
"mapping": {
"device": "flash",
"mode": "stateless",
"executable": {
"filename": "/usr/share/qemu/ovmf-x86_64-sev.bin",
"format": "raw"
}
},
"targets": [
{
"architecture": "x86_64",
"machines": [
"pc-q35-*"
]
}
],
"features": [
"acpi-s4",
"amd-sev",
"amd-sev-es",
"amd-sev-snp",
"verbose-dynamic"
],
"tags": [
]
}
Auto-selection works fine if I specify a 'stateless' firmware, e.g. by
amending the above config with
<os firmware="efi">
<type arch="x86_64" machine="q35">hvm</type>
<loader stateless="yes"/>
<boot dev="hd"/>
</os>
Being unfamiliar with the firmware auto-selection code, I tried the below naive
hack, which only led to test failures and the subsequent runtime error "unable
to find any master var store for loader: /usr/share/qemu/ovmf-x86_64-sev.bin".
Should auto-selection work with the minimal config, or is it expected that
the user also specify a stateless firmware?
Regards,
Jim
diff --git a/src/qemu/qemu_firmware.c b/src/qemu/qemu_firmware.c
index 2d0ec0b4fa..660b74141a 100644
--- a/src/qemu/qemu_firmware.c
+++ b/src/qemu/qemu_firmware.c
@@ -1293,15 +1293,17 @@ qemuFirmwareMatchDomain(const virDomainDef *def,
return false;
}
- if (loader && loader->stateless == VIR_TRISTATE_BOOL_YES) {
- if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_STATELESS) {
- VIR_DEBUG("Discarding loader without stateless flash");
- return false;
- }
- } else {
- if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_SPLIT) {
- VIR_DEBUG("Discarding loader without split flash");
- return false;
+ if (loader) {
+ if (loader->stateless == VIR_TRISTATE_BOOL_YES) {
+ if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_STATELESS) {
+ VIR_DEBUG("Discarding loader without stateless flash");
+ return false;
+ }
+ } else if (loader->stateless == VIR_TRISTATE_BOOL_NO) {
+ if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_SPLIT) {
+ VIR_DEBUG("Discarding loader without split flash");
+ return false;
+ }
}
}
[PATCH RESEND 0/6] Add support for configuring PCI high memory MMIO size
by Matthew R. Ochs
Resending: the series has been rebased on the latest upstream.
This patch series adds support for configuring the PCI high memory MMIO
window size for aarch64 virt machine types. This feature has been merged
into the QEMU upstream master branch [1] and will be available in QEMU 10.0.
It allows users to configure the size of the high memory MMIO window above
4GB, which is particularly useful for systems with large PCI memory
requirements.
The feature is exposed through the domain XML as a new PCI feature:
<features>
<pci>
<highmem-mmio-size unit='G'>512</highmem-mmio-size>
</pci>
</features>
When enabled, this configures the size of the PCI high memory MMIO window
via QEMU's highmem-mmio-size machine property. The feature is only
available for aarch64 virt machine types and requires QEMU support.
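Assuming the property syntax added by the QEMU commit in [1], the generated
command line would presumably contain something along these lines (the
machine type and value are illustrative):
  qemu-system-aarch64 -machine virt,highmem-mmio-size=512G ...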
This series depends on [2] and should be applied on top of those patches.
For your convenience, this series is also available on Github [3].
[1] https://github.com/qemu/qemu/commit/f10104aeae3a17f181d5bb37b7fd7dad7fe86cba
[2] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/Z4...
[3] git fetch https://github.com/nvmochs/libvirt.git pci_highmem_mmio_size
Signed-off-by: Matthew R. Ochs <mochs@nvidia.com>
Matthew R. Ochs (6):
domain: Add PCI configuration feature infrastructure
schema: Add PCI configuration feature schema
conf: Add PCI configuration XML parsing and formatting
qemu: Add capability for PCI high memory MMIO size
qemu: Add command line support for PCI high memory MMIO size
tests: Add tests for machine PCI features
src/conf/domain_conf.c | 103 ++++++++++++++++++
src/conf/domain_conf.h | 6 +
src/conf/schemas/domaincommon.rng | 9 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 6 +
src/qemu/qemu_validate.c | 15 +++
.../caps_10.0.0_aarch64.replies | 10 ++
.../caps_10.0.0_aarch64.xml | 1 +
...rch64-virt-machine-pci.aarch64-latest.args | 31 ++++++
...arch64-virt-machine-pci.aarch64-latest.xml | 30 +++++
.../aarch64-virt-machine-pci.xml | 20 ++++
tests/qemuxmlconftest.c | 2 +
13 files changed, 236 insertions(+)
create mode 100644 tests/qemuxmlconfdata/aarch64-virt-machine-pci.aarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/aarch64-virt-machine-pci.aarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/aarch64-virt-machine-pci.xml
--
2.46.0
[PATCH v3 0/5] Adding POWER11 CPU Support
by Narayana Murty N
This patch series introduces the necessary changes to
support the POWER11 CPU and POWER11 host in libvirt.
Additionally, it updates the QEMU capabilities with
the latest QEMU version, v10.0.0-rc2, to include POWER11
in the capability tests. During testing of the v2 patches
I found a bug in QEMU which set the wrong default CPU for
pre-9.0 machines; that was fixed in QEMU v10.0.0-rc2.
Patch Summary:
Patch 0001: tests: Pin the pseries-2.7 tests to version 7.0.
Patch 0002: tests: qemuhotplugtest: Set the CPU version at the source as POWER9.
Patch 0003: Add capabilities test support for QEMU 10.0.0 on ppc64.
Patch 0004: Add POWER11 CPU model support.
Patch 0005: Add POWER11 host model support and accept case-insensitive model names (see the sketch after this list).
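As a hedged sketch of what patches 0004 and 0005 enable (the lowercase
model name follows the changelog below; treat the exact XML as an
assumption), a pseries guest could then request POWER11 compatibility
with:
  <cpu mode='host-model'>
    <model>power11</model>
  </cpu>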
The corresponding patches for the Linux kernel and QEMU
are already upstream. Relevant links are provided below:
Linux kernel patches:
1. Linux P11 support: commit c2ed087ed35c ("powerpc: Add Power11 architected and raw mode")
2. Linux P11 KVM support: commit 96e266e3bcd6 ("KVM: PPC: Book3S HV: Add Power11 capability support for Nested PAPR guests")
Qemu patches:
3. Qemu P11 support: commit 273db89bcaf4 ("ppc/pseries: Add Power11 cpu type")
4. Qemu P11 DD02.0 support: commit c0d964076c3e ("target/ppc: Add Power11 DD2.0 processor")
5. Qemu fix for pre-9.0 pseries machines changing the default CPU: commit 1490d0bcdfcb ("ppc/spapr: fix default cpu for pre-9.0 machines.")
Signed-off-by: Narayana Murty N <nnmlinux@linux.ibm.com>
----
Changelog:
-V3: https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/PG...
* Rebased to the top of the tree, as some tests were failing with
recent commits.
-V2: https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/JQ5UJFWJG35OFX54RQIDHFUSX3RIYIYP/?sort=thread
* Instead of removing the pseries-2.7 related tests, they are now pinned to the latest available capabilities version, 7.0.0.
* Patch 0003: the description explains why the CPU version changed.
* Changed the CPU model name to lowercase.
* Fixed qemuhotplugtest by setting the CPU version at the source as POWER9.
* Fixed the TCG test cases for the default CPU change on pseries machines prior to pseries-9.0.
* Fixed the compat model check to accept host CPU names in both lowercase and uppercase.
-V1: https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/4A3IND2QP5CE654XE755ICRZKLUNYXAE/
* Addressed v2 patch readability issues.
Narayana Murty N (5):
tests: Pin pseries-2.7 tests to the version 7.0
tests: qemuhotplugtest: Set the cpu version at source for PPC64 tests
tests: Add capabilities for QEMU 10.0.0 on ppc64
cpu_map: Add POWER11 cpu model support
cpu_ppc64: Add POWER11 hostmodel support and accept case insensitive
src/cpu/cpu_ppc64.c | 11 +-
src/cpu_map/index.xml | 1 +
src/cpu_map/meson.build | 1 +
src/cpu_map/ppc64_POWER11.xml | 6 +
tests/domaincapsdata/qemu_10.0.0.ppc64.xml | 191 +
.../caps_10.0.0_ppc64.replies | 39513 ++++++++++++++++
.../caps_10.0.0_ppc64.xml | 1090 +
.../ppc64-modern-bulk-domain.xml | 3 +-
.../ppc64-modern-individual-domain.xml | 3 +-
.../disk-floppy-pseries.ppc64-latest.xml | 2 +-
.../memory-hotplug-nvdimm-ppc64.xml | 3 +-
.../memory-hotplug-ppc64-nonuma.xml | 3 +
.../panic-pseries.ppc64-latest.args | 2 +-
.../panic-pseries.ppc64-latest.xml | 2 +-
...ault-cpu-kvm-pseries-2.7.ppc64-7.0.0.args} | 0
...fault-cpu-kvm-pseries-2.7.ppc64-7.0.0.xml} | 0
...ault-cpu-kvm-pseries-3.1.ppc64-latest.args | 2 +-
...fault-cpu-kvm-pseries-3.1.ppc64-latest.xml | 2 +-
...ault-cpu-kvm-pseries-4.2.ppc64-latest.args | 2 +-
...fault-cpu-kvm-pseries-4.2.ppc64-latest.xml | 2 +-
...ault-cpu-tcg-pseries-2.7.ppc64-7.0.0.args} | 0
...fault-cpu-tcg-pseries-2.7.ppc64-7.0.0.xml} | 0
...efault-models.ppc64-latest.abi-update.args | 4 +-
...default-models.ppc64-latest.abi-update.xml | 2 +-
...4-pseries-default-models.ppc64-latest.args | 4 +-
...64-pseries-default-models.ppc64-latest.xml | 2 +-
.../ppc64-pseries-graphics.ppc64-latest.args | 4 +-
.../ppc64-pseries-graphics.ppc64-latest.xml | 2 +-
.../ppc64-pseries-headless.ppc64-latest.args | 4 +-
.../ppc64-pseries-headless.ppc64-latest.xml | 2 +-
...eries-minimal.ppc64-latest.abi-update.args | 2 +-
...series-minimal.ppc64-latest.abi-update.xml | 2 +-
.../ppc64-pseries-minimal.ppc64-latest.args | 2 +-
.../ppc64-pseries-minimal.ppc64-latest.xml | 2 +-
.../ppc64-tpmproxy-single.ppc64-latest.args | 2 +-
.../ppc64-tpmproxy-single.ppc64-latest.xml | 2 +-
.../ppc64-tpmproxy-with-tpm.ppc64-latest.args | 2 +-
.../ppc64-tpmproxy-with-tpm.ppc64-latest.xml | 2 +-
.../pseries-basic.ppc64-latest.args | 2 +-
.../pseries-basic.ppc64-latest.xml | 2 +-
.../pseries-console-virtio.ppc64-latest.args | 2 +-
.../pseries-console-virtio.ppc64-latest.xml | 2 +-
...eries-cpu-compat-power11.ppc64-latest.args | 31 +
...series-cpu-compat-power11.ppc64-latest.err | 1 +
...series-cpu-compat-power11.ppc64-latest.xml | 29 +
.../pseries-cpu-compat-power11.xml | 19 +
.../pseries-cpu-le.ppc64-latest.args | 2 +-
.../pseries-cpu-le.ppc64-latest.xml | 2 +-
.../pseries-features.ppc64-latest.args | 2 +-
.../pseries-features.ppc64-latest.xml | 2 +-
.../pseries-hostdevs-1.ppc64-latest.args | 2 +-
.../pseries-hostdevs-1.ppc64-latest.xml | 2 +-
.../pseries-hostdevs-2.ppc64-latest.args | 2 +-
.../pseries-hostdevs-2.ppc64-latest.xml | 2 +-
.../pseries-hostdevs-3.ppc64-latest.args | 2 +-
.../pseries-hostdevs-3.ppc64-latest.xml | 2 +-
.../pseries-many-buses-1.ppc64-latest.args | 2 +-
.../pseries-many-buses-1.ppc64-latest.xml | 2 +-
.../pseries-many-buses-2.ppc64-latest.args | 2 +-
.../pseries-many-buses-2.ppc64-latest.xml | 2 +-
.../pseries-many-devices.ppc64-latest.args | 2 +-
.../pseries-many-devices.ppc64-latest.xml | 2 +-
.../pseries-nvram.ppc64-latest.args | 2 +-
.../pseries-nvram.ppc64-latest.xml | 2 +-
.../pseries-panic-missing.ppc64-latest.args | 2 +-
.../pseries-panic-missing.ppc64-latest.xml | 2 +-
...pseries-panic-no-address.ppc64-latest.args | 2 +-
.../pseries-panic-no-address.ppc64-latest.xml | 2 +-
...ries-phb-default-missing.ppc64-latest.args | 2 +-
...eries-phb-default-missing.ppc64-latest.xml | 2 +-
.../pseries-phb-numa-node.ppc64-latest.args | 2 +-
.../pseries-phb-numa-node.ppc64-latest.xml | 2 +-
.../pseries-phb-simple.ppc64-latest.args | 6 +-
.../pseries-phb-simple.ppc64-latest.xml | 2 +-
.../pseries-phb-user-alias.ppc64-latest.args | 6 +-
.../pseries-phb-user-alias.ppc64-latest.xml | 2 +-
.../pseries-serial-native.ppc64-latest.args | 2 +-
.../pseries-serial-native.ppc64-latest.xml | 2 +-
.../pseries-serial-pci.ppc64-latest.args | 2 +-
.../pseries-serial-pci.ppc64-latest.xml | 2 +-
.../pseries-serial-usb.ppc64-latest.args | 2 +-
.../pseries-serial-usb.ppc64-latest.xml | 2 +-
.../pseries-usb-default.ppc64-latest.args | 2 +-
.../pseries-usb-default.ppc64-latest.xml | 2 +-
.../pseries-usb-kbd.ppc64-latest.args | 2 +-
.../pseries-usb-kbd.ppc64-latest.xml | 2 +-
.../pseries-usb-multi.ppc64-latest.args | 2 +-
.../pseries-usb-multi.ppc64-latest.xml | 2 +-
...series-vio-user-assigned.ppc64-latest.args | 7 +-
...pseries-vio-user-assigned.ppc64-latest.xml | 2 +-
.../pseries-vio.ppc64-latest.args | 7 +-
.../pseries-vio.ppc64-latest.xml | 2 +-
...fault-pseries.ppc64-latest.abi-update.args | 2 +-
...efault-pseries.ppc64-latest.abi-update.xml | 2 +-
...ntroller-default-pseries.ppc64-latest.args | 2 +-
...ontroller-default-pseries.ppc64-latest.xml | 2 +-
...fault-unavailable-pseries.ppc64-latest.xml | 2 +-
tests/qemuxmlconftest.c | 8 +-
tests/testutilshostcpus.h | 11 +
tests/testutilsqemu.c | 4 +
tests/testutilsqemu.h | 1 +
101 files changed, 41008 insertions(+), 103 deletions(-)
create mode 100644 src/cpu_map/ppc64_POWER11.xml
create mode 100644 tests/domaincapsdata/qemu_10.0.0.ppc64.xml
create mode 100644 tests/qemucapabilitiesdata/caps_10.0.0_ppc64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_10.0.0_ppc64.xml
rename tests/qemuxmlconfdata/{ppc64-default-cpu-kvm-pseries-2.7.ppc64-latest.args => ppc64-default-cpu-kvm-pseries-2.7.ppc64-7.0.0.args} (100%)
rename tests/qemuxmlconfdata/{ppc64-default-cpu-kvm-pseries-2.7.ppc64-latest.xml => ppc64-default-cpu-kvm-pseries-2.7.ppc64-7.0.0.xml} (100%)
rename tests/qemuxmlconfdata/{ppc64-default-cpu-tcg-pseries-2.7.ppc64-latest.args => ppc64-default-cpu-tcg-pseries-2.7.ppc64-7.0.0.args} (100%)
rename tests/qemuxmlconfdata/{ppc64-default-cpu-tcg-pseries-2.7.ppc64-latest.xml => ppc64-default-cpu-tcg-pseries-2.7.ppc64-7.0.0.xml} (100%)
create mode 100644 tests/qemuxmlconfdata/pseries-cpu-compat-power11.ppc64-latest.args
create mode 100644 tests/qemuxmlconfdata/pseries-cpu-compat-power11.ppc64-latest.err
create mode 100644 tests/qemuxmlconfdata/pseries-cpu-compat-power11.ppc64-latest.xml
create mode 100644 tests/qemuxmlconfdata/pseries-cpu-compat-power11.xml
--
2.48.1
[PATCHv2 0/5] qemu: Introduce nvme disk emulation support
by honglei.wang@smartx.com
From: hongleiwang <honglei.wang@smartx.com>
QEMU has supported NVMe disk emulation for a long time;
see: https://qemu-project.gitlab.io/qemu/system/devices/nvme.html.
The following patches introduce the nvme-ns disk bus type:
a disk with nvme-ns as its bus is represented as an NVMe namespace
and needs to be attached to an NVMe controller. In XML, it can be
used like this:
<devices>
...
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/tmp/data.img'/>
<target dev='nvmensa' bus='nvme-ns'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='nvme' index='0'>
<serial>nvme-controller-serial-value</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
...
</devices>
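On the QEMU side, this presumably maps to the nvme controller and nvme-ns
namespace devices documented at the link above; a hedged sketch of the kind
of arguments libvirt might generate (the IDs and blockdev node name are
illustrative):
  -blockdev driver=raw,node-name=drive-nvme0,file.driver=file,file.filename=/tmp/data.img \
  -device nvme,id=nvme0,serial=nvme-controller-serial-value \
  -device nvme-ns,drive=drive-nvme0,bus=nvme0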
Signed-off-by: ray <honglei.wang@smartx.com>
---
Compared to patch v1, this version removes the nvme bus type implementation
and keeps only the nvme controller + nvme-ns bus approach.
ray (5):
qemu: Add support for NVMe namespace disk bus type
qemu_capabilities: Add support for nvme-ns bus capabilities
schema: Add nvme controller and nvme-ns bus definition
tests: Add test case for nvme-ns device configuration
NEWS: Document qemu nvme disk emulation feature
NEWS.rst | 17 +++++++++
src/conf/domain_conf.c | 39 ++++++++++++++++++++
src/conf/domain_conf.h | 7 ++++
src/conf/domain_postparse.c | 2 ++
src/conf/domain_validate.c | 4 ++-
src/conf/schemas/domaincommon.rng | 11 +++++-
src/conf/virconftypes.h | 2 ++
src/hyperv/hyperv_driver.c | 2 ++
src/qemu/qemu_alias.c | 1 +
src/qemu/qemu_capabilities.c | 5 +++
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 26 ++++++++++++++
src/qemu/qemu_domain_address.c | 5 +++
src/qemu/qemu_hotplug.c | 14 ++++++--
src/qemu/qemu_postparse.c | 1 +
src/qemu/qemu_validate.c | 18 ++++++++++
src/test/test_driver.c | 2 ++
src/util/virutil.c | 2 +-
src/vbox/vbox_common.c | 2 ++
src/vmx/vmx.c | 1 +
.../qemu_10.0.0-q35.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_10.0.0-q35.x86_64.xml | 1 +
.../qemu_10.0.0-tcg.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_10.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-tcg-virt.riscv64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0-virt.riscv64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.sparc.xml | 1 +
tests/domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.2.0-hvf.x86_64+hvf.xml | 1 +
tests/domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.2.0-tcg.x86_64+hvf.xml | 1 +
tests/domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.2.0.ppc.xml | 1 +
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 1 +
.../qemu_8.2.0-tcg-virt.loongarch64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0-virt.aarch64.xml | 1 +
.../domaincapsdata/qemu_8.2.0-virt.loongarch64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.armv7l.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.0.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.0.0.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_9.1.0-tcg-virt.riscv64.xml | 1 +
tests/domaincapsdata/qemu_9.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.1.0-virt.riscv64.xml | 1 +
tests/domaincapsdata/qemu_9.1.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_9.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_9.2.0-hvf.aarch64+hvf.xml | 1 +
.../qemu_9.2.0-q35.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_9.2.0-q35.x86_64.xml | 1 +
.../qemu_9.2.0-tcg.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_9.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_9.2.0.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_9.2.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 +
.../caps_10.0.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 1 +
.../qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 +
tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 +
.../caps_8.2.0_loongarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 +
.../caps_9.2.0_aarch64+hvf.xml | 1 +
tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml | 1 +
.../caps_9.2.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 +
.../disk-nvme-ns-device.x86_64-latest.args | 36 +++++++++++++++++++
.../disk-nvme-ns-device.x86_64-latest.xml | 42 ++++++++++++++++++++++
tests/qemuxmlconfdata/disk-nvme-ns-device.xml | 41 +++++++++++++++++++++
tests/qemuxmlconftest.c | 1 +
117 files changed, 370 insertions(+), 5 deletions(-)
create mode 100644 tests/qemuxmlconfdata/disk-nvme-ns-device.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/disk-nvme-ns-device.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/disk-nvme-ns-device.xml
--
2.11.0
[PATCH v2 0/5] docs: automated info about machine deprecation/removal info
by Daniel P. Berrangé
Since we deprecate and remove versioned machine types on a fixed
schedule, we can automatically ensure that the docs reflect the
latest version info, rather than requiring manual updates on each
dev cycle.
The first patch in this series removes the hack which postponed
automatic removal of versioned machine types to the 10.1.0 release,
since we're now in the 10.1.0 dev cycle.
The second patch in this series fixes the logic to ensure dev snapshots
and release candidates don't have an off-by-1 error in setting
deprecation and removal thresholds - they must predict the next formal
release version number.
The following three patches deal with the docs stuff.
With this series applied, all versioned machine types prior to 4.1
are now removed (hidden). We can delete the code at our leisure.
Changed in v2:
- Remove hack that temporarily postponed automatic deletion
of machine types
- Fix docs version info for stable bugfix releases
Daniel P. Berrangé (5):
Revert "include/hw: temporarily disable deletion of versioned machine
types"
include/hw/boards: cope with dev/rc versions in deprecation checks
docs/about/deprecated: auto-generate a note for versioned machine
types
docs/about/removed-features: auto-generate a note for versioned
machine types
include/hw/boards: add warning about changing deprecation logic
docs/about/deprecated.rst | 7 ++++
docs/about/removed-features.rst | 10 +++---
docs/conf.py | 39 +++++++++++++++++++++-
include/hw/boards.h | 58 +++++++++++++++++++++------------
4 files changed, 89 insertions(+), 25 deletions(-)
--
2.49.0
RFC: libvirt-tck bhyve/FreeBSD support
by Roman Bogorodskiy
Hi,
I'd like to test the bhyve driver using libvirt-tck, which seems to be
very useful both from the continuous integration perspective and for
making the bhyve driver behave more like other drivers, improving
integration with other tooling.
As a small proof of concept I made just a single test,
060-persistent-lifecycle.t, work with bhyve, though it was enough to
highlight a bunch of issues.
I've created a pull request on GitLab:
https://gitlab.com/libvirt/libvirt-tck/-/merge_requests/58
but apparently it's not very active there, so I'll briefly repeat it
here as well.
Issues I encountered:
* Network Filters not supported
It could be solved by implementing network filters (obviously),
but realistically that's probably not going to happen soon.
I guess there are at least 3 options to solve that:
- Add a test suite config knob to skip nwfilters
- Don't fail the test suite on non-implemented list_nwfilters
- In the bhyve driver, add a minimal implementation which
always returns 0 rules
* Hardcoded /dev/urandom RNG devices
As the bhyve driver now supports virtio-rnd with a /dev/random
backend, should there be a configuration knob for libvirt-tck to
override the RNG device definition?
* Missing IDE device support
Bhyve does not (and likely never will) support IDE devices, but it supports
SATA devices. Similar to the previous item, should there be a config knob?
Or maybe just change the default to SATA?
* Missing pty console support
Bhyve does not support pty consoles. The bhyve driver currently supports
the nmdm console, and bhyve also supports virtio-console and TCP socket
connections, which are not yet supported by the driver.
* Firmware loader specification
This is probably one of the trickiest ones. When no loader is specified,
the bhyve driver uses the bhyveload(8) loader, which can only boot
FreeBSD guests. That means that any other guest OS requires
specifying the BHYVE_UEFI.fd firmware (or using grub-bhyve); a sketch of
such a loader element follows after this list. I see two options for that:
- A test suite/framework configuration knob (again)
- Change the bhyve driver to default to BHYVE_UEFI.fd. I like this option
for two reasons: it makes bhyve domain XMLs more compatible with other
drivers, and it also looks like a more reasonable default because,
for example, I don't run FreeBSD guests inside bhyve very often, so for
most of my domains I have to set the loader. It's a bit inconvenient that
the firmware is not part of bhyve and needs to be installed through
the port, but we can make it possible to set the default firmware
via a build option and allow overriding it via the bhyve driver
configuration, which should provide enough flexibility to users, I guess.
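For reference, a hedged sketch of the loader element in question (the path
assumes the firmware installed by the FreeBSD uefi-edk2-bhyve port; adjust
to the actual install location):
  <os>
    <type>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/local/share/uefi-firmware/BHYVE_UEFI.fd</loader>
  </os>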
Any thoughts and recommendations on how to tackle this project are appreciated.
Thanks,
Roman