[PATCH 0/2] network: support NAT networking for FreeBSD/pf
by Roman Bogorodskiy
This series implements NAT networks support for FreeBSD using the Packet
Filter (pf) firewall.
The commit messages describe the high-level details and limitations of
the current implementation; I'll use this cover letter to provide some
more technical details and describe the testing I have performed for
this change.
Libvirt FreeBSD/pf NAT testing
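The 'natnet' network was created beforehand with the usual workflow; a
quick sketch, assuming natnet.xml contains the definition dumped below:
$ virsh net-define natnet.xml
$ virsh net-start natnet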
For two networks:
virsh # net-dumpxml default
<network>
<name>default</name>
<uuid>68cd5419-9fda-4cf0-9ac6-2eb9c1ba41ed</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:db:0e:e5'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
virsh # net-dumpxml natnet
<network>
<name>natnet</name>
<uuid>d3c59659-3ceb-4482-a625-1f839a54429c</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:0a:fc:1d'/>
<ip address='10.0.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.100.2' end='10.0.100.254'/>
</dhcp>
</ip>
</network>
virsh #
The following rules are generated:
$ sudo pfctl -a '*' -sn
nat-anchor "libvirt/*" all {
nat-anchor "default" all {
nat pass on re0 inet from 192.168.122.0/24 to <natdst> -> (re0) port 1024:65535 round-robin
}
nat-anchor "natnet" all {
nat pass on re0 inet from 10.0.100.0/24 to <natdst> -> (re0) port 1024:65535 round-robin
}
}
$
$ sudo pfctl -a 'libvirt/default' -t natdst -T show
0.0.0.0/0
!192.168.122.0/24
!224.0.0.0/24
!255.255.255.255
$ sudo pfctl -a 'libvirt/natnet' -t natdst -T show
0.0.0.0/0
!10.0.100.0/24
!224.0.0.0/24
!255.255.255.255
$
$ sudo pfctl -a '*' -sr
scrub all fragment reassemble
anchor "libvirt/*" all {
anchor "default" all {
pass quick on virbr0 inet from 192.168.122.0/24 to 192.168.122.0/24 flags S/SA keep state
pass quick on virbr0 inet from 192.168.122.0/24 to 224.0.0.0/24 flags S/SA keep state
pass quick on virbr0 inet from 192.168.122.0/24 to 255.255.255.255 flags S/SA keep state
block drop on virbr0 all
}
anchor "natnet" all {
pass quick on virbr1 inet from 10.0.100.0/24 to 10.0.100.0/24 flags S/SA keep state
pass quick on virbr1 inet from 10.0.100.0/24 to 224.0.0.0/24 flags S/SA keep state
pass quick on virbr1 inet from 10.0.100.0/24 to 255.255.255.255 flags S/SA keep state
block drop on virbr1 all
}
}
pass all flags S/SA keep state
$
Create two guests attached to the "default" network, vmA and vmB.
vmA $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:67:eb:de brd ff:ff:ff:ff:ff:ff
inet 192.168.122.92/24 brd 192.168.122.255 scope global dynamic noprefixroute enp0s4
valid_lft 1082sec preferred_lft 1082sec
inet6 fe80::5054:ff:fe67:ebde/64 scope link noprefixroute
valid_lft forever preferred_lft forever
vmA $
vmB $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:d2:8b:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.154/24 metric 100 brd 192.168.122.255 scope global dynamic enp0s4
valid_lft 1040sec preferred_lft 1040sec
inet6 fe80::5054:ff:fed2:8b41/64 scope link
valid_lft forever preferred_lft forever
vmB $
Test NAT rules:
vmA $ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=14.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=10.1 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 10.099/11.835/14.710/2.047 ms
vmA $
vmB $ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=11.0 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=10.4 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 10.434/12.198/15.113/2.075 ms
vmB $
vmA $ curl wttr.in/?0Q
Fog
_ - _ - _ - +4(1) °C
_ - _ - _ ↙ 11 km/h
_ - _ - _ - 0 km
0.0 mm
vmA $
vmB $ curl wttr.in/?0Q
Fog
_ - _ - _ - +4(1) °C
_ - _ - _ ↙ 11 km/h
_ - _ - _ - 0 km
0.0 mm
vmB $
Inter-VM connectivity:
vmA $ ping -c 3 192.168.122.154
PING 192.168.122.154 (192.168.122.154) 56(84) bytes of data.
64 bytes from 192.168.122.154: icmp_seq=1 ttl=64 time=0.253 ms
64 bytes from 192.168.122.154: icmp_seq=2 ttl=64 time=0.226 ms
64 bytes from 192.168.122.154: icmp_seq=3 ttl=64 time=0.269 ms
--- 192.168.122.154 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.226/0.249/0.269/0.017 ms
vmA $
vmA $ ssh 192.168.122.154 uname
novel(a)192.168.122.154's password:
Linux
vmA $
Multicast test:
vmA $ iperf -s -u -B 224.0.0.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Joining multicast group 224.0.0.1
Server set to single client traffic mode (per multicast receive)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 1] local 224.0.0.1 port 5001 connected with 192.168.122.154 port 36963
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 1] 0.00-1.00 sec 131 KBytes 1.07 Mbits/sec 0.030 ms 0/91 (0%)
[ 1] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 0.022 ms 0/89 (0%)
[ 1] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 0.021 ms 0/89 (0%)
[ 1] 0.00-3.02 sec 389 KBytes 1.06 Mbits/sec 0.026 ms 0/271 (0%)
vmB $ iperf -c 224.0.0.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.122.154 port 36963 connected with 224.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-1.0000 sec 131 KBytes 1.07 Mbits/sec
[ 1] 1.0000-2.0000 sec 128 KBytes 1.05 Mbits/sec
[ 1] 2.0000-3.0000 sec 128 KBytes 1.05 Mbits/sec
[ 1] 0.0000-3.0173 sec 389 KBytes 1.06 Mbits/sec
[ 1] Sent 272 datagrams
vmB $
Broadcast test:
vmA $ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
net.ipv4.icmp_echo_ignore_broadcasts = 0
vmA $
vmB $ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
net.ipv4.icmp_echo_ignore_broadcasts = 0
vmB $
host $ ping 192.168.122.255
PING 192.168.122.255 (192.168.122.255): 56 data bytes
64 bytes from 192.168.122.154: icmp_seq=0 ttl=64 time=0.199 ms
64 bytes from 192.168.122.92: icmp_seq=0 ttl=64 time=0.227 ms (DUP!)
64 bytes from 192.168.122.154: icmp_seq=1 ttl=64 time=0.209 ms
64 bytes from 192.168.122.92: icmp_seq=1 ttl=64 time=0.235 ms (DUP!)
^C
--- 192.168.122.255 ping statistics ---
2 packets transmitted, 2 packets received, +2 duplicates, 0.0% packet loss
round-trip min/avg/max/stddev = 0.199/0.218/0.235/0.014 ms
This testing does not cover any negative scenarios, which are probably
not that important at this point.
Roman Bogorodskiy (2):
network: bridge_driver: add BSD implementation
network: introduce Packet Filter firewall backend
meson.build | 2 +
po/POTFILES | 2 +
src/network/bridge_driver_bsd.c | 107 +++++++++
src/network/bridge_driver_conf.c | 8 +
src/network/bridge_driver_linux.c | 2 +
src/network/bridge_driver_platform.c | 2 +
src/network/meson.build | 1 +
src/network/network_pf.c | 327 +++++++++++++++++++++++++++
src/network/network_pf.h | 26 +++
src/util/virfirewall.c | 4 +-
src/util/virfirewall.h | 2 +
11 files changed, 482 insertions(+), 1 deletion(-)
create mode 100644 src/network/bridge_driver_bsd.c
create mode 100644 src/network/network_pf.c
create mode 100644 src/network/network_pf.h
--
2.49.0
[PATCH] qemuxmlactivetest: Don't segfault when capability XMLs are invalid
by Peter Krempa
From: Peter Krempa <pkrempa(a)redhat.com>
This is purely a devel-time problem in the test suite.
'qemuxmlactivetest' invokes the whole test worker twice, once for the
inactive output and a second time for the active one.
Now 'testQemuInfoInitArgs' returns a failure if the XML is invalid, and
the test is skipped. On a second invocation, though, it returns 0 if
'testQemuInfoSetArgs' was not invoked in the meantime, which makes it
seem as if it had succeeded and leads to a crash in code that assumes
certain pointers are valid.
Use the same interlocking as 'qemuxmlconftest' to skip the second
invocation when the first one fails.
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
tests/qemuxmlactivetest.c | 67 +++++++++++++++++++++++++--------------
1 file changed, 43 insertions(+), 24 deletions(-)
diff --git a/tests/qemuxmlactivetest.c b/tests/qemuxmlactivetest.c
index b132b91623..de2b1e48eb 100644
--- a/tests/qemuxmlactivetest.c
+++ b/tests/qemuxmlactivetest.c
@@ -87,6 +87,45 @@ testRunStatus(const char *name,
}
+static int
+testqemuActiveXML2XMLCommonPrepare(testQemuInfo *info)
+{
+ if (info->prepared)
+ return 0;
+
+ if (testQemuInfoInitArgs((testQemuInfo *) info) < 0)
+ goto error;
+
+ virFileCacheClear(driver.qemuCapsCache);
+
+ if (qemuTestCapsCacheInsert(driver.qemuCapsCache, info->qemuCaps) < 0)
+ goto error;
+
+ if (!(info->def = virDomainDefParseFile(info->infile,
+ driver.xmlopt, NULL,
+ info->parseFlags)))
+ goto error;
+
+ if (!virDomainDefCheckABIStability(info->def, info->def, driver.xmlopt)) {
+ VIR_TEST_DEBUG("ABI stability check failed on %s", info->infile);
+ goto error;
+ }
+
+ /* make sure that the XML definition looks active, by setting an ID
+ * as otherwise the XML formatter will simply assume that it's inactive */
+ if (info->def->id == -1)
+ info->def->id = 1337;
+
+ info->prepared = true;
+ return 0;
+
+ error:
+ info->prep_skip = true;
+ info->prepared = true;
+ return -1;
+}
+
+
static int
testqemuActiveXML2XMLCommon(testQemuInfo *info,
bool live)
@@ -95,31 +134,11 @@ testqemuActiveXML2XMLCommon(testQemuInfo *info,
const char *outfile = info->out_xml_active;
unsigned int format_flags = VIR_DOMAIN_DEF_FORMAT_SECURE;
- /* Prepare the test data and parse the input just once */
- if (!info->def) {
- if (testQemuInfoInitArgs((testQemuInfo *) info) < 0)
- return -1;
-
- virFileCacheClear(driver.qemuCapsCache);
+ if (info->prep_skip)
+ return EXIT_AM_SKIP;
- if (qemuTestCapsCacheInsert(driver.qemuCapsCache, info->qemuCaps) < 0)
- return -1;
-
- if (!(info->def = virDomainDefParseFile(info->infile,
- driver.xmlopt, NULL,
- info->parseFlags)))
- return -1;
-
- if (!virDomainDefCheckABIStability(info->def, info->def, driver.xmlopt)) {
- VIR_TEST_DEBUG("ABI stability check failed on %s", info->infile);
- return -1;
- }
-
- /* make sure that the XML definition looks active, by setting an ID
- * as otherwise the XML formatter will simply assume that it's inactive */
- if (info->def->id == -1)
- info->def->id = 1337;
- }
+ if (testqemuActiveXML2XMLCommonPrepare(info) < 0)
+ return -1;
if (!live) {
format_flags |= VIR_DOMAIN_DEF_FORMAT_INACTIVE;
--
2.49.0
Question about SEV* guests and automatic firmware selection
by Jim Fehlig
Hi All,
While investigating an internal bug report, we noticed that a minimal firmware
auto-selection configuration along with SEV* fails to find a match. E.g. the
following config
<domain type="kvm">
<os firmware="efi">
<type arch="x86_64" machine="q35">hvm</type>
<boot dev="hd"/>
</os>
<launchSecurity type="sev">
<policy>0x07</policy>
</launchSecurity>
...
</domain>
Fails with "Unable to find 'efi' firmware that is compatible with the current
configuration". A firmware that should match has the following json description
{
"description": "UEFI firmware for x86_64, with AMD SEV",
"interface-types": [
"uefi"
],
"mapping": {
"device": "flash",
"mode": "stateless",
"executable": {
"filename": "/usr/share/qemu/ovmf-x86_64-sev.bin",
"format": "raw"
}
},
"targets": [
{
"architecture": "x86_64",
"machines": [
"pc-q35-*"
]
}
],
"features": [
"acpi-s4",
"amd-sev",
"amd-sev-es",
"amd-sev-snp",
"verbose-dynamic"
],
"tags": [
]
}
Auto-selection works fine if I specify a 'stateless' firmware, e.g. amend the
above config with
<os firmware="efi">
<type arch="x86_64" machine="q35">hvm</type>
<loader stateless="yes"/>
<boot dev="hd"/>
</os>
Being unfamiliar with the firmware auto-selection code, I tried the below naive
hack, which only led to test failures and the subsequent runtime error "unable
to find any master var store for loader: /usr/share/qemu/ovmf-x86_64-sev.bin".
Should auto-selection work with the minimal config, or is it expected that
the user also specify a stateless firmware?
Regards,
Jim
diff --git a/src/qemu/qemu_firmware.c b/src/qemu/qemu_firmware.c
index 2d0ec0b4fa..660b74141a 100644
--- a/src/qemu/qemu_firmware.c
+++ b/src/qemu/qemu_firmware.c
@@ -1293,15 +1293,17 @@ qemuFirmwareMatchDomain(const virDomainDef *def,
return false;
}
- if (loader && loader->stateless == VIR_TRISTATE_BOOL_YES) {
- if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_STATELESS) {
- VIR_DEBUG("Discarding loader without stateless flash");
- return false;
- }
- } else {
- if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_SPLIT) {
- VIR_DEBUG("Discarding loader without split flash");
- return false;
+ if (loader) {
+ if (loader->stateless == VIR_TRISTATE_BOOL_YES) {
+ if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_STATELESS) {
+ VIR_DEBUG("Discarding loader without stateless flash");
+ return false;
+ }
+ } else if (loader->stateless == VIR_TRISTATE_BOOL_NO) {
+ if (flash->mode != QEMU_FIRMWARE_FLASH_MODE_SPLIT) {
+ VIR_DEBUG("Discarding loader without split flash");
+ return false;
+ }
}
}
[PATCH] conf: Fix handling of vlan with one tagged vlan id
by Leigh Brown
Yanqiu Zhang reported [1] that when a vlan is defined with a single
vlan id and native mode set to tagged, the vlan was not set up correctly
using the standard Linux bridge functionality. This is due to a conflict
between the XML validation and the bridge functionality.
The XML validation will allow a single vlan id to be configured with
native mode set to tagged without setting the vlan mode to trunked. If
the vlan is explicitly set to "trunk='no'" then the validation will
correctly reject the configuration.
Rather than force the user to specify "trunk='yes'", update the code to
infer trunk='yes' if a single vlan id is configured with native mode set
to a non-default value. It already infers trunk='yes' if more than one
vlan id is specified.
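For illustration, this is the shape of configuration that hits the
problem; the interface details here are made up, only the <vlan> element
matters:
<interface type='bridge'>
  <source bridge='br0'/>
  <vlan>
    <tag id='42' nativeMode='tagged'/>
  </vlan>
  <model type='virtio'/>
</interface>
With this change, such a configuration is treated as trunk='yes'
automatically instead of producing a broken bridge setup.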
Reported-by: Yanqiu Zhang <yanqzhan(a)redhat.com>
Closes: https://gitlab.com/libvirt/libvirt/-/issues/767 [1]
Signed-off-by: Leigh Brown <leigh(a)solinno.co.uk>
---
src/conf/netdev_vlan_conf.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/src/conf/netdev_vlan_conf.c b/src/conf/netdev_vlan_conf.c
index 1ac66aec54..300d0d8e86 100644
--- a/src/conf/netdev_vlan_conf.c
+++ b/src/conf/netdev_vlan_conf.c
@@ -87,12 +87,13 @@ virNetDevVlanParse(xmlNodePtr node, xmlXPathContextPtr ctxt, virNetDevVlan *def)
def->nTags = nTags;
- /* now that we know how many tags there are, look for an explicit
- * trunk setting.
+ /* In case the trunk attribute is not specified, infer it from
+ * the number of tags or use of native vlan mode.
*/
- if (nTags > 1)
+ if (nTags > 1 || def->nativeMode != 0)
def->trunk = true;
+ /* Look for and validate an explicit trunk setting. */
ctxt->node = node;
if ((trunk = virXPathString("string(./@trunk)", ctxt)) != NULL) {
def->trunk = STRCASEEQ(trunk, "yes");
--
2.49.0
[PATCH v2] virhostcpu: Fix potential use of unallocated memory
by Felix Huettner
On a host that has a large number of cpus offline, the count of host
cpus and the last bit set in the bitmap returned by
virHostCPUGetOnlineBitmap might diverge significantly. This can e.g. be
the case when disabling smt via /sys/devices/system/cpu/smt/control.
On the host this looks like:
```
$ cat /sys/devices/system/cpu/present
0-255
$ cat /sys/devices/system/cpu/online
0-127
```
However, in this case virBitmapToData previously allocated only 16 bytes
for the output bitmap, because the last set bit falls in byte 15.
Users of virHostCPUGetMap, however, rely on "cpumap" containing enough
space for all present cpus (so they would expect 32 bytes in this case).
E.g. cmdNodeCpuMap relies on this for its output: it actually reads 32
bytes from the start of the "cpumap" buffer, of which the last 16 bytes
are uninitialized in this case.
This manifests itself in flapping outputs of "virsh nodecpumap --pretty" like:
```
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,202
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,197
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,196-197
```
This in turn potentially causes users of this data to report wrong cpu
counts.
Note that this only seems to happen with at least 256 physical cpus
where at least 128 are offline.
We fix this by preallocating the expected bitmap size.
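To make the size mismatch concrete, here is a standalone sketch (not
libvirt code; the arithmetic mirrors the description above):
```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int present = 256;     /* CPUs in /sys/devices/system/cpu/present */
    int last_online = 127; /* highest bit set in the online bitmap */

    /* What virBitmapToData sized the buffer to: just enough bytes to
     * cover the highest set bit. */
    int allocated = last_online / CHAR_BIT + 1;

    /* What callers like cmdNodeCpuMap read: one bit per present CPU,
     * rounded up to whole bytes. */
    int expected = (present + CHAR_BIT - 1) / CHAR_BIT;

    /* prints: allocated=16 bytes, callers read=32 bytes, so the last
     * 16 bytes read were never written */
    printf("allocated=%d bytes, callers read=%d bytes\n",
           allocated, expected);
    return 0;
}
```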
Signed-off-by: Felix Huettner <felix.huettner(a)stackit.cloud>
---
src/util/virhostcpu.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/src/util/virhostcpu.c b/src/util/virhostcpu.c
index 5dbcc8987c..4805484b1d 100644
--- a/src/util/virhostcpu.c
+++ b/src/util/virhostcpu.c
@@ -1090,28 +1090,26 @@ virHostCPUGetMap(unsigned char **cpumap,
unsigned int flags)
{
g_autoptr(virBitmap) cpus = NULL;
- int ret = -1;
- int dummy;
+ int ncpus = virHostCPUGetCount();
virCheckFlags(0, -1);
if (!cpumap && !online)
- return virHostCPUGetCount();
+ return ncpus;
if (!(cpus = virHostCPUGetOnlineBitmap()))
- goto cleanup;
+ return -1;
+
+ if (cpumap) {
+ int len = (ncpus + CHAR_BIT) / CHAR_BIT;
+ *cpumap = g_new0(unsigned char, len);
+ virBitmapToDataBuf(cpus, *cpumap, len);
+ }
- if (cpumap)
- virBitmapToData(cpus, cpumap, &dummy);
if (online)
*online = virBitmapCountBits(cpus);
- ret = virHostCPUGetCount();
-
- cleanup:
- if (ret < 0 && cpumap)
- VIR_FREE(*cpumap);
- return ret;
+ return ncpus;
}
base-commit: c5a73f75bc6cae4f466d0a6344d5b3277ac9c2f4
--
2.43.0
[PATCH] virhostcpu: Fix potential use of unallocated memory
by Felix Huettner
On a host that has a large number of cpus offline, the count of host
cpus and the last bit set in the bitmap returned by
virHostCPUGetOnlineBitmap might diverge significantly. This can e.g. be
the case when disabling smt via /sys/devices/system/cpu/smt/control.
On the host this looks like:
```
$ cat /sys/devices/system/cpu/present
0-255
$ cat /sys/devices/system/cpu/online
0-127
```
However, in this case virBitmapToData previously allocated only 16 bytes
for the output bitmap, because the last set bit falls in byte 15.
Users of virHostCPUGetMap, however, rely on "cpumap" containing enough
space for all present cpus (so they would expect 32 bytes in this case).
E.g. cmdNodeCpuMap relies on this for its output: it actually reads 32
bytes from the start of the "cpumap" buffer, of which the last 16 bytes
are uninitialized in this case.
This manifests itself in flapping outputs of "virsh nodecpumap --pretty" like:
```
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,202
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,197
$ virsh nodecpumap --pretty
CPUs present: 256
CPUs online: 128
CPU map: 0-127,192,194,196-197
```
This in turn potentially causes users of this data to report wrong cpu
counts.
Note that this only seems to happen with at least 256 physical cpus
where at least 128 are offline.
We fix this by preallocating the expected bitmap size.
Signed-off-by: Felix Huettner <felix.huettner(a)stackit.cloud>
---
src/util/virhostcpu.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/src/util/virhostcpu.c b/src/util/virhostcpu.c
index 5dbcc8987c..626faa88cf 100644
--- a/src/util/virhostcpu.c
+++ b/src/util/virhostcpu.c
@@ -1091,22 +1091,26 @@ virHostCPUGetMap(unsigned char **cpumap,
{
g_autoptr(virBitmap) cpus = NULL;
int ret = -1;
- int dummy;
virCheckFlags(0, -1);
+ ret = virHostCPUGetCount();
+
if (!cpumap && !online)
- return virHostCPUGetCount();
+ return ret;
if (!(cpus = virHostCPUGetOnlineBitmap()))
goto cleanup;
- if (cpumap)
- virBitmapToData(cpus, cpumap, &dummy);
+ if (cpumap) {
+ int len = (ret + CHAR_BIT) / CHAR_BIT;
+ *cpumap = g_new0(unsigned char, len);
+ virBitmapToDataBuf(cpus, *cpumap, len);
+ }
+
if (online)
*online = virBitmapCountBits(cpus);
- ret = virHostCPUGetCount();
cleanup:
if (ret < 0 && cpumap)
base-commit: c5a73f75bc6cae4f466d0a6344d5b3277ac9c2f4
--
2.43.0
[PATCH 0/3] qemu: Replace usb-storage with usb-bot
by Akihiko Odaki
usb-storage is a compound device that automatically creates a USB mass
storage device and a SCSI device as its backend. Unfortunately, it lacks
some configuration options that are usually present with a SCSI device,
and in particular cannot represent a CD-ROM.
Replace usb-storage with usb-bot, which can be combined with a manually
created SCSI device. libvirt will configure the SCSI device identically
to how QEMU does for usb-storage, except that it now respects the
configuration option to represent a CD-ROM.
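A sketch of what this means on the QEMU command line; the device ids and
properties here are illustrative, not taken from the patch:
# before: compound device, backend SCSI device implicit and not configurable
-device usb-storage,drive=drive-usb-disk0
# after: explicit bulk-only transport device plus a manually created
# SCSI device, so a CD-ROM can be expressed with scsi-cd
-device usb-bot,id=usb-disk0
-device scsi-cd,bus=usb-disk0.0,lun=0,drive=drive-usb-disk0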
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/368
Signed-off-by: Akihiko Odaki <akihiko.odaki(a)daynix.com>
---
Akihiko Odaki (3):
qemu_capabilities: Introduce QEMU_CAPS_DEVICE_USB_BOT
qemu: Replace usb-storage with usb-bot
qemu_capabilities: Retire QEMU_CAPS_DEVICE_USB_STORAGE
src/qemu/qemu_alias.c | 3 +-
src/qemu/qemu_capabilities.c | 19 +-
src/qemu/qemu_capabilities.h | 3 +-
src/qemu/qemu_command.c | 55 +++++-
src/qemu/qemu_command.h | 5 +
src/qemu/qemu_hotplug.c | 34 +++-
src/qemu/qemu_validate.c | 4 +-
.../qemucapabilitiesdata/caps_10.0.0_s390x.replies | 186 ++++---------------
tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 2 +-
.../caps_10.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 2 +-
.../caps_5.2.0_aarch64.replies | 174 ++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_5.2.0_ppc64.replies | 178 +++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_ppc64.xml | 2 +-
.../caps_5.2.0_riscv64.replies | 170 ++++-------------
tests/qemucapabilitiesdata/caps_5.2.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_5.2.0_x86_64.replies | 190 +++++--------------
tests/qemucapabilitiesdata/caps_5.2.0_x86_64.xml | 2 +-
.../caps_6.0.0_aarch64.replies | 172 ++++-------------
tests/qemucapabilitiesdata/caps_6.0.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.0.0_x86_64.replies | 188 +++++--------------
tests/qemucapabilitiesdata/caps_6.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.1.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_6.1.0_x86_64.xml | 2 +-
.../caps_6.2.0_aarch64.replies | 178 ++++--------------
tests/qemucapabilitiesdata/caps_6.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.2.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_6.2.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 2 +-
.../caps_7.0.0_aarch64+hvf.replies | 182 +++++-------------
.../caps_7.0.0_aarch64+hvf.xml | 2 +-
.../caps_7.0.0_aarch64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.0.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.0.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.0.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.1.0_ppc64.replies | 182 +++++-------------
tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 2 +-
.../qemucapabilitiesdata/caps_7.1.0_x86_64.replies | 194 +++++--------------
tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 2 +-
tests/qemucapabilitiesdata/caps_7.2.0_ppc.replies | 186 ++++---------------
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 2 +-
.../caps_7.2.0_x86_64+hvf.replies | 206 +++++----------------
.../qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 2 +-
.../qemucapabilitiesdata/caps_7.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 2 +-
.../caps_8.0.0_riscv64.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.1.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_8.1.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 2 +-
.../caps_8.2.0_aarch64.replies | 190 ++++---------------
tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_armv7l.replies | 190 ++++---------------
tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 2 +-
.../caps_8.2.0_loongarch64.replies | 186 ++++---------------
.../caps_8.2.0_loongarch64.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_8.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.0.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 2 +-
.../caps_9.1.0_riscv64.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.1.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_9.1.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 2 +-
.../qemucapabilitiesdata/caps_9.2.0_s390x.replies | 182 ++++--------------
tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml | 2 +-
.../qemucapabilitiesdata/caps_9.2.0_x86_64.replies | 206 +++++----------------
tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 2 +-
tests/qemuhotplugtest.c | 8 +-
.../qemuxmlconfdata/disk-cache.x86_64-latest.args | 3 +-
.../disk-cdrom-bus-other.x86_64-latest.args | 3 +-
.../disk-device-removable.x86_64-latest.args | 3 +-
.../disk-usb-device.x86_64-latest.args | 3 +-
84 files changed, 1689 insertions(+), 5346 deletions(-)
---
base-commit: e5299ddf86121d3c792ca271ffcb54900eb19dc3
change-id: 20250304-bot-5d84aa3a5cfe
Best regards,
--
Akihiko Odaki <akihiko.odaki(a)daynix.com>
[PATCH 0/2] qemucapabilitiestest: Update of x86_64 test data for qemu-10.0 release
by Peter Krempa
Peter Krempa (2):
qemucapabilitiestest: Final update for qemu-10.0 release on x86_64
qemucapabilitiestest: Final update for qemu-10.0 release on x86_64 of
the 'amdsev' variant
.../caps_10.0.0_x86_64+amdsev.replies | 199 ++++++++++--------
.../caps_10.0.0_x86_64+amdsev.xml | 9 +-
.../caps_10.0.0_x86_64.replies | 16 +-
.../caps_10.0.0_x86_64.xml | 12 +-
4 files changed, 127 insertions(+), 109 deletions(-)
--
2.49.0
[PATCH] rpm: Enable KVM for riscv64 on RHEL 10+
by Andrea Bolognani
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
libvirt.spec.in | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index cb41ea1de1..9217820137 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -8,7 +8,9 @@
%define arches_qemu_kvm %{ix86} x86_64 %{power64} %{arm} aarch64 s390x riscv64
%if 0%{?rhel}
- %if 0%{?rhel} > 8
+ %if 0%{?rhel} >= 10
+ %define arches_qemu_kvm x86_64 aarch64 s390x riscv64
+ %elif 0%{?rhel} >= 9
%define arches_qemu_kvm x86_64 aarch64 s390x
%else
%define arches_qemu_kvm x86_64 %{power64} aarch64 s390x
--
2.48.1
Re: [PATCH] qemuSnapshotDeleteValidate: Fix crash when disk is
not found in VM definition
by jungleman759
Hi
Thanks for following up, and sorry for the delay in getting back to you.
You're right to suspect the issue might be related to device changes. Here’s how the crash can be triggered:
The VM initially uses a SATA controller, with a disk defined as:
<controller type="scsi" index="0" model="lsilogic"></controller>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/Testguest.qcow2'/>
  <target dev='sda' bus='sata'/>
</disk>
A snapshot is created at this point, which records the disk as sda.
Later, the VM is reconfigured to use a virtio controller, and the disk is now assigned as vda.
When the VM is running and the snapshot is deleted, the snapshot code still expects to find a disk named sda in the current VM definition.
Because of this mismatch, qemuDomainDiskByName() returns NULL, and the crash occurs when the result is used without a null check.
This can easily happen during controller or disk bus reconfiguration between snapshot and deletion. The patch adds sanity checks to ensure we don’t dereference a null pointer in this situation.
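A rough manual reproducer along those lines (hypothetical domain name;
exact steps may vary):
$ virsh snapshot-create-as Testguest snap1   # snapshot records the disk as 'sda'
$ virsh shutdown Testguest
$ virsh edit Testguest                       # switch the disk to <target dev='vda' bus='virtio'/>
$ virsh start Testguest
$ virsh snapshot-delete Testguest snap1      # qemuDomainDiskByName() returns NULL -> crash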
Let me know if you’d like me to adjust the wording in the error messages or add a reproducer for automated testing.
Best regards,
kaihuan
On Wed, 22 Jan 2025 at 21:05, Martin Kletzander <mkletzan(a)redhat.com> wrote:
On Fri, Nov 29, 2024 at 10:56:45PM +0800, kaihuan wrote:
>qemuDomainDiskByName() can return a NULL pointer on failure.
>But this returned value in qemuSnapshotDeleteValidate is not checked. It will make libvirtd crash.
>
Hi, looking through old unreviewed patches I found this one. Sorry for
the wait. Can you explain whether you found out how to trigger that?
This looks like there is some other issue that causes a snapshot to have
a disk that is not in the domain. Does that happen when you want to
delete a snapshot after unplugging a disk?
>Signed-off-by: kaihuan <jungleman759(a)gmail.com>
>---
> src/qemu/qemu_snapshot.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
>diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
>index 18b2e478f6..bcbd913073 100644
>--- a/src/qemu/qemu_snapshot.c
>+++ b/src/qemu/qemu_snapshot.c
>@@ -4242,8 +4242,19 @@ qemuSnapshotDeleteValidate(virDomainObj *vm,
> virDomainDiskDef *vmdisk = NULL;
> virDomainDiskDef *disk = NULL;
>
>- vmdisk = qemuDomainDiskByName(vm->def, snapDisk->name);
>- disk = qemuDomainDiskByName(snapdef->parent.dom, snapDisk->name);
The function calls already report an error when the disk is not found,
so there is no need to rewrite that error in my opinion.
>+ if (!(vmdisk = qemuDomainDiskByName(vm->def, snapDisk->name))) {
>+ virReportError(VIR_ERR_OPERATION_FAILED,
>+ _("disk '%1$s' referenced by snapshot '%2$s' not found in the current definition"),
>+ snapDisk->name, snap->def->name);
>+ return -1;
>+ }
>+
>+ if (!(disk = qemuDomainDiskByName(snapdef->parent.dom, snapDisk->name))) {
>+ virReportError(VIR_ERR_OPERATION_FAILED,
>+ _("disk '%1$s' referenced by snapshot '%2$s' not found in the VM definition of the deleted snapshot"),
>+ snapDisk->name, snap->def->name);
>+ return -1;
>+ }
>
> if (!virStorageSourceIsSameLocation(vmdisk->src, disk->src)) {
> virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
>--
>2.33.1.windows.1
>