[PATCH] lib: Document VIR_DUMP_LIVE flag quirk
by Michal Privoznik
From: Michal Privoznik <mprivozn(a)redhat.com>
The virDomainCoreDump() API has the VIR_DUMP_LIVE flag, which is
documented to leave vCPUs running while the dump of guest memory is
being made. Well, this is not the case for QEMU, which pauses
vCPUs unconditionally (it calls vm_stop() in dump_init()).
Document this quirk and also mention it in the 'virsh dump --live'
manpage.
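For illustration, a minimal caller-side sketch of requesting a live dump
(the connection URI, domain name and dump path are made up for the example);
as this patch documents, the QEMU driver may still pause the vCPUs anyway:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom;

        if (!conn)
            return 1;
        if (!(dom = virDomainLookupByName(conn, "demo"))) {
            virConnectClose(conn);
            return 1;
        }

        /* Request a "live" dump; despite VIR_DUMP_LIVE, the QEMU driver
         * may still suspend the vCPUs while the dump is written. */
        if (virDomainCoreDump(dom, "/var/tmp/demo.dump", VIR_DUMP_LIVE) < 0)
            fprintf(stderr, "dump failed\n");

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }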
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/646
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
docs/manpages/virsh.rst | 3 ++-
src/libvirt-domain.c | 2 ++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 3a00778467..14f22b32c3 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -2927,7 +2927,8 @@ dump
Dumps the core of a domain to a file for analysis.
If *--live* is specified, the domain continues to run until the core
-dump is complete, rather than pausing up front.
+dump is complete, rather than pausing up front. Note, however, that the
+hypervisor might still decide to pause the guest's vCPUs.
If *--crash* is specified, the domain is halted with a crashed status,
rather than merely left in a paused state.
If *--reset* is specified, the domain is reset after successful dump.
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 93e8f5b853..6d1cd2dba1 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -1445,6 +1445,8 @@ virDomainSaveImageDefineXML(virConnectPtr conn, const char *file,
* the guest to run; otherwise, the guest is suspended during the dump.
* VIR_DUMP_RESET flag forces reset of the guest after dump.
* The above three flags are mutually exclusive.
+ * However, note that even if the VIR_DUMP_LIVE flag is specified, the
+ * hypervisor might temporarily suspend the guest vCPUs anyway.
*
* Additionally, if @flags includes VIR_DUMP_BYPASS_CACHE, then libvirt
* will attempt to bypass the file system cache while creating the file,
--
2.49.0
3 weeks
[PATCH 00/11] Support for emulated NVMe disks in VMX and QEMU
by Martin Kletzander
I took the liberty of adjusting Hongwei's QEMU series and made further
changes so that the common code can be used for reporting NVMe disks from
VMX as well. I added SoBs where I thought applicable, but feel free to correct
me and/or agree with me.
One of the patches in the middle of this series fixes
https://issues.redhat.com/browse/RHEL-7390, but the series adds more than
just that.
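For context, the "nvmeXnY(pZ)" target names mentioned below follow the Linux
NVMe naming convention nvme<controller>n<namespace>[p<partition>]. A rough
standalone sketch of parsing such a name (illustrative only, not libvirt's
actual helper from this series):

    #include <stdio.h>

    /* Parse names such as "nvme0n1" or "nvme0n1p2".  Partition is set to 0
     * when no "pZ" suffix is present.  Returns 0 on success, -1 otherwise. */
    static int parse_nvme_name(const char *name, unsigned int *ctrl,
                               unsigned int *ns, unsigned int *part)
    {
        int off = 0;
        int off2 = 0;

        *part = 0;
        if (sscanf(name, "nvme%un%u%n", ctrl, ns, &off) != 2)
            return -1;
        if (name[off] == '\0')
            return 0;
        if (sscanf(name + off, "p%u%n", part, &off2) != 1 ||
            name[off + off2] != '\0')
            return -1;
        return 0;
    }

    int main(void)
    {
        unsigned int c, n, p;

        if (parse_nvme_name("nvme0n1p2", &c, &n, &p) == 0)
            printf("controller %u, namespace %u, partition %u\n", c, n, p);
        return 0;
    }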
Martin Kletzander (11):
docs, conf, schemas: Add support for NVMe controller
util: Add support for parsing nvmeXnY(pZ) strings
conf: Add virDomainDeviceFindNvmeController
docs, conf, schemas: Add support for NVMe disks
vmx: Add support for NVMe disks
qemu_capabilities: Add NVMe controller and disk capabilities
qemu_capabilities: Add emulated NVMe disk support to domain
capabilities
qemu: Add support for NVMe controllers
qemu: Add support for emulated NVMe disks
NEWS: vmx support for NVMe disks
NEWS: qemu support for emulated NVMe disks
NEWS.rst | 15 +++
docs/formatdomain.rst | 19 +++-
src/bhyve/bhyve_command.c | 1 +
src/conf/domain_conf.c | 96 ++++++++++++++++-
src/conf/domain_conf.h | 10 ++
src/conf/domain_postparse.c | 2 +
src/conf/domain_validate.c | 4 +-
src/conf/schemas/domaincommon.rng | 22 +++-
src/conf/virconftypes.h | 2 +
src/hyperv/hyperv_driver.c | 2 +
src/libxl/libxl_driver.c | 2 +-
src/qemu/qemu_alias.c | 1 +
src/qemu/qemu_capabilities.c | 7 ++
src/qemu/qemu_capabilities.h | 2 +
src/qemu/qemu_command.c | 33 +++++-
src/qemu/qemu_domain_address.c | 5 +
src/qemu/qemu_hotplug.c | 10 +-
src/qemu/qemu_postparse.c | 1 +
src/qemu/qemu_validate.c | 65 ++++++++++--
src/test/test_driver.c | 2 +
src/util/virutil.c | 94 ++++++++++++++--
src/util/virutil.h | 7 +-
src/vbox/vbox_common.c | 4 +-
src/vmx/vmx.c | 85 ++++++++++++++-
src/vz/vz_sdk.c | 6 +-
.../qemu_10.0.0-q35.x86_64+amdsev.xml | 1 +
.../domaincapsdata/qemu_10.0.0-q35.x86_64.xml | 1 +
.../qemu_10.0.0-tcg.x86_64+amdsev.xml | 1 +
.../domaincapsdata/qemu_10.0.0-tcg.x86_64.xml | 1 +
.../qemu_10.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.s390x.xml | 1 +
.../qemu_10.0.0.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_10.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 1 +
.../qemu_7.2.0-hvf.x86_64+hvf.xml | 1 +
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 1 +
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 1 +
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.2.0.ppc.xml | 1 +
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 1 +
.../qemu_8.2.0-tcg-virt.loongarch64.xml | 1 +
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 1 +
.../qemu_8.2.0-virt.aarch64.xml | 1 +
.../qemu_8.2.0-virt.loongarch64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.armv7l.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_9.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_9.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_9.1.0-q35.x86_64.xml | 1 +
.../qemu_9.1.0-tcg-virt.riscv64.xml | 1 +
.../domaincapsdata/qemu_9.1.0-tcg.x86_64.xml | 1 +
.../qemu_9.1.0-virt.riscv64.xml | 1 +
tests/domaincapsdata/qemu_9.1.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_9.1.0.x86_64.xml | 1 +
.../qemu_9.2.0-hvf.aarch64+hvf.xml | 1 +
.../qemu_9.2.0-q35.x86_64+amdsev.xml | 1 +
.../domaincapsdata/qemu_9.2.0-q35.x86_64.xml | 1 +
.../qemu_9.2.0-tcg.x86_64+amdsev.xml | 1 +
.../domaincapsdata/qemu_9.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_9.2.0.s390x.xml | 1 +
.../qemu_9.2.0.x86_64+amdsev.xml | 1 +
tests/domaincapsdata/qemu_9.2.0.x86_64.xml | 1 +
.../genericxml2xmlindata/controller-nvme.xml | 21 ++++
.../disk-nvme-invalid-serials.xml | 29 +++++
tests/genericxml2xmlindata/disk-nvme.xml | 32 ++++++
tests/genericxml2xmloutdata/disk-nvme.xml | 38 +++++++
tests/genericxml2xmltest.c | 4 +
.../caps_10.0.0_aarch64.xml | 2 +
.../caps_10.0.0_ppc64.xml | 2 +
.../caps_10.0.0_s390x.xml | 2 +
.../caps_10.0.0_x86_64+amdsev.xml | 2 +
.../caps_10.0.0_x86_64.xml | 2 +
.../qemucapabilitiesdata/caps_6.2.0_ppc64.xml | 2 +
.../caps_6.2.0_x86_64.xml | 2 +
.../qemucapabilitiesdata/caps_7.0.0_ppc64.xml | 2 +
.../caps_7.0.0_x86_64.xml | 2 +
.../qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 2 +
.../caps_7.1.0_x86_64.xml | 2 +
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 2 +
.../caps_7.2.0_x86_64+hvf.xml | 2 +
.../caps_7.2.0_x86_64.xml | 2 +
.../caps_8.0.0_x86_64.xml | 2 +
.../caps_8.1.0_x86_64.xml | 2 +
.../caps_8.2.0_aarch64.xml | 2 +
.../caps_8.2.0_armv7l.xml | 2 +
.../caps_8.2.0_loongarch64.xml | 2 +
.../qemucapabilitiesdata/caps_8.2.0_s390x.xml | 2 +
.../caps_8.2.0_x86_64.xml | 2 +
.../caps_9.0.0_x86_64.xml | 2 +
.../caps_9.1.0_riscv64.xml | 2 +
.../qemucapabilitiesdata/caps_9.1.0_s390x.xml | 2 +
.../caps_9.1.0_x86_64.xml | 2 +
.../caps_9.2.0_aarch64+hvf.xml | 2 +
.../qemucapabilitiesdata/caps_9.2.0_s390x.xml | 2 +
.../caps_9.2.0_x86_64+amdsev.xml | 2 +
.../caps_9.2.0_x86_64.xml | 2 +
.../disk-target-nvme.x86_64-latest.args | 39 +++++++
.../disk-target-nvme.x86_64-latest.xml | 53 ++++++++++
tests/qemuxmlconfdata/disk-target-nvme.xml | 32 ++++++
tests/qemuxmlconftest.c | 1 +
tests/utiltest.c | 38 ++++---
tests/vmx2xmldata/esx-in-the-wild-15.vmx | 100 ++++++++++++++++++
tests/vmx2xmldata/esx-in-the-wild-15.xml | 45 ++++++++
tests/vmx2xmldata/esx-in-the-wild-16.vmx | 91 ++++++++++++++++
tests/vmx2xmldata/esx-in-the-wild-16.xml | 37 +++++++
tests/vmx2xmltest.c | 2 +
129 files changed, 1128 insertions(+), 49 deletions(-)
create mode 100644 tests/genericxml2xmlindata/controller-nvme.xml
create mode 100644 tests/genericxml2xmlindata/disk-nvme-invalid-serials.xml
create mode 100644 tests/genericxml2xmlindata/disk-nvme.xml
create mode 100644 tests/genericxml2xmloutdata/disk-nvme.xml
create mode 100644 tests/qemuxmlconfdata/disk-target-nvme.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/disk-target-nvme.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/disk-target-nvme.xml
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-15.vmx
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-15.xml
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-16.vmx
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-16.xml
--
2.49.0
3 weeks, 1 day
[PATCH] docs: fix indent of hostdev examples
by Daniel P. Berrangé
From: Daniel P. Berrangé <berrange(a)redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
docs/formatdomain.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
index 8753ee9c23..ca4e84983f 100644
--- a/docs/formatdomain.rst
+++ b/docs/formatdomain.rst
@@ -4600,15 +4600,15 @@ or:
...
<devices>
<hostdev mode='subsystem' type='mdev' model='vfio-pci'>
- <source>
- <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
- </source>
+ <source>
+ <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
+ </source>
</hostdev>
<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
<source>
<address uuid='9063cba3-ecef-47b6-abcf-3fef4fdcad85'/>
</source>
- <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
+ <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
</hostdev>
</devices>
...
--
2.49.0
3 weeks, 1 day
[PATCH] docs: Change units to 'kiB' from 'kB'/'kilobytes'/'kb'
by Peter Krempa
From: Peter Krempa <pkrempa(a)redhat.com>
Use the short unit for kibibytes instead of the confusing or plainly
wrong units.
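For reference, a minimal caller sketch showing where these values surface
(the domain name is made up for the example); the memory-size statistics
documented below are all expressed in kiB:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom;
        virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
        int n, i;

        if (!conn)
            return 1;
        if (!(dom = virDomainLookupByName(conn, "demo"))) {
            virConnectClose(conn);
            return 1;
        }

        n = virDomainMemoryStats(dom, stats, VIR_DOMAIN_MEMORY_STAT_NR, 0);
        for (i = 0; i < n; i++) {
            /* Memory-size statistics (AVAILABLE, UNUSED, ...) are in kiB. */
            if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_AVAILABLE)
                printf("available: %llu kiB\n", stats[i].val);
        }

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }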
Closes: https://gitlab.com/libvirt/libvirt/-/issues/594
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
docs/formatdomain.rst | 4 ++--
docs/formatnetwork.rst | 4 ++--
docs/manpages/virsh.rst | 6 +++---
include/libvirt/libvirt-domain.h | 12 ++++++------
src/libvirt-domain.c | 12 ++++++------
5 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
index 8753ee9c23..82606ef35f 100644
--- a/docs/formatdomain.rst
+++ b/docs/formatdomain.rst
@@ -1139,7 +1139,7 @@ influence how virtual memory pages are backed by host pages.
element is introduced. It has one compulsory attribute ``size`` which
specifies which hugepages should be used (especially useful on systems
supporting hugepages of different sizes). The default unit for the ``size``
- attribute is kilobytes (multiplier of 1024). If you want to use different
+ attribute is kiB (multiplier of 1024). If you want to use different
unit, use optional ``unit`` attribute. For systems with NUMA, the optional
``nodeset`` attribute may come handy as it ties given guest's NUMA nodes to
certain hugepage sizes. From the example snippet, one gigabyte hugepages are
@@ -4298,7 +4298,7 @@ attribute are
- ``pcie-to-pci-bridge`` ( :since:`since 4.3.0` )
The root controllers (``pci-root`` and ``pcie-root``) have an optional
-``pcihole64`` element specifying how big (in kilobytes, or in the unit specified
+``pcihole64`` element specifying how big (in kiB, or in the unit specified
by ``pcihole64``'s ``unit`` attribute) the 64-bit PCI hole should be. Some
guests (like Windows XP or Windows Server 2003) might crash when QEMU and
Seabios are recent enough to support 64-bit PCI holes, unless this is disabled
diff --git a/docs/formatnetwork.rst b/docs/formatnetwork.rst
index 053fe6ad56..6694a145af 100644
--- a/docs/formatnetwork.rst
+++ b/docs/formatnetwork.rst
@@ -468,10 +468,10 @@ follows, where accepted values for each attribute is an integer number.
``average``
Specifies the desired average bit rate for the interface being shaped (in
- kilobytes/second).
+ kiB/second).
``peak``
Optional attribute which specifies the maximum rate at which the bridge can
- send data (in kilobytes/second). Note the limitation of implementation: this
+ send data (in kiB/second). Note the limitation of implementation: this
attribute in the ``outbound`` element is ignored (as Linux ingress filters
don't know it yet).
``burst``
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 3a00778467..895a905b08 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -2274,7 +2274,7 @@ If no *--inbound* or *--outbound* is specified, this command will
query and show the bandwidth settings. Otherwise, it will set the
inbound or outbound bandwidth. *average,peak,burst,floor* is the same as
in command *attach-interface*. Values for *average*, *peak* and *floor*
-are expressed in kilobytes per second, while *burst* is expressed in kilobytes
+are expressed in kiB per second, while *burst* is expressed in kiB
in a single burst at *peak* speed as described in the Network XML
documentation at
`https://libvirt.org/formatnetwork.html#quality-of-service <https://libvirt.org/formatnetwork.html#quality-of-service>`__.
@@ -5261,8 +5261,8 @@ interface. At least one from the *average*, *floor* pair must be
specified. The other two *peak* and *burst* are optional, so
"average,peak", "average,,burst", "average,,,floor", "average" and
",,,floor" are also legal. Values for *average*, *floor* and *peak*
-are expressed in kilobytes per second, while *burst* is expressed in
-kilobytes in a single burst at *peak* speed as described in the
+are expressed in kiB per second, while *burst* is expressed in
+kiB in a single burst at *peak* speed as described in the
Network XML documentation at
`https://libvirt.org/formatnetwork.html#quality-of-service <https://libvirt.org/formatnetwork.html#quality-of-service>`__.
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 9496631bcc..ac5daf7d0c 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -715,9 +715,9 @@ typedef virDomainInterfaceStatsStruct *virDomainInterfaceStatsPtr;
* Since: 0.7.5
*/
typedef enum {
- /* The total amount of data read from swap space (in kB). (Since: 0.7.5) */
+ /* The total amount of data read from swap space (in kiB). (Since: 0.7.5) */
VIR_DOMAIN_MEMORY_STAT_SWAP_IN = 0,
- /* The total amount of memory written out to swap space (in kB). (Since: 0.7.5) */
+ /* The total amount of memory written out to swap space (in kiB). (Since: 0.7.5) */
VIR_DOMAIN_MEMORY_STAT_SWAP_OUT = 1,
/*
@@ -738,7 +738,7 @@ typedef enum {
/*
* The amount of memory left completely unused by the system. Memory that
* is available but used for reclaimable caches should NOT be reported as
- * free. This value is expressed in kB.
+ * free. This value is expressed in kiB.
*
* Since: 0.7.5
*/
@@ -748,7 +748,7 @@ typedef enum {
* The total amount of usable memory as seen by the domain. This value
* may be less than the amount of memory assigned to the domain if a
* balloon driver is in use or if the guest OS does not initialize all
- * assigned pages. This value is expressed in kB.
+ * assigned pages. This value is expressed in kiB.
*
* Since: 0.7.5
*/
@@ -762,7 +762,7 @@ typedef enum {
VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON = 6,
/* Resident Set Size of the process running the domain. This value
- * is in kB
+ * is in kiB
*
* Since: 0.9.10
*/
@@ -785,7 +785,7 @@ typedef enum {
/*
* The amount of memory, that can be quickly reclaimed without
- * additional I/O (in kB). Typically these pages are used for caching files
+ * additional I/O (in kiB). Typically these pages are used for caching files
* from disk.
*
* Since: 4.6.0
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 93e8f5b853..ca110bdf85 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -6284,27 +6284,27 @@ virDomainGetInterfaceParameters(virDomainPtr domain,
* Memory Statistics:
*
* VIR_DOMAIN_MEMORY_STAT_SWAP_IN:
- * The total amount of data read from swap space (in kb).
+ * The total amount of data read from swap space (in kiB).
* VIR_DOMAIN_MEMORY_STAT_SWAP_OUT:
- * The total amount of memory written out to swap space (in kb).
+ * The total amount of memory written out to swap space (in kiB).
* VIR_DOMAIN_MEMORY_STAT_MAJOR_FAULT:
* The number of page faults that required disk IO to service.
* VIR_DOMAIN_MEMORY_STAT_MINOR_FAULT:
* The number of page faults serviced without disk IO.
* VIR_DOMAIN_MEMORY_STAT_UNUSED:
- * The amount of memory which is not being used for any purpose (in kb).
+ * The amount of memory which is not being used for any purpose (in kiB).
* VIR_DOMAIN_MEMORY_STAT_AVAILABLE:
- * The total amount of memory available to the domain's OS (in kb).
+ * The total amount of memory available to the domain's OS (in kiB).
* VIR_DOMAIN_MEMORY_STAT_USABLE:
* How much the balloon can be inflated without pushing the guest system
* to swap, corresponds to 'Available' in /proc/meminfo
* VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON:
- * Current balloon value (in kb).
+ * Current balloon value (in kiB).
* VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE
* Timestamp of the last statistic
* VIR_DOMAIN_MEMORY_STAT_DISK_CACHES
* Memory that can be reclaimed without additional I/O, typically disk
- * caches (in kb).
+ * caches (in kiB).
* VIR_DOMAIN_MEMORY_STAT_HUGETLB_PGALLOC
* The number of successful huge page allocations from inside the domain
* VIR_DOMAIN_MEMORY_STAT_HUGETLB_PGFAIL
--
2.49.0
3 weeks, 1 day
[PATCH] esx: Avoid corner case where esxUtil_ParseDatastorePath could be called with NULL 'datastorePath'
by Peter Krempa
From: Peter Krempa <pkrempa(a)redhat.com>
The generated code which parses the data from XML in
esxVI_LookupDatastoreContentByDatastoreName can fill the 'folderPath'
property with NULL if it is missing from the input XML. While this is
not likely when talking to ESX, it is a possible outcome. Skip NULL
results.
All other code paths already ensure that the function is not called with
NULL.
Closes: https://gitlab.com/libvirt/libvirt/-/issues/776
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
src/esx/esx_storage_backend_vmfs.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/esx/esx_storage_backend_vmfs.c b/src/esx/esx_storage_backend_vmfs.c
index 145aff0c9c..8e13201fe2 100644
--- a/src/esx/esx_storage_backend_vmfs.c
+++ b/src/esx/esx_storage_backend_vmfs.c
@@ -616,6 +616,9 @@ esxStoragePoolListVolumes(virStoragePoolPtr pool, char **const names,
searchResults = searchResults->_next) {
g_autofree char *directoryAndFileName = NULL;
+ if (!searchResults->folderPath)
+ continue;
+
if (esxUtil_ParseDatastorePath(searchResults->folderPath, NULL, NULL,
&directoryAndFileName) < 0) {
goto cleanup;
@@ -759,6 +762,9 @@ esxStorageVolLookupByKey(virConnectPtr conn, const char *key)
searchResults = searchResults->_next) {
g_autofree char *directoryAndFileName = NULL;
+ if (!searchResults->folderPath)
+ continue;
+
if (esxUtil_ParseDatastorePath(searchResults->folderPath, NULL,
NULL, &directoryAndFileName) < 0) {
goto cleanup;
--
2.49.0
3 weeks, 1 day
Release of libvirt-11.4.0
by Jiri Denemark
The 11.4.0 release of both libvirt and libvirt-python is tagged and
signed tarballs are available at
https://download.libvirt.org/
https://download.libvirt.org/python/
Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing feedback. Your work is greatly
appreciated.
* New features
* qemu: ppc64 POWER11 processor support
Support for the recently released IBM POWER11 processor was added.
* Packaging changes
* All helper programs are now detected from ``$PATH`` during runtime
All of the code has now been converted to dynamically look up helper programs
in ``$PATH`` rather than doing the lookup at build time and then compiling
in the result.
Programs ``mount``, ``umount``, ``mkfs``, ``modprobe``, ``rmmod``,
``numad``, ``dmidecode``, ``ip``, ``tc``, ``mdevctl``, ``mm-ctl``,
``iscsiadm``, ``ovs-vsctl``, ``pkttyagent``, ``bhyveload``, ``bhyvectl``,
``bhyve``, ``ifconfig``, ``vzlist``, ``vzctl``, ``vzmigrate``, and the
tools from the LVM suite (``vgchange``, ``lvcreate``, etc.) are no longer
needed during build and will still work properly if placed in ``$PATH``.
This also ensures that libvirt works correctly on distros that are
transitioning ``/sbin`` into ``/bin``, where upgraded installations have
a different layout from fresh installations.
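As an illustrative sketch only (not necessarily the exact internal helper
libvirt uses), the runtime-lookup pattern boils down to resolving the program
from ``$PATH`` at the moment it is needed, e.g. with GLib::

  #include <glib.h>
  #include <stdio.h>

  int main(void)
  {
      /* Resolve the helper at runtime instead of compiling in a path
         detected at build time; NULL is returned if it is not in $PATH. */
      g_autofree char *mount = g_find_program_in_path("mount");

      if (!mount) {
          fprintf(stderr, "mount: not found in $PATH\n");
          return 1;
      }
      g_print("using %s\n", mount);
      return 0;
  }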
* Improvements
* virsh: Add option ``--no-pkttyagent``
That option suppresses registration of pkttyagent with polkitd.
* bhyve: support NVRAM configuration for UEFI firmwares
The bhyve driver now supports specifying an NVRAM store file, such as::
  <os firmware='efi'>
    <nvram/>
  </os>
* qemu: Improve accuracy of FDC/floppy device support statement in capabilities XML
The data is now based on the presence of the controller in qemu rather than
just a denylist of machine types where floppies do not work.
* Bug fixes
* qemu: Fix failure when reverting to internal snapshots
A regression in ``libvirt-11.2`` and ``libvirt-11.3`` prevents reverting to
an internal snapshot. Attempts to revert would produce the following error::
error: operation failed: load of internal snapshot 'foo1' job failed: Device 'libvirt-1-format' is writable but does not support snapshots
The only workaround is to avoid the broken versions.
* qemu: Fix virtqemud crash when resuming failed post-copy migration
A regression introduced in ``libvirt-11.2.0`` caused virtqemud on the
destination host to crash when trying to resume failed post-copy
migration.
* qemu: Treat the ``queues`` configuration of ``virtio-net`` as guest ABI
The queue count itself isn't a device frontend property, but libvirt uses
it to calculate the ``vectors`` option of the device, which is a guest-visible
property; thus ``queues`` must not change during migration. The ABI stability
check now handles this properly.
Enjoy.
Jirka
3 weeks, 1 day
[PATCH v2 0/4] bhyve: implement domain{Block,Interface,Memory}Stats
by Roman Bogorodskiy
Changes since v1:
- Added "bhyve: implement domainInterfaceStats" patch
PS: It was tempting to factor out obtaining struct kinfo_proc using
sysctlnametomib() + sysctl(), but that would mean making it visible for use
outside of virprocess, e.g. in bhyve_driver.c, so it doesn't look like
it's worth the effort for now. It'll probably make sense to implement a
FreeBSD version of virProcessGetStat() once there are more users of this
code.
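For context, a rough standalone sketch of the pattern mentioned above, i.e.
fetching a struct kinfo_proc for a pid on FreeBSD via sysctlnametomib() +
sysctl(); this is illustrative only, not the code from the series:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct kinfo_proc kp;
        int mib[4];
        size_t miblen = 4;
        size_t len = sizeof(kp);

        /* Translate the sysctl name once, then append the pid of interest. */
        if (sysctlnametomib("kern.proc.pid", mib, &miblen) < 0)
            return 1;
        mib[3] = getpid();

        if (sysctl(mib, 4, &kp, &len, NULL, 0) < 0)
            return 1;

        /* ki_rssize is the resident set size in pages. */
        printf("rss pages: %ld\n", (long)kp.ki_rssize);
        return 0;
    }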
Roman Bogorodskiy (4):
bhyve: implement domainInterfaceStats
virprocess: implement virProcessGetStatInfo() for FreeBSD
bhyve: implement domainMemoryStats
bhyve: implement domainBlockStats
src/bhyve/bhyve_driver.c | 140 +++++++++++++++++++++++++++++++++++++++
src/util/virprocess.c | 104 +++++++++++++++++++++--------
2 files changed, 218 insertions(+), 26 deletions(-)
--
2.49.0
3 weeks, 2 days