[PATCH] disk storage: fix allocation size for pool format dos
by Sebastian Mitterle
The changed condition was always false because the function was always
called with a boundary value of 0.
Use the free extent's start value to compute its offset from the
cylinder boundary, and expand the needed allocation size by an extra
cylinder when that offset does not fit within the extra bytes reserved
for alignment.
This fixes an issue where vol-create-from calls qemu-img convert
to create a destination volume of the same capacity as the source
volume, and qemu-img errors with 'Cannot grow device files' because
the partition is too small for the source, even though the destination
partition and the source volume have the same capacity.
Signed-off-by: Sebastian Mitterle <smitterl(a)redhat.com>
---
src/storage/storage_backend_disk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/storage/storage_backend_disk.c b/src/storage/storage_backend_disk.c
index a6d4e41220..ec0679d353 100644
--- a/src/storage/storage_backend_disk.c
+++ b/src/storage/storage_backend_disk.c
@@ -691,7 +691,7 @@ virStorageBackendDiskPartBoundaries(virStoragePoolObjPtr pool,
if (def->source.format == VIR_STORAGE_POOL_DISK_DOS) {
/* align to cylinder boundary */
neededSize += extraBytes;
- if ((*start % cylinderSize) > extraBytes) {
+ if ((dev->freeExtents[i].start % cylinderSize) > extraBytes) {
/* add an extra cylinder if the offset can't fit within
the extra bytes we have */
neededSize += cylinderSize;
--
2.25.2
4 years, 1 month
[PATCH] doc: add some examples for IPv6 NAT configuration
by Ian Wienand
Add some expanded examples for the nat ipv6 introduced with
927acaedec7effbe67a154d8bfa0e67f7d08e6c7.
Unfortunately, while it is well-known for IPv4 which address ranges
are useful for NAT, with IPv6 it is generally much less obvious unless
you enjoy digging through RFCs that go back and forth over unique
local addresses and the meaning of the word "site". I've tried to add
some details on choosing a range in line with RFC 4193, and then some
pointers for when it maybe doesn't work in the guest as you first
expect despite you doing what the RFCs say!
Signed-off-by: Ian Wienand <iwienand(a)redhat.com>
---
docs/formatnetwork.html.in | 47 ++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/docs/formatnetwork.html.in b/docs/formatnetwork.html.in
index fb740111b1..94a4cab4d1 100644
--- a/docs/formatnetwork.html.in
+++ b/docs/formatnetwork.html.in
@@ -1209,6 +1209,53 @@
</ip>
</network></pre>
+ <h3><a id="examplesNATv6">IPv6 NAT based network</a></h3>
+
+ <p>
+ Below is a variation for also providing IPv6 NAT. This can be
+ especially useful when using multiple interfaces where some,
+ such as WiFi cards, can not be bridged (usually on a laptop),
+ making it difficult to provide end-to-end IPv6 routing.
+ </p>
+
+ <pre>
+<network>
+ <name>default6</name>
+ <bridge name="virbr0"/>
+ <forward mode="nat">
+ <nat ipv6='yes'>
+ <port start='1024' end='65535'/>
+ </nat>
+
+ <ip address="192.168.122.1" netmask="255.255.255.0">
+ <dhcp>
+ <range start="192.168.122.2" end="192.168.122.254"/>
+ </dhcp>
+ </ip>
+    <ip family="ipv6" address="fdXX:XXXX:XXXX:NNNN::" prefix="64"/>
+</network></pre>
+
+    <p>IPv6 NAT addressing has some caveats over the more
+    straightforward IPv4 case.
+ <a href="https://tools.ietf.org/html/rfc4193">RFC 4193</a>
+ defines the address range <tt>fd00::/8</tt> for <tt>/48</tt> IPv6
+ private networks. It should be concatenated with a random 40-bit
+ string (i.e. 10 random hexadecimal digits replacing the <tt>X</tt>
+ values above, RFC 4193 provides
+ an <a href="https://tools.ietf.org/html/rfc4193#section-3.2.2">algorithm</a>
+ if you do not have a source of sufficient randomness). This
+ leaves <tt>0</tt> through <tt>ffff</tt> for subnets (<tt>N</tt>
+ above) which you can use at will.</p>
+
+    <p>Many operating systems will not consider these addresses
+    preferable to IPv4, due to a practical history of these
+ addresses being present but unroutable and causing networking
+ issues. On many Linux distributions, you may need to
+ override <tt>/etc/gai.conf</tt> with values
+ from <a href="https://www.ietf.org/rfc/rfc3484.txt">RFC 3484</a>
+    to have your IPv6 NAT network correctly preferred over IPv4.</p>
+
<h3><a id="examplesRoute">Routed network config</a></h3>
<p>
--
2.26.2
[PATCH] client: fix memory leak in client msg
by Hao Wang
From 3ad3fae4f2562a11bef8dcdd25b6a7e0b00d4e2c Mon Sep 17 00:00:00 2001
From: Hao Wang <wanghao232(a)huawei.com>
Date: Sat, 18 Jul 2020 15:43:30 +0800
Subject: [PATCH] client: fix memory leak in client msg
When closing client->waitDispatch in virNetClientIOEventLoopRemoveAll
or virNetClientIOEventLoopRemoveDone, VIR_FREE() is called to free
call->msg directly, resulting in leak of the memory call->msg->buffer
points to.
Use virNetMessageFree(call->msg) instead of VIR_FREE(call->msg).
Signed-off-by: Hao Wang <wanghao232(a)huawei.com>
---
src/rpc/virnetclient.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/rpc/virnetclient.c b/src/rpc/virnetclient.c
index 441f1502a6..f899493b64 100644
--- a/src/rpc/virnetclient.c
+++ b/src/rpc/virnetclient.c
@@ -1519,7 +1519,7 @@ static bool virNetClientIOEventLoopRemoveDone(virNetClientCallPtr call,
if (call->expectReply)
VIR_WARN("Got a call expecting a reply but without a waiting thread");
virCondDestroy(&call->cond);
- VIR_FREE(call->msg);
+ virNetMessageFree(call->msg);
VIR_FREE(call);
}
@@ -1546,7 +1546,7 @@ virNetClientIOEventLoopRemoveAll(virNetClientCallPtr call,
VIR_DEBUG("Removing call %p", call);
virCondDestroy(&call->cond);
- VIR_FREE(call->msg);
+ virNetMessageFree(call->msg);
VIR_FREE(call);
return true;
}
--
2.23.0
[RFC PATCH 0/3] qemu: Propagate cluster size to new qcow2 images
by Peter Krempa
Cluster size may have performance implications. Propagate the cluster
size used by the original image to any copy or overlay.
Peter Krempa (3):
qemu: monitor: Detect image cluster size from
'query-named-block-nodes'
qemu: block: Allow specifying cluster size when using
'blockdev-create'
qemuBlockStorageSourceCreateDetectSize: Propagate cluster size for
'qcow2'
src/qemu/qemu_block.c | 6 ++++++
src/qemu/qemu_monitor.h | 3 +++
src/qemu/qemu_monitor_json.c | 3 +++
src/util/virstoragefile.h | 1 +
4 files changed, 13 insertions(+)
--
2.26.2
[PATCH v2 00/13] resolve hangs/crashes on libvirtd shutdown
by Nikolay Shirokovskiy
I keep the qemu VM event loop exiting synchronously, but add code to avoid
the deadlock this approach can cause. I think synchronous exiting of threads
is worth having in this case to avoid crashes.
Patches that are already positively reviewed have the appropriate 'Reviewed-by' lines.
Changes from v1:
- rename stateShutdown to state stateShutdownPrepare
- introduce net daemon shutdown callbacks
- make some adjustments in terms of qemu per VM's event loop thread
finishing
- factor out net server shutdown facilities into distinct patch
- increase shutdown timeout from 15s to 30s
Nikolay Shirokovskiy (13):
libvirt: add stateShutdownPrepare/stateShutdownWait to drivers
util: always initialize priority condition
util: add stop/drain functions to thread pool
rpc: don't unref service ref on socket behalf twice
rpc: add virNetDaemonSetShutdownCallbacks
rpc: add shutdown facilities to netserver
rpc: finish all threads before exiting main loop
qemu: don't shutdown event thread in monitor EOF callback
vireventthread: exit thread synchronously on finalize
qemu: avoid deadlock in qemuDomainObjStopWorker
qemu: implement driver's shutdown/shutdown wait methods
rpc: cleanup virNetDaemonClose method
util: remove unused virThreadPoolNew macro
scripts/check-drivername.py | 2 +
src/driver-state.h | 8 ++++
src/libvirt.c | 42 ++++++++++++++++
src/libvirt_internal.h | 2 +
src/libvirt_private.syms | 4 ++
src/libvirt_remote.syms | 2 +-
src/qemu/qemu_domain.c | 18 +++++--
src/qemu/qemu_driver.c | 32 +++++++++++++
src/qemu/qemu_process.c | 3 --
src/remote/remote_daemon.c | 6 +--
src/rpc/virnetdaemon.c | 109 ++++++++++++++++++++++++++++++++++++------
src/rpc/virnetdaemon.h | 8 +++-
src/rpc/virnetserver.c | 8 ++++
src/rpc/virnetserver.h | 1 +
src/rpc/virnetserverservice.c | 1 -
src/util/vireventthread.c | 1 +
src/util/virthreadpool.c | 65 +++++++++++++++++--------
src/util/virthreadpool.h | 6 +--
18 files changed, 267 insertions(+), 51 deletions(-)
--
1.8.3.1
[PATCH v1 00/34] Rework building of domain's namespaces
by Michal Privoznik
There are a couple of things happening in this series.
The first patch fixes a bug. We are leaking /dev/mapper/control in QEMU.
See the patch for detailed info. The second patch then cleans up code a
bit.
The third patch moves namespace handling code into a separate file.
Patches 4 - 15 then prepare the code for switch to different approach
in populating the namespace. Patches 16 - 26 then do the switch and
finally, the rest of the patches drops and deduplicates some code.
You can fetch the patches from here too:
https://gitlab.com/MichalPrivoznik/libvirt/-/commits/dm_namespace/
Michal Prívozník (34):
virDevMapperGetTargetsImpl: Close /dev/mapper/control in the end
virDevMapperGetTargets: Don't ignore EBADF
qemu: Separate out namespace handling code
qemu_domain_namespace: Rename qemuDomainCreateNamespace()
qemu_domain_namespace: Drop unused @cfg argument
qemu_domain_namespace: Check for namespace enablement earlier
qemuDomainNamespaceSetupHostdev: Create paths in one go
qemuDomainAttachDeviceMknodHelper: Don't leak data->target
qemu_domain_namespace.c: Rename qemuDomainAttachDeviceMknodData
qemuDomainAttachDeviceMknodRecursive: Isolate bind mounted devices
condition
qemuDomainAttachDeviceMknodHelper: Create more files in a single go
qemuDomainNamespaceMknodPaths: Create more files in one go
qemuDomainNamespaceMknodPaths: Turn @paths into string list
qemuDomainSetupDisk: Accept @src
qemu_domain_namespace: Repurpose qemuDomainBuildNamespace()
qemuDomainBuildNamespace: Populate basic /dev from daemon's namespace
qemuDomainBuildNamespace: Populate disks from daemon's namespace
qemuDomainBuildNamespace: Populate hostdevs from daemon's namespace
qemuDomainBuildNamespace: Populate memory from daemon's namespace
qemuDomainBuildNamespace: Populate chardevs from daemon's namespace
qemuDomainBuildNamespace: Populate TPM from daemon's namespace
qemuDomainBuildNamespace: Populate graphics from daemon's namespace
qemuDomainBuildNamespace: Populate inputs from daemon's namespace
qemuDomainBuildNamespace: Populate RNGs from daemon's namespace
qemuDomainBuildNamespace: Populate loader from daemon's namespace
qemuDomainBuildNamespace: Populate SEV from daemon's namespace
qemu_domain_namespace: Drop unused functions
qemuDomainDetachDeviceUnlink: Unlink paths in one go
qemuDomainNamespaceUnlinkPaths: Turn @paths into string list
qemuDomainNamespaceTeardownHostdev: Unlink paths in one go
qemuDomainNamespaceTeardownMemory: Deduplicate code
qemuDomainNamespaceTeardownChardev: Deduplicate code
qemuDomainNamespaceTeardownRNG: Deduplicate code
qemuDomainNamespaceTeardownInput: Deduplicate code
po/POTFILES.in | 1 +
src/qemu/Makefile.inc.am | 2 +
src/qemu/qemu_cgroup.c | 2 +-
src/qemu/qemu_conf.c | 1 +
src/qemu/qemu_domain.c | 1848 +-----------------------------
src/qemu/qemu_domain.h | 57 -
src/qemu/qemu_domain_namespace.c | 1610 ++++++++++++++++++++++++++
src/qemu/qemu_domain_namespace.h | 86 ++
src/qemu/qemu_driver.c | 1 +
src/qemu/qemu_hotplug.c | 1 +
src/qemu/qemu_process.c | 23 +-
src/qemu/qemu_security.c | 1 +
src/util/virdevmapper.c | 4 +-
13 files changed, 1727 insertions(+), 1910 deletions(-)
create mode 100644 src/qemu/qemu_domain_namespace.c
create mode 100644 src/qemu/qemu_domain_namespace.h
--
2.26.2
[PATCH] rpm: Fix conditional for defining %_vpath_builddir for RHEL <= 7
by Neal Gompa
The conditional was incorrectly overriding %_vpath_builddir when
%rhel is not defined, which led to surprising behavior when the
global %_vpath_builddir path is set on Fedora already.
Signed-off-by: Neal Gompa <ngompa13(a)gmail.com>
---
libvirt.spec.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index bb74443484..4b9e04ae61 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -32,7 +32,7 @@
%endif
# On RHEL 7 and older macro _vpath_builddir is not defined.
-%if 0%{?rhel} <= 7
+%if 0%{?rhel} && 0%{?rhel} <= 7
%define _vpath_builddir %{_target_platform}
%endif
--
2.26.2
[PATCH v2] qemu: Do not silently allow non-available timers on non-x86 systems
by Thomas Huth
libvirt currently silently allows <timer name="kvmclock"/> and some
other timer tags in the guest XML definition for timers that do not
exist on non-x86 systems. We should not silently ignore these tags
since the users might not get what they expected otherwise.
Note: The error is only generated if the timer is marked with
present="yes" - otherwise we would suddenly refuse XML definitions
that worked without problems before.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1754887
Signed-off-by: Thomas Huth <thuth(a)redhat.com>
---
v2: Check also for timer->present == 1
src/qemu/qemu_validate.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
index 488f258d00..561e7b12c7 100644
--- a/src/qemu/qemu_validate.c
+++ b/src/qemu/qemu_validate.c
@@ -371,6 +371,18 @@ qemuValidateDomainDefClockTimers(const virDomainDef *def,
case VIR_DOMAIN_TIMER_NAME_TSC:
case VIR_DOMAIN_TIMER_NAME_KVMCLOCK:
case VIR_DOMAIN_TIMER_NAME_HYPERVCLOCK:
+ if (!ARCH_IS_X86(def->os.arch) && timer->present == 1) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Configuring the '%s' timer is not supported "
+ "for virtType=%s arch=%s machine=%s guests"),
+ virDomainTimerNameTypeToString(timer->name),
+ virDomainVirtTypeToString(def->virtType),
+ virArchToString(def->os.arch),
+ def->os.machine);
+ return -1;
+ }
+ break;
+
case VIR_DOMAIN_TIMER_NAME_LAST:
break;
--
2.18.1
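As an example of what the check above rejects: on a non-x86 guest (say aarch64; the values here are illustrative), a definition containing the first timer below would now fail validation, while the second is still accepted because present is not "yes":

```xml
<clock offset='utc'>
  <!-- rejected on non-x86 after this patch: present='yes' -->
  <timer name='kvmclock' present='yes'/>
  <!-- still accepted: the timer is not explicitly requested -->
  <timer name='hypervclock' present='no'/>
</clock>
```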
[PATCH] qemu_driver.c: Fix domfsinfo for non-PCI device information from guest agent
by Thomas Huth
qemuAgentFSInfoToPublic() currently only sets the devAlias for PCI devices.
However, the QEMU guest agent could also provide the device name in the
"dev" field of the response for other devices instead (well, at least after
fixing another problem in the current QEMU guest agent...). So if creating
the devAlias from the PCI information failed, let's fall back to the name
provided by the guest agent. This helps to fix the empty "Target" fields
that occur when running "virsh domfsinfo" on s390x where CCW devices are
used for the guest instead of PCI devices.
Also add a proper debug message in case we completely failed to set the
device alias, since this problem was very hard to debug: the only two
error messages that I've seen were "Unable to get filesystem information"
and "Unable to encode message payload" - which only indicate that something
went wrong in the RPC call. No debug message indicated the real problem, so
I had to learn the hard way why the RPC call failed (it apparently does not
like devAlias being left NULL) and where the real problem comes from.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1755075
Signed-off-by: Thomas Huth <thuth(a)redhat.com>
---
src/qemu/qemu_driver.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d185666ed8..e45c61ee20 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -21935,14 +21935,17 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent,
qemuAgentDiskInfoPtr agentdisk = agent->disks[i];
virDomainDiskDefPtr diskDef;
- if (!(diskDef = virDomainDiskByAddress(vmdef,
- &agentdisk->pci_controller,
- agentdisk->bus,
- agentdisk->target,
- agentdisk->unit)))
- continue;
-
- ret->devAlias[i] = g_strdup(diskDef->dst);
+ diskDef = virDomainDiskByAddress(vmdef,
+ &agentdisk->pci_controller,
+ agentdisk->bus,
+ agentdisk->target,
+ agentdisk->unit);
+ if (diskDef != NULL)
+ ret->devAlias[i] = g_strdup(diskDef->dst);
+ else if (agentdisk->devnode != NULL)
+ ret->devAlias[i] = g_strdup(agentdisk->devnode);
+ else
+ VIR_DEBUG("Missing devnode name for '%s'.", ret->mountpoint);
}
return ret;
--
2.18.1
[RESEND][PATCH] migration: fix xml file residual during vm crash with migration
by zhengchuan
From 935ec812b822ca978631e72bb9b9a5d00df24a42 Mon Sep 17 00:00:00 2001
From: Zheng Chuan <zhengchuan(a)huawei.com>
Date: Mon, 27 Jul 2020 14:39:05 +0800
Subject: [PATCH] migration: fix xml file residual during vm crash with
migration
When migration is cancelled (such as by kill -9 of the VM's pid on the
source side, etc.), virDomainSaveStatus() could be called to save the XML
status file after qemuProcessStop(), which results in a residual XML file.
Fix this by not calling virDomainSaveStatus() if the vm is not active.
Signed-off-by: Zheng Chuan <zhengchuan(a)huawei.com>
---
src/qemu/qemu_migration.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2c7bf34..d2804ab 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3073,6 +3073,9 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver,
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
jobPriv->migParams, priv->job.apiFlags);
+ if (!virDomainObjIsActive(vm))
+ goto done;
+
if (virDomainObjSave(vm, driver->xmlopt, cfg->stateDir) < 0)
VIR_WARN("Failed to save status on vm %s", vm->def->name);
}
--
1.8.3.1