Issue with adding a multipath device's targets into qemu-pr-helper's namespace
by Lin Ma
Hi all,
I have a namespace question about a passthrough disk (a multipath device).
With namespaces and cgroups enabled in qemu.conf, the target(s) of the
multipath device are not added into qemu-pr-helper's namespace under
certain conditions, which causes PERSISTENT RESERVE command failures in
the guest.
When a user starts a VM:
To build the namespace, qemuDomainSetupDisk() is invoked via thread A
(this thread's id will be the qemu pid).
To build the cgroup, qemuSetupImagePathCgroup() is invoked via thread B.
Both functions invoke virDevMapperGetTargets() to parse the multipath
device into target path strings and fill targetPaths[].
The issue I experienced: after libvirtd starts, everything works well
for the first booted VM that has a passthrough multipath device. But if
I shut it down and start it again, or keep it running and start another
VM that has a different passthrough multipath device, then the target(s)
of the freshly started VM are not added into the related qemu-pr-helper's
namespace, which causes PERSISTENT RESERVE command failures in the
corresponding guest.
I dug into the code. In this situation, targetPaths[] in
qemuDomainSetupDisk() is not filled; it remains NULL after
virDevMapperGetTargets() returns. virDevMapperGetTargets() doesn't fill
targetPaths[] because libdevmapper's dm_task_run() returns 0 with
errno 9 (EBADF, Bad file descriptor). So far, I don't understand why
dm_task_run() returns 0 in this situation.
BTW, virDevMapperGetTargets() always successfully fills targetPaths[]
when called from qemuSetupImagePathCgroup().
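For reference, here is a minimal sketch of the libdevmapper flow that
virDevMapperGetTargets() drives (my own reconstruction for illustration,
with error handling trimmed):

#include <stdio.h>
#include <stdint.h>
#include <sys/sysmacros.h>
#include <libdevmapper.h>

static void print_deps(const char *mapname)   /* e.g. "vm1-data" */
{
    struct dm_task *dmt = dm_task_create(DM_DEVICE_DEPS);
    struct dm_deps *deps;
    uint32_t i;

    dm_task_set_name(dmt, mapname);
    if (!dm_task_run(dmt)) {       /* the call that returns 0, errno EBADF */
        perror("dm_task_run");
        goto cleanup;
    }

    if ((deps = dm_task_get_deps(dmt))) {
        for (i = 0; i < deps->count; i++)  /* devices backing the map */
            printf("%u:%u\n",
                   major(deps->device[i]), minor(deps->device[i]));
    }

 cleanup:
    dm_task_destroy(dmt);
}

On the failing runs it is the dm_task_run() call above that fails, so
the loop that would fill targetPaths[] is never reached.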
Please refer to the following 2 tests:
The multipath configuration on host:
host:~ # multipath -l
vm1-data (3600140582d9024bc13f4b8db5ff12de0) dm-11 FreeNAS,lv68
size=6.0G features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 2:0:0:2 sdd 8:48 active undef running
vm2-data (36001405fc5f29ace3ec4fb8acd32aae5) dm-8 FreeNAS,lv46
size=4.0G features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 2:0:0:1 sde 8:64 active undef running
===================================================================
Test A:
host:~ # systemctl restart libvirtd
host:~ # virsh list
Id Name State
--------------------
host:~ #
host:~ # virsh domblklist vm1
Target Source
------------------------------------------
vda /opt/vms/vm1/disk0.qcow2
sda /dev/mapper/vm1-data
host:~ #
host:~ # virsh start vm1
Domain vm1 started
host:~ # virsh list
Id Name State
---------------------------
1 vm1 running
host:~ # nsenter -t $(pidof qemu-pr-helper) -a bash
host:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 48 Jul 14 16:30 /dev/sdd
host:~ # exit
exit
host:~ #
vm1:~ # lsscsi
[0:0:0:0] disk FreeNAS lv68 0123 /dev/sda
vm1:~ #
vm1:~ # sg_persist --in -k /dev/sda
FreeNAS lv68 0123
Peripheral device type: disk
PR generation=0x0, there are NO registered reservation keys
vm1:~ #
host:~ # virsh shutdown vm1
Domain vm1 is being shutdown
host:~ # virsh list
Id Name State
--------------------
host:~ #
host:~ # virsh start vm1
Domain vm1 started
host:~ # virsh list
Id Name State
---------------------------
2 vm1 running
host:~ # nsenter -t $(pidof qemu-pr-helper) -a bash
host:~ # ls -l /dev/sd*
ls: cannot access '/dev/sd*': No such file or directory
host:~ # exit
exit
host:~ #
vm1:~ # sg_persist --in -k /dev/sda
FreeNAS lv68 0123
Peripheral device type: disk
PR in (Read keys): Aborted command
Aborted command
vm1:~ #
===================================================================
Test B:
host:~ # systemctl restart libvirtd
host:~ # virsh list
Id Name State
--------------------
host:~ #
host:~ # virsh domblklist vm1
Target Source
------------------------------------------
vda /opt/vms/vm1/disk0.qcow2
sda /dev/mapper/vm1-data
host:~ #
host:~ # virsh start vm1
Domain vm1 started
host:~ # virsh list
Id Name State
---------------------------
1 vm1 running
host:~ # nsenter -t $(pidof qemu-pr-helper) -a bash
host:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 48 Jul 14 17:28 /dev/sdd
host:~ # exit
exit
host:~ #
vm1:~ # lsscsi
[2:0:0:0] disk FreeNAS lv68 0123 /dev/sda
vm1:~ #
vm1:~ # sg_persist --in -k /dev/sda
FreeNAS lv68 0123
Peripheral device type: disk
PR generation=0x0, there are NO registered reservation keys
vm1:~ #
host:~ # virsh list
Id Name State
---------------------------
1 vm1 running
host:~ #
host:~ # virsh domblklist vm2
Target Source
------------------------------------------
vda /opt/vms/vm2/disk0.qcow2
sda /dev/mapper/vm2-data
host:~ #
host:~ # virsh start vm2
Domain vm2 started
host:~ # virsh list
Id Name State
---------------------------
1 vm1 running
2 vm2 running
host:~ # nsenter -t $(qemu-pr-helper pid of vm2) -a bash
host:~ # ls -l /dev/sd*
ls: cannot access '/dev/sd*': No such file or directory
host:~ # exit
exit
host:~ #
vm2:~ # lsscsi
[0:0:0:0] disk FreeNAS lv46 0123 /dev/sda
vm2:~ #
vm2:~ # sg_persist --in -k /dev/sda
FreeNAS lv46 0123
Peripheral device type: disk
PR in (Read keys): Aborted command
Aborted command
vm2:~ #
===================================================================
Any comments will be much appreciated.
Thanks in advance,
Lin
[PATCH] news: Remove one of last two instances of -drive if=none usage
by Jianan Gao
There is a series of patches that removes one of the last two instances
of -drive if=none usage, to help QEMU deprecate -drive if=none without
the need to refactor all old boards.
Signed-off-by: Jianan Gao <jgao(a)redhat.com>
---
NEWS.rst | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index ff977968c7..b763e45e11 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -147,6 +147,11 @@ v6.4.0 (2020-06-02)
already does in these cases. Users are encouraged to provide complete NUMA
topologies to avoid unexpected changes in the domain XML.
+ * qemu: remove one of last two instances of -drive if=none usage
+
+ Remove one of last two instances of -drive if=none usage to help QEMU in
+ deprecation of -drive if=none without the need to refactor all old boards.
+
* **Bug fixes**
* qemu: fixed regression in network device hotplug with new qemu versions
--
2.21.3
[PATCH 00/10] resolve hangs/crashes on libvirtd shutdown
by Nikolay Shirokovskiy
This series follows [1] but addresses the issue slightly differently.
Instead of polling for RPC thread pool termination, it waits for the
thread pool to drain in a distinct thread and then signals the main loop
to exit.
The series introduces new driver methods stateShutdown/stateShutdownWait
to finish all of a driver's background threads. The methods, however,
are only implemented for the qemu driver, and only partially. There are
other drivers that have background threads, and I have not checked every
one of them in terms of how they manage their background threads.
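For readers not familiar with the driver-state glue, a rough sketch of
what the two new hooks could look like in src/driver-state.h (the names
come from this cover letter; the exact signatures and struct layout are
my assumption, not copied from the patches):

typedef int (*virDrvStateShutdown)(void);     /* ask threads to stop */
typedef int (*virDrvStateShutdownWait)(void); /* block until they exit */

struct _virStateDriver {
    const char *name;
    /* existing hooks (stateInitialize, stateCleanup, ...) omitted */
    virDrvStateShutdown stateShutdown;
    virDrvStateShutdownWait stateShutdownWait;
};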
For example, the node driver creates two threads. One of them is
supposed to live for a short amount of time and is thus not tracked;
this thread can cause issues on shutdown. The second thread is tracked
and finished synchronously on driver cleanup, so it cannot cause
crashes, but theoretically speaking it can cause hangs, so we may want
to move the thread finishing to the stateShutdownWait method so that a
possible hang is handled by the shutdown timeout.
The qemu driver also has untracked threads, and they can cause crashes
on shutdown, for example the reconnect threads or the reboot thread.
These need to be tracked.
I'm going to address these issues in qemu and other drivers once the
overall approach is approved.
I added two new driver methods so that thread finishing is done in
parallel. If we had only one method, the shutdown would effectively be
done one driver at a time.
I've added a clean shutdown timeout in the event loop, as suggested by
Daniel in [2].
Now I wonder: why can't we just go with systemd unit management? Systemd
will eventually send SIGKILL, and we can tune the timeout using the
TimeoutStopUSec parameter. That way we wouldn't even need to introduce
new driver methods; a driver's background threads could be finished in
its stateCleanup method. AFAIU, as drivers are cleaned up in reverse
order, this is safe in the sense that an already-cleaned-up driver
cannot be used by the background threads of a not-yet-cleaned-up driver.
Of course, this way the cleanup is not done in parallel. To make it
parallel, we could introduce just stateShutdown, which we wouldn't need
to call in the netdaemon code, and thus avoid introducing an undesired
dependency of netdaemon on the drivers concept.
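As an illustration of that alternative (a hypothetical drop-in, not
part of this series; note that TimeoutStopUSec is the D-Bus property
name, while the unit-file directive is TimeoutStopSec):

# /etc/systemd/system/libvirtd.service.d/stop-timeout.conf
[Service]
TimeoutStopSec=30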
[1] Resolve libvirtd hang on termination with connected long running client
https://www.redhat.com/archives/libvir-list/2018-July/msg00382.html
[2] Races / crashes in shutdown of libvirtd daemon
https://www.redhat.com/archives/libvir-list/2020-April/msg01328.html
Nikolay Shirokovskiy (10):
libvirt: add stateShutdown/stateShutdownWait to drivers
util: always initialize priority condition
util: add stop/drain functions to thread pool
rpc: don't unref service ref on socket behalf twice
rpc: finish all threads before exiting main loop
vireventthread: add virEventThreadClose
qemu: exit thread synchronously in qemuDomainObjStopWorker
qemu: implement driver's shutdown/shutdown wait methods
rpc: cleanup virNetDaemonClose method
util: remove unused virThreadPoolNew macro
scripts/check-drivername.py | 2 +
src/driver-state.h | 8 ++++
src/libvirt.c | 42 +++++++++++++++++++
src/libvirt_internal.h | 2 +
src/libvirt_private.syms | 3 ++
src/libvirt_remote.syms | 1 -
src/qemu/qemu_domain.c | 1 +
src/qemu/qemu_driver.c | 32 +++++++++++++++
src/remote/remote_daemon.c | 3 --
src/rpc/virnetdaemon.c | 95 ++++++++++++++++++++++++++++++++++++-------
src/rpc/virnetdaemon.h | 2 -
src/rpc/virnetserver.c | 8 ++++
src/rpc/virnetserver.h | 1 +
src/rpc/virnetserverservice.c | 1 -
src/util/vireventthread.c | 9 ++++
src/util/vireventthread.h | 1 +
src/util/virthreadpool.c | 65 ++++++++++++++++++++---------
src/util/virthreadpool.h | 6 +--
18 files changed, 238 insertions(+), 44 deletions(-)
--
1.8.3.1
Re: [ovirt-devel] [ARM64] Possiblity to support oVirt on ARM64
by Nir Soffer
On Sun, Jul 19, 2020 at 5:04 PM Zhenyu Zheng <zhengzhenyulixi(a)gmail.com> wrote:
>
> Hi oVirt,
>
> We are currently trying to make oVirt work on ARM64 platform, since I'm quite new to oVirt community, I'm wondering what is the current status about ARM64 support in the oVirt upstream, as I saw the oVirt Wikipedia page mentioned there is an ongoing efforts to support ARM platform. We have a small team here and we are willing to also help to make this work.
Hi Zhenyu,
I think this is a great idea, both for supporting more hardware and for
enlarging the oVirt community.
Regarding hardware support, we depend mostly on libvirt and qemu, and I
don't know what the status is there. Adding the relevant lists and
people.
I don't know about any effort on the oVirt side, but last week I added
ARM builds for ovirt-imageio and it works:
https://copr.fedorainfracloud.org/coprs/nsoffer/ovirt-imageio-preview/bui...
We have many dependencies, but oVirt itself is mostly Python and Java,
with tiny bits in C or using ctypes, so it should not be too hard.
I think the first thing is getting some hardware for testing. Do you
have such hardware, or contacts that can help to get a hardware
contribution for this?
Nir
[PATCH 1/1] formatdomain.html.in: mention pSeries NVDIMM 'align down' mechanic
by Daniel Henrique Barboza
The reason why we align down the guest area (total-size - label-size) is
explained in the body of qemuDomainNVDimmAlignSizePseries(). This
behavior must also be documented in the user docs.
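To make the align-down mechanic concrete, here is a small worked
example (my own numbers, assuming the rule described in the hunk below;
not taken from the patch):

#include <stdio.h>

int main(void)
{
    /* hypothetical pSeries NVDIMM: total-size 4 GiB, label 128 KiB */
    unsigned long long label = 128;                 /* KiB */
    unsigned long long total = 4ULL * 1024 * 1024;  /* 4194304 KiB */
    unsigned long long guest = total - label;       /* 4194176 KiB */

    guest &= ~(256ULL * 1024 - 1);  /* align down to 256 MiB boundary */
    printf("usable guest area: %llu MiB\n", guest / 1024);
    /* prints 3840 MiB: nearly a whole 256 MiB is trimmed */
    return 0;
}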
Signed-off-by: Daniel Henrique Barboza <danielhb413(a)gmail.com>
---
docs/formatdomain.html.in | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index f5ee97de81..af6c809ddd 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -9412,8 +9412,10 @@ qemu-kvm -net nic,model=? /dev/null
</p>
<ol>
<li>the minimum label size is 128KiB,</li>
- <li>the remaining size (total-size - label-size) will be aligned
- to 4KiB as default.</li>
+ <li>the remaining size (total-size - label-size), also called guest
+ area, will be aligned to 4KiB as default. For pSeries guests, the
+ guest area will be aligned down to 256MiB, and the minimum size
+ of the guest area must be at least 256MiB plus the label-size.</li>
</ol>
</dd>
--
2.26.2
[libvirt PATCH] docs: virConnectGetCapabilities do not provide pool types
by Pino Toscano
Remove the paragraph in the storage pool page that mentions
virConnectGetCapabilities, as virConnectGetCapabilities does not return
any information about pools.
Signed-off-by: Pino Toscano <ptoscano(a)redhat.com>
---
docs/formatstoragecaps.html.in | 6 ------
1 file changed, 6 deletions(-)
diff --git a/docs/formatstoragecaps.html.in b/docs/formatstoragecaps.html.in
index ee3888f44d..d8a1cacd96 100644
--- a/docs/formatstoragecaps.html.in
+++ b/docs/formatstoragecaps.html.in
@@ -13,12 +13,6 @@
supported, and if relevant the source format types, the required
source elements, and the target volume format types. </p>
- <p>The Storage Pool Capabilities XML provides more information than the
- <a href="/html/libvirt-libvirt-host.html#virConnectGetCapabilities">
- <code>virConnectGetCapabilities</code>
- </a>
- which only provides an enumerated list of supported pool types.</p>
-
<h2><a id="elements">Element and attribute overview</a></h2>
<p>A query interface was added to the virConnect API's to retrieve the
--
2.26.2
[PATCH] qemu: pre-create the dbus directory in qemuStateInitialize
by Bihong Yu
From 187323ce736dcd9b1a177d552630b0c6859a4798 Mon Sep 17 00:00:00 2001
From: Bihong Yu <yubihong(a)huawei.com>
Date: Tue, 14 Jul 2020 15:44:05 +0800
Subject: [PATCH] qemu: pre-create the dbus directory in qemuStateInitialize
There is a race condition when creating the '/run/libvirt/qemu/dbus'
directory in virDirCreateNoFork() while concurrently starting VMs,
producing the error message "failed to create directory
'/run/libvirt/qemu/dbus': File exists". Pre-create the dbus directory
in qemuStateInitialize.
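To illustrate the race (my own sketch of the check-then-create pattern,
not the libvirt code):

#include <sys/stat.h>
#include <errno.h>

/* Two threads starting VMs can both see the directory as missing,
 * and the loser of the mkdir() race fails with errno == EEXIST. */
static int create_dir_racy(const char *path)
{
    struct stat sb;

    if (stat(path, &sb) < 0) {       /* both threads pass this check */
        if (mkdir(path, 0770) < 0)   /* one of them then fails */
            return -1;
    }
    return 0;
}

Creating the directory once at daemon startup, as this patch does,
sidesteps the race entirely.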
Signed-off-by: Bihong Yu <yubihong(a)huawei.com>
---
src/qemu/qemu_dbus.c | 4 +---
src/qemu/qemu_dbus.h | 2 +-
src/qemu/qemu_driver.c | 4 ++++
src/qemu/qemu_process.c | 3 ---
4 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_dbus.c b/src/qemu/qemu_dbus.c
index 51f6c94..0e0306a 100644
--- a/src/qemu/qemu_dbus.c
+++ b/src/qemu/qemu_dbus.c
@@ -34,10 +34,8 @@ VIR_LOG_INIT("qemu.dbus");
int
-qemuDBusPrepareHost(virQEMUDriverPtr driver)
+qemuDBusPreparePath(virQEMUDriverConfigPtr cfg)
{
- g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-
return virDirCreate(cfg->dbusStateDir, 0770, cfg->user, cfg->group,
VIR_DIR_CREATE_ALLOW_EXIST);
}
diff --git a/src/qemu/qemu_dbus.h b/src/qemu/qemu_dbus.h
index 474eb10..6ce9f7b 100644
--- a/src/qemu/qemu_dbus.h
+++ b/src/qemu/qemu_dbus.h
@@ -21,7 +21,7 @@
#include "qemu_conf.h"
#include "qemu_domain.h"
-int qemuDBusPrepareHost(virQEMUDriverPtr driver);
+int qemuDBusPreparePath(virQEMUDriverConfigPtr cfg);
char *qemuDBusGetAddress(virQEMUDriverPtr driver,
virDomainObjPtr vm);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d185666..52b68c9 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -50,6 +50,7 @@
#include "qemu_security.h"
#include "qemu_checkpoint.h"
#include "qemu_backup.h"
+#include "qemu_dbus.h"
#include "virerror.h"
#include "virlog.h"
@@ -790,6 +791,9 @@ qemuStateInitialize(bool privileged,
cfg->migrationPortMax)) == NULL)
goto error;
+ if (qemuDBusPreparePath(cfg) < 0)
+ goto error;
+
if (qemuSecurityInit(qemu_driver) < 0)
goto error;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index eba14ed..46620ca 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -6449,9 +6449,6 @@ qemuProcessPrepareHost(virQEMUDriverPtr driver,
qemuDomainObjPrivatePtr priv = vm->privateData;
g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
- if (qemuDBusPrepareHost(driver) < 0)
- return -1;
-
if (qemuPrepareNVRAM(cfg, vm) < 0)
return -1;
--
1.8.3.1
[libvirt PATCH v2 00/15] convert network and nwfilter directories to glib memory allocation.
by Laine Stump
V1 was here:
https://www.redhat.com/archives/libvir-list/2020-June/msg01156.html
Some patches were ACKed and pushed. I re-ordered/re-organized most of
the rest, and removed some others to deal with separately (the
xmlNodeContent stuff)
What's left here is a few preliminary patches, then the standard set,
once for network and again for nwfilter:
1) convert from VIR_(RE)ALLOC(_N) to g_new0()/g_renew()
2) use g_auto*() where appropriate, removing unneeded free's
3) get rid of now-extraneous labels
4) (controversial) replace any remaining VIR_FREE() with g_free() (and
possibly g_clear_pointer() when needed
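A rough before/after sketch of that pattern, using a hypothetical
helper rather than code from the series (assumes libvirt's viralloc.h
and glib):

/* before: VIR_ALLOC_N, a cleanup label, and VIR_FREE */
static char *
makeNameOld(size_t len)
{
    char *ret = NULL;
    char *tmp = NULL;

    if (VIR_ALLOC_N(tmp, len) < 0)
        goto cleanup;
    g_snprintf(tmp, len, "net-%zu", len);
    ret = g_strdup(tmp);
 cleanup:
    VIR_FREE(tmp);
    return ret;
}

/* after: g_new0 + g_autofree; no label and no explicit free needed */
static char *
makeNameNew(size_t len)
{
    g_autofree char *tmp = g_new0(char, len);

    g_snprintf(tmp, len, "net-%zu", len);
    return g_strdup(tmp);
}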
NB: these patches require my virBuffer "convert to g_auto" series
as a prerequisite:
https://www.redhat.com/archives/libvir-list/2020-July/msg00185.html
Changes from V1:
* move conversion of virFirewall and virBuffer automatics to another
series (see above)
* re-order to replace VIR_ALLOC first (without adding any g_auto*)
instead of doing it after g_auto conversion of automatics, then do
all g_auto additions at once
* separate label elimination into separate patches per jtomko's
suggestion.
Laine Stump (15):
replace g_new() with g_new0() for consistency
util: define g_autoptr cleanups for a couple dnsmasq objects
define g_autoptr cleanup function for virNetworkDHCPLease
network: replace VIR_ALLOC/REALLOC with g_new0/g_renew
network: use g_auto wherever appropriate
network: eliminate unnecessary labels
network: use g_free() in place of remaining VIR_FREE()
nwfilter: remove unnecessary code from ebtablesGetSubChainInsts()
nwfilter: clear nrules when resetting virNWFilterInst
nwfilter: define a typedef for struct ebtablesSubChainInst
nwfilter: transform logic in virNWFilterRuleInstSort to eliminate
label
nwfilter: use standard label names when reasonable
nwfilter: replace VIR_ALLOC with g_new0
nwfilter: convert local pointers to use g_auto*
nwfilter: convert remaining VIR_FREE() to g_free()
src/datatypes.h | 2 +
src/network/bridge_driver.c | 536 ++++++++--------------
src/network/bridge_driver_linux.c | 22 +-
src/network/leaseshelper.c | 16 +-
src/nwfilter/nwfilter_dhcpsnoop.c | 150 +++---
src/nwfilter/nwfilter_driver.c | 13 +-
src/nwfilter/nwfilter_ebiptables_driver.c | 119 ++---
src/nwfilter/nwfilter_gentech_driver.c | 57 ++-
src/nwfilter/nwfilter_learnipaddr.c | 43 +-
src/qemu/qemu_backup.c | 2 +-
src/util/virdnsmasq.h | 4 +
src/util/virutil.c | 2 +-
tests/qemuhotplugmock.c | 2 +-
13 files changed, 379 insertions(+), 589 deletions(-)
--
2.25.4
[PATCH] NEWS: mention readonly attribute is not yet supported by virtiofsd
by Jianan Gao
This adds a clear statement that the readonly attribute is not yet
supported by virtiofsd.
Signed-off-by: Jianan Gao <jgao(a)redhat.com>
---
NEWS.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index 2c6c628c61..ff977968c7 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -170,6 +170,10 @@ v6.4.0 (2020-06-02)
firewalld resets all iptables rules and chains on restart, and this
includes deleting those created by libvirt.
+ * qemu: reject readonly attribute for virtiofs
+
+ virtiofs does not yet support read-only shares.
+
v6.3.0 (2020-05-05)
===================
--
2.21.3
[libvirt PATCH] qemu: Drop ret variable from qemuConnectCPUModelComparison
by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_driver.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index b8ba2e3fb9..8e81c30a93 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13150,7 +13150,6 @@ qemuConnectCPUModelComparison(virQEMUCapsPtr qemuCaps,
{
g_autoptr(qemuProcessQMP) proc = NULL;
g_autofree char *result = NULL;
- int ret = VIR_CPU_COMPARE_ERROR;
if (!(proc = qemuProcessQMPNew(virQEMUCapsGetBinary(qemuCaps),
libDir, runUid, runGid, false)))
@@ -13163,15 +13162,17 @@ qemuConnectCPUModelComparison(virQEMUCapsPtr qemuCaps,
return VIR_CPU_COMPARE_ERROR;
if (STREQ(result, "identical"))
- ret = VIR_CPU_COMPARE_IDENTICAL;
- else if (STREQ(result, "superset"))
- ret = VIR_CPU_COMPARE_SUPERSET;
- else if (failIncompatible)
+ return VIR_CPU_COMPARE_IDENTICAL;
+
+ if (STREQ(result, "superset"))
+ return VIR_CPU_COMPARE_SUPERSET;
+
+ if (failIncompatible) {
virReportError(VIR_ERR_CPU_INCOMPATIBLE, NULL);
- else
- ret = VIR_CPU_COMPARE_INCOMPATIBLE;
+ return VIR_CPU_COMPARE_ERROR;
+ }
- return ret;
+ return VIR_CPU_COMPARE_INCOMPATIBLE;
}
--
2.27.0