[libvirt][RFC PATCH] add a new 'default' option for attribute mode in numatune
by Luyao Zhong
Hi Libvirt experts,
I would like to enhance the numatune snippet configuration. Given an example snippet:
<domain>
  ...
  <numatune>
    <memory mode="strict" nodeset="1-4,^3"/>
    <memnode cellid="0" mode="strict" nodeset="1"/>
    <memnode cellid="2" mode="preferred" nodeset="2"/>
  </numatune>
  ...
</domain>
Currently, the mode attribute is either 'interleave', 'strict', or 'preferred'.
I propose adding a new 'default' option, for the following reason.
Presume we are using cgroups v1. Libvirt sets cpuset.mems for all vcpu threads
according to 'nodeset' in the memory element, and translates each memnode
element into QEMU config options (--object memory-backend-ram) for the
corresponding NUMA cell, which ends up invoking the mbind() system call. [1]
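For illustration (a hedged sketch only; object ids and sizes depend on the
actual domain config), the generated QEMU options for cell 0 above look
roughly like:

    -object memory-backend-ram,id=ram-node0,size=512M,host-nodes=1,policy=bind \
    -numa node,nodeid=0,memdev=ram-node0

where policy=bind/host-nodes is what makes QEMU apply mbind() to the backing
memory.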
But what if we want to use the default memory policy and still have each guest
NUMA cell pinned to a different host memory node? We can't do that with mbind
via the QEMU config options, because (I quote here) "For MPOL_DEFAULT, the
nodemask and maxnode arguments must specify the empty set of nodes." [2]
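A minimal C sketch of that constraint (just the syscall semantics from the man
page, not libvirt code; buf is assumed page-aligned, and <numaif.h> comes from
libnuma):

    #include <numaif.h>   /* mbind(), MPOL_BIND, MPOL_DEFAULT */
    #include <stddef.h>

    /* MPOL_BIND happily takes a node mask, so "strict" pinning works ... */
    static long bind_to_node1(void *buf, size_t len)
    {
        unsigned long nodemask = 1UL << 1;              /* host node 1 */
        return mbind(buf, len, MPOL_BIND, &nodemask,
                     sizeof(nodemask) * 8, 0);
    }

    /* ... but MPOL_DEFAULT requires an empty node set, so "default policy,
     * pinned to node 1" cannot be expressed through mbind() at all. */
    static long use_default_policy(void *buf, size_t len)
    {
        return mbind(buf, len, MPOL_DEFAULT, NULL, 0, 0);
    }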
So my solution is to introduce a new 'default' option for the mode attribute, e.g.:
<domain>
  ...
  <numatune>
    <memory mode="default" nodeset="1-2"/>
    <memnode cellid="0" mode="default" nodeset="1"/>
    <memnode cellid="1" mode="default" nodeset="2"/>
  </numatune>
  ...
</domain>
If the mode is 'default', libvirt should avoid generating the QEMU command line
option '--object memory-backend-ram', and instead use cgroups to set cpuset.mems
for each guest NUMA cell, combined with the NUMA topology config. Presume the
NUMA topology is:
<cpu>
  ...
  <numa>
    <cell id='0' cpus='0-3' memory='512000' unit='KiB' />
    <cell id='1' cpus='4-7' memory='512000' unit='KiB' />
  </numa>
  ...
</cpu>
Then libvirt should set cpuset.mems to '1' for vcpus 0-3, and '2' for vcpus 4-7.
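In cgroups v1 terms that amounts to writes along these lines (the cgroup paths
are illustrative placeholders, not the exact paths libvirt creates):

    echo 1 > /sys/fs/cgroup/cpuset/machine.slice/<domain>/vcpu0/cpuset.mems   # likewise for vcpu1-3
    echo 2 > /sys/fs/cgroup/cpuset/machine.slice/<domain>/vcpu4/cpuset.mems   # likewise for vcpu5-7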
Is this reasonable and feasible? Any comments are welcome.
Regards,
Luyao
[1]https://github.com/qemu/qemu/blob/f2a1cf9180f63e88bb38ff21c169da97c3f2b...
[2]https://man7.org/linux/man-pages/man2/mbind.2.html
--
2.25.1
Races / crashes in shutdown of libvirtd daemon
by Daniel P. Berrangé
We got a new BZ filed about a libvirtd crash in shutdown
https://bugzilla.redhat.com/show_bug.cgi?id=1828207
We can see from the stack trace that the "interface" driver is in
the middle of servicing an RPC call for virConnectListAllInterfaces()
Meanwhile the libvirtd daemon is doing virObjectUnref(dmn) on the
virNetDaemonPtr object.
The fact that it is doing this unref means that it must have already
called virStateCleanup(), given the code sequence:
    /* Run event loop. */
    virNetDaemonRun(dmn);
    ret = 0;
    virHookCall(VIR_HOOK_DRIVER_DAEMON, "-", VIR_HOOK_DAEMON_OP_SHUTDOWN,
                0, "shutdown", NULL, NULL);
 cleanup:
    /* Keep cleanup order in inverse order of startup */
    virNetDaemonClose(dmn);
    virNetlinkEventServiceStopAll();
    if (driversInitialized) {
        /* NB: Possible issue with timing window between driversInitialized
         * setting if virNetlinkEventServerStart fails */
        driversInitialized = false;
        virStateCleanup();
    }
    virObjectUnref(adminProgram);
    virObjectUnref(srvAdm);
    virObjectUnref(qemuProgram);
    virObjectUnref(lxcProgram);
    virObjectUnref(remoteProgram);
    virObjectUnref(srv);
    virObjectUnref(dmn);
Unless I'm missing something non-obvious, this cleanup code path is
inherently broken & racy. When virNetDaemonRun() returns, the RPC
worker threads are all still active. They are all liable to still
be executing RPC calls, which means any of the drivers may be in
use. So calling virStateCleanup() is an inherently dangerous
thing to do. There is the further complication that once we have
exited the main loop we may prevent the RPC calls from ever
completing, as they may be waiting on an event to be dispatched.
I know we've had various patch proposals in the past to improve the
robustness of shutdown cleanup, but I can't remember the outcome of the
reviews. Hopefully people involved in those threads can jump in here...
IMHO the key problem here is the virNetDaemonRun() method, which just
looks at the "quit" flag and immediately returns if it is set.
This needs to be changed so that when it sees quit == true, it takes
the following actions:
1. Call virNetDaemonClose() to drop all RPC clients and thus prevent
new RPC calls arriving
2. Flush any RPC calls which are queued but not yet assigned to a
worker thread
3. Tell worker threads to exit after finishing their current job
4. Wait for all worker threads to exit
5. Now virNetDaemonRun may return
At this point we can call virStateCleanup and the various other
things, as we know no drivers are still active in RPC calls.
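Roughly, as a hedged outline in C (every name here is a hypothetical
stand-in, not libvirt's real API; only the ordering matters):

    struct daemon;                                   /* opaque, hypothetical */
    int  daemon_quit_requested(struct daemon *dmn);
    void daemon_run_one_iteration(struct daemon *dmn);
    void daemon_close_clients(struct daemon *dmn);
    void daemon_flush_queued_jobs(struct daemon *dmn);
    void daemon_request_worker_exit(struct daemon *dmn);
    void daemon_join_workers(struct daemon *dmn);

    static void
    daemon_run(struct daemon *dmn)
    {
        while (!daemon_quit_requested(dmn))
            daemon_run_one_iteration(dmn);   /* normal event loop */

        daemon_close_clients(dmn);       /* 1. no new RPC calls can arrive   */
        daemon_flush_queued_jobs(dmn);   /* 2. drop calls not yet dispatched */
        daemon_request_worker_exit(dmn); /* 3. finish current job, then exit */
        daemon_join_workers(dmn);        /* 4. wait (with a timeout, below)  */
        /* 5. return; caller may now run virStateClose()/virStateCleanup()   */
    }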
Having said that, there could be background threads in the
drivers which are doing work that uses the event loop thread.
So we probably need a virStateClose() method that we call from
virNetDaemonRun, *after* all worker threads are gone, which would
clean up any background threads while the event loop is still
running.
The issue is that step 4 above ("Wait for all worker threads to exit")
may take too long, or indeed never complete. To deal with this, it
will need a timeout. In the remote_daemon.c cleanup code path, if
there are still worker threads present, then we need to skip all
cleanup and simply call _exit(0) to terminate the process with no
attempt at cleanup, since it would be unsafe to try anything else.
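A compact sketch of that bounded wait (plain pthreads, hypothetical names;
the real logic would belong in the thread pool code):

    #include <pthread.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t workers_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  workers_idle = PTHREAD_COND_INITIALIZER;
    static int active_workers;  /* each worker decrements and signals on exit */

    static void
    wait_for_workers_or_give_up(unsigned int timeout_secs)
    {
        struct timespec deadline;

        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += timeout_secs;

        pthread_mutex_lock(&workers_lock);
        while (active_workers > 0) {
            if (pthread_cond_timedwait(&workers_idle, &workers_lock,
                                       &deadline) != 0) {
                /* Workers never finished: cleanup is unsafe, so terminate
                 * the process without attempting any of it. */
                _exit(0);
            }
        }
        pthread_mutex_unlock(&workers_lock);
        /* all workers gone: virStateCleanup() etc. can now run safely */
    }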
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
[libvirt PATCH v2 00/16] Add support for persistent mediated devices
by Jonathon Jongsma
This patch series follows the previously-merged series which added support for
transient mediated devices. This series expands mdev support to include
persistent device definitions. Again, it relies on mdevctl as the backend.
It follows the common libvirt pattern of APIs by adding the following new APIs
for node devices:
- virNodeDeviceDefineXML() - defines a persistent device
- virNodeDeviceUndefine() - undefines a persistent device
- virNodeDeviceCreate() - starts a previously-defined device
It also adds virsh commands mapping to these new APIs: nodedev-define,
nodedev-undefine, and nodedev-start.
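A quick usage sketch of the new commands (the XML file and device name below
are made-up placeholders):

    $ virsh nodedev-define my-mdev.xml     # persist the definition
    $ virsh nodedev-list --cap mdev --all  # defined-but-inactive devices show up too
    $ virsh nodedev-start mdev_<uuid>      # start the previously-defined device
    $ virsh nodedev-undefine mdev_<uuid>   # drop the persistent definition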
The method of staying up-to-date with devices defined by mdevctl is currently a
little bit crude due to the fact that mdevctl does not emit any events when new
devices are added or removed. As a workaround, we create a file monitor for the
mdevctl config directory and re-query mdevctl when we detect changes within
that directory. In the future, mdevctl may introduce a more elegant solution.
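As a hedged sketch of that workaround (GLib file monitoring, not necessarily
what the series itself does; path and callback names are illustrative):

    #include <gio/gio.h>

    static void
    mdevctl_config_changed(GFileMonitor *monitor G_GNUC_UNUSED,
                           GFile *file G_GNUC_UNUSED,
                           GFile *other G_GNUC_UNUSED,
                           GFileMonitorEvent event G_GNUC_UNUSED,
                           gpointer opaque G_GNUC_UNUSED)
    {
        /* re-query mdevctl here and refresh the node device list */
    }

    static GFileMonitor *
    watch_mdevctl_config(const char *path /* e.g. /etc/mdevctl.d */)
    {
        g_autoptr(GFile) dir = g_file_new_for_path(path);
        GFileMonitor *monitor = g_file_monitor_directory(dir,
                                                         G_FILE_MONITOR_NONE,
                                                         NULL, NULL);
        if (monitor)
            g_signal_connect(monitor, "changed",
                             G_CALLBACK(mdevctl_config_changed), NULL);
        return monitor;
    }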
Changes in v2:
- rebase to latest git master
Jonathon Jongsma (16):
tests: remove extra trailing semicolon
nodedev: introduce concept of 'active' node devices
nodedev: Add ability to filter by active state
virsh: Add --active, --inactive, --all to nodedev-list
nodedev: add ability to list and parse defined mdevs
nodedev: add STOPPED/STARTED lifecycle events
nodedev: add mdevctl devices to node device list
nodedev: handle mdevs that disappear from mdevctl
nodedev: add an mdevctl thread
api: add virNodeDeviceDefineXML()
virsh: add nodedev-define command
api: add virNodeDeviceUndefine()
virsh: Factor out function to find node device
virsh: add nodedev-undefine command
api: add virNodeDeviceCreate()
virsh: add "nodedev-start" command
examples/c/misc/event-test.c | 4 +
include/libvirt/libvirt-nodedev.h | 19 +-
src/conf/node_device_conf.h | 9 +
src/conf/virnodedeviceobj.c | 24 +
src/conf/virnodedeviceobj.h | 7 +
src/driver-nodedev.h | 14 +
src/libvirt-nodedev.c | 115 ++++
src/libvirt_private.syms | 2 +
src/libvirt_public.syms | 6 +
src/node_device/node_device_driver.c | 525 +++++++++++++++++-
src/node_device/node_device_driver.h | 38 ++
src/node_device/node_device_udev.c | 275 ++++++++-
src/remote/remote_driver.c | 3 +
src/remote/remote_protocol.x | 40 +-
src/remote_protocol-structs | 16 +
src/rpc/gendispatch.pl | 1 +
...19_36ea_4111_8f0a_8c9a70e21366-define.argv | 1 +
...19_36ea_4111_8f0a_8c9a70e21366-define.json | 1 +
...39_495e_4243_ad9f_beb3f14c23d9-define.argv | 1 +
...39_495e_4243_ad9f_beb3f14c23d9-define.json | 1 +
...16_1ca8_49ac_b176_871d16c13076-define.argv | 1 +
...16_1ca8_49ac_b176_871d16c13076-define.json | 1 +
tests/nodedevmdevctldata/mdevctl-create.argv | 1 +
.../mdevctl-list-defined.argv | 1 +
.../mdevctl-list-multiple-parents.json | 59 ++
.../mdevctl-list-multiple-parents.out.xml | 39 ++
.../mdevctl-list-multiple.json | 59 ++
.../mdevctl-list-multiple.out.xml | 39 ++
.../mdevctl-list-single-noattr.json | 11 +
.../mdevctl-list-single-noattr.out.xml | 8 +
.../mdevctl-list-single.json | 31 ++
.../mdevctl-list-single.out.xml | 14 +
.../nodedevmdevctldata/mdevctl-undefine.argv | 1 +
tests/nodedevmdevctltest.c | 227 +++++++-
tools/virsh-nodedev.c | 281 ++++++++--
35 files changed, 1787 insertions(+), 88 deletions(-)
create mode 100644 tests/nodedevmdevctldata/mdev_d069d019_36ea_4111_8f0a_8c9a70e21366-define.argv
create mode 100644 tests/nodedevmdevctldata/mdev_d069d019_36ea_4111_8f0a_8c9a70e21366-define.json
create mode 100644 tests/nodedevmdevctldata/mdev_d2441d39_495e_4243_ad9f_beb3f14c23d9-define.argv
create mode 100644 tests/nodedevmdevctldata/mdev_d2441d39_495e_4243_ad9f_beb3f14c23d9-define.json
create mode 100644 tests/nodedevmdevctldata/mdev_fedc4916_1ca8_49ac_b176_871d16c13076-define.argv
create mode 100644 tests/nodedevmdevctldata/mdev_fedc4916_1ca8_49ac_b176_871d16c13076-define.json
create mode 100644 tests/nodedevmdevctldata/mdevctl-create.argv
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-defined.argv
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-multiple-parents.json
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-multiple-parents.out.xml
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-multiple.json
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-multiple.out.xml
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-single-noattr.json
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-single-noattr.out.xml
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-single.json
create mode 100644 tests/nodedevmdevctldata/mdevctl-list-single.out.xml
create mode 100644 tests/nodedevmdevctldata/mdevctl-undefine.argv
--
2.26.2
[PATCH 0/2] qemu: migration corner case fix and cleanup
by Nikolay Shirokovskiy
Nikolay Shirokovskiy (2):
qemu: fix qemuMigrationSrcCleanup to use qemuMigrationJobFinish
qemu: don't needlessly unset close callback during perform phase
src/qemu/qemu_migration.c | 14 +++-----------
1 file changed, 3 insertions(+), 11 deletions(-)
--
1.8.3.1
[libvirt PATCH v2 00/10] remote: introduce a custom netcat impl for ssh tunnelling
by Daniel P. Berrangé
We have long had a problem with use of netcat for ssh tunnelling because
there's no guarantee the UNIX socket path the client builds will match
the UNIX socket path the remote host uses. We don't even allow session
mode SSH tunnelling for this reason. We also can't easily auto-spawn
libvirtd in session mode.
With the introduction of modular daemons we also have potential for two
completely different UNIX socket paths even for system mode, and the
client can't know which to use.
The solution to all these problems is to introduce a custom netcat impl.
Instead of passing the UNIX socket path, we pass the libvirt driver URI.
The custom netcat then decides which socket path to use based on the
remote build host environment.
We still have to support netcat for interoperability with legacy libvirt
versions, but we can default to the new virt-nc.
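Concretely, the client side ends up running something along these lines over
ssh (illustrative command lines only, not the exact strings libvirt builds):

    # legacy: the client has to guess the remote UNIX socket path
    ssh host 'nc -U /var/run/libvirt/libvirt-sock'

    # new: the helper (virt-ssh-helper, per the patch subjects) resolves the
    # socket path on the remote side from the driver URI
    ssh host 'virt-ssh-helper qemu:///system'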
Daniel P. Berrangé (10):
rpc: merge logic for generating remote SSH shell script
remote: push logic for default netcat binary into common helper
remote: split off enums into separate source file
remote: split out function for parsing URI scheme
remote: parse the remote transport string earlier
remote: split out function for constructing socket path
remote: extract logic for determining daemon to connect to
remote: introduce virt-ssh-helper binary
rpc: switch order of args in virNetClientNewSSH
rpc: use new virt-ssh-helper binary for remote tunnelling
build-aux/syntax-check.mk | 2 +-
docs/uri.html.in | 24 +-
libvirt.spec.in | 2 +
po/POTFILES.in | 2 +
src/libvirt_remote.syms | 1 +
src/remote/Makefile.inc.am | 33 +++
src/remote/remote_driver.c | 323 ++++---------------------
src/remote/remote_sockets.c | 277 +++++++++++++++++++++
src/remote/remote_sockets.h | 70 ++++++
src/remote/remote_ssh_helper.c | 425 +++++++++++++++++++++++++++++++++
src/rpc/virnetclient.c | 166 ++++++++-----
src/rpc/virnetclient.h | 29 ++-
src/rpc/virnetsocket.c | 37 +--
src/rpc/virnetsocket.h | 4 +-
tests/virnetsockettest.c | 12 +-
15 files changed, 1035 insertions(+), 372 deletions(-)
create mode 100644 src/remote/remote_sockets.c
create mode 100644 src/remote/remote_sockets.h
create mode 100644 src/remote/remote_ssh_helper.c
--
2.26.2
[PATCH] qemu.conf: Re-word the description for *_tls_x509_verify
by Fangge Jin
The original description for *_tls_x509_verify is a little misleading
by saying that "Enabling this option will reject any client who does
not have a ca-cert.pem certificate".
Signed-off-by: Fangge Jin <fjin(a)redhat.com>
---
src/qemu/qemu.conf | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index a96bedb114..b1bd3cecbd 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -109,9 +109,8 @@
# issuing an x509 certificate to every client who needs to connect.
#
# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the vnc_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
+# certificate(as described in default_tls_x509_verify) signed by the
+# CA in the vnc_tls_x509_cert_dir (or default_tls_x509_cert_dir).
#
# If this option is not supplied, it will be set to the value of
# "default_tls_x509_verify".
@@ -248,9 +247,8 @@
# issuing an x509 certificate to every client who needs to connect.
#
# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the chardev_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
+# certificate(as described in default_tls_x509_verify) signed by the
+# CA in the chardev_tls_x509_cert_dir (or default_tls_x509_cert_dir).
#
# If this option is not supplied, it will be set to the value of
# "default_tls_x509_verify".
@@ -375,9 +373,8 @@
# issuing an x509 certificate to every client who needs to connect.
#
# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the migrate_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
+# certificate(as described in default_tls_x509_verify) signed by the
+# CA in the migrate_tls_x509_cert_dir (or default_tls_x509_cert_dir).
#
# If this option is not supplied, it will be set to the value of
# "default_tls_x509_verify".
@@ -412,9 +409,8 @@
# issuing an x509 certificate to every client who needs to connect.
#
# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the backup_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
+# certificate(as described in default_tls_x509_verify) signed by the
+# CA in the backup_tls_x509_cert_dir (or default_tls_x509_cert_dir).
#
# If this option is not supplied, it will be set to the value of
# "default_tls_x509_verify".
--
2.20.1
[PATCH] qemu: clear residual QMP caps processes during QEMU driver initialization
by Bihong Yu
From c328ff62b11d58553fd2032a85fd3295e009b3d3 Mon Sep 17 00:00:00 2001
From: Bihong Yu <yubihong(a)huawei.com>
Date: Fri, 17 Jul 2020 16:55:12 +0800
Subject: [PATCH] qemu: clear residual QMP caps processes during QEMU driver
initialization
In some cases, QMP capabilities probing processes can be left behind as
residue. So we need to clean up any residual QMP caps processes when libvirt starts.
Signed-off-by: Bihong Yu <yubihong(a)huawei.com>
---
src/qemu/qemu_driver.c | 2 ++
src/qemu/qemu_process.c | 40 ++++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_process.h | 2 ++
3 files changed, 44 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d185666..d627fd7 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -889,6 +889,8 @@ qemuStateInitialize(bool privileged,
run_gid = cfg->group;
}
+ qemuProcessQMPClear(cfg->libDir);
+
qemu_driver->qemuCapsCache = virQEMUCapsCacheNew(cfg->libDir,
cfg->cacheDir,
run_uid,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 70fc24b..d545e3e 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8312,6 +8312,46 @@ static qemuMonitorCallbacks callbacks = {
};
+/**
+ * qemuProcessQMPClear
+ *
+ * Try to kill residual QMP caps processes
+ */
+void
+qemuProcessQMPClear(const char *libDir)
+{
+ virErrorPtr orig_err;
+ DIR *dirp = NULL;
+ struct dirent *dp;
+
+ if (!virFileExists(libDir))
+ return;
+
+ if (virDirOpen(&dirp, libDir) < 0)
+ return;
+
+ virErrorPreserveLast(&orig_err);
+ while (virDirRead(dirp, &dp, libDir) > 0) {
+ g_autofree char *qmp_uniqDir = NULL;
+ g_autofree char *qmp_pidfile = NULL;
+
+ if (!STRPREFIX(dp->d_name, "qmp-"))
+ continue;
+
+ qmp_uniqDir = g_strdup_printf("%s/%s", libDir, dp->d_name);
+ qmp_pidfile = g_strdup_printf("%s/%s", qmp_uniqDir, "qmp.pid");
+
+ ignore_value(virPidFileForceCleanupPath(qmp_pidfile));
+
+ if (qmp_uniqDir)
+ rmdir(qmp_uniqDir);
+ }
+ virErrorRestore(&orig_err);
+
+ VIR_DIR_CLOSE(dirp);
+}
+
+
static void
qemuProcessQMPStop(qemuProcessQMPPtr proc)
{
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 15e67b9..b039e6c 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -233,6 +233,8 @@ qemuProcessQMPPtr qemuProcessQMPNew(const char *binary,
gid_t runGid,
bool forceTCG);
+void qemuProcessQMPClear(const char *libDir);
+
void qemuProcessQMPFree(qemuProcessQMPPtr proc);
int qemuProcessQMPStart(qemuProcessQMPPtr proc);
--
1.8.3.1
[libvirt PATCH v4 00/11] remote: introduce a custom netcat impl for ssh tunnelling
by Daniel P. Berrangé
We have long had a problem with use of netcat for ssh tunnelling because
there's no guarantee the UNIX socket path the client builds will match
the UNIX socket path the remote host uses. We don't even allow session
mode SSH tunnelling for this reason. We also can't easily auto-spawn
libvirtd in session mode.
With the introduction of modular daemons we also have potential for two
completely different UNIX socket paths even for system mode, and the
client can't know which to use.
The solution to all these problems is to introduce a custom netcat impl.
Instead of passing the UNIX socket path, we pass the libvirt driver URI.
The custom netcat then decides which socket path to use based on the
remote build host environment.
We still have to support netcat for interoperability with legacy libvirt
versions, but we can default to the new virt-nc.
v4: Now with many fixed bugs to make it actually work
v3: Now with more meson and less autotools !
Daniel P. Berrangé (11):
rpc: merge logic for generating remote SSH shell script
remote: push logic for default netcat binary into common helper
remote: split off enums into separate source file
remote: split out function for parsing URI scheme
remote: parse the remote transport string earlier
remote: split out function for constructing socket path
remote: extract logic for determining daemon to connect to
remote: introduce virt-ssh-helper binary
rpc: switch order of args in virNetClientNewSSH
rpc: use new virt-ssh-helper binary for remote tunnelling
remote: fix error reporting for invalid daemon mode
build-aux/syntax-check.mk | 2 +-
docs/uri.html.in | 24 +-
libvirt.spec.in | 2 +
po/POTFILES.in | 2 +
src/libvirt_remote.syms | 1 +
src/remote/meson.build | 18 ++
src/remote/remote_driver.c | 331 +++++--------------------
src/remote/remote_sockets.c | 277 +++++++++++++++++++++
src/remote/remote_sockets.h | 70 ++++++
src/remote/remote_ssh_helper.c | 425 +++++++++++++++++++++++++++++++++
src/rpc/virnetclient.c | 167 +++++++++----
src/rpc/virnetclient.h | 29 ++-
src/rpc/virnetsocket.c | 37 +--
src/rpc/virnetsocket.h | 4 +-
tests/virnetsockettest.c | 12 +-
15 files changed, 1030 insertions(+), 371 deletions(-)
create mode 100644 src/remote/remote_sockets.c
create mode 100644 src/remote/remote_sockets.h
create mode 100644 src/remote/remote_ssh_helper.c
--
2.26.2
[PATCH v1] qemu: monitor: substitute missing model name for host-passthrough
by Collin Walling
When executing the hypervisor-cpu-compare/baseline commands and
the XML file contains a CPU definition using host-passthrough
and no model name, the commands will fail and return an error
message from the QMP response.
Let's fix this by checking for host-passthrough and a missing
model name when converting a CPU definition to a JSON object.
If these conditions are matched, then the JSON object will be
constructed using "host" as the model name.
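For example, a definition like the following (host-passthrough, no model
name) would previously make those commands fail:

    <cpu mode='host-passthrough'/>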
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1850680
Signed-off-by: Collin Walling <walling(a)linux.ibm.com>
---
src/qemu/qemu_monitor_json.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 3070c1e6b3..448a3a9356 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -5804,9 +5804,20 @@ qemuMonitorJSONMakeCPUModel(virCPUDefPtr cpu,
{
virJSONValuePtr model = virJSONValueNewObject();
virJSONValuePtr props = NULL;
+ const char *model_name = cpu->model;
size_t i;
- if (virJSONValueObjectAppendString(model, "name", cpu->model) < 0)
+ if (!model_name) {
+ if (cpu->mode == VIR_CPU_MODE_HOST_PASSTHROUGH) {
+ model_name = "host";
+ } else {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("cpu parameter is missing a model name"));
+ goto error;
+ }
+ }
+
+ if (virJSONValueObjectAppendString(model, "name", model_name) < 0)
goto error;
if (cpu->nfeatures || !migratable) {
--
2.26.2