[libvirt] [PATCH 00/11] qemu: backup: Fixes and improvements
by Peter Krempa
I've got a report of pull-mode backup not working properly. While
investigating I found a few other problems and one regression
caused by a recent patch. All of those are fixed below.
Additionally, the last 3 patches are RFC: they were created based on the
discussion on qemu-block, and I don't intend to push them right away, at
least until the discussion there is over (a sketch of the proposed XML
follows the patch list):
https://lists.gnu.org/archive/html/qemu-block/2019-12/msg00416.html
Peter Krempa (11):
qemu: block: Use proper asyncJob when waiting for completion of
blockdev-create
qemu: Reset the node-name allocator in qemuDomainObjPrivateDataClear
qemu: backup: Configure backup store image with backing file
qemu: backup: Move deletion of backup images to job termination
qemu: blockjob: Remove infrastructure for remembering to delete image
qemu: backup: Properly propagate async job type when cancelling the
job
qemu: process: Terminate backup job on VM destroy
schemas: backup: Remove pointless <choice> for 'name' of backup disk
RFC:
conf: backup: Allow configuration of names exported via NBD
qemu: backup: Implement support for backup disk export name
configuration
qemu: backup: Implement support for backup disk bitmap name
configuration
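A rough sketch of the backup XML the RFC patches would enable (the
exportname/exportbitmap attribute spellings follow the patch subjects;
they and all values here are my assumption, for illustration only):
<domainbackup mode='pull'>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <disk name='vda' backup='yes' type='file'
          exportname='vda-export' exportbitmap='vda-bitmap'>
      <scratch file='/var/lib/libvirt/images/scratch-vda.qcow2'/>
    </disk>
  </disks>
</domainbackup>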
docs/formatbackup.html.in | 9 +++
docs/schemas/domainbackup.rng | 16 ++--
src/conf/backup_conf.c | 11 +++
src/conf/backup_conf.h | 2 +
src/qemu/qemu_backup.c | 77 +++++++++++++------
src/qemu/qemu_backup.h | 10 ++-
src/qemu/qemu_block.c | 4 +-
src/qemu/qemu_blockjob.c | 20 +----
src/qemu/qemu_blockjob.h | 2 -
src/qemu/qemu_domain.c | 10 +--
src/qemu/qemu_driver.c | 2 +-
src/qemu/qemu_process.c | 8 +-
.../backup-pull-seclabel.xml | 2 +-
.../backup-pull-seclabel.xml | 2 +-
.../qemustatusxml2xmldata/backup-pull-in.xml | 1 -
15 files changed, 110 insertions(+), 66 deletions(-)
--
2.23.0
[libvirt] [PATCH] Cleaning up pools is not always ideal
by ebenner
Libvirt cleans up pools that error out by deactivating them. When using
LVM this is not very useful. We would rather have our own system handle
any real failures; too many situations where errors are meaningless and
unimportant cause libvirt to disable our pools continuously, causing
lots of grief. This patch is what we use to add a config option to
disable this behavior.
An example of one of the issues we encountered requiring this patch is a
race condition between libvirt and LVM. A pool refresh will hang if LVM
is performing an action, and libvirt will continue when LVM finishes.
When a pool is being refreshed while an LV is being removed, the refresh
hangs as expected and continues once the removal completes. Afterwards,
when the refresh goes on to refresh that specific LV, it errors out
because the LV cannot be found, and the entire pool is disabled.
Below are two commands which, when run together, reliably recreate the issue:
while :; do lvcreate /dev/vgpool --name TEST -L 100G && lvremove -f /dev/vgpool/TEST; done
while :; do virsh pool-refresh vgpool; done
I am aware this is not the most elegant solution and am open to
suggestions for resolving the underlying issue; however, we never need
our pools to be disabled because of an error, and I am sure there are
others whose use case is similar.
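For clarity, a sketch of how the proposed option would be used (option
name and semantics as introduced by this patch):
# in /etc/libvirt/libvirtd.conf:
cleanup_pools = 0
# A failing refresh then reports an error but leaves the pool active:
virsh pool-refresh vgpool
virsh pool-list    # vgpool stays in the active list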
---
src/remote/libvirtd.conf.in | 7 +++++++
src/storage/storage_driver.c | 38 ++++++++++++++++++++++++++++++++----
2 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/src/remote/libvirtd.conf.in b/src/remote/libvirtd.conf.in
index 34741183cc..5600c26eca 100644
--- a/src/remote/libvirtd.conf.in
+++ b/src/remote/libvirtd.conf.in
@@ -506,3 +506,10 @@
# potential infinite waits blocking libvirt.
#
#ovs_timeout = 5
+
+###################################################################
+# This decides whether to disable pools that error in some way, such
+# as during a refresh.
+# This can negatively affect LVM pools. 1 = disable the pools,
+# 0 = don't disable the pools.
+# Only use this option if all of your pools are LVM pools.
+#
+#cleanup_pools = 1
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 6bbf52f729..93ebcd662c 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -55,6 +55,7 @@
VIR_LOG_INIT("storage.storage_driver");
static virStorageDriverStatePtr driver;
+static int cleanup_pools = -1;
static int storageStateCleanup(void);
@@ -74,6 +75,20 @@ static void storageDriverUnlock(void)
virMutexUnlock(&driver->lock);
}
+static int cleanupPools(void)
+{
+    /* Lazily load the 'cleanup_pools' setting on first use; anything
+     * other than an explicit 0 or 1 falls back to the default of 1
+     * (keep the existing cleanup behavior). */
+    if (cleanup_pools == -1) {
+        g_autoptr(virConf) libvirtConf = NULL;
+        virConfLoadConfig(&libvirtConf, "libvirtd.conf");
+
+        if (!virConfGetValueInt(libvirtConf, "cleanup_pools", &cleanup_pools) ||
+            (cleanup_pools != 0 && cleanup_pools != 1))
+            cleanup_pools = 1;
+    }
+
+    return cleanup_pools;
+}
+
static void
storagePoolRefreshFailCleanup(virStorageBackendPtr backend,
@@ -81,14 +96,26 @@ storagePoolRefreshFailCleanup(virStorageBackendPtr backend,
                              virStoragePoolObjPtr obj,
                              const char *stateFile)
{
virErrorPtr orig_err;
-
virErrorPreserveLast(&orig_err);
virStoragePoolObjClearVols(obj);
if (stateFile)
unlink(stateFile);
-    if (backend->stopPool)
-        backend->stopPool(obj);
+
+    if (!cleanupPools()) {
+        virStoragePoolDefPtr def = virStoragePoolObjGetDef(obj);
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Failed to refresh storage pool '%s'. Would have "
+                         "disabled this pool but cleanup_pools = 0: %s"),
+                       def->name, virGetLastErrorMessage());
+    } else {
+        virStoragePoolDefPtr def = virStoragePoolObjGetDef(obj);
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Failed to refresh storage pool '%s'. Disabled "
+                         "this pool because cleanup_pools = 1: %s"),
+                       def->name, virGetLastErrorMessage());
+        if (backend->stopPool)
+            backend->stopPool(obj);
+    }
virErrorRestore(&orig_err);
}
@@ -101,7 +128,10 @@ storagePoolRefreshImpl(virStorageBackendPtr backend,
virStoragePoolObjClearVols(obj);
if (backend->refreshPool(obj) < 0) {
storagePoolRefreshFailCleanup(backend, obj, stateFile);
-        return -1;
+        if (!cleanupPools())
+            return 0;
+        else
+            return -1;
}
return 0;
--
2.24.1
[libvirt] [PATCH v2 00/12] Various bhyve driver improvements for FreeBSD
by Ryan Moeller
The main changes are:
* Add support for standard hooks in bhyve backend (a sample hook script follows this list):
- prepare
- start
- started
- stopped
- release
* Some code cleanup & general housekeeping in bhyve backend
* Add support for reboot in bhyve backend, both from within the VM and from the host
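To illustrate the hooks, a minimal hook script might look like this (the
path follows libvirt's generic hook convention; the exact arguments the
bhyve driver passes are assumed here to mirror the qemu hook):
#!/bin/sh
# /etc/libvirt/hooks/bhyve -- invoked as: bhyve <guest_name> <operation> <sub-operation> <extra>
guest="$1"
op="$2"

case "$op" in
    prepare) logger "libvirt: preparing bhyve guest ${guest}" ;;
    started) logger "libvirt: bhyve guest ${guest} is running" ;;
    release) logger "libvirt: bhyve guest ${guest} released" ;;
esac

exit 0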
Ryan Moeller (12):
Add hooks for bhyve backend
Fix build errors on FreeBSD
Simplify bhyve driver caps helpers
Remove redundant parameter to virBhyveProcessStart()
Factor out conn
Fix indentation
Eliminate rc variable
Make bhyveMonitor a virClass
Don't bother seeking to the end of a file opened O_APPEND
Add reboot support for bhyve backend
Fix bhyvexml2argvtest
Add missing virtlogd.init for OpenRC
src/bhyve/bhyve_command.c | 40 ++++-----
src/bhyve/bhyve_command.h | 4 +-
src/bhyve/bhyve_driver.c | 67 ++++++++++----
src/bhyve/bhyve_driver.h | 4 +-
src/bhyve/bhyve_monitor.c | 165 +++++++++++++++++++++++------------
src/bhyve/bhyve_monitor.h | 2 +
src/bhyve/bhyve_process.c | 106 +++++++++++++++-------
src/bhyve/bhyve_process.h | 4 +-
src/conf/virnetworkobj.c | 7 +-
src/logging/Makefile.inc.am | 10 +++
src/logging/virtlogd.init.in | 14 +++
src/util/virhook.c | 15 ++++
src/util/virhook.h | 11 +++
tests/bhyvexml2argvtest.c | 4 +-
14 files changed, 319 insertions(+), 134 deletions(-)
create mode 100644 src/logging/virtlogd.init.in
--
2.23.0
[libvirt] [PATCH] virsh: Add a completer for `domifaddr` --source parameter.
by Julio Faracco
The `domifaddr` command can use three different sources to grab the IP
address of a virtual machine: lease, agent, and arp. The --source
parameter currently has no completer function to offer these options.
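Once wired up, tab completion would offer the three sources, e.g.
(mydomain is a placeholder domain name):
virsh # domifaddr mydomain --source <TAB>
agent  arp  lease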
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
---
tools/virsh-completer-domain.c | 17 +++++++++++++++++
tools/virsh-completer-domain.h | 5 +++++
tools/virsh-domain-monitor.c | 1 +
3 files changed, 23 insertions(+)
diff --git a/tools/virsh-completer-domain.c b/tools/virsh-completer-domain.c
index 0311ee50d0..c8709baa38 100644
--- a/tools/virsh-completer-domain.c
+++ b/tools/virsh-completer-domain.c
@@ -296,3 +296,20 @@ virshDomainShutdownModeCompleter(vshControl *ctl,
return virshCommaStringListComplete(mode, modes);
}
+
+
+char **
+virshDomainIfAddrSourceCompleter(vshControl *ctl,
+ const vshCmd *cmd,
+ unsigned int flags)
+{
+ const char *sources[] = {"lease", "agent", "arp", NULL};
+ const char *source = NULL;
+
+ virCheckFlags(0, NULL);
+
+ if (vshCommandOptStringQuiet(ctl, cmd, "source", &source) < 0)
+ return NULL;
+
+ return virshCommaStringListComplete(source, sources);
+}
diff --git a/tools/virsh-completer-domain.h b/tools/virsh-completer-domain.h
index 083ab327cc..f5e5625051 100644
--- a/tools/virsh-completer-domain.h
+++ b/tools/virsh-completer-domain.h
@@ -53,3 +53,8 @@ char ** virshDomainDeviceAliasCompleter(vshControl *ctl,
char ** virshDomainShutdownModeCompleter(vshControl *ctl,
const vshCmd *cmd,
unsigned int flags);
+
+char **
+virshDomainIfAddrSourceCompleter(vshControl *ctl,
+ const vshCmd *cmd,
+ unsigned int flags);
diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 30b186ffd1..1d1f87eb9e 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -2346,6 +2346,7 @@ static const vshCmdOptDef opts_domifaddr[] = {
{.name = "source",
.type = VSH_OT_STRING,
.flags = VSH_OFLAG_NONE,
+ .completer = virshDomainIfAddrSourceCompleter,
.help = N_("address source: 'lease', 'agent', or 'arp'")},
{.name = NULL}
};
--
2.20.1
[libvirt] [PULL 0/3] Block patches
by Stefan Hajnoczi
The following changes since commit aceeaa69d28e6f08a24395d0aa6915b687d0a681:
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2019-12-17' into staging (2019-12-17 15:55:20 +0000)
are available in the Git repository at:
https://github.com/stefanha/qemu.git tags/block-pull-request
for you to fetch changes up to 725fe5d10dbd4259b1853b7d253cef83a3c0d22a:
virtio-blk: fix out-of-bounds access to bitmap in notify_guest_bh (2019-12-19 16:20:25 +0000)
----------------------------------------------------------------
Pull request
----------------------------------------------------------------
Li Hangjing (1):
virtio-blk: fix out-of-bounds access to bitmap in notify_guest_bh
Stefan Hajnoczi (2):
virtio-blk: deprecate SCSI passthrough
docs: fix rst syntax errors in unbuilt docs
docs/arm-cpu-features.rst | 6 +++---
docs/virtio-net-failover.rst | 4 ++--
docs/virtio-pmem.rst | 19 ++++++++++---------
hw/block/dataplane/virtio-blk.c | 2 +-
qemu-deprecated.texi | 11 +++++++++++
5 files changed, 27 insertions(+), 15 deletions(-)
--
2.23.0
[libvirt] [PATCH] docs: Harmonize backend names for QEMU and LXC
by Michael Weiser
Trivially replace usages of qemu and lxc in the virsh manpage with their
more heavily used and (according to Wikipedia) correct upper-case
spellings QEMU and LXC.
Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Suggested-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
docs/manpages/virsh.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index fea0527caf..7e26676570 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -577,7 +577,7 @@ from the domain's XML <os/> element and <type/> subelement or one from a
list of machines from the ``virsh capabilities`` output for a specific
architecture and domain type.
-For the qemu hypervisor, a *virttype* of either 'qemu' or 'kvm' must be
+For the QEMU hypervisor, a *virttype* of either 'qemu' or 'kvm' must be
supplied along with either the *emulatorbin* or *arch* in order to
generate output for the default *machine*. Supplying a *machine*
value will generate output for the specific machine.
@@ -1072,7 +1072,7 @@ read I/O operations limit.
write I/O operations limit.
*--size-iops-sec* specifies size I/O operations limit per second.
*--group-name* specifies group name to share I/O quota between multiple drives.
-For a qemu domain, if no name is provided, then the default is to have a single
+For a QEMU domain, if no name is provided, then the default is to have a single
group for each *device*.
Older versions of virsh only accepted these options with underscore
@@ -1084,7 +1084,7 @@ An explicit 0 also clears any limit. A non-zero value for a given total
cannot be mixed with non-zero values for read or write.
It is up to the hypervisor to determine how to handle the length values.
-For the qemu hypervisor, if an I/O limit value or maximum value is set,
+For the QEMU hypervisor, if an I/O limit value or maximum value is set,
then the default value of 1 second will be displayed. Supplying a 0 will
reset the value back to the default.
@@ -1642,7 +1642,7 @@ domblkstat
Get device block stats for a running domain. A *block-device* corresponds
to a unique target name (<target dev='name'/>) or source file (<source
file='name'/>) for one of the disk devices attached to *domain* (see
-also ``domblklist`` for listing these names). On a lxc or qemu domain,
+also ``domblklist`` for listing these names). On an LXC or QEMU domain,
omitting the *block-device* yields device block stats summarily for the
entire domain.
@@ -3247,7 +3247,7 @@ destination). Some hypervisors do not support this feature and will return an
error if this parameter is used.
Optional *disks-port* sets the port that hypervisor on destination side should
-bind to for incoming disks traffic. Currently it is supported only by qemu.
+bind to for incoming disks traffic. Currently it is supported only by QEMU.
migrate-compcache
--
2.24.1
[libvirt] [PATCH v2 0/3] Mark and document restore with managed save as risky
by Michael Weiser
This series marks restore of an inactive qemu snapshot while there is
managed saved state as risky, for the reasons explained in patches 1 and
3. Patch 2 is a simple reformatting of the documentation with no other
changes, in preparation for the addition of more reasons why reverts
might need to be forced.
Changes from v1:
- reword error message to "error: revert requires force: snapshot
without memory state, removal of existing managed saved state strongly
recommended to avoid corruption"
- add documentation of the new behaviour
Michael Weiser (3):
qemu: Warn of restore with managed save being risky
docs: Reformat snapshot-revert force reasons
docs: Add snapshot-revert qemu managedsave force
docs/manpages/virsh.rst | 39 ++++++++++++++++++++++++---------------
src/qemu/qemu_driver.c | 9 +++++++++
2 files changed, 33 insertions(+), 15 deletions(-)
--
2.24.1
[libvirt] [PATCH] Add overrides for network port UUID getter/lookup methods
by Daniel P. Berrangé
The generator creates broken code for all these methods.
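For reference, a short sketch of how the fixed bindings would be used
from Python (the method names on the Python classes are an assumption
based on the generator's usual naming conventions):
import libvirt

conn = libvirt.open("qemu:///system")
net = conn.networkLookupByName("default")

# Enumerate ports and exercise the UUID getters added by this patch.
for port in net.listAllPorts():
    raw = port.UUID()            # 16-byte raw UUID buffer
    print(port.UUIDString())     # canonical string form
    same = net.portLookupByUUID(raw)  # look the port up again by raw UUID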
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
generator.py | 3 ++
libvirt-override-api.xml | 16 ++++++++
libvirt-override.c | 83 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 102 insertions(+)
diff --git a/generator.py b/generator.py
index cba9d47..426f007 100755
--- a/generator.py
+++ b/generator.py
@@ -430,6 +430,9 @@ skip_impl = (
'virNetworkGetUUID',
'virNetworkGetUUIDString',
'virNetworkLookupByUUID',
+ 'virNetworkPortGetUUID',
+ 'virNetworkPortGetUUIDString',
+ 'virNetworkPortLookupByUUID',
'virDomainGetAutostart',
'virNetworkGetAutostart',
'virDomainBlockStats',
diff --git a/libvirt-override-api.xml b/libvirt-override-api.xml
index 7a0d4c5..4ab403c 100644
--- a/libvirt-override-api.xml
+++ b/libvirt-override-api.xml
@@ -64,6 +64,12 @@
<arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/>
<arg name='uuid' type='const unsigned char *' info='the UUID string for the network, must be 16 bytes'/>
</function>
+ <function name='virNetworkPortLookupByUUID' file='python'>
+ <info>Try to lookup a port on the given network based on its UUID.</info>
+ <return type='virNetworkPortPtr' info='a new network port object or NULL in case of failure'/>
+ <arg name='net' type='virNetworkPtr' info='pointer to the network object'/>
+ <arg name='uuid' type='const unsigned char *' info='the UUID string for the network port, must be 16 bytes'/>
+ </function>
<function name='virDomainGetInfo' file='python'>
<info>Extract information about a domain. Note that if the connection used to get the domain is limited only a partial set of the information can be extracted.</info>
<return type='char *' info='the list of information or None in case of error'/>
@@ -153,6 +159,16 @@
<return type='char *' info='the UUID string or None in case of error'/>
<arg name='net' type='virNetworkPtr' info='a network object'/>
</function>
+ <function name='virNetworkPortGetUUID' file='python'>
+ <info>Extract the UUID unique Identifier of a network port.</info>
+ <return type='char *' info='the 16 bytes string or None in case of error'/>
+ <arg name='domain' type='virNetworkPortPtr' info='a network port object'/>
+ </function>
+ <function name='virNetworkPortGetUUIDString' file='python'>
+ <info>Fetch globally unique ID of the network port as a string.</info>
+ <return type='char *' info='the UUID string or None in case of error'/>
+ <arg name='net' type='virNetworkPortPtr' info='a network port object'/>
+ </function>
<function name='virStoragePoolGetUUID' file='python'>
<info>Extract the UUID unique Identifier of a storage pool.</info>
<return type='char *' info='the 16 bytes string or None in case of error'/>
diff --git a/libvirt-override.c b/libvirt-override.c
index 4e4c00a..2b39ace 100644
--- a/libvirt-override.c
+++ b/libvirt-override.c
@@ -10185,6 +10185,86 @@ libvirt_virNetworkPortGetParameters(PyObject *self ATTRIBUTE_UNUSED,
virTypedParamsFree(params, nparams);
return dict;
}
+
+static PyObject *
+libvirt_virNetworkPortGetUUID(PyObject *self ATTRIBUTE_UNUSED,
+ PyObject *args)
+{
+ unsigned char uuid[VIR_UUID_BUFLEN];
+ virNetworkPortPtr port;
+ PyObject *pyobj_port;
+ int c_retval;
+
+ if (!PyArg_ParseTuple(args, (char *)"O:virNetworkPortGetUUID", &pyobj_port))
+ return NULL;
+ port = (virNetworkPortPtr) PyvirNetworkPort_Get(pyobj_port);
+
+ if (port == NULL)
+ return VIR_PY_NONE;
+
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ c_retval = virNetworkPortGetUUID(port, &uuid[0]);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ if (c_retval < 0)
+ return VIR_PY_NONE;
+
+ return libvirt_charPtrSizeWrap((char *) &uuid[0], VIR_UUID_BUFLEN);
+}
+
+static PyObject *
+libvirt_virNetworkPortGetUUIDString(PyObject *self ATTRIBUTE_UNUSED,
+ PyObject *args)
+{
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virNetworkPortPtr port;
+ PyObject *pyobj_port;
+ int c_retval;
+
+ if (!PyArg_ParseTuple(args, (char *)"O:virNetworkPortGetUUIDString",
+ &pyobj_port))
+ return NULL;
+ port = (virNetworkPortPtr) PyvirNetworkPort_Get(pyobj_port);
+
+ if (port == NULL)
+ return VIR_PY_NONE;
+
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ c_retval = virNetworkPortGetUUIDString(port, &uuidstr[0]);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ if (c_retval < 0)
+ return VIR_PY_NONE;
+
+ return libvirt_constcharPtrWrap((char *) &uuidstr[0]);
+}
+
+static PyObject *
+libvirt_virNetworkPortLookupByUUID(PyObject *self ATTRIBUTE_UNUSED,
+ PyObject *args)
+{
+ virNetworkPortPtr c_retval;
+ virNetworkPtr net;
+ PyObject *pyobj_net;
+ unsigned char *uuid;
+ int len;
+
+ if (!PyArg_ParseTuple(args, (char *)"Oz#:virNetworkPortLookupByUUID",
+ &pyobj_net, &uuid, &len))
+ return NULL;
+ net = (virNetworkPtr) PyvirNetwork_Get(pyobj_net);
+
+ if ((uuid == NULL) || (len != VIR_UUID_BUFLEN))
+ return VIR_PY_NONE;
+
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ c_retval = virNetworkPortLookupByUUID(net, uuid);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ return libvirt_virNetworkPortPtrWrap((virNetworkPortPtr) c_retval);
+}
+
+
#endif /* LIBVIR_CHECK_VERSION(5, 5, 0) */
#if LIBVIR_CHECK_VERSION(5, 7, 0)
@@ -10535,6 +10615,9 @@ static PyMethodDef libvirtMethods[] = {
{(char *) "virNetworkListAllPorts", libvirt_virNetworkListAllPorts, METH_VARARGS, NULL},
{(char *) "virNetworkPortSetParameters", libvirt_virNetworkPortSetParameters, METH_VARARGS, NULL},
{(char *) "virNetworkPortGetParameters", libvirt_virNetworkPortGetParameters, METH_VARARGS, NULL},
+ {(char *) "virNetworkPortGetUUID", libvirt_virNetworkPortGetUUID, METH_VARARGS, NULL},
+ {(char *) "virNetworkPortGetUUIDString", libvirt_virNetworkPortGetUUIDString, METH_VARARGS, NULL},
+ {(char *) "virNetworkPortLookupByUUID", libvirt_virNetworkPortLookupByUUID, METH_VARARGS, NULL},
#endif /* LIBVIR_CHECK_VERSION(5, 5, 0) */
#if LIBVIR_CHECK_VERSION(5, 7, 0)
{(char *) "virDomainGetGuestInfo", libvirt_virDomainGetGuestInfo, METH_VARARGS, NULL},
--
2.24.1
[libvirt] When the libvirt XML defines NIC driver e1000, my VM fails to start
by thomas.kuang
Hi all,
My XML defines the interface as follows:
<interface type='ethernet'>
<mac address='00:04:00:41:03:11'/>
<model type='e1000'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
When I run the command:
bash-4.1# virsh start root-vsys_v77
error: Failed to start domain root-vsys_v77
error: Unable to open /dev/net/tun, is tun module loaded?: No such file or directory
Is my XML definition wrong? How can I create my VM with the e1000 NIC driver?
bash-4.1# libvirtd -v  (libvirt was installed via yum install libvirt):
2020-01-02 10:00:49.261+0000: 8349: info : libvirt version: 4.5.0, package: 23.el7 (CentOS BuildSystem <http://bugs.centos.org>, 2019-08-09-00:39:08, x86-02.bsys.centos.org)
2020-01-02 10:00:49.261+0000: 8349: info : hostname: NSG
2020-01-02 10:00:49.261+0000: 8349: warning : virGetHostnameImpl:719 : getaddrinfo failed for 'NSG': Name or service not known
2020-01-02 10:00:49.263+0000: 8349: info : libvirt version: 4.5.0, package: 23.el7 (CentOS BuildSystem <http://bugs.centos.org>, 2019-08-09-00:39:08, x86-02.bsys.centos.org)
2020-01-02 10:00:49.263+0000: 8349: info : hostname: NSG
2020-01-02 10:00:49.263+0000: 8349: info : virObjectNew:248 : OBJECT_NEW: obj=0x5605a6104de0 classname=virAccessManager
2020-01-02 10:00:49.263+0000: 8349: info : virObjectNew:248 : OBJECT_NEW: obj=0x5605a60f54e0 classname=virAccessManager
My emulator is QEMU 3.0; I downloaded the QEMU source code and compiled the emulator myself:
bash-4.1# /usr/local/bin/qemu-system-x86_64 --version
QEMU emulator version 3.0.0
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
Thanks!
[libvirt] [PATCH] qemu: Warn of restore with managed save being risky
by Michael Weiser
Internal snapshots of a non-running domain do not carry any memory state
and restoring such a snapshot will not replace existing saved memory
state. This allows a scenario where a user first suspends a domain into
managedsave, restores a non-running snapshot and then resumes the domain
from managedsave. After that, the guest system will run with its
previous memory state atop a different disk state. The most obvious
possible fallout from this is extensive file system corruption. Swap
content and RAID bitmaps might also be off.
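For illustration, the risky sequence looks like this (the domain and
snapshot names are made up):
virsh managedsave mydomain                # saves memory state, stops the domain
virsh snapshot-revert mydomain disk-snap  # reverts disk state of the offline domain
virsh start mydomain                      # restores from managedsave: old memory, new disks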
This has been discussed[1] and fixed[2] from the end-user perspective for
virt-manager.
This patch marks the restore operation as risky at the libvirt level,
requiring the user to remove the saved memory state first or force the
operation.
[1] https://www.redhat.com/archives/virt-tools-list/2019-November/msg00011.html
[2] https://www.redhat.com/archives/virt-tools-list/2019-December/msg00049.html
Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Cc: Cole Robinson <crobinso@redhat.com>
---
src/qemu/qemu_driver.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ec8faf384c..dcd103d3bb 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16652,6 +16652,15 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
_("must respawn qemu to start inactive snapshot"));
goto endjob;
}
+    if (vm->hasManagedSave &&
+        !(snapdef->state == VIR_DOMAIN_SNAPSHOT_RUNNING ||
+          snapdef->state == VIR_DOMAIN_SNAPSHOT_PAUSED)) {
+        virReportError(VIR_ERR_SNAPSHOT_REVERT_RISKY, "%s",
+                       _("reverting to a snapshot while there is managed "
+                         "saved state will cause corruption when run; "
+                         "remove the saved state first"));
+        goto endjob;
+    }
}
if (snap->def->dom) {
--
2.24.1