[libvirt] [PATCH] libvirtd: create run dir when running as non-root user
by xuhj@linux.vnet.ibm.com
From: soulxu <soulxu(a)soulxu-ThinkPad-T410.(none)>
Signed-off-by: Xu He Jie <xuhj(a)linux.vnet.ibm.com>
When libvirtd is running as a non-root user, it won't create ${HOME}/.libvirt.
It then fails with an error message such as:
17:44:16.838: 7035: error : virPidFileAcquirePath:322 : Failed to open pid file
---
daemon/libvirtd.c | 52 ++++++++++++++++++++++++++++++++++++++--------------
1 files changed, 38 insertions(+), 14 deletions(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 423c3d7..1ce8acd 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -1249,6 +1249,8 @@ int main(int argc, char **argv) {
bool privileged = geteuid() == 0 ? true : false;
bool implicit_conf = false;
bool use_polkit_dbus;
+ char *run_dir = NULL;
+ mode_t old_umask;
struct option opts[] = {
{ "verbose", no_argument, &verbose, 1},
@@ -1403,23 +1405,42 @@ int main(int argc, char **argv) {
/* Ensure the rundir exists (on tmpfs on some systems) */
if (privileged) {
- const char *rundir = LOCALSTATEDIR "/run/libvirt";
- mode_t old_umask;
-
- old_umask = umask(022);
- if (mkdir (rundir, 0755)) {
- if (errno != EEXIST) {
- char ebuf[1024];
- VIR_ERROR(_("unable to create rundir %s: %s"), rundir,
- virStrerror(errno, ebuf, sizeof(ebuf)));
- ret = VIR_DAEMON_ERR_RUNDIR;
- umask(old_umask);
- goto cleanup;
- }
+ run_dir = strdup(LOCALSTATEDIR "/run/libvirt");
+ if (!run_dir) {
+ VIR_ERROR(_("Can't allocate memory"));
+ goto cleanup;
+ }
+ }
+ else {
+ char *user_dir = NULL;
+
+ if (!(user_dir = virGetUserDirectory(geteuid()))) {
+ VIR_ERROR(_("Can't determine user directory"));
+ goto cleanup;
+ }
+
+ if (virAsprintf(&run_dir, "%s/.libvirt/", user_dir) < 0) {
+ VIR_ERROR(_("Can't allocate memory"));
+ VIR_FREE(user_dir);
+ goto cleanup;
}
- umask(old_umask);
+
+ VIR_FREE(user_dir);
}
+ old_umask = umask(022);
+ if (mkdir (run_dir, 0755)) {
+ if (errno != EEXIST) {
+ char ebuf[1024];
+ VIR_ERROR(_("unable to create rundir %s: %s"), run_dir,
+ virStrerror(errno, ebuf, sizeof(ebuf)));
+ ret = VIR_DAEMON_ERR_RUNDIR;
+ umask(old_umask);
+ goto cleanup;
+ }
+ }
+ umask(old_umask);
+
/* Try to claim the pidfile, exiting if we can't */
if ((pid_file_fd = virPidFileAcquirePath(pid_file, getpid())) < 0) {
ret = VIR_DAEMON_ERR_PIDFILE;
@@ -1571,6 +1592,9 @@ cleanup:
VIR_FREE(sock_file_ro);
VIR_FREE(pid_file);
VIR_FREE(remote_config_file);
+ if (run_dir)
+ VIR_FREE(run_dir);
+
daemonConfigFree(config);
virLogShutdown();
--
1.7.4.1
[libvirt] RFCv2: virDomainSnapshotCreateXML enhancements
by Eric Blake
[BCC'ing those who have responded to earlier RFCs]
I've posted previous RFCs for improving snapshot support:
ideas on managing a subset of disks:
https://www.redhat.com/archives/libvir-list/2011-May/msg00042.html
ideas on managing snapshots of storage volumes not tied to a domain
https://www.redhat.com/archives/libvir-list/2011-June/msg00761.html
After re-reading the feedback received on those threads, I think I've
settled on a pretty robust design for my first round of adding
improvements to the management of snapshots tied to a domain, while
leaving the door open for future extensions.
Sorry this email is so long (I've had it open in my editor for more than
48 hours now as I keep improving it), but hopefully it is worth the
effort to read. See the bottom if you want the shorter summary on the
proposed changes.
First, some definitions:
========================
disk snapshot: the state of a virtual disk used at a given time; once a
snapshot exists, then it is possible to track a delta of changes that
have happened since that time.
internal disk snapshot: a disk snapshot where both the saved state and
delta reside in the same file (possible with qcow2 and qed). If a disk
image is not in use by qemu, this is possible via 'qemu-img snapshot -c'.
external disk snapshot: a disk snapshot where the saved state is one
file, and the delta is tracked in another file. For a disk image not in
use by qemu, this can be done with qemu-img to create a new qcow2 file
wrapping any type of existing file. Recent qemu has also learned the
'snapshot_blkdev' monitor command for creating external snapshots while
qemu is using a disk, and the goal of this RFC is to expose that
functionality from within existing libvirt APIs.
saved state: all non-disk information used to resume a guest at the same
state, assuming the disks did not change. With qemu, this is possible
via migration to a file.
checkpoint: a combination of saved state and a disk snapshot. With
qemu, the 'savevm' monitor command creates a checkpoint using internal
snapshots. It may also be possible to combine saved state and disk
snapshots created while the guest is offline for a form of
checkpointing, although this RFC focuses on disk snapshots created while
the guest is running.
snapshot: can be either 'disk snapshot' or 'checkpoint'; the rest of
this email will attempt to use 'snapshot' where either form works, and a
qualified term where no ambiguity is intended.
Existing libvirt functionality
==============================
The virDomainSnapshotCreateXML currently manages a hierarchy of
"snapshots", although it is currently only used for "checkpoints", where
every snapshot has a name and a possibly empty parent. The idea is that
once a domain has a snapshot, there is always a current snapshot, and
all new snapshots are created with a parent of a previously existing
snapshot (although there are still some bugs to be fixed in managing the
current snapshot over a libvirtd restart). It is possible to have
disjoint hierarchies, if you delete a root snapshot that had more than
one child (making both children become independent roots). The snapshot
hierarchy is maintained by libvirt (in a typical installation, the files
in /var/lib/libvirt/qemu/snapshot/<dom>/<name> track each named
snapshot, using <domainsnapshot> XML), with additional metadata not
present in the qcow2 internal snapshot format (that is, while qcow2 can
maintain multiple snapshots, it does not maintain relations between
them). Remember, the "current" snapshot is not the current machine
state, but the snapshot that would become the parent if you create a new
snapshot; perhaps we could have named it the "loaded" snapshot, but the
API names are set in stone now.
Libvirt also has APIs for listing all snapshots, querying the current
snapshot, reverting back to the state of another snapshot, and deleting
a snapshot. Deletion comes with a choice of deleting just that named
version (removing one node in the hierarchy and re-parenting all
children) or that tree of the hierarchy (that named version and all
children).
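For reference, a rough sketch of how a client walks these existing APIs
today (assuming <libvirt/libvirt.h> and <stdlib.h>, a connected
virDomainPtr dom, and with error handling mostly omitted):

  /* Enumerate all snapshots of a domain and inspect their metadata,
   * using only APIs that already exist. */
  int n = virDomainSnapshotNum(dom, 0);
  if (n > 0) {
      char **names = calloc(n, sizeof(*names));
      n = virDomainSnapshotListNames(dom, names, n, 0);
      for (int i = 0; i < n; i++) {
          virDomainSnapshotPtr snap =
              virDomainSnapshotLookupByName(dom, names[i], 0);
          char *xml = virDomainSnapshotGetXMLDesc(snap, 0);
          /* ... look at the <domainsnapshot> metadata, e.g. its <parent> ... */
          free(xml);
          virDomainSnapshotFree(snap);
          free(names[i]);
      }
      free(names);
  }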
Since qemu checkpoints can currently only be created via internal disk
snapshots, libvirt has not had to track any file name relationships - a
single "snapshot" corresponds to a qcow2 snapshot name within all qcow2
disks associated to a domain; furthermore, snapshot creation was limited
to domains where all modifiable disks were already in qcow2 format.
However, these "checkpoints" could be created on both running domains
(qemu savevm) or inactive domains (qemu-img snapshot -c), with the
latter technically being a case of just internal disk snapshots.
Libvirt currently has a bug in that it only saves <domain>/<uuid> rather
than the full domain xml along with a checkpoint - if any devices are
hot-plugged (or in the case of offline snapshots, if the domain
configuration is changed) after a snapshot but before the revert, then
things will most likely blow up due to the differences in devices in use
by qemu vs. the devices expected by the snapshot.
Reverting to a snapshot can also be considered as a form of data loss -
you are discarding the disk changes and ram state that have happened
since the last snapshot. To some degree, this is by design - the very
nature of reverting to a snapshot implies throwing away changes;
however, it may be nice to add a safety valve so that by default,
reverting to a live checkpoint from an offline state works, but
reverting from a running domain should require some confirmation that it
is okay to throw away accumulated running state.
Libvirt also currently has a limitation where snapshots are local to one
host - the moment you migrate a snapshot to another host, you have lost
access to all snapshot metadata.
Proposed enhancements
=====================
Note that these proposals merely add xml attribute and subelement
extensions, as well as API flags, rather than creating any new API,
which makes it a nice candidate for backporting the patch series based
on this RFC into older releases as appropriate.
Creation
++++++++
I propose reusing the virDomainSnapshotCreateXML API and
<domainsnapshot> xml for both "checkpoints" and "disk snapshots", all
maintained within a single hierarchy. That is, the parent of a disk
snapshot can be a checkpoint or another disk snapshot, and the parent of
a checkpoint can be another checkpoint or a disk snapshot. And, since I
defined "snapshot" to mean either "checkpoint" or "disk snapshot", this
single hierarchy of "snapshots" will still be valid once it is expanded
to include more than just "checkpoints". Since libvirt already has to
maintain additional metadata to track parent-child relationships between
snapshots, it should not be hard to augment that XML to store additional
information needed to track external disk snapshots.
The default is that virDomainSnapshotCreateXML(,0) creates a checkpoint,
while leaving qemu running; I propose two new flags to fine-tune things:
virDomainSnapshotCreateXML(, VIR_DOMAIN_SNAPSHOT_CREATE_HALT) will
create the checkpoint then halt the qemu process, and
virDomainSnapshotCreateXML(, VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY) will
create a disk snapshot rather than a checkpoint (on qemu, by using a
sequence including the new 'snapshot_blkdev' monitor command).
Specifying both flags at once is a form of data loss (you are losing the
ram state), and I suspect it to be rarely used, but since it may be
worthwhile in testing whether a disk snapshot is truly crash-consistent,
I won't refuse the combination.
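To make the intended usage concrete, here is a minimal sketch (using the
proposed flag names from above; the XML bodies are just placeholders):

  /* Today's behavior: a checkpoint of the running guest. */
  virDomainSnapshotPtr chk =
      virDomainSnapshotCreateXML(dom, "<domainsnapshot/>", 0);

  /* Proposed: disk-only snapshot taken while the guest keeps running,
   * implemented on qemu via 'snapshot_blkdev'. */
  virDomainSnapshotPtr dsnap =
      virDomainSnapshotCreateXML(dom,
          "<domainsnapshot><name>disk-state</name></domainsnapshot>",
          VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);

  /* Proposed: take a checkpoint, then halt the guest. */
  virDomainSnapshotPtr halted =
      virDomainSnapshotCreateXML(dom, "<domainsnapshot/>",
                                 VIR_DOMAIN_SNAPSHOT_CREATE_HALT);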
Other flags may be added in the future; I know of at least two features
in qemu that may warrant some flags once they are stable: 1. a guest
agent fsfreeze/fsthaw command will allow the guest to get the file
system into a stable state prior to the snapshot, meaning that reverting
to that snapshot can skip out on any fsck or journal replay actions. Of
course, this is a best effort attempt since guest agent interaction is
untrustworthy (comparable to memory ballooning - the guest may not
support the agent or may intentionally send falsified responses over the
agent), so the agent should only be used when explicitly requested -
this would be done with a new flag
VIR_DOMAIN_SNAPSHOT_CREATE_GUEST_FREEZE. 2. there is thought of adding
a qemu monitor command to freeze just I/O to a particular subset of
disks, rather than the current approach of having to pause all vcpus
before doing a snapshot of multiple disks. Once that is added, libvirt
should use the new monitor command by default, but for compatibility
testing, it may be worth adding VIR_DOMAIN_SNAPSHOT_CREATE_VCPU_PAUSE to
require a full vcpu pause instead of the faster iopause mechanism.
My first xml change is that <domainsnapshot> will now always track the
full <domain> xml (prior to any file modifications), normally as an
output-only part of the snapshot (that is, a <domain> subelement of
<domainsnapshot> will always be present in virDomainGetXMLDesc, but is
generally ignored in virDomainSnapshotCreateXML - more on this below).
This gives us the capability to use XML ABI compatibility checks
(similar to those used in virDomainMigrate2, virDomainRestoreFlags, and
virDomainSaveImageDefineXML). And, given that the full <domain> xml is
now present in the snapshot metadata, this means that we need to add
virDomainSnapshotGetXMLDesc(snap, VIR_DOMAIN_XML_SECURE), so that any
security-sensitive data doesn't leak out to read-only connections.
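For example, once that flag is wired up for snapshots as proposed:

  /* Read-write connections may ask for the embedded <domain> including
   * security-sensitive data; read-only connections get it scrubbed. */
  char *xml = virDomainSnapshotGetXMLDesc(snap, VIR_DOMAIN_XML_SECURE);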
Right now, domain ABI compatibility is only checked for
VIR_DOMAIN_XML_INACTIVE contents of xml; I'm thinking that the snapshot
<domain> will always be the inactive version (sufficient for starting a
new qemu), although I may end up changing my mind and storing the active
version (when attempting to revert from live qemu to another live
checkpoint, all while using a single qemu process, the ABI compatibility
checking may need enhancements to discover differences not visible in
inactive xml but fatally different between the active xml when using
'loadvm', but which do not matter to virsh save/restore, where a new qemu
process is created every time).
Next, we need a way to control which subset of disks is involved in a
snapshot command. Previous mail has documented that for ESX, the
decision can only be made at boot time - a disk can be persistent
(involved in snapshots, and saves changes across domain boots);
independent-persistent (is not involved in snapshots, but saves changes
across domain boots); or independent-nonpersistent (is not involved in
snapshots, and all changes during a domain run are discarded when the
domain quits). In <domain> xml, I will represent this by two new
optional attributes:
<disk snapshot='no|external|internal' persistent='yes|no'>...</disk>
For now, qemu will reject snapshot=internal (the snapshot_blkdev monitor
command does not yet support it, although it was documented as a
possible extension); I'm not sure whether ESX supports external,
internal, or both. Likewise, both ESX and qemu will reject
persistent=no unless snapshot=no is also specified or implied (it makes
no sense to create a snapshot if you know the disk will be thrown away
on next boot), but keeping the options orthogonal may prove useful for
some future extension. If either option is omitted, the default for
snapshot is 'no' if the disk is <shared> or <readonly> or persistent=no,
and 'external' otherwise; and the default for persistent is 'yes' for
all disks (domain_conf.h will have to represent nonpersistent=0 for
easier coding with sane 0-initialized defaults, but no need to expose
that ugly name in the xml). I'm not sure whether to reject an explicit
persistent=no coupled with <readonly>, or just ignore it (if the disk is
readonly, it can't change, so there is nothing to throw away after the
domain quits). Creation of an external snapshot requires rewriting the
active domain XML to reflect the new filename.
While ESX can only select the subset of disks to snapshot at boot time,
qemu can alter the selection at runtime. Therefore, I propose also
modifying the <domainsnapshot> xml to take a new subelement <disks> to
fine-tune which disks are involved in a snapshot. For now, a checkpoint
must omit <disks> on virDomainSnapshotCreateXML input (that is, <disks>
must only be present if the VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY is
used, and checkpoints always cover full system state, and on qemu this
checkpoint uses internal snapshots). Meanwhile, for disk snapshots, if
the <disks> element is omitted, then one is automatically created using
the attributes in the <domain> xml. For ESX, if the <disks> element is
present, it must select the same disks as the <domain> xml. Offline
checkpoints will continue to use <state>shutoff</state> in the xml
output, while new disk snapshots will use <state>disk-snapshot</state>
to indicate that the disk state was obtained from a running VM and might
be only crash-consistent rather than stable.
The <disks> element has an optional number of <disk> subelements; at
most one per <disk> in the <devices> section of <domain>. Each <disk>
element has a mandatory attribute name='name', which must match the
<target dev='name'/> of the <domain> xml, as a way of getting 1:1
correspondence between domainsnapshot/disks/disk and domain/devices/disk
while using names that should already be unique. Each <disk> also has
an optional snapshot='no|internal|external' attribute, similar to the
proposal for <domain>/<devices>/<disk>; if not provided, the attribute
defaults to the one from the <domain>. If snapshot=external, then there
may be an optional subelement <source file='path'/>, which gives the
desired new file name. If external is requested, but the <source>
subelement is not present, then libvirt will generate a suitable
filename, probably by concatenating the existing name with the snapshot
name, and remembering that the snapshot name is generated as a timestamp
if not specified. Also, for external snapshots, the <disk> element may
have an optional sub-element specifying the driver (useful for selecting
qcow2 vs. qed in the qemu 'snapshot_blkdev' monitor command); again,
this can normally be generated by default.
Future extensions may include teaching qemu to allow coupling
checkpoints with external snapshots by allowing a <disks> element even
for checkpoints. (That is, the initial implementation will always output
<disks> for <state>disk-snapshot</state> and never output <disks> for
<state>shutoff</state>, but this may not always hold in the future.)
Likewise, we may discover when implementing lvm or btrfs snapshots
that additional subelements to each <disk> would be useful for
specifying additional aspects for creating snapshots using that
technology, where the omission of those subelements has a sane default
state.
libvirt can be taught to honor persistent=no for qemu by creating a
qcow2 wrapper file prior to starting qemu, then tearing down that
wrapper after the fact, although I'll probably leave that for later in
my patch series.
As an example, a valid input <domainsnapshot> for creation of a qemu
disk snapshot would be:
<domainsnapshot>
  <name>snapshot</name>
  <disks>
    <disk name='vda'/>
    <disk name='vdb' snapshot='no'/>
    <disk name='vdc' snapshot='external'>
      <source file='/path/to/new'/>
    </disk>
  </disks>
</domainsnapshot>
which requests that the <disk> matching the target dev=vda defer to the
<domain> default for whether to snapshot (and if the domain default
requires creating an external snapshot, then libvirt will create the new
file name; this could also be specified by omitting the <disk
name='vda'/> subelement altogether); the <disk> matching vdb is not
snapshotted, and the <disk> matching vdc is involved in an external
snapshot where the user specifies the new filename of /path/to/new. On
dumpxml output, the output will be fully populated with the items
generated by libvirt, and be displayed as:
<domainsnapshot>
  <name>snapshot</name>
  <state>disk-snapshot</state>
  <parent>
    <name>prior</name>
  </parent>
  <creationTime>1312945292</creationTime>
  <domain>
    <!-- previously just uuid, but now the full domain XML, including... -->
    ...
    <devices>
      <disk type='file' device='disk' snapshot='external'>
        <driver name='qemu' type='raw'/>
        <source file='/path/to/original'/>
        <target dev='vda' bus='virtio'/>
      </disk>
      ...
    </devices>
  </domain>
  <disks>
    <disk name='vda' snapshot='external'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/original.snapshot'/>
    </disk>
    <disk name='vdb' snapshot='no'/>
    <disk name='vdc' snapshot='external'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/new'/>
    </disk>
  </disks>
</domainsnapshot>
And, if the user were to do 'virsh dumpxml' of the domain, they would
now see the updated <disk> contents:
<domain>
  ...
  <devices>
    <disk type='file' device='disk' snapshot='external'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/original.snapshot'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    ...
  </devices>
</domain>
Reverting
+++++++++
When it comes to reverting to a snapshot, the only time it is possible
to revert to a live image is if the snapshot is a "checkpoint" of a
running or paused domain, because qemu must be able to restore the ram
state. Reverting to any other snapshot (both the existing "checkpoint"
of an offline image, which uses internal disk snapshots, and my new
"disk snapshot" which uses external disk snapshots even though it was
created against a running image), will revert the disks back to the
named state, but default to leaving the guest in an offline state. Two
new mutually exclusive flags will allow both reverting to the snapshot
disk state and controlling the resulting qemu state:
virDomainRevertToSnapshot(snap, VIR_DOMAIN_SNAPSHOT_REVERT_START) to run
from the snapshot, and virDomainRevertToSnapshot(snap,
VIR_DOMAIN_SNAPSHOT_REVERT_PAUSE) to create a new qemu process but leave
it paused. If neither of these two flags is specified, then the default
will be determined by the snapshot itself. These flags also allow
overriding the running/paused aspect recorded in live checkpoints. Note
that I am not proposing a flag for reverting to just the disk state of a
live checkpoint; this is considered an uncommon operation, and can be
accomplished in two steps by reverting to paused state to restore disk
state followed by destroying the domain (but I can add a third
mutually-exclusive flag VIR_DOMAIN_SNAPSHOT_REVERT_STOP if we decide
that we really want this uncommon operation via a single API).
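The two-step variant just mentioned would look roughly like this
(proposed flag name, error handling omitted):

  /* Revert the disks but leave the freshly-created qemu process paused,
   * so the guest never gets to issue any I/O ... */
  virDomainRevertToSnapshot(snap, VIR_DOMAIN_SNAPSHOT_REVERT_PAUSE);
  /* ... then discard the paused process, keeping only the disk state. */
  virDomainDestroy(dom);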
Reverting from a stopped state is always allowed, even if the XML is
incompatible, by basically rewriting the domain's xml definition.
Meanwhile, reverting from an online VM to a live checkpoint has two
flavors - if the XML is compatible, then the 'loadvm' monitor command
can be used, and the qemu process remains alive. But if the XML has
changed incompatibly since the checkpoint was created, then libvirt will
refuse to do the revert unless it has permission to start a new qemu
process, via another new flag: virDomainRevertToSnapshot(snap,
VIR_DOMAIN_SNAPSHOT_REVERT_FORCE). The new REVERT_FORCE flag also
provides a safety valve - reverting to a stopped state (whether an
existing offline checkpoint, or a new disk snapshot) from a running VM
will be rejected unless REVERT_FORCE is specified. For now, this
includes the case of using the REVERT_START flag to revert to a disk
snapshot and then start qemu - this is because qemu does not yet expose
a way to safely revert to a disk snapshot from within the same qemu
process. If, in the future, qemu gains support for undoing the effects
of 'snapshot_blkdev' via monitor commands, then it may be possible to
use REVERT_START without REVERT_FORCE and end up reusing the same qemu
process while still reverting to the disk snapshot state, by using some
of the same tricks as virDomainReboot to force the existing qemu process
to boot from the new disk state.
Of course, the new safety valve is a slight change in behavior - scripts
that used to use 'virsh snapshot-revert' may now have to use 'virsh
snapshot-revert --force' to do the same actions; for backwards
compatibility, the virsh implementation should first try without the
flag, and a new VIR_ERR_* code should be introduced in order to let virsh
distinguish between a new implementation that rejected the revert
because _REVERT_FORCE was missing, and an old one that does not support
_REVERT_FORCE in the first place. But this is not the first time that
added safety valves have caused existing scripts to have to adapt -
consider the case of 'virsh undefine' which could previously pass in a
scenario where it now requires 'virsh undefine --managed-save'.
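In other words, the virsh fallback could be sketched as follows (the
error code name is hypothetical until one is actually added, and
force_requested stands for the user having passed --force):

  if (virDomainRevertToSnapshot(snap, flags) < 0) {
      virErrorPtr err = virGetLastError();
      /* A new daemon would report a dedicated code when only _FORCE is
       * missing; an old daemon never will, so we don't retry blindly. */
      if (force_requested && err &&
          err->code == VIR_ERR_SNAPSHOT_REVERT_RISKY) {
          if (virDomainRevertToSnapshot(snap,
                  flags | VIR_DOMAIN_SNAPSHOT_REVERT_FORCE) < 0)
              return -1;
      } else {
          return -1;
      }
  }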
For transient domains, it is not possible to make an offline checkpoint
(since transient domains don't exist if they are not running or paused);
transient domains must use REVERT_START or REVERT_PAUSE to revert to a
disk snapshot. And given the above limitations about qemu, reverting to
a disk snapshot will currently require REVERT_FORCE, since a new qemu
process will necessarily be created.
Just as creating an external disk snapshot rewrote the domain xml to
match, reverting to an older snapshot will update the domain xml (it
should be a bit more obvious now why the
<domainsnapshot>/<domain>/<devices>/<disk> lists the old name, while
<domainsnapshot>/<disks>/<disk> lists the new name).
The other thing to be aware of is that with internal snapshots, qcow2
maintains a distinction between current state and a snapshot - that is,
qcow2 is _always_ tracking a delta, and never modifies a named snapshot,
even when you use 'qemu-img snapshot -a' to revert to different snapshot
names. But with named files, the original file now becomes a read-only
backing file to a new active file; if we revert to the original file,
and make any modifications to it, the active file that was using it as
backing will be corrupted. Therefore, the safest thing is to reject any
attempt to revert to any snapshot (whether checkpoint or disk snapshot)
that has an existing child snapshot consisting of an external disk
snapshot. The metadata for each of these children can be deleted
manually, but that requires quite a few API calls (learn how many
children exist, get the list of children, and for each child, get its
xml to see if that child has the target snapshot as a parent, and if so
delete the snapshot). So as shorthand, virDomainRevertToSnapshot will
be taught a new flag, VIR_DOMAIN_SNAPSHOT_REVERT_DELETE_CHILDREN, which
first deletes any children of the snapshot being reverted to, prior to
performing the revert.
And as long as reversion is learning how to do some snapshot deletion,
it becomes possible to decide what to do with the qcow2 file that was
created at the time of the disk snapshot. The default behavior for qemu
will be to use qemu-img to recreate the qcow2 wrapper file as a 0-delta
change against the original file, and keeping the domain xml tied to the
wrapper name, but a new flag VIR_DOMAIN_SNAPSHOT_REVERT_DISCARD can be
used to instead completely delete the qcow2 wrapper file, and update the
domain xml back to the original filename.
Deleting
++++++++
Deleting snapshots also needs some improvements. With checkpoints, the
disk snapshot contents were internal snapshots, so no files had to be
deleted. But with external disk snapshots, there are some choices to be
made - when deleting a snapshot, should the two files be consolidated
back into one or left separate, and if consolidation occurs, what should
be the name of the new file.
Right now, qemu supports consolidation only in one direction - the
backing file can be consolidated into the new file by using the new
blockpull API. In fact, the combination of disk snapshot and block pull
can be used to implement local storage migration - create a disk
snapshot with a local file as the new file around the remote file used
as the snapshot, then use block pull to break the ties to the remote
snapshot. But there is currently no way to make qemu save the contents
of a new file back into its backing file and then swap back to the
backing file as the live disk; also, while you can use block pull to
break the relation between the snapshot and the live file, and then
rename the live file back over the backing file name, there is no way to
make qemu revert back to that file name short of doing the
snapshot/blockpull algorithm twice; and the end result will be qcow2
even if the original file was raw. Also, if qemu ever adds support for
merging back into a backing file, as well as a means to determine how
dirty a qcow2 file is in relation to its backing file, there are some
possible efficiency gains - if most blocks of a snapshot differ from the
backing file, it is faster to use blockpull to pull in the remaining
blocks from the backing file to the active file; whereas if most blocks
of a snapshot are inherited from the backing file, it is more efficient
to pull just the dirty blocks from the active file back into the backing
file. Knowing whether the original file was qcow2 or some other format
may also impact how to merge deltas from the new qcow2 file back into
the original file.
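As a sketch of the storage-migration trick mentioned above, combining the
proposed _DISK_ONLY flag with the existing virDomainBlockPull API (paths
and disk name are made up):

  /* 1. Wrap the remote image in a new local qcow2 file; from now on the
   *    guest writes only to the local file, backed by the remote one. */
  const char *xml =
      "<domainsnapshot>"
      "  <disks>"
      "    <disk name='vda' snapshot='external'>"
      "      <source file='/local/images/vda-copy.qcow2'/>"
      "    </disk>"
      "  </disks>"
      "</domainsnapshot>";
  virDomainSnapshotCreateXML(dom, xml, VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);

  /* 2. Stream the backing data into the local file; once the pull
   *    completes, the dependency on the remote image is gone. */
  virDomainBlockPull(dom, "vda", 0, 0);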
Additionally, having fine-tuned control over which of the two names to
keep when consolidating a snapshot would require passing that
information through xml, but the existing virDomainSnapshotDelete does
not take an XML argument. For now, I propose that deleting an external
disk snapshot will be required to leave both the snapshot and live disk
image files intact (except for the special case of REVERT_DISCARD
mentioned above that combines revert and delete into a single API); but
I could see the feasibility of a future extension which adds a new XML
<on_delete> subelement to <domainsnapshot>/<disks>/<disk> that
specifies which of two files to consolidate into, as well as a flag
VIR_DOMAIN_SNAPSHOT_DELETE_CONSOLIDATE which triggers libvirt to do the
consolidation for any <on_delete> subelements in the snapshot being
deleted (if the flag is omitted, the <on_delete> subelement is ignored
and both files remain).
The notion of deleting all children of a snapshot while keeping the
snapshot itself (mentioned above under the revert use case) seems common
enough that I will add a flag VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY;
this flag implies VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN, but leaves the
target snapshot intact.
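That is, the multi-call pruning described earlier collapses into a single
call (proposed flag name):

  /* Delete everything below 'snap' in the hierarchy, keep 'snap' itself. */
  virDomainSnapshotDelete(snap, VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY);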
Undefining
++++++++++
In one regard, undefining a domain that has snapshots is just as bad as
undefining a domain with managed save state - since libvirt is
maintaining metadata about snapshot hierarchies, leaving this metadata
behind _will_ interfere with creation of a new domain by the same name.
However, since both checkpoints and snapshots are stored in
user-accessible disk images, and only the metadata is stored by libvirt,
it should eventually be possible for the user to decide whether to
discard the metadata but keep the snapshot contents intact in the disk
images, or to discard both the metadata and the disk image snapshots.
Meanwhile, I propose changing the default behavior of
virDomainUndefine[Flags] to reject attempts to undefine a domain with
any defined snapshots, and to add a new flag for virDomainUndefineFlags,
virDomainUndefineFlags(,VIR_DOMAIN_UNDEFINE_SNAPSHOTS), to act as
shorthand for calling virDomainSnapshotDelete for all snapshots tied to
the domain. Note that this deletes the metadata, but not the underlying
storage volumes.
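A sketch of the resulting calling convention (proposed flag name):

  /* Plain virDomainUndefine() will now fail if any snapshot metadata
   * exists; callers explicitly opt in to discarding that metadata
   * (snapshot contents stored in the disk images are left untouched). */
  virDomainUndefineFlags(dom, VIR_DOMAIN_UNDEFINE_SNAPSHOTS);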
Migration
+++++++++
The simplest solution to the fact that snapshot metadata is host-local
is to make migration attempts fail if a domain has any associated
snapshots. For a first cut patch, that is probably what I'll go with -
it reduces libvirt functionality, but instantly plugs all the bugs that
you can currently trigger by migrating a domain with snapshots.
But we can do better. Right now, there is no way to inject the metadata
associated with an already-existing snapshot, whether that snapshot is
internal or external, and deleting internal snapshots always deletes the
data as well as the metadata. But I already documented that external
snapshots will keep both the new file and its read-only original, in
most cases, which means the data is preserved even when the snapshot is
deleted. With a couple new flags, we can have
virDomainSnapshotDelete(snap, VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY)
which removes libvirt's metadata, but still leaves all the data of the
snapshot present (visible to qemu-img snapshot -l or via multiple file
names); as well as virDomainSnapshotCreateXML(dom, xml,
VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE), which says to add libvirt snapshot
metadata corresponding to existing snapshots without doing anything to
the current guest (no 'savevm' or 'snapshot_blkdev', although it may
still make sense to do some sanity checks to see that the metadata being
defined actually corresponds to an existing snapshot in 'qemu-img
snapshot -l' or that an external snapshot file exists and has the
correct backing file to the original name).
Additionally, with these two tools in place, you can now make
ABI-compatible tweaks to the <domain> xml stored in a snapshot metadata
(similar to how 'virsh save-image-edit' can tweak a save image, such as
changing the host name of a <disk>'s image to match what was done
externally with qemu-img or other external tool). You can also make an
extended protocol that first dumps all snapshot xml on the source,
redefines those snapshots on the destination, then deletes the metadata
on the source, all before migrating the domain itself (unfortunately, I
don't think it can be wired into the cookies of migration protocol v3,
as each <domainsnapshot> xml for each snapshot will be larger than the
<domain> itself, and an arbitrary number of snapshots with lots of xml
don't fit into a finite-sized cookie over rpc; ultimately, this may mean
a migration protocol v4 that has an arbitrary number of handshakes
between Begin on the source and Prepare on the dest in order to properly
handle all the interchange - having a feature negotiation between client
and host should be part of that interchange).
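Until such a protocol exists, the manual transfer would look roughly like
this (proposed flag names; dom_src and dom_dst are hypothetical handles
for the same domain on the source and destination connections, and
parent-before-child ordering plus error handling are ignored for brevity):

  int n = virDomainSnapshotNum(dom_src, 0);
  char **names = calloc(n, sizeof(*names));
  n = virDomainSnapshotListNames(dom_src, names, n, 0);
  for (int i = 0; i < n; i++) {
      virDomainSnapshotPtr s =
          virDomainSnapshotLookupByName(dom_src, names[i], 0);
      char *xml = virDomainSnapshotGetXMLDesc(s, VIR_DOMAIN_XML_SECURE);

      /* Recreate just the metadata on the destination ... */
      virDomainSnapshotCreateXML(dom_dst, xml,
                                 VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE);
      /* ... and drop it (but not the snapshot data) on the source. */
      virDomainSnapshotDelete(s, VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY);

      free(xml);
      virDomainSnapshotFree(s);
      free(names[i]);
  }
  free(names);
  /* ... then migrate the domain itself as usual. */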
Future proposals
================
I still want to add APIs to manage storage volume snapshots for storage
volumes not associated with a current domain, as well as enhancing disk
snapshots to operate on more than just qcow2 file formats (for example,
lvm snapshots or btrfs copy-on-write clones). But I've already signed
up for quite a bit of code changes in just this email, so that will have
to come later. I hope that what I have designed here does not preclude
extensibility to future additions - for example, <storagevolsnapshot>
would be able to use a single <disk> subelement similar to the above
<domainsnapshot>/<disks>/<disk> subelement for describing the relation
between a disk and its backing file snapshot.
Quick Summary
=============
These are the changes I plan on making soon; I mentioned other possible
future changes above that would depend on these being complete first, or
which involve creation of new API.
The following API patterns currently "succeed", but risk data loss or
other bugs that can get libvirt into an inconsistent state; they will
now fail by default:
virDomainRevertToSnapshot to go from a running VM to a stopped
checkpoint will now fail by default. Justification: stopping a running
domain is a form of data loss. Mitigation: use
VIR_DOMAIN_SNAPSHOT_REVERT_FORCE for old behavior.
virDomainRevertToSnapshot to go from a running VM to a live checkpoint
with an ABI-incompatible <domain> will now fail by default.
Justification: qemu does not handle ABI incompatibilities, and even if
the 'loadvm' succeeded, this generally resulted in full-scale
guest corruption. Mitigation: use VIR_DOMAIN_SNAPSHOT_REVERT_FORCE to
start a new qemu process that properly conforms to the snapshot's ABI.
virDomainUndefine will now fail to undefine a domain with any snapshots.
Justification: leaving behind libvirt metadata can corrupt future
defines, comparable to recent managed save changes, plus it is a form of
data loss. Mitigation: use virDomainUndefineFlags.
virDomainUndefineFlags will now default to failing an undefine of a
domain with any snapshots. Justification: leaving behind libvirt
metadata can corrupt future defines, comparable to recent managed save
changes, plus it is a form of data loss. Mitigation: separately delete
all snapshots (or at least all snapshot metadata) first, or use
VIR_DOMAIN_UNDEFINE_SNAPSHOTS.
virDomainMigrate/virDomainMigrate2 will now default to fail if the
source has any snapshots. Justification: metadata must be transferred
along with the domain for the migration to be complete. Mitigation:
until an improved migration protocol can automatically do the
handshaking necessary to migrate all the snapshot metadata, a user can
manually loop over each snapshot prior to migration, using
virDomainSnapshotCreateXML with VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE on
the destination, then virDomainSnapshotDelete with
VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY on the source.
Add the following XML:
in <domain>/<devices>/<disk>:
add optional attribute snapshot='no|internal|external'
add optional attribute persistent='yes|no'
in <domainsnapshot>:
expand <domainsnapshot>/<domain> to be full domain, not just uuid
add <state>disk-snapshot</state>
add optional <disks>/<disk>, where each <disk> maps back to
<domain>/<devices>/<disk> and controls how to do external disk snapshots
Add the following flags to existing API:
virDomainSnapshotCreateXML:
VIR_DOMAIN_SNAPSHOT_CREATE_HALT
VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE
virDomainSnapshotGetXMLDesc
VIR_DOMAIN_XML_SECURE
virDomainRevertToSnapshot
VIR_DOMAIN_SNAPSHOT_REVERT_START
VIR_DOMAIN_SNAPSHOT_REVERT_PAUSE
VIR_DOMAIN_SNAPSHOT_REVERT_FORCE
VIR_DOMAIN_SNAPSHOT_REVERT_DELETE_CHILDREN
VIR_DOMAIN_SNAPSHOT_REVERT_DISCARD
virDomainSnapshotDelete
VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY
VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY
virDomainUndefineFlags
VIR_DOMAIN_UNDEFINE_SNAPSHOTS
--
Eric Blake eblake(a)redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org
[libvirt] [test-API][PATCH] Add 2 functions to support getting and setting scheduler parameters with flags
by Nan Zhang
Add 2 functions to the domain API:
get_sched_params_flags()
set_sched_params_flags()
---
lib/domainAPI.py | 26 +++++++++++++++++++++++++-
1 files changed, 25 insertions(+), 1 deletions(-)
diff --git a/lib/domainAPI.py b/lib/domainAPI.py
index 5667c20..a6efab7 100644
--- a/lib/domainAPI.py
+++ b/lib/domainAPI.py
@@ -523,6 +523,16 @@ class DomainAPI(object):
code = e.get_error_code()
raise exception.LibvirtAPI(message, code)
+ def get_sched_params_flags(self, domname, flags):
+ try:
+ dom_obj = self.get_domain_by_name(domname)
+ sched_params_flags = dom_obj.schedulerParametersFlags(flags)
+ return sched_params_flags
+ except libvirt.libvirtError, e:
+ message = e.get_error_message()
+ code = e.get_error_code()
+ raise exception.LibvirtAPI(message, code)
+
def set_sched_params(self, domname, params):
try:
dom_obj = self.get_domain_by_name(domname)
@@ -533,6 +543,16 @@ class DomainAPI(object):
code = e.get_error_code()
raise exception.LibvirtAPI(message, code)
+ def set_sched_params_flags(self, domname, params, flags):
+ try:
+ dom_obj = self.get_domain_by_name(domname)
+ retval = dom_obj.setSchedulerParametersFlags(params, flags)
+ return retval
+ except libvirt.libvirtError, e:
+ message = e.get_error_message()
+ code = e.get_error_code()
+ raise exception.LibvirtAPI(message, code)
+
def core_dump(self, domname, to, flags = 0):
try:
dom_obj = self.get_domain_by_name(domname)
@@ -769,7 +789,6 @@ VIR_DOMAIN_SHUTDOWN = 4
VIR_DOMAIN_SHUTOFF = 5
VIR_DOMAIN_CRASHED = 6
-
# virDomainMigrateFlags
VIR_MIGRATE_LIVE = 1
VIR_MIGRATE_PEER2PEER = 2
@@ -780,3 +799,8 @@ VIR_MIGRATE_PAUSED = 32
VIR_MIGRATE_NON_SHARED_DISK = 64
VIR_MIGRATE_NON_SHARED_INC = 128
+# virDomainModificationImpact
+VIR_DOMAIN_AFFECT_CURRENT = 0
+VIR_DOMAIN_AFFECT_LIVE = 1
+VIR_DOMAIN_AFFECT_CONFIG = 2
+
--
1.7.4.4
[libvirt] [PATCH] virsh: Clarify documentation of -d option
by Jiri Denemark
The default is 4, not 0.
---
tools/virsh.pod | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index e17f309..9355d6c 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -72,7 +72,8 @@ instead of the default connection.
=item B<-d>, B<--debug> I<LEVEL>
Enable debug messages at integer I<LEVEL> and above. I<LEVEL> can
-range from 0 (default) to 4.
+range from 0 to 4 (default). See the documentation of B<VIRSH_DEBUG>
+environment variable for the description of each I<LEVEL>.
=item B<-l>, B<--log> I<FILE>
--
1.7.6.1
[libvirt] [PATCH v3 0/3] Correctly label migration TCP socket
by Jiri Denemark
With current libvirt and qemu, migration does not work if SELinux is in
enforcing mode, since the TCP socket we pass to qemu is not labeled in a way
that would allow qemu to read from it.
After this patchset, migration works even in enforcing mode.
Jiri Denemark (3):
security: Rename SetSocketLabel APIs to SetDaemonSocketLabel
security: Introduce SetSocketLabel
qemu: Correctly label migration TCP socket
src/libvirt_private.syms | 1 +
src/qemu/qemu_migration.c | 5 +++-
src/qemu/qemu_process.c | 3 +-
src/security/security_dac.c | 11 +++++++++-
src/security/security_driver.h | 3 ++
src/security/security_manager.c | 10 +++++++++
src/security/security_manager.h | 2 +
src/security/security_nop.c | 7 ++++++
src/security/security_selinux.c | 42 +++++++++++++++++++++++++++++++++++++-
src/security/security_stack.c | 17 +++++++++++++++
10 files changed, 96 insertions(+), 5 deletions(-)
--
1.7.6.1
[libvirt] [test-API][PATCH v2] Add ownership_test.py test case
by Wayne Sun
* Save a domain to a file, check the ownership of the file after
the save and restore operations
---
repos/domain/ownership_test.py | 302 ++++++++++++++++++++++++++++++++++++++++
1 files changed, 302 insertions(+), 0 deletions(-)
create mode 100644 repos/domain/ownership_test.py
diff --git a/repos/domain/ownership_test.py b/repos/domain/ownership_test.py
new file mode 100644
index 0000000..cba1424
--- /dev/null
+++ b/repos/domain/ownership_test.py
@@ -0,0 +1,302 @@
+#!/usr/bin/env python
+"""Setting the dynamic_ownership in /etc/libvirt/qemu.conf,
+ check the ownership of saved domain file. Test could be on
+ local or root_squash nfs.
+ domain:ownership_test
+ guestname
+ #GUESTNAME#
+ dynamic_ownership
+ 0|1
+ use_nfs
+ 0|1
+
+ use_nfs is a flag for decide using nfs or not
+"""
+
+__author__ = 'Wayne Sun: gsun(a)redhat.com'
+__date__ = 'Mon Jul 25, 2011'
+__version__ = '0.1.0'
+__credits__ = 'Copyright (C) 2011 Red Hat, Inc.'
+__all__ = ['ownership_test']
+
+import os
+import re
+import sys
+import commands
+
+QEMU_CONF = "/etc/libvirt/qemu.conf"
+SAVE_FILE = "/mnt/test.save"
+TEMP_FILE = "/tmp/test.save"
+
+from utils.Python import utils
+
+def append_path(path):
+ """Append root path of package"""
+ if path in sys.path:
+ pass
+ else:
+ sys.path.append(path)
+
+from lib import connectAPI
+from lib import domainAPI
+from utils.Python import utils
+from exception import LibvirtAPI
+
+pwd = os.getcwd()
+result = re.search('(.*)libvirt-test-API', pwd)
+append_path(result.group(0))
+
+def return_close(conn, logger, ret):
+ conn.close()
+ logger.info("closed hypervisor connection")
+ return ret
+
+def check_params(params):
+ """Verify inputing parameter dictionary"""
+ logger = params['logger']
+ keys = ['guestname', 'dynamic_ownership']
+ for key in keys:
+ if key not in params:
+ logger.error("%s is required" %key)
+ return 1
+ return 0
+
+def nfs_setup(util, logger):
+ """setup nfs on localhost
+ """
+ logger.info("set nfs service")
+ cmd = "echo /tmp *\(rw,root_squash\) > /etc/exports"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to config nfs export")
+ return 1
+
+ logger.info("start nfs service")
+ cmd = "service nfs start"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to start nfs service")
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ return 0
+
+def chown_file(util, filepath, logger):
+ """touch a file and setting the chown
+ """
+ if os.path.exists(filepath):
+ os.remove(filepath)
+
+ touch_cmd = "touch %s" % filepath
+ logger.info(touch_cmd)
+ ret, out = util.exec_cmd(touch_cmd, shell=True)
+ if ret:
+ logger.error("failed to touch a new file")
+ logger.error(out[0])
+ return 1
+
+ logger.info("set chown of %s as 107:107" % filepath)
+ chown_cmd = "chown 107:107 %s" % filepath
+ ret, out = util.exec_cmd(chown_cmd, shell=True)
+ if ret:
+ logger.error("failed to set the ownership of %s" % filepath)
+ return 1
+
+ logger.info("set %s mode as 644" % filepath)
+ cmd = "chmod 644 %s" % filepath
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to set the mode of %s" % filepath)
+ return 1
+
+ return 0
+
+def prepare_env(util, guestname, dynamic_ownership, use_nfs, logger):
+ """configure dynamic_ownership in /etc/libvirt/qemu.conf,
+ set chown of the file to save
+ """
+ logger.info("set the dynamic ownership in %s as %s" % \
+ (QEMU_CONF, dynamic_ownership))
+ set_cmd = "echo dynamic_ownership = %s >> %s" % (dynamic_ownership, QEMU_CONF)
+ ret, out = util.exec_cmd(set_cmd, shell=True)
+ if ret:
+ logger.error("failed to set dynamic ownership")
+
+ logger.info("restart libvirtd")
+ restart_cmd = "service libvirtd restart"
+ ret, out = util.exec_cmd(restart_cmd, shell=True)
+ if ret:
+ logger.error("failed to restart libvirtd")
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ if use_nfs == '1':
+ ret = nfs_setup(util, logger)
+ if ret:
+ return 1
+
+ logger.info("mount the nfs path to /mnt")
+ mount_cmd = "mount -o vers=3 127.0.0.1:/tmp /mnt"
+ ret, out = util.exec_cmd(mount_cmd, shell=True)
+ if ret:
+ logger.error("Failed to mount the nfs path")
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ filepath = TEMP_FILE
+ else:
+ filepath = SAVE_FILE
+
+ ret = chown_file(util, filepath, logger)
+ if ret:
+ return 1
+
+ return 0
+
+def ownership_get(logger):
+ """check the ownership of file"""
+
+ statinfo = os.stat(SAVE_FILE)
+ uid = statinfo.st_uid
+ gid = statinfo.st_gid
+
+ logger.info("the uid and gid of %s is %s:%s" %(SAVE_FILE, uid, gid))
+
+ return 0, uid, gid
+
+def ownership_test(params):
+ """Save a domain to a file, check the ownership of
+ the file after save and restore
+ """
+ # Initiate and check parameters
+ params_check_result = check_params(params)
+ if params_check_result:
+ return 1
+
+ logger = params['logger']
+ guestname = params['guestname']
+ dynamic_ownership = params['dynamic_ownership']
+ use_nfs = params['use_nfs']
+ test_result = False
+
+ util = utils.Utils()
+
+ # set env
+ logger.info("prepare the environment")
+ ret = prepare_env(util, guestname, dynamic_ownership, use_nfs, logger)
+ if ret:
+ logger.error("failed to prepare the environment")
+
+ # Connect to local hypervisor connection URI
+ uri = util.get_uri('127.0.0.1')
+ conn = connectAPI.ConnectAPI()
+ virconn = conn.open(uri)
+
+ # save domain to the file
+ logger.info("save the domain to the file")
+ domobj = domainAPI.DomainAPI(virconn)
+ try:
+ domobj.save(guestname, SAVE_FILE)
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+ logger.error("Error: fail to save %s domain" %guestname)
+ return return_close(conn, logger, 1)
+
+ logger.info("check the ownership of %s after save" % SAVE_FILE)
+ ret, uid, gid = ownership_get(logger)
+ if use_nfs == '1':
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ else:
+ if dynamic_ownership == '1':
+ if uid == 0 and gid == 0:
+ logger.info("As expected, the chown changed to root:root")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ elif dynamic_ownership == '0':
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ else:
+ logger.error("wrong dynamic_ownership value %s" % dynamic_ownership)
+ return 1
+
+
+ # restore domain from file
+ logger.info("restore the domain from the file")
+ try:
+ domobj.restore(guestname, SAVE_FILE)
+ logger.info("check the ownership of %s after restore" % SAVE_FILE)
+ ret, uid, gid = ownership_get(logger)
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, not change back as expected" % \
+ (SAVE_FILE, uid, gid))
+ test_result = False
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+ logger.error("Error: fail to restore %s domain" %guestname)
+ test_result = False
+
+ if test_result:
+ return return_close(conn, logger, 0)
+ else:
+ return return_close(conn, logger, 1)
+
+def ownership_test_clean(params):
+ """clean testing environment"""
+ logger = params['logger']
+ use_nfs = params['use_nfs']
+
+ util = utils.Utils()
+
+ if use_nfs == '1':
+ logger.info("umount the nfs path")
+ umount_cmd = "umount /mnt"
+ ret, out = util.exec_cmd(umount_cmd, shell=True)
+ if ret:
+ logger.error("Failed to mount the nfs path")
+ for i in range(len(out)):
+ logger.error(out[i])
+
+ logger.info("stop nfs service")
+ cmd = "service nfs stop"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("Failed to stop nfs service")
+ for i in range(len(out)):
+ logger.error(out[i])
+
+ logger.info("clear the exports file")
+ cmd = ">/etc/exports"
+ if ret:
+ logger.error("Failed to clear exports file")
+
+ filepath = TEMP_FILE
+ else:
+ filepath = SAVE_FILE
+
+ if os.path.exists(filepath):
+ logger.info("remove dump file from save %s" % filepath)
+ os.remove(filepath)
+
--
1.7.1
[libvirt] [PATCH] Do not try to cancel non-existent migration on source
by Jiri Denemark
If migration fails on the source daemon, the migration is automatically
canceled by the daemon itself. Thus we don't need to call
virDomainMigrateConfirm3(cancelled=1). Calling it doesn't cause any harm
but the resulting error message printed in logs may confuse people.
---
src/libvirt.c | 41 +++++++++++++++++++++++++----------------
1 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index c8af3e1..256828c 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -4141,6 +4141,7 @@ virDomainMigrateVersion3(virDomainPtr domain,
virErrorPtr orig_err = NULL;
int cancelled = 1;
unsigned long protection = 0;
+ bool notify_source = true;
VIR_DOMAIN_DEBUG(domain, "dconn=%p xmlin=%s, flags=%lx, "
"dname=%s, uri=%s, bandwidth=%lu",
@@ -4221,8 +4222,13 @@ virDomainMigrateVersion3(virDomainPtr domain,
uri, flags | protection, dname, bandwidth);
/* Perform failed. Make sure Finish doesn't overwrite the error */
- if (ret < 0)
+ if (ret < 0) {
orig_err = virSaveLastError();
+ /* Perform failed so we don't need to call confirm to let source know
+ * about the failure.
+ */
+ notify_source = false;
+ }
/* If Perform returns < 0, then we need to cancel the VM
* startup on the destination
@@ -4265,22 +4271,25 @@ finish:
confirm:
/*
- * If cancelled, then src VM will be restarted, else
- * it will be killed
- */
- VIR_DEBUG("Confirm3 %p ret=%d domain=%p", domain->conn, ret, domain);
- VIR_FREE(cookiein);
- cookiein = cookieout;
- cookieinlen = cookieoutlen;
- cookieout = NULL;
- cookieoutlen = 0;
- ret = domain->conn->driver->domainMigrateConfirm3
- (domain, cookiein, cookieinlen,
- flags | protection, cancelled);
- /* If Confirm3 returns -1, there's nothing more we can
- * do, but fortunately worst case is that there is a
- * domain left in 'paused' state on source.
+ * If cancelled, then src VM will be restarted, else it will be killed.
+ * Don't do this if migration failed on source and thus it was already
+ * cancelled there.
*/
+ if (notify_source) {
+ VIR_DEBUG("Confirm3 %p ret=%d domain=%p", domain->conn, ret, domain);
+ VIR_FREE(cookiein);
+ cookiein = cookieout;
+ cookieinlen = cookieoutlen;
+ cookieout = NULL;
+ cookieoutlen = 0;
+ ret = domain->conn->driver->domainMigrateConfirm3
+ (domain, cookiein, cookieinlen,
+ flags | protection, cancelled);
+ /* If Confirm3 returns -1, there's nothing more we can
+ * do, but fortunately worst case is that there is a
+ * domain left in 'paused' state on source.
+ */
+ }
done:
if (orig_err) {
--
1.7.6
[libvirt] [PATCH] Ignore unused streams in virStreamAbort
by Jiri Denemark
When virStreamAbort is called on a stream that has not been used yet,
a quite confusing error is returned: "this function is not supported by
the connection driver". Let's just ignore such streams as there's
nothing to abort anyway.
---
src/libvirt.c | 8 ++++++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index 256828c..72c47f8 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -13975,8 +13975,12 @@ int virStreamAbort(virStreamPtr stream)
return -1;
}
- if (stream->driver &&
- stream->driver->streamAbort) {
+ if (!stream->driver) {
+ VIR_DEBUG("aborting unused stream");
+ return 0;
+ }
+
+ if (stream->driver->streamAbort) {
int ret;
ret = (stream->driver->streamAbort)(stream);
if (ret < 0)
--
1.7.6
[libvirt] [test-API][PATCH] add "imagetype" argument for guest installation testcases
by Guannan Ren
Add a disk format argument to the installation test scripts. With it, we
can run snapshot tests. The following is a sample conf file.
domain:install_linux_cdrom
guestname
test-api-guest
...
imagetype
qcow2
...
domain:shutdown
guestname
test-api-guest
snapshot:internal_create
guestname
test-api-guest
---
repos/domain/install_linux_cdrom.py | 23 ++++++++++-----
repos/domain/install_linux_net.py | 50 +++++++++++++++++----------------
repos/domain/install_windows_cdrom.py | 23 ++++++++++-----
3 files changed, 56 insertions(+), 40 deletions(-)
diff --git a/repos/domain/install_linux_cdrom.py b/repos/domain/install_linux_cdrom.py
index 7e8fee9..8d21797 100644
--- a/repos/domain/install_linux_cdrom.py
+++ b/repos/domain/install_linux_cdrom.py
@@ -8,6 +8,7 @@
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -70,6 +71,7 @@ def usage():
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -86,7 +88,7 @@ def check_params(params):
mandatory_args = ['guestname', 'guesttype', 'guestos', 'guestarch']
optional_args = ['memory', 'vcpu', 'disksize', 'imagepath', 'hdmodel',
'nicmodel', 'macaddr', 'ifacetype', 'source', 'type',
- 'volumepath']
+ 'volumepath', 'imagetype']
for arg in mandatory_args:
if arg not in params_given.keys():
@@ -255,7 +257,7 @@ def install_linux_cdrom(params):
logger.debug("the uri to connect is %s" % uri)
if params.has_key('imagepath') and not params.has_key('volumepath'):
- imgfullpath = os.join.path(params.get('imagepath'), guestname)
+ imgfullpath = os.path.join(params.get('imagepath'), guestname)
elif not params.has_key('imagepath') and not params.has_key('volumepath'):
if hypervisor == 'xen':
@@ -280,13 +282,18 @@ def install_linux_cdrom(params):
else:
seeksize = '10'
- logger.info("the size of disk image is %sG" % (seeksize))
- shell_disk_dd = "dd if=/dev/zero of=%s bs=1 count=1 seek=%sG" % \
- (imgfullpath, seeksize)
- logger.debug("the commands line of creating disk images is '%s'" %
- shell_disk_dd)
+ if params.has_key('imagetype'):
+ imagetype = params.get('imagetype')
+ else:
+ imagetype = 'raw'
+
+ logger.info("create disk image with size %sG, format %s" % (seeksize, imagetype))
+ disk_create = "qemu-img create -f %s %s %sG" % \
+ (imagetype, imgfullpath, seeksize)
+ logger.debug("the commands line of creating disk images is '%s'" % \
+ disk_create)
- (status, message) = commands.getstatusoutput(shell_disk_dd)
+ (status, message) = commands.getstatusoutput(disk_create)
if status != 0:
logger.debug(message)
diff --git a/repos/domain/install_linux_net.py b/repos/domain/install_linux_net.py
index 21ae378..1b0470e 100644
--- a/repos/domain/install_linux_net.py
+++ b/repos/domain/install_linux_net.py
@@ -9,6 +9,7 @@
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -72,6 +73,7 @@ def usage():
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -88,7 +90,8 @@ def check_params(params):
'guestarch','netmethod']
optional_args = ['memory', 'vcpu', 'disksize', 'imagepath',
- 'hdmodel', 'nicmodel', 'ifacetype', 'source', 'type']
+ 'hdmodel', 'nicmodel', 'ifacetype',
+ 'imagetype', 'source', 'type']
for arg in mandatory_args:
if arg not in params_given.keys():
@@ -233,7 +236,7 @@ def install_linux_net(params):
logger.debug("the uri to connect is %s" % uri)
if params.has_key('imagepath'):
- fullimagepath = os.join.path(params.get('imagepath'), guestname)
+ fullimagepath = os.path.join(params.get('imagepath'), guestname)
else:
if hypervisor == 'xen':
fullimagepath = os.path.join('/var/lib/xen/images', guestname)
@@ -246,29 +249,28 @@ def install_linux_net(params):
fullimagepath)
if params.has_key('disksize'):
- logger.info("the size of disk image is %sG" % (params.get('disksize')))
- shell_disk_dd = "dd if=/dev/zero of=%s bs=1 count=1 seek=%sG" % \
- (fullimagepath, params.get('disksize'))
- logger.debug("the commands line of creating disk images is '%s'" %
- shell_disk_dd)
-
- (status, message) = commands.getstatusoutput(shell_disk_dd)
- if status != 0:
- logger.debug(message)
- else:
- logger.info("creating disk images file is successful.")
+ seeksize = params.get('disksize')
else:
- logger.info("the size of disk image is 10G")
- shell_disk_dd = "dd if=/dev/zero of=%s bs=1 count=1 seek=10G" % \
- fullimagepath
- logger.debug("the commands line of creating disk images is '%s'" %
- shell_disk_dd)
-
- (status, message) = commands.getstatusoutput(shell_disk_dd)
- if status != 0:
- logger.debug(message)
- else:
- logger.info("creating disk images file is successful.")
+ seeksize = '10'
+
+ if params.has_key('imagetype'):
+ imagetype = params.get('imagetype')
+ else:
+ imagetype = 'raw'
+
+ logger.info("create disk image with size %sG, format %s" % (seeksize, imagetype))
+ disk_create = "qemu-img create -f %s %s %sG" % \
+ (imagetype, fullimagepath, seeksize)
+ logger.debug("the commands line of creating disk images is '%s'" % \
+ disk_create)
+
+ (status, message) = commands.getstatusoutput(disk_create)
+
+ if status != 0:
+ logger.debug(message)
+ else:
+ logger.info("creating disk images file is successful.")
+
logger.info("get system environment information")
envfile = os.path.join(homepath, 'env.cfg')
diff --git a/repos/domain/install_windows_cdrom.py b/repos/domain/install_windows_cdrom.py
index 2ea0ee7..9cf9e3b 100644
--- a/repos/domain/install_windows_cdrom.py
+++ b/repos/domain/install_windows_cdrom.py
@@ -8,6 +8,7 @@
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -68,6 +69,7 @@ def usage():
optional arguments: memory
vcpu
disksize
+ imagetype
imagepath
hdmodel
nicmodel
@@ -89,7 +91,7 @@ def check_params(params):
mandatory_args = ['guestname', 'guesttype', 'guestos', 'guestarch']
optional_args = ['memory', 'vcpu', 'disksize', 'imagepath', 'hdmodel',
'nicmodel', 'macaddr', 'ifacetype', 'source', 'type',
- 'volumepath']
+ 'volumepath', 'imagetype']
for arg in mandatory_args:
if arg not in params_given.keys():
@@ -294,7 +296,7 @@ def install_windows_cdrom(params):
logger.debug("the uri to connect is %s" % uri)
if params.has_key('imagepath') and not params.has_key('volumepath'):
- imgfullpath = os.join.path(params.get('imagepath'), guestname)
+ imgfullpath = os.path.join(params.get('imagepath'), guestname)
elif not params.has_key('imagepath') and not params.has_key('volumepath'):
if hypervisor == 'xen':
imgfullpath = os.path.join('/var/lib/xen/images', guestname)
@@ -318,13 +320,18 @@ def install_windows_cdrom(params):
else:
seeksize = '20'
- logger.info("the size of disk image is %sG" % (seeksize))
- shell_disk_dd = "dd if=/dev/zero of=%s bs=1 count=1 seek=%sG" % \
- (imgfullpath, seeksize)
- logger.debug("the commands line of creating disk images is '%s'" %
- shell_disk_dd)
+ if params.has_key('imagetype'):
+ imagetype = params.get('imagetype')
+ else:
+ imagetype = 'raw'
+
+ logger.info("create disk image with size %sG, format %s" % (seeksize, imagetype))
+ disk_create = "qemu-img create -f %s %s %sG" % \
+ (imagetype, imgfullpath, seeksize)
+ logger.debug("the commands line of creating disk images is '%s'" % \
+ disk_create)
- (status, message) = commands.getstatusoutput(shell_disk_dd)
+ (status, message) = commands.getstatusoutput(disk_create)
if status != 0:
logger.debug(message)
--
1.7.1
[libvirt] [test-API][PATCH] Fix the missing ret variable problem
by Wayne Sun
---
repos/remoteAccess/tls_setup.py | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/repos/remoteAccess/tls_setup.py b/repos/remoteAccess/tls_setup.py
index 80d6b42..4e7f24e 100644
--- a/repos/remoteAccess/tls_setup.py
+++ b/repos/remoteAccess/tls_setup.py
@@ -343,6 +343,7 @@ def request_credentials(credentials, user_data):
def hypervisor_connecting_test(uri, auth_tls, username,
password, logger, expected_result):
""" connect remote server """
+ ret = 0
try:
conn = connectAPI.ConnectAPI()
if auth_tls == 'none':
@@ -355,6 +356,7 @@ def hypervisor_connecting_test(uri, auth_tls, username,
except LibvirtAPI, e:
logger.error("API error message: %s, error code is %s" % \
(e.response()['message'], e.response()['code']))
+ ret = 1
conn.close()
--
1.7.1