[libvirt] [PATCH] freebsd: Fix build problem due to picking up the wrong libvirt.h
by Matthias Bolte
AM_GNU_GETTEXT calls AM_ICONV_LINK. AM_ICONV_LINK saves and alters
CPPFLAGS, but doesn't restore it when it finds libiconv. This
results in /usr/local/include ending up in the gcc command line
before the include path for the local include directory. This makes
gcc pick a previously installed libvirt.h instead of the correct one
from the source tree.
Work around this problem by saving and restoring CPPFLAGS around
the AM_GNU_GETTEXT call.
---
configure.ac | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/configure.ac b/configure.ac
index b2ba930..8f46dbd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -2011,8 +2011,16 @@ dnl Enable building libvirtd?
AM_CONDITIONAL([WITH_LIBVIRTD],[test "x$with_libvirtd" = "xyes"])
dnl Check for gettext - don't go any newer than what RHEL 5 supports
+dnl
+dnl save and restore CPPFLAGS around gettext check as the internal iconv
+dnl check might leave -I/usr/local/include in CPPFLAGS on FreeBSD resulting
+dnl in the build picking up previously installed libvirt/libvirt.h instead
+dnl of the correct one from the source tree
+
+save_CPPFLAGS="$CPPFLAGS"
AM_GNU_GETTEXT_VERSION([0.17])
AM_GNU_GETTEXT([external])
+CPPFLAGS="$save_CPPFLAGS"
ALL_LINGUAS=`cd "$srcdir/po" > /dev/null && ls *.po | sed 's+\.po$++'`
--
1.7.0.4
[libvirt] [PATCH v4] virsh: avoid missing zero value judgement in cmdBlkiotune
by ajia@redhat.com
* tools/virsh.c: fix the missing zero-value check in cmdBlkiotune and correct
the vshError message. When weight is equal to 0, cmdBlkiotune raises no error
at the first check of the weight value, and when the else branch checks the
weight value again, strncpy(temp->field, VIR_DOMAIN_BLKIO_WEIGHT,
sizeof(temp->field)) will never be executed. However, the underlying
qemuDomainSetBlkioParameters function checks whether the weight value is in
the range [100, 1000] if and only if param->field is equal to
VIR_DOMAIN_BLKIO_WEIGHT.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 0
Signed-off-by: Alex Jia <ajia(a)redhat.com>
---
tools/virsh.c | 9 +++++----
1 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 8bd22dc..feb45de 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4004,6 +4004,7 @@ cmdBlkiotune(vshControl * ctl, const vshCmd * cmd)
virDomainPtr dom;
int weight = 0;
int nparams = 0;
+ int rv = 0;
unsigned int i = 0;
virTypedParameterPtr params = NULL, temp = NULL;
bool ret = false;
@@ -4031,15 +4032,15 @@ cmdBlkiotune(vshControl * ctl, const vshCmd * cmd)
if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
return false;
- if (vshCommandOptInt(cmd, "weight", &weight) < 0) {
+ if ((rv = vshCommandOptInt(cmd, "weight", &weight)) < 0) {
vshError(ctl, "%s",
- _("Unable to parse integer parameter"));
+ _("Unable to parse non-integer parameter"));
goto cleanup;
}
- if (weight) {
+ if (rv > 0) {
nparams++;
- if (weight < 0) {
+ if (weight <= 0) {
vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
goto cleanup;
}
--
1.7.1
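As a companion to the patch above, here is a small standalone sketch of the
three-way return convention the v4 version relies on; opt_int() is a
hypothetical stand-in for vshCommandOptInt() (negative on a parse error, zero
when the option is absent, positive when a value was parsed), and the messages
only mimic the virsh ones.

/* Standalone sketch, not virsh code. */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Hypothetical stand-in for vshCommandOptInt(). */
static int opt_int(const char *arg, int *value)
{
    char *end;
    long v;

    if (arg == NULL)
        return 0;                      /* option not given at all */
    v = strtol(arg, &end, 10);
    if (*arg == '\0' || *end != '\0' || v < INT_MIN || v > INT_MAX)
        return -1;                     /* given, but not an integer */
    *value = (int)v;
    return 1;                          /* given and parsed */
}

int main(int argc, char **argv)
{
    int weight = 0;
    int nparams = 0;
    int rv = opt_int(argc > 1 ? argv[1] : NULL, &weight);

    if (rv < 0) {
        fprintf(stderr, "Unable to parse non-integer parameter\n");
        return 1;
    }
    if (rv > 0) {                      /* only validate when --weight was given */
        if (weight <= 0) {
            fprintf(stderr, "Invalid value of %d for I/O weight\n", weight);
            return 1;
        }
        nparams++;                     /* weight becomes one typed parameter */
    }
    printf("nparams = %d\n", nparams); /* 0 means: query current values instead */
    return 0;
}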
[libvirt] [PATCH] qemu: fix nested job with driver lock held
by Eric Blake
qemuMigrationUpdateJobStatus (called in a loop by migration
and save tasks) uses qemuDomainObjEnterMonitorWithDriver;
however, that function ended up starting a nested job without
releasing the driver lock.
Since no one else is making nested calls, we can inline the
internal functions to properly track driver_locked.
* src/qemu/qemu_domain.h (qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused prototypes.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal):
Reflect driver lock to nested job.
(qemuDomainObjBeginNestedJob)
(qemuDomainObjBeginNestedJobWithDriver)
(qemuDomainObjEndNestedJob): Drop unused functions.
---
This does not solve the crash in 'virsh managedsave', but it should make it
easier to find where we are freeing the monitor too early when probing
whether outgoing migration has completed.
src/qemu/qemu_domain.c | 59 +++++++++++------------------------------------
src/qemu/qemu_domain.h | 9 -------
2 files changed, 14 insertions(+), 54 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index fe88ce3..2eaaf3a 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -832,31 +832,6 @@ int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
}
/*
- * Use this to protect monitor sections within active async job.
- *
- * The caller must call qemuDomainObjBeginAsyncJob{,WithDriver} before it can
- * use this method. Never use this method if you only own non-async job, use
- * qemuDomainObjBeginJob{,WithDriver} instead.
- */
-int
-qemuDomainObjBeginNestedJob(struct qemud_driver *driver,
- virDomainObjPtr obj)
-{
- return qemuDomainObjBeginJobInternal(driver, false, obj,
- QEMU_JOB_ASYNC_NESTED,
- QEMU_ASYNC_JOB_NONE);
-}
-
-int
-qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
-{
- return qemuDomainObjBeginJobInternal(driver, true, obj,
- QEMU_JOB_ASYNC_NESTED,
- QEMU_ASYNC_JOB_NONE);
-}
-
-/*
* obj must be locked before calling, qemud_driver does not matter
*
* To be called after completing the work associated with the
@@ -888,21 +863,6 @@ qemuDomainObjEndAsyncJob(struct qemud_driver *driver, virDomainObjPtr obj)
return virDomainObjUnref(obj);
}
-void
-qemuDomainObjEndNestedJob(struct qemud_driver *driver, virDomainObjPtr obj)
-{
- qemuDomainObjPrivatePtr priv = obj->privateData;
-
- qemuDomainObjResetJob(priv);
- qemuDomainObjSaveJob(driver, obj);
- virCondSignal(&priv->job.cond);
-
- /* safe to ignore since the surrounding async job increased the reference
- * counter as well */
- ignore_value(virDomainObjUnref(obj));
-}
-
-
static int ATTRIBUTE_NONNULL(1)
qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
bool driver_locked,
@@ -911,7 +871,9 @@ qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
qemuDomainObjPrivatePtr priv = obj->privateData;
if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
- if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
+ if (qemuDomainObjBeginJobInternal(driver, driver_locked, obj,
+ QEMU_JOB_ASYNC_NESTED,
+ QEMU_ASYNC_JOB_NONE) < 0)
return -1;
if (!virDomainObjIsActive(obj)) {
qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
@@ -952,8 +914,15 @@ qemuDomainObjExitMonitorInternal(struct qemud_driver *driver,
priv->mon = NULL;
}
- if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
- qemuDomainObjEndNestedJob(driver, obj);
+ if (priv->job.active == QEMU_JOB_ASYNC_NESTED) {
+ qemuDomainObjResetJob(priv);
+ qemuDomainObjSaveJob(driver, obj);
+ virCondSignal(&priv->job.cond);
+
+ /* safe to ignore since the surrounding async job increased
+ * the reference counter as well */
+ ignore_value(virDomainObjUnref(obj));
+ }
}
/*
@@ -962,7 +931,7 @@ qemuDomainObjExitMonitorInternal(struct qemud_driver *driver,
* To be called immediately before any QEMU monitor API call
* Must have already either called qemuDomainObjBeginJob() and checked
* that the VM is still active or called qemuDomainObjBeginAsyncJob, in which
- * case this will call qemuDomainObjBeginNestedJob.
+ * case this will start a nested job.
*
* To be followed with qemuDomainObjExitMonitor() once complete
*/
@@ -988,7 +957,7 @@ void qemuDomainObjExitMonitor(struct qemud_driver *driver,
* To be called immediately before any QEMU monitor API call
* Must have already either called qemuDomainObjBeginJobWithDriver() and
* checked that the VM is still active or called qemuDomainObjBeginAsyncJob,
- * in which case this will call qemuDomainObjBeginNestedJobWithDriver.
+ * in which case this will start a nested job.
*
* To be followed with qemuDomainObjExitMonitorWithDriver() once complete
*/
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 679259f..8bff8b0 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -143,9 +143,6 @@ int qemuDomainObjBeginAsyncJob(struct qemud_driver *driver,
virDomainObjPtr obj,
enum qemuDomainAsyncJob asyncJob)
ATTRIBUTE_RETURN_CHECK;
-int qemuDomainObjBeginNestedJob(struct qemud_driver *driver,
- virDomainObjPtr obj)
- ATTRIBUTE_RETURN_CHECK;
int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
virDomainObjPtr obj,
enum qemuDomainJob job)
@@ -154,9 +151,6 @@ int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
virDomainObjPtr obj,
enum qemuDomainAsyncJob asyncJob)
ATTRIBUTE_RETURN_CHECK;
-int qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
- ATTRIBUTE_RETURN_CHECK;
int qemuDomainObjEndJob(struct qemud_driver *driver,
virDomainObjPtr obj)
@@ -164,9 +158,6 @@ int qemuDomainObjEndJob(struct qemud_driver *driver,
int qemuDomainObjEndAsyncJob(struct qemud_driver *driver,
virDomainObjPtr obj)
ATTRIBUTE_RETURN_CHECK;
-void qemuDomainObjEndNestedJob(struct qemud_driver *driver,
- virDomainObjPtr obj);
-
void qemuDomainObjSetJobPhase(struct qemud_driver *driver,
virDomainObjPtr obj,
int phase);
--
1.7.4.4
[libvirt] [PATCH] build: avoid type-punning compiler warning
by Eric Blake
On RHEL 5, with gcc 4.1.2:
rpc/virnetsaslcontext.c: In function 'virNetSASLSessionUpdateBufSize':
rpc/virnetsaslcontext.c:396: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
* src/rpc/virnetsaslcontext.c (virNetSASLSessionUpdateBufSize):
Use a union to work around gcc warning.
---
src/rpc/virnetsaslcontext.c | 11 +++++++----
1 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/src/rpc/virnetsaslcontext.c b/src/rpc/virnetsaslcontext.c
index 71796b9..a0752dd 100644
--- a/src/rpc/virnetsaslcontext.c
+++ b/src/rpc/virnetsaslcontext.c
@@ -390,10 +390,13 @@ cleanup:
static int virNetSASLSessionUpdateBufSize(virNetSASLSessionPtr sasl)
{
- unsigned *maxbufsize;
+ union {
+ unsigned *maxbufsize;
+ const void *ptr;
+ } u;
int err;
- err = sasl_getprop(sasl->conn, SASL_MAXOUTBUF, (const void **)&maxbufsize);
+ err = sasl_getprop(sasl->conn, SASL_MAXOUTBUF, &u.ptr);
if (err != SASL_OK) {
virNetError(VIR_ERR_INTERNAL_ERROR,
_("cannot get security props %d (%s)"),
@@ -402,8 +405,8 @@ static int virNetSASLSessionUpdateBufSize(virNetSASLSessionPtr sasl)
}
VIR_DEBUG("Negotiated bufsize is %u vs requested size %zu",
- *maxbufsize, sasl->maxbufsize);
- sasl->maxbufsize = *maxbufsize;
+ *u.maxbufsize, sasl->maxbufsize);
+ sasl->maxbufsize = *u.maxbufsize;
return 0;
}
--
1.7.4.4
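For anyone hitting the same warning elsewhere, a self-contained illustration
of the technique follows; getprop() is a made-up stand-in for sasl_getprop(),
and the only point is that reading the value through a union avoids the
(const void **) cast on an unsigned * that gcc 4.1.2 flags under
-Wstrict-aliasing.

/* Minimal illustration, not libvirt code. */
#include <stdio.h>

static int getprop(const void **out)   /* stand-in for sasl_getprop() */
{
    static unsigned value = 65536;     /* pretend-negotiated buffer size */
    *out = &value;
    return 0;
}

int main(void)
{
    union {
        unsigned *maxbufsize;
        const void *ptr;
    } u;

    if (getprop(&u.ptr) != 0)          /* no type-punned pointer cast needed */
        return 1;
    printf("negotiated bufsize is %u\n", *u.maxbufsize);
    return 0;
}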
[libvirt] [PATCH] qemu: Fix memory leak on metadata fetching
by Michal Privoznik
As written in the virStorageFileGetMetadataFromFD description, the caller
must free the metadata after use. The qemu driver misses this and therefore
leaks the metadata, which can grow into a huge memory leak if somebody
queries blockInfo a lot.
---
src/qemu/qemu_driver.c | 14 ++++++++++----
1 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0f91910..d45c7c5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -6949,7 +6949,7 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
int ret = -1;
int fd = -1;
off_t end;
- virStorageFileMetadata meta;
+ virStorageFileMetadata *meta = NULL;
virDomainDiskDefPtr disk = NULL;
struct stat sb;
int i;
@@ -7017,9 +7017,14 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
}
}
+ if (VIR_ALLOC(meta) < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
if (virStorageFileGetMetadataFromFD(path, fd,
format,
- &meta) < 0)
+ meta) < 0)
goto cleanup;
/* Get info for normal formats */
@@ -7056,8 +7061,8 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
/* If the file we probed has a capacity set, then override
* what we calculated from file/block extents */
- if (meta.capacity)
- info->capacity = meta.capacity;
+ if (meta->capacity)
+ info->capacity = meta->capacity;
/* Set default value .. */
info->allocation = info->physical;
@@ -7091,6 +7096,7 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
}
cleanup:
+ virStorageFileFreeMetadata(meta);
VIR_FORCE_CLOSE(fd);
if (vm)
virDomainObjUnlock(vm);
--
1.7.5.rc3
Re: [libvirt] PCI devices passthough to LXC containers using libvirt
by Devendra K. Modium
Hi
Thanks for the reply.
I think the links that you provided show
how to deal with PCI devices when the hypervisor is KVM.
Please correct me if I am wrong.
But I am using LXC containers. I have skimmed through
the libvirt LXC driver code and found no functionality for
allowing specified devices into a container; currently only
a hard-coded set of devices is allowed, as can be seen in
the file lxc_controller.c:
struct cgroup_device_policy devices[] = {
{'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_NULL},
{'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_ZERO},
{'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_FULL},
{'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_RANDOM},
{'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_URANDOM},
{'c', LXC_DEV_MAJ_TTY, LXC_DEV_MIN_TTY},
{'c', LXC_DEV_MAJ_TTY, LXC_DEV_MIN_PTMX},
{0, 0, 0}};
Please confirm this, or let me know whether libvirt_lxc provides any other
interface to allow specific PCI or other devices into a container.
Thanks in advance
Regards
Devendra
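For illustration only, here is a sketch of how that hard-coded table might be
extended to whitelist one more character device; the struct layout mirrors the
cgroup_device_policy entries quoted above, and the LXC_DEV_* values as well as
the 195:0 major/minor pair (the usual /dev/nvidia0 node) are assumptions for
the example, not libvirt's actual definitions.

/* Illustrative sketch, not a patch against lxc_controller.c. */
#include <stdio.h>

struct cgroup_device_policy {
    char type;                        /* 'c' for character devices */
    int major;
    int minor;
};

#define LXC_DEV_MAJ_MEMORY  1         /* assumed value for the example */
#define LXC_DEV_MIN_NULL    3         /* /dev/null is 1:3 */
#define DEV_MAJ_NVIDIA      195       /* assumed major of /dev/nvidia* nodes */
#define DEV_MIN_NVIDIA0     0

int main(void)
{
    struct cgroup_device_policy devices[] = {
        {'c', LXC_DEV_MAJ_MEMORY, LXC_DEV_MIN_NULL},
        /* ... the other entries from lxc_controller.c ... */
        {'c', DEV_MAJ_NVIDIA, DEV_MIN_NVIDIA0},   /* extra device to allow */
        {0, 0, 0}};
    int i;

    for (i = 0; devices[i].type; i++)
        printf("allow %c %d:%d\n", devices[i].type,
               devices[i].major, devices[i].minor);
    return 0;
}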
----- Original Message -----
From: "Alex Jia" <ajia(a)redhat.com>
To: "Devendra K. Modium" <dmodium(a)isi.edu>
Sent: Thursday, July 28, 2011 3:22:36 AM
Subject: Re: [libvirt] PCI devices passthough to LXC containers using libvirt
On 07/28/2011 05:13 AM, Devendra K. Modium wrote:
> Hi All
>
> Please let me know if anyone have given access to
> PCI devices for a LXC container.
>
> I have tried getting the xml from
> "virsh nodedev-dumpxml pci_device" and
> added to the libvirt xml file as shown below
>
> <device>
> <name>pci_0000_03_00_0</name>
> <parent>pci_0000_00_03_0</parent>
> <driver>
> <name>nvidia</name>
> </driver>
> <capability type='pci'>
> <domain>0</domain>
> <bus>3</bus>
> <slot>0</slot>
> <function>0</function>
> <product id='0x06fd' />
> <vendor id='0x10de'>nVidia Corporation</vendor>
> </capability>
> </device>
You shouldn't add the above XML directly to the guest XML configuration; here is
the relevant usage documentation, which should be helpful for you:
http://libvirt.org/formatdomain.html#elementsUSB
http://fedoraproject.org/wiki/Category:Virtualization_KVM_PCI_Device_Assi...
Good Luck!
Alex
>
> But it didn't work. I see the logs and it says
> couldn't get physical and virtual functions of these devices with error
>
> get_physical_function_linux:323 : Attempting to get SR IOV physical function for device with sysfs path '/sys/devices/pci0000:00/0000:00:00.0'
> 16:48:34.033: 13802: debug : get_sriov_function:270 : Attempting to resolve device path from device link '/sys/devices/pci0000:00/0000:00:00.0/physfn'
> 16:48:34.033: 13802: debug : get_sriov_function:274 : SR IOV function link '/sys/devices/pci0000:00/0000:00:00.0/physfn' does not exist
> 16:48:34.033: 13802: debug : get_virtual_functions_linux:348 : Attempting to get SR IOV virtual functions for device with sysfs path '/sys/devices/pci0000:00/0000:00:00.0'
>
>
> If anyone got some guidelines how to debug, please let me know.
>
>
> Thanks in advance
>
> Regards
> Devendra
>
>
>
>
>
>
>
> --
> libvir-list mailing list
> libvir-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH v3] virsh: avoid missing zero value judgement in cmdBlkiotune
by ajia@redhat.com
* tools/virsh.c: avoid the missing zero-value check in cmdBlkiotune. When
weight is equal to 0, cmdBlkiotune raises no error at the first check of the
weight value, and when the else branch checks the weight value again,
strncpy(temp->field, VIR_DOMAIN_BLKIO_WEIGHT, sizeof(temp->field)) will never
be executed. However, the underlying qemuDomainSetBlkioParameters function
checks whether the weight value is in the range [100, 1000] if and only if
param->field is equal to VIR_DOMAIN_BLKIO_WEIGHT.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 0
Signed-off-by: Alex Jia <ajia(a)redhat.com>
---
tools/virsh.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 8bd22dc..f24050d 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4037,12 +4037,12 @@ cmdBlkiotune(vshControl * ctl, const vshCmd * cmd)
goto cleanup;
}
- if (weight) {
- nparams++;
- if (weight < 0) {
+ if (vshCommandOptInt(cmd, "weight", &weight) > 0) {
+ if (weight <= 0) {
vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
goto cleanup;
}
+ nparams++;
}
if (nparams == 0) {
--
1.7.1
[libvirt] [PATCH v2] virsh: avoid missing zero value judgement in cmdBlkiotune
by ajia@redhat.com
* tools/virsh.c: avoid the missing zero-value check in cmdBlkiotune. When
weight is equal to 0, cmdBlkiotune raises no error at the first check of the
weight value, and when the else branch checks the weight value again,
strncpy(temp->field, VIR_DOMAIN_BLKIO_WEIGHT, sizeof(temp->field)) will never
be executed. However, the underlying qemuDomainSetBlkioParameters function
checks whether the weight value is in the range [100, 1000] if and only if
param->field is equal to VIR_DOMAIN_BLKIO_WEIGHT.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 0
Signed-off-by: Alex Jia <ajia(a)redhat.com>
---
tools/virsh.c | 11 +++++------
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 8bd22dc..512f2c6 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4037,14 +4037,13 @@ cmdBlkiotune(vshControl * ctl, const vshCmd * cmd)
goto cleanup;
}
- if (weight) {
- nparams++;
- if (weight < 0) {
- vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
- goto cleanup;
- }
+ if (weight <= 0) {
+ vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
+ goto cleanup;
}
+ nparams++;
+
if (nparams == 0) {
/* get the number of blkio parameters */
if (virDomainGetBlkioParameters(dom, NULL, &nparams, flags) != 0) {
--
1.7.1
[libvirt] RFC: API additions for enhanced snapshot support
by Eric Blake
Right now, libvirt has a snapshot API via virDomainSnapshotCreateXML,
but for qemu domains, it only works if all the guest disk images are
qcow2, and qemu rather than libvirt does all the work. However, it has
a couple of drawbacks: it is inherently tied to domains (there is no way
to manage snapshots of storage volumes not tied to a domain, even though
libvirt does that for qcow2 images associated with offline qemu domains
by using the qemu-img application). And it necessarily operates on all
of the images associated with a domain in parallel - if any disk image
is not qcow2, the snapshot fails, and there is no way to select a subset
of disks to save. However, it works on both active (disk and memory
state) and inactive domains (just disk state).
Upstream qemu is developing a 'live snapshot' feature, which allows the
creation of a snapshot without the current downtime of several seconds
required by the current 'savevm' monitor command, as well as means for
controlling applications (libvirt) to request that qemu pause I/O to a
particular disk, then externally perform a snapshot, then tell qemu to
resume I/O (perhaps on a different file name or fd from the host, but
with no change to the contents seen by the guest). Eventually, these
changes will make it possible for libvirt to create fast snapshots of
LVM partitions or btrfs files for guest disk images, as well as to
select which disks are saved in a snapshot (that is, save a
crash-consistent state of a subset of disks, without the corresponding
RAM state, rather than making a full system restore point); the latter
would work best with guest cooperation to quiesce disks before qemu
pauses I/O to that disk, but that is an orthogonal enhancement.
However, my first goal with API enhancements is to merely prove that
libvirt can manage a live snapshot by using qemu-img on a qcow2 image
rather than the current 'savevm' approach of qemu doing all the work.
Additionally, libvirt provides the virDomainSave command, which saves
just the state of the domain's memory, and stops the guest. A crude
libvirt-only snapshot could thus already be done by using virDomainSave,
then externally doing a snapshot of all disk images associated with the
domain by using virStorageVol APIs, except that such APIs don't yet
exist. Additionally, virDomainSave has no flags argument, so there is
no way to request that the guest be resumed after the snapshot completes.
Right now, I'm proposing the addition of virDomainSaveFlags, along with
a series of virStorageVolSnapshot* APIs that mirror the
virDomainSnapshot* APIs. This would mean adding:
/* Opaque type to manage a snapshot of a single storage volume. */
typedef virStorageVolSnapshotPtr;
/* Create a snapshot of a storage volume. XML is optional; if non-NULL,
it would be a new top-level element <volsnapshot> which is similar to
the top-level <domainsnapshot> for virDomainSnapshotCreateXML, to
specify name and description. Flags is 0 for now. */
virStorageVolSnapshotPtr virStorageVolSnapshotCreateXML(virStorageVolPtr
vol, const char *xml, unsigned int flags);
[For qcow2, this would be implemented with 'qemu-img snapshot -c',
similar to what virDomainSnapshotCreateXML already does on inactive domains.
Later, we can add LVM and btrfs support, or even allow full file copies
of any file type. Also in the future, we could enhance XML to take a
new element that describes a relationship between the name of the
original and of the snapshot, in the case where a new filename has to be
created to complete the snapshot process.]
/* Probe if vol has snapshots. 1 if true, 0 if false, -1 on error.
Flags is 0 for now. */
int virStorageVolHasCurrentSnapshot(virStorageVolPtr vol, unsigned int
flags);
[For qcow2 images, snapshots can be contained within the same file and
managed with qemu-img -l, but for other formats, this may mean that
libvirt has to start managing externally saved data associated with the
storage pool that associates snapshots with filenames. In fact, even
for qcow2 it might be useful to support creation of new files backed by
the previous snapshot rather than cramming multiple snapshots in one
file, so we may have a use for flags to filter out the presence of
single-file vs. multiple-file snapshot setups.]
/* Revert a volume back to the state of a snapshot, returning 0 on
success. Flags is 0 for now. */
int virStorageVolRevertToSnapshot(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
[For qcow2, this would involve qemu-img snapshot -a. Here, a useful
flag might be whether to delete any changes made after the point of the
snapshot; virDomainRevertToSnapshot should probably honor the same type
of flag.]
/* Return the most recent snapshot of a volume, if one exists, or NULL
on failure. Flags is 0 for now. */
virStorageVolSnapshotPtr virStorageVolSnapshotCurrent(virStorageVolPtr
vol, unsigned int flags);
/* Delete the storage associated with a snapshot (although the opaque
snapshot object must still be independently freed). If flags is 0, any
child snapshots based off of this one are rebased onto the parent; if
flags is VIR_STORAGE_VOL_SNAPSHOT_DELETE_CHILDREN , then any child
snapshots based off of this one are also deleted. */
int virStorageVolSnapshotDelete(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
[For qcow2, this would involve qemu-img snapshot -d. For
multiple-file snapshots, this would also involve qemu-img commit.]
/* Free the object returned by
virStorageVolSnapshot{Current,CreateXML,LookupByName}. The storage
snapshot associated with this object still exists, if it has not been
deleted by virStorageVolSnapshotDelete. */
int virStorageVolSnapshotFree(virStorageVolSnapshotPtr snapshot);
/* Return the <volsnapshot> XML details about this snapshot object.
Flags is 0 for now. */
int virStorageVolSnapshotGetXMLDesc(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
/* Return the names of all snapshots associated with this volume, using
len from virStorageVolSnapshotNum. Flags is 0 for now. */
int virStorageVolSnapshotListNames(virStorageVolPtr vol, char **names,
int nameslen, unsigned int flags);
[For qcow2, this involves qemu-img -l. Additionally, if
virStorageVolHasCurrentSnapshot learns to filter on in-file vs.
multi-file snapshots, then the same flags would apply here.]
/* Get the opaque object tied to a snapshot name. Flags is 0 for now. */
virStorageVolSnapshotPtr
virStorageVolSnapshotLookupByName(virStorageVolPtr vol, const char
*name, unsigned int flags);
/* Determine how many snapshots are tied to a volume, or -1 on error.
Flags is 0 for now. */
int virStorageVolSnapshotNum(virStorageVolPtr vol, unsigned int flags);
[Same flags as for virStorageVolSnapshotListNames.]
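Before moving on to the domain-level additions, here is a short usage sketch
that strings a few of the proposed virStorageVolSnapshot* calls together; none
of these functions exist in any released libvirt, the signatures are copied
from the prototypes above, and the 16-entry array and the free() of each name
are assumptions that may change before the API is merged.

/* Sketch against the proposed API only; it will not compile today. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static int demo_vol_snapshot(virStorageVolPtr vol)
{
    virStorageVolSnapshotPtr snap;
    char *names[16];
    int n, i;

    /* Take a snapshot with default settings (NULL XML, flags 0). */
    if (!(snap = virStorageVolSnapshotCreateXML(vol, NULL, 0)))
        return -1;

    /* Enumerate whatever snapshots the volume now carries; a real client
     * would size the array via virStorageVolSnapshotNum first. */
    if ((n = virStorageVolSnapshotListNames(vol, names, 16, 0)) < 0) {
        virStorageVolSnapshotFree(snap);
        return -1;
    }
    for (i = 0; i < n; i++) {
        printf("snapshot: %s\n", names[i]);
        free(names[i]);    /* assuming the existing ListNames convention */
    }

    virStorageVolSnapshotFree(snap);
    return 0;
}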
/* Save a domain into the file 'to' with additional actions. If flags
is 0, then xml is ignored, and this is like virDomainSave. If flags
includes VIR_DOMAIN_SAVE_DISKS, then all of the associated disk images
are also snapshotted, as if by virStorageVolSnapshotCreateXML; the xml
argument is optional, but if present, it should be a <domainsnapshot>
element with <disk> sub-elements for directions on each disk that needs
a non-empty xml argument for proper volume snapshot creation. If flags
includes VIR_DOMAIN_SAVE_RESUME, then the guest is resumed after the
offline snapshot is complete (note that VIR_DOMAIN_SAVE_RESUME without
VIR_DOMAIN_SAVE_DISKS makes little sense, as a saved state file is
rendered useless if the disk images are modified before it is resumed).
If flags includes VIR_DOMAIN_SAVE_QUIESCE, this requests that a guest
agent quiesce disk state before the saved state file is created. */
int virDomainSaveFlags(virDomainPtr domain, const char *to, const char
*xml, unsigned int flags);
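As a companion sketch for the call just proposed: the function, the flags and
the example path below are all taken from (or invented for) this RFC and do
not exist in libvirt today.

/* Sketch only; virDomainSaveFlags and the VIR_DOMAIN_SAVE_* flags are the
 * proposal above, and the save path is a made-up example. */
#include <libvirt/libvirt.h>

static int save_with_disks(virDomainPtr dom)
{
    /* Snapshot all disks alongside the memory image, then resume the guest;
     * NULL xml means default per-disk snapshot settings. */
    return virDomainSaveFlags(dom, "/var/lib/libvirt/save/guest.save", NULL,
                              VIR_DOMAIN_SAVE_DISKS | VIR_DOMAIN_SAVE_RESUME);
}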
Also, the existing virDomainSnapshotCreateXML can be made more powerful
by adding new flags and enhancing the existing XML for <domainsnapshot>.
When flags is 0, the current behavior of saving memory state alongside
all disks (for running domains, via savevm) or just snapshotting all
disks with default settings (for offline domains, via qemu-img) is kept.
If flags includes VIR_DOMAIN_SNAPSHOT_LIVE, then the guest must be
running, and the new monitor commands for live snapshots are used. If
flags includes VIR_DOMAIN_SNAPSHOT_DISKS_ONLY, then only the disks are
snapshotted (on a running guest, this generally means they will only be
crash-consistent, and will need an fsck before that disk state can be
remounted), but it will shave off time by not saving memory. If flags
includes VIR_DOMAIN_SNAPSHOT_QUIESCE, then this will additionally
request that a guest agent quiesce disk state before the live snapshot
is taken (increasing the likelihood of a stable disk, rather than a
crash-consistent disk; but it requires cooperation from the guest so it
is no more reliable than memballoon changes).
As for the XML changes, it makes sense to snapshot just a subset of
disks when you only care about crash-consistent state or if you can rely
on a guest agent to quiesce the subset of disk(s) you care about, so the
existing <domainsnapshot> element needs a new optional subelement to
control which disks are snapshotted; additionally, this subelement will
be useful for disk image formats that require additional complexity
(such as a secondary file name, rather than the inline snapshot feature
of qcow2). I'm envisioning something like the following:
<domainsnapshot>
<name>whatever</name>
<disk name='/path/to/image1' snapshot='no'/>
<disk name='/path/to/image2'>
<volsnapshot>...</volsnapshot>
</disk>
</domainsnapshot>
where there can be up to as many <disk> elements as there are disk
<devices> in the domain xml; any domain disk not listed is given default
treatment. The name attribute of <disk> is mandatory, in order to match
this disk element to one of the domain disks. The snapshot='yes|no'
attribute is optional, defaulting to yes, in order to skip a particular
disk. The <volsnapshot> subelement is optional, but if present, it
would be the same XML as is provided to the
virStorageVolSnapshotCreateXML. [And since my first phase of
implementation will be focused on inline qcow2 snapshots, I don't yet
know what that XML will need to contain for any other type of snapshots,
such as mapping out how the snapshot backing file will be named in
relation to the possibly new live file.]
Any feedback on this approach? Any other APIs that would be useful to
add? I'd like to get all the new APIs in place for 0.9.3 with minimal
qcow2 functionality, then use the time before 0.9.4 to further enhance
the APIs to cover more snapshot cases but without having to add any new
APIs.
--
Eric Blake eblake(a)redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org
[libvirt] [PATCH] virsh: avoid missing zero value judgement in cmdBlkiotune
by ajia@redhat.com
* tools/virsh.c: avoid the missing zero-value check in cmdBlkiotune. When
weight is equal to 0, cmdBlkiotune raises no error at the first check of the
weight value, and when the else branch checks the weight value again,
strncpy(temp->field, VIR_DOMAIN_BLKIO_WEIGHT, sizeof(temp->field)) will never
be executed. However, the underlying qemuDomainSetBlkioParameters function
checks whether the weight value is in the range [100, 1000] if and only if
param->field is equal to VIR_DOMAIN_BLKIO_WEIGHT.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 0
Signed-off-by: Alex Jia <ajia(a)redhat.com>
---
tools/virsh.c | 10 ++++------
1 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 8bd22dc..183d7c6 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4037,12 +4037,10 @@ cmdBlkiotune(vshControl * ctl, const vshCmd * cmd)
goto cleanup;
}
- if (weight) {
- nparams++;
- if (weight < 0) {
- vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
- goto cleanup;
- }
+ nparams++;
+ if (weight <= 0) {
+ vshError(ctl, _("Invalid value of %d for I/O weight"), weight);
+ goto cleanup;
}
if (nparams == 0) {
--
1.7.1