[libvirt] [PATCHv2] Disable nwfilter driver when running unprivileged
by Ján Tomko
When running unprivileged, nwfilter state already skips
most of the initialization. Also forbid opening connections
to the nwfilter driver when unprivileged.
This changes the nwfilter-define error from:
error: cannot create config directory (null): Bad address
To:
this function is not supported by the connection driver:
virNWFilterDefineXML
https://bugzilla.redhat.com/show_bug.cgi?id=1029266
---
v1: https://www.redhat.com/archives/libvir-list/2013-November/msg00368.html
v2: forbid everything instead of just virNWFilterDefineXML
src/nwfilter/nwfilter_driver.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/nwfilter/nwfilter_driver.c b/src/nwfilter/nwfilter_driver.c
index 6602d73..d6e492f 100644
--- a/src/nwfilter/nwfilter_driver.c
+++ b/src/nwfilter/nwfilter_driver.c
@@ -415,7 +415,7 @@ nwfilterOpen(virConnectPtr conn,
{
virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);
- if (!driverState)
+ if (!driverState || !driverState->privileged)
return VIR_DRV_OPEN_DECLINED;
conn->nwfilterPrivateData = driverState;
--
1.8.3.2
[libvirt] [PATCH] Disable nwfilterDefineXML for unprivileged libvirtd
by Ján Tomko
Fail in a more friendly way than:
error: cannot create config directory (null): Bad address
https://bugzilla.redhat.com/show_bug.cgi?id=1029266
---
src/nwfilter/nwfilter_driver.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/nwfilter/nwfilter_driver.c b/src/nwfilter/nwfilter_driver.c
index 6602d73..c3ff4fe 100644
--- a/src/nwfilter/nwfilter_driver.c
+++ b/src/nwfilter/nwfilter_driver.c
@@ -551,13 +551,20 @@ nwfilterDefineXML(virConnectPtr conn,
const char *xml)
{
virNWFilterDriverStatePtr driver = conn->nwfilterPrivateData;
- virNWFilterDefPtr def;
+ virNWFilterDefPtr def = NULL;
virNWFilterObjPtr nwfilter = NULL;
virNWFilterPtr ret = NULL;
nwfilterDriverLock(driver);
virNWFilterCallbackDriversLock();
+ if (!driver->privileged) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+ _("network filters are only available when libvirtd "
+ "runs as root"));
+ goto cleanup;
+ }
+
if (!(def = virNWFilterDefParseString(xml)))
goto cleanup;
--
1.8.3.2
[libvirt] [PATCH] lxc: mount dir as readonly if ownership couldn't be known
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
We bind mount some directories from the host into the guest.
With userns enabled, if we cannot determine a directory's
ownership, it is safer to mount it read-only.
Signed-off-by: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
---
src/lxc/lxc_container.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/src/lxc/lxc_container.c b/src/lxc/lxc_container.c
index 255c711..f3f0c15 100644
--- a/src/lxc/lxc_container.c
+++ b/src/lxc/lxc_container.c
@@ -96,6 +96,8 @@
typedef char lxc_message_t;
#define LXC_CONTINUE_MSG 'c'
+#define OVERFLOWUID 65534
+
typedef struct __lxc_child_argv lxc_child_argv_t;
struct __lxc_child_argv {
virDomainDefPtr config;
@@ -1067,12 +1069,22 @@ static int lxcContainerMountFSBind(virDomainFSDefPtr fs,
char *src = NULL;
int ret = -1;
struct stat st;
+ bool readonly = false;
VIR_DEBUG("src=%s dst=%s", fs->src, fs->dst);
if (virAsprintf(&src, "%s%s", srcprefix, fs->src) < 0)
goto cleanup;
+ if (stat(src, &st) < 0) {
+ virReportSystemError(errno, _("Unable to stat bind source %s"),
+ src);
+ goto cleanup;
+ } else {
+ if (OVERFLOWUID == st.st_uid || OVERFLOWUID == st.st_gid)
+ readonly = true;
+ }
+
if (stat(fs->dst, &st) < 0) {
if (errno != ENOENT) {
virReportSystemError(errno, _("Unable to stat bind target %s"),
@@ -1119,7 +1131,7 @@ static int lxcContainerMountFSBind(virDomainFSDefPtr fs,
goto cleanup;
}
- if (fs->readonly) {
+ if (fs->readonly || readonly) {
VIR_DEBUG("Binding %s readonly", fs->dst);
if (mount(src, fs->dst, NULL, MS_BIND|MS_REMOUNT|MS_RDONLY, NULL) < 0) {
virReportSystemError(errno,
--
1.8.2.1
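The ownership check this patch adds can be sketched standalone (an illustration, not the libvirt code itself): inside a user namespace, a uid/gid with no mapping is reported as the kernel overflow id, 65534 by default (see /proc/sys/kernel/overflowuid).

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/stat.h>

#define OVERFLOWUID 65534

/* Returns true if the file's owner or group would be unmapped in the
 * container, i.e. the bind mount should be made read-only as a
 * precaution.  A stat() failure is also treated as "unknown". */
static bool
ownershipUnknown(const char *path)
{
    struct stat st;

    if (stat(path, &st) < 0)
        return true; /* cannot tell: err on the safe side */

    return st.st_uid == OVERFLOWUID || st.st_gid == OVERFLOWUID;
}
```

On a typical host, a root-owned path such as `/` reports uid/gid 0 and is considered known; only files whose ids fell back to the overflow id trigger the read-only behaviour.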
[libvirt] [PATCH] lxc: make sure root wouldn't be null
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
virDomainGetRootFilesystem may return NULL.
We should handle that case.
Signed-off-by: Chen Hanxiao <chenhanxiao(a)cn.fujitsu.com>
---
src/lxc/lxc_container.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/lxc/lxc_container.c b/src/lxc/lxc_container.c
index 255c711..e8f7a75 100644
--- a/src/lxc/lxc_container.c
+++ b/src/lxc/lxc_container.c
@@ -1829,7 +1829,8 @@ static int lxcContainerChild(void *data)
if (lxcContainerSetID(vmDef) < 0)
goto cleanup;
- root = virDomainGetRootFilesystem(vmDef);
+ if (!(root = virDomainGetRootFilesystem(vmDef)))
+ goto cleanup;
if (argv->nttyPaths) {
const char *tty = argv->ttyPaths[0];
--
1.8.2.1
[libvirt] [PATCH RFC] blockdev: copy legacy and common opts to qemu_drive_opts
by Amos Kong
Currently we have three QemuOptsList (qemu_common_drive_opts,
qemu_legacy_drive_opts, and qemu_drive_opts), only qemu_drive_opts
is added to vm_config_groups[].
We query commandline options by checking information in
vm_config_groups[], so we can only get a NULL parameter list now.
This patch copies the desc items of qemu_legacy_drive_opts and
qemu_common_drive_opts into qemu_drive_opts.
Signed-off-by: Amos Kong <akong(a)redhat.com>
---
blockdev.c | 168 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 164 insertions(+), 4 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index b260477..28f3078 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2345,10 +2345,170 @@ QemuOptsList qemu_drive_opts = {
.name = "drive",
.head = QTAILQ_HEAD_INITIALIZER(qemu_drive_opts.head),
.desc = {
- /*
- * no elements => accept any params
- * validation will happen later
- */
+ /* qemu_legacy_drive_opts */
+ {
+ .name = "bus",
+ .type = QEMU_OPT_NUMBER,
+ .help = "bus number",
+ },{
+ .name = "unit",
+ .type = QEMU_OPT_NUMBER,
+ .help = "unit number (i.e. lun for scsi)",
+ },{
+ .name = "index",
+ .type = QEMU_OPT_NUMBER,
+ .help = "index number",
+ },{
+ .name = "media",
+ .type = QEMU_OPT_STRING,
+ .help = "media type (disk, cdrom)",
+ },{
+ .name = "if",
+ .type = QEMU_OPT_STRING,
+ .help = "interface (ide, scsi, sd, mtd, floppy, pflash, virtio)",
+ },{
+ .name = "cyls",
+ .type = QEMU_OPT_NUMBER,
+ .help = "number of cylinders (ide disk geometry)",
+ },{
+ .name = "heads",
+ .type = QEMU_OPT_NUMBER,
+ .help = "number of heads (ide disk geometry)",
+ },{
+ .name = "secs",
+ .type = QEMU_OPT_NUMBER,
+ .help = "number of sectors (ide disk geometry)",
+ },{
+ .name = "trans",
+ .type = QEMU_OPT_STRING,
+ .help = "chs translation (auto, lba, none)",
+ },{
+ .name = "boot",
+ .type = QEMU_OPT_BOOL,
+ .help = "(deprecated, ignored)",
+ },{
+ .name = "addr",
+ .type = QEMU_OPT_STRING,
+ .help = "pci address (virtio only)",
+ },
+
+ /* Options that are passed on, but have special semantics with -drive */
+ {
+ .name = "read-only",
+ .type = QEMU_OPT_BOOL,
+ .help = "open drive file as read-only",
+ },{
+ .name = "copy-on-read",
+ .type = QEMU_OPT_BOOL,
+ .help = "copy read data from backing file into image file",
+ },
+
+ /* qemu_common_drive_opts */
+ {
+ .name = "snapshot",
+ .type = QEMU_OPT_BOOL,
+ .help = "enable/disable snapshot mode",
+ },{
+ .name = "file",
+ .type = QEMU_OPT_STRING,
+ .help = "disk image",
+ },{
+ .name = "discard",
+ .type = QEMU_OPT_STRING,
+ .help = "discard operation (ignore/off, unmap/on)",
+ },{
+ .name = "cache.writeback",
+ .type = QEMU_OPT_BOOL,
+ .help = "enables writeback mode for any caches",
+ },{
+ .name = "cache.direct",
+ .type = QEMU_OPT_BOOL,
+ .help = "enables use of O_DIRECT (bypass the host page cache)",
+ },{
+ .name = "cache.no-flush",
+ .type = QEMU_OPT_BOOL,
+ .help = "ignore any flush requests for the device",
+ },{
+ .name = "aio",
+ .type = QEMU_OPT_STRING,
+ .help = "host AIO implementation (threads, native)",
+ },{
+ .name = "format",
+ .type = QEMU_OPT_STRING,
+ .help = "disk format (raw, qcow2, ...)",
+ },{
+ .name = "serial",
+ .type = QEMU_OPT_STRING,
+ .help = "disk serial number",
+ },{
+ .name = "rerror",
+ .type = QEMU_OPT_STRING,
+ .help = "read error action",
+ },{
+ .name = "werror",
+ .type = QEMU_OPT_STRING,
+ .help = "write error action",
+ },{
+ .name = "read-only",
+ .type = QEMU_OPT_BOOL,
+ .help = "open drive file as read-only",
+ },{
+ .name = "throttling.iops-total",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit total I/O operations per second",
+ },{
+ .name = "throttling.iops-read",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit read operations per second",
+ },{
+ .name = "throttling.iops-write",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit write operations per second",
+ },{
+ .name = "throttling.bps-total",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit total bytes per second",
+ },{
+ .name = "throttling.bps-read",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit read bytes per second",
+ },{
+ .name = "throttling.bps-write",
+ .type = QEMU_OPT_NUMBER,
+ .help = "limit write bytes per second",
+ },{
+ .name = "throttling.iops-total-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "I/O operations burst",
+ },{
+ .name = "throttling.iops-read-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "I/O operations read burst",
+ },{
+ .name = "throttling.iops-write-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "I/O operations write burst",
+ },{
+ .name = "throttling.bps-total-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "total bytes burst",
+ },{
+ .name = "throttling.bps-read-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "total bytes read burst",
+ },{
+ .name = "throttling.bps-write-max",
+ .type = QEMU_OPT_NUMBER,
+ .help = "total bytes write burst",
+ },{
+ .name = "throttling.iops-size",
+ .type = QEMU_OPT_NUMBER,
+ .help = "when limiting by iops max size of an I/O in bytes",
+ },{
+ .name = "copy-on-read",
+ .type = QEMU_OPT_BOOL,
+ .help = "copy read data from backing file into image file",
+ },
{ /* end of list */ }
},
};
--
1.8.3.1
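The motivation can be illustrated with a simplified mock of a desc table (not the real QEMU structures): option introspection walks the NULL-terminated .desc array of each registered QemuOptsList, so an empty table yields an empty parameter list, which is what this patch addresses for -drive.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef enum { OPT_STRING, OPT_BOOL, OPT_NUMBER } OptType;

/* A tiny stand-in for QemuOptDesc */
typedef struct {
    const char *name;
    OptType type;
    const char *help;
} OptDesc;

/* A tiny stand-in for qemu_drive_opts.desc[] after this patch */
static const OptDesc drive_opts[] = {
    { "media",  OPT_STRING, "media type (disk, cdrom)" },
    { "format", OPT_STRING, "disk format (raw, qcow2, ...)" },
    { NULL, OPT_STRING, NULL } /* end of list */
};

/* Count how many options introspection would report: walk the
 * desc array until the NULL-name terminator. */
static size_t
opt_count(const OptDesc *desc)
{
    size_t n = 0;
    while (desc[n].name)
        n++;
    return n;
}
```

With the pre-patch empty desc array the count is zero, which is why clients querying the command line options saw a NULL parameter list.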
[libvirt] [PATCHv2 0/3] Split out disk source parsing code
by Peter Krempa
Version 2 incorporates review feedback.
Peter Krempa (3):
conf: Split out code to parse the source of a disk definition
conf: Rename virDomainDiskHostDefFree to virDomainDiskHostDefClear
conf: Refactor virDomainDiskSourceDefParse
src/conf/domain_conf.c | 269 ++++++++++++++++++++++++++---------------------
src/conf/domain_conf.h | 2 +-
src/libvirt_private.syms | 2 +-
src/qemu/qemu_command.c | 4 +-
4 files changed, 154 insertions(+), 123 deletions(-)
--
1.8.4.2
[libvirt] [PATCH] Add support for virt machine with virtio-mmio devices
by Clark Laughlin
These changes allow the correct virtio-blk-device and virtio-net-device
devices to be used for the 'virt' machine type rather than the PCI virtio
devices.
---
src/qemu/qemu_command.c | 4 +++-
src/qemu/qemu_domain.c | 3 +++
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 63e235d..901120e 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -1335,12 +1335,14 @@ cleanup:
return ret;
}
+
static int
qemuDomainAssignARMVirtioMMIOAddresses(virDomainDefPtr def,
virQEMUCapsPtr qemuCaps)
{
if (def->os.arch == VIR_ARCH_ARMV7L &&
- STRPREFIX(def->os.machine, "vexpress-") &&
+ (STRPREFIX(def->os.machine, "vexpress-") ||
+ STREQ(def->os.machine, "virt")) &&
virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_VIRTIO_MMIO)) {
qemuDomainPrimeVirtioDeviceAddresses(
def, VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 81d0ba9..346fec3 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -797,6 +797,9 @@ qemuDomainDefaultNetModel(const virDomainDef *def)
if (STREQ(def->os.machine, "versatilepb"))
return "smc91c111";
+ if (STREQ(def->os.machine, "virt"))
+ return "virtio";
+
/* Incomplete. vexpress (and a few others) use this, but not all
* arm boards */
return "lan9118";
--
1.8.3.2
[libvirt] [PATCH] Improve cgroups docs to cover systemd integration
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
As of libvirt 1.1.1 and systemd 205, the cgroups layout used by
libvirt has some changes. Update the 'cgroups.html' file from
the website to describe how it works in a systemd world.
Signed-off-by: Daniel P. Berrange <berrange(a)redhat.com>
---
docs/cgroups.html.in | 212 +++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 172 insertions(+), 40 deletions(-)
diff --git a/docs/cgroups.html.in b/docs/cgroups.html.in
index 77656b2..46cfb7b 100644
--- a/docs/cgroups.html.in
+++ b/docs/cgroups.html.in
@@ -47,17 +47,121 @@
<p>
As of libvirt 1.0.5 or later, the cgroups layout created by libvirt has been
simplified, in order to facilitate the setup of resource control policies by
- administrators / management applications. The layout is based on the concepts of
- "partitions" and "consumers". Each virtual machine or container is a consumer,
- and has a corresponding cgroup named <code>$VMNAME.libvirt-{qemu,lxc}</code>.
- Each consumer is associated with exactly one partition, which also have a
- corresponding cgroup usually named <code>$PARTNAME.partition</code>. The
- exceptions to this naming rule are the three top level default partitions,
- named <code>/system</code> (for system services), <code>/user</code> (for
- user login sessions) and <code>/machine</code> (for virtual machines and
- containers). By default every consumer will of course be associated with
- the <code>/machine</code> partition. This leads to a hierarchy that looks
- like
+ administrators / management applications. The new layout is based on the concepts
+ of "partitions" and "consumers". A "consumer" is a cgroup which holds the
+ processes for a single virtual machine or container. A "partition" is a cgroup
+ which does not contain any processes, but can have resource controls applied.
+ A "partition" will have zero or more child directories, each of which
+ may be either a "consumer" or a "partition".
+ </p>
+
+ <p>
+ As of libvirt 1.1.1 or later, the cgroups layout will have some slight
+ differences when running on a host with systemd 205 or later. The overall
+ tree structure is the same, but there are some differences in the naming
+ conventions for the cgroup directories. Thus the following docs are split
+ in two, one describing systemd hosts and the other non-systemd hosts.
+ </p>
+
+ <h3><a name="currentLayoutSystemd">Systemd cgroups integration</a></h3>
+
+ <p>
+ On hosts which use systemd, each consumer maps to a systemd scope unit,
+ while partitions map to systemd slice units.
+ </p>
+
+ <h4><a name="systemdScope">Systemd scope naming</a></h4>
+
+ <p>
+ The systemd convention is for the scope name of virtual machines / containers
+ to be of the general format <code>machine-$NAME.scope</code>. Libvirt forms the
+ <code>$NAME</code> part of this by concatenating the driver type with the name
+ of the guest, and then escaping any systemd reserved characters.
+ So for a guest <code>demo</code> running under the <code>lxc</code> driver,
+ we get a <code>$NAME</code> of <code>lxc-demo</code> which when escaped is
+ <code>lxc\x2ddemo</code>. So the complete scope name is <code>machine-lxc\x2ddemo.scope</code>.
+ The scope names map directly to the cgroup directory names.
+ </p>
+
+ <h4><a name="systemdSlice">Systemd slice naming</a></h4>
+
+ <p>
+ The systemd convention for slice naming is that a slice should include the
+ name of all of its parents prepended on its own name. So for a libvirt
+ partition <code>/machine/engineering/testing</code>, the slice name will
+ be <code>machine-engineering-testing.slice</code>. Again the slice names
+ map directly to the cgroup directory names. Systemd creates three top level
+ slices by default, <code>system.slice</code>, <code>user.slice</code> and
+ <code>machine.slice</code>. All virtual machines or containers created
+ by libvirt will be associated with <code>machine.slice</code> by default.
+ </p>
+
+ <h4><a name="systemdLayout">Systemd cgroup layout</a></h4>
+
+ <p>
+ Given this, a possible systemd cgroups layout involving 3 qemu guests,
+ 3 lxc containers and 3 custom child slices, would be:
+ </p>
+
+ <pre>
+$ROOT
+ |
+ +- system.slice
+ | |
+ | +- libvirtd.service
+ |
+ +- machine.slice
+ |
+ +- machine-qemu\x2dvm1.scope
+ | |
+ | +- emulator
+ | +- vcpu0
+ | +- vcpu1
+ |
+ +- machine-qemu\x2dvm2.scope
+ | |
+ | +- emulator
+ | +- vcpu0
+ | +- vcpu1
+ |
+ +- machine-qemu\x2dvm3.scope
+ | |
+ | +- emulator
+ | +- vcpu0
+ | +- vcpu1
+ |
+ +- machine-engineering.slice
+ | |
+ | +- machine-engineering-testing.slice
+ | | |
+ | | +- machine-lxc\x2dcontainer1.scope
+ | |
+ | +- machine-engineering-production.slice
+ | |
+ | +- machine-lxc\x2dcontainer2.scope
+ |
+ +- machine-marketing.slice
+ |
+ +- machine-lxc\x2dcontainer3.scope
+ </pre>
+
+ <h3><a name="currentLayoutGeneric">Non-systemd cgroups layout</a></h3>
+
+ <p>
+ On hosts which do not use systemd, each consumer has a corresponding cgroup
+ named <code>$VMNAME.libvirt-{qemu,lxc}</code>. Each consumer is associated
+ with exactly one partition, which also has a corresponding cgroup usually
+ named <code>$PARTNAME.partition</code>. The exceptions to this naming rule
+ are the three top level default partitions, named <code>/system</code> (for
+ system services), <code>/user</code> (for user login sessions) and
+ <code>/machine</code> (for virtual machines and containers). By default
+ every consumer will of course be associated with the <code>/machine</code>
+ partition. This leads to a hierarchy that looks like:
+ </p>
+
+ <p>
+ Given this, a possible cgroups layout involving 3 qemu guests,
+ 3 lxc containers and 2 custom child partitions, would be:
</p>
<pre>
@@ -87,23 +191,21 @@ $ROOT
| +- vcpu0
| +- vcpu1
|
- +- container1.libvirt-lxc
- |
- +- container2.libvirt-lxc
+ +- engineering.partition
+ | |
+ | +- testing.partition
+ | | |
+ | | +- container1.libvirt-lxc
+ | |
+ | +- production.partition
+ | |
+ | +- container2.libvirt-lxc
|
- +- container3.libvirt-lxc
+ +- marketing.partition
+ |
+ +- container3.libvirt-lxc
</pre>
- <p>
- The default cgroups layout ensures that, when there is contention for
- CPU time, it is shared equally between system services, user sessions
- and virtual machines / containers. This prevents virtual machines from
- locking the administrator out of the host, or impacting execution of
- system services. Conversely, when there is no contention from
- system services / user sessions, it is possible for virtual machines
- to fully utilize the host CPUs.
- </p>
-
<h2><a name="customPartiton">Using custom partitions</a></h2>
<p>
@@ -127,12 +229,54 @@ $ROOT
</pre>
<p>
+ Note that the partition names in the guest XML use a
+ generic naming format, not the low level naming convention
+ required by the underlying host OS, i.e. you should not include
+ any of the <code>.partition</code> or <code>.slice</code>
+ suffixes in the XML config. Given a partition name
+ <code>/machine/production</code>, libvirt will automatically
+ apply the platform specific translation required to get
+ <code>/machine/production.partition</code> (non-systemd)
+ or <code>/machine.slice/machine-production.slice</code>
+ (systemd) as the underlying cgroup name.
+ </p>
+
+ <p>
Libvirt will not auto-create the cgroups directory to back
this partition. In the future, libvirt / virsh will provide
APIs / commands to create custom partitions, but currently
- this is left as an exercise for the administrator. For
- example, given the XML config above, the admin would need
- to create a cgroup named '/machine/production.partition'
+ this is left as an exercise for the administrator.
+ </p>
+
+ <p>
+ <strong>Note:</strong> the ability to place guests in custom
+ partitions is only available with libvirt >= 1.0.5, using
+ the new cgroup layout. The legacy cgroups layout described
+ later in this document did not support customization per guest.
+ </p>
+
+ <h3><a name="createSystemd">Creating custom partitions (systemd)</a></h3>
+
+ <p>
+ Given the XML config above, the admin on a systemd based host would
+ need to create a unit file <code>/etc/systemd/system/machine-production.slice</code>
+ </p>
+
+ <pre>
+# cat > /etc/systemd/system/machine-production.slice <<EOF
+[Unit]
+Description=VM production slice
+Before=slices.target
+Wants=machine.slice
+EOF
+# systemctl start machine-production.slice
+ </pre>
+
+ <h3><a name="createNonSystemd">Creating custom partitions (non-systemd)</a></h3>
+
+ <p>
+ Given the XML config above, the admin on a non-systemd based host
+ would need to create a cgroup named '/machine/production.partition'
</p>
<pre>
@@ -147,18 +291,6 @@ $ROOT
done
</pre>
- <p>
- <strong>Note:</strong> the cgroups directory created as a ".partition"
- suffix, but the XML config does not require this suffix.
- </p>
-
- <p>
- <strong>Note:</strong> the ability to place guests in custom
- partitions is only available with libvirt >= 1.0.5, using
- the new cgroup layout. The legacy cgroups layout described
- later did not support customization per guest.
- </p>
-
<h2><a name="resourceAPIs">Resource management APIs/commands</a></h2>
<p>
--
1.8.3.1
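The scope and slice naming rules described in the patched docs can be sketched as follows (an illustration only, not libvirt's implementation; real systemd unit-name escaping also covers characters other than the dash shown in the examples above):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Escape '-' as "\x2d", as systemd requires in unit names.
 * So "lxc-demo" becomes "lxc\x2ddemo" and the full scope name
 * would be machine-lxc\x2ddemo.scope. */
static void
escape_name(const char *in, char *out, size_t outlen)
{
    size_t o = 0;

    for (; *in && o + 5 < outlen; in++) {
        if (*in == '-')
            o += (size_t)snprintf(out + o, outlen - o, "\\x2d");
        else
            out[o++] = *in;
    }
    out[o] = '\0';
}

/* Flatten a partition path into a slice name: each parent is
 * prepended, so /machine/engineering/testing becomes
 * machine-engineering-testing.slice. */
static void
slice_name(const char *path, char *out, size_t outlen)
{
    size_t o = 0;

    for (path++; *path && o + 8 < outlen; path++) /* skip leading '/' */
        out[o++] = (*path == '/') ? '-' : *path;
    snprintf(out + o, outlen - o, ".slice");
}
```

Both results map directly to cgroup directory names, matching the tree shown in the systemd layout example.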
[libvirt] Question about fsfreeze/fsthaw API
by Tomoki Sekiyama
Hi all,
Are there any plans to add APIs to execute fsfreeze/fsthaw in qemu guests?
(something like virDomainFSFreeze(domain,timeout,flags) and
virDomainFSThaw(domain,timeout,flags))
These would be useful in the case a guest has a disk device with its own
snapshot feature, such as cinder volumes in OpenStack configuration.
In such cases, libvirt clients want to issue fsfreeze/fsthaw before/after
taking the disk snapshot.
Currently we can execute them using virDomainQemuAgentCommand(). (e.g.
virsh qemu-agent-command dom '{"execute":"guest-fsfreeze-freeze"}' )
However, this exposes the internal implementation too much, and it
cannot leverage future implementations for other hypervisors.
So it would be nice if we had a well-defined API for fsfreeze/fsthaw.
If there is no plan for such APIs and this is acceptable, I will try to
implement them.
Any comments are welcome.
Thanks,
Tomoki Sekiyama
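The current workaround described above amounts to sending guest agent commands through virDomainQemuAgentCommand(). A minimal sketch of the JSON command strings involved (building the strings only; actually sending them requires a libvirt connection and a guest running the qemu guest agent):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the qemu guest agent JSON command for freeze or thaw, i.e.
 * the string one would pass to virDomainQemuAgentCommand() (or to
 * virsh qemu-agent-command) today. */
static void
fs_command(int freeze, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "{\"execute\":\"guest-fsfreeze-%s\"}",
             freeze ? "freeze" : "thaw");
}
```

A dedicated virDomainFSFreeze/virDomainFSThaw API would hide these strings behind a stable interface, which is exactly the point of the proposal.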