[libvirt] [PATCH 0/4] Expose FSFreeze/FSThaw within the guest as commands

Currently FSFreeze and FSThaw are supported by the qemu guest agent, and they are used internally by the snapshot-create command with the --quiesce option. However, when users want to utilize the native snapshot feature of storage devices (such as LVM over iSCSI, various enterprise storage systems, etc.), they need to issue the fsfreeze command separately from libvirt-driven snapshots. (OpenStack cinder provides these storages' snapshot feature, but it cannot quiesce the guest filesystems automatically for now.)

Although the virDomainQemuGuestAgent() API could be used for this purpose, it depends too much on a specific hypervisor implementation.

This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.

The APIs have mountPoint and flags arguments that are currently unsupported and reserved for future extension, like the virDomainFSTrim() API. A duplicated FSFreeze results in an error from the qemu guest agent.

---

Tomoki Sekiyama (4):
  Introduce virDomainFSFreeze() public API
  remote: Implement virDomainFSFreeze and virDomainFSThaw
  qemu: Implement virDomainFSFreeze
  virsh: Expose new virDomainFSFreeze and virDomainFSThaw API

 include/libvirt/libvirt.h.in |   8 ++
 src/access/viraccessperm.c   |   2 -
 src/access/viraccessperm.h   |   6 ++
 src/driver.h                 |  12 ++++
 src/libvirt.c                |  92 +++++++++++++++++++++++++++
 src/libvirt_public.syms      |   6 ++
 src/qemu/qemu_driver.c       | 142 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_driver.c   |   2 +
 src/remote/remote_protocol.x |  26 +++++++-
 src/remote_protocol-structs  |  12 ++++
 src/rpc/gendispatch.pl       |   2 +
 tools/virsh-domain.c         | 108 ++++++++++++++++++++++++++++++++
 tools/virsh.pod              |  17 +++++
 13 files changed, 433 insertions(+), 2 deletions(-)

This will freeze filesystems within the guest. The API takes a @mountPoint
argument, which is currently not used, for future extensions of the guest
agent.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
---
 include/libvirt/libvirt.h.in |  8 ++++
 src/driver.h                 | 12 +++++
 src/libvirt.c                | 92 ++++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms      |  6 +++
 4 files changed, 118 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 80b2d78..559d916 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -5069,6 +5069,14 @@ int virDomainFSTrim(virDomainPtr dom,
                     unsigned long long minimum,
                     unsigned int flags);
 
+int virDomainFSFreeze(virDomainPtr dom,
+                      const char *mountPoint,
+                      unsigned int flags);
+
+int virDomainFSThaw(virDomainPtr dom,
+                    const char *mountPoint,
+                    unsigned int flags);
+
 /**
  * virSchedParameterType:
  *
diff --git a/src/driver.h b/src/driver.h
index 8cd164a..dd41aea 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -1128,6 +1128,16 @@ typedef int
                          unsigned int flags,
                          int cancelled);
 
+typedef int
+(*virDrvDomainFSFreeze)(virDomainPtr dom,
+                        const char *mountPoint,
+                        unsigned int flags);
+
+typedef int
+(*virDrvDomainFSThaw)(virDomainPtr dom,
+                      const char *mountPoint,
+                      unsigned int flags);
+
 typedef struct _virDriver virDriver;
 typedef virDriver *virDriverPtr;
 
@@ -1339,6 +1349,8 @@ struct _virDriver {
     virDrvDomainMigrateFinish3Params domainMigrateFinish3Params;
     virDrvDomainMigrateConfirm3Params domainMigrateConfirm3Params;
     virDrvConnectGetCPUModelNames connectGetCPUModelNames;
+    virDrvDomainFSFreeze domainFSFreeze;
+    virDrvDomainFSThaw domainFSThaw;
 };
 
diff --git a/src/libvirt.c b/src/libvirt.c
index 90608ab..1c09c43 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -22042,3 +22042,95 @@ error:
     virDispatchError(dom->conn);
     return -1;
 }
+
+/**
+ * virDomainFSFreeze:
+ * @dom: a domain object
+ * @mountPoint: which mount points to fsfreeze
+ * @flags: extra flags, not used yet, so callers should always pass 0
+ *
+ * Freeze filesystems within the guest (hence guest agent may be
+ * required depending on hypervisor used). Either call it on each
+ * mounted filesystem (@mountPoint is NULL) or on specified @mountPoint.
+ *
+ * Returns 0 on success, -1 otherwise.
+ */
+int
+virDomainFSFreeze(virDomainPtr dom,
+                  const char *mountPoint,
+                  unsigned int flags)
+{
+    VIR_DOMAIN_DEBUG(dom, "mountPoint=%s, flags=%x", mountPoint, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_DOMAIN(dom)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (dom->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if (dom->conn->driver->domainFSFreeze) {
+        int ret = dom->conn->driver->domainFSFreeze(dom, mountPoint, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(dom->conn);
+    return -1;
+}
+
+/**
+ * virDomainFSThaw:
+ * @dom: a domain object
+ * @mountPoint: which mount points to thaw
+ * @flags: extra flags, not used yet, so callers should always pass 0
+ *
+ * Thaw the frozen filesystems within the guest (hence guest agent
+ * may be required depending on hypervisor used). Either call it on each
+ * mounted filesystem (@mountPoint is NULL) or on specified @mountPoint.
+ *
+ * Returns 0 on success, -1 otherwise.
+ */
+int
+virDomainFSThaw(virDomainPtr dom,
+                const char *mountPoint,
+                unsigned int flags)
+{
+    VIR_DOMAIN_DEBUG(dom, "mountPoint=%s, flags=%x", mountPoint, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_DOMAIN(dom)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (dom->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if (dom->conn->driver->domainFSThaw) {
+        int ret = dom->conn->driver->domainFSThaw(dom, mountPoint, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(dom->conn);
+    return -1;
+}
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index fe9b497..412192f 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -639,4 +639,10 @@ LIBVIRT_1.1.3 {
         virConnectGetCPUModelNames;
 } LIBVIRT_1.1.1;
 
+LIBVIRT_1.1.5 {
+    global:
+        virDomainFSFreeze;
+        virDomainFSThaw;
+} LIBVIRT_1.1.3;
+
 # .... define new API here using predicted next version number ....

New rules are added in fixup_name in gendispatch.pl to keep the names
FSFreeze and FSThaw. These also use a new ACL permission, 'fs_freeze'.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
---
 src/access/viraccessperm.c   |  2 +-
 src/access/viraccessperm.h   |  6 ++++++
 src/remote/remote_driver.c   |  2 ++
 src/remote/remote_protocol.x | 26 +++++++++++++++++++++++++-
 src/remote_protocol-structs  | 12 ++++++++++++
 src/rpc/gendispatch.pl       |  2 ++
 6 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/src/access/viraccessperm.c b/src/access/viraccessperm.c
index d517c66..92f7366 100644
--- a/src/access/viraccessperm.c
+++ b/src/access/viraccessperm.c
@@ -42,7 +42,7 @@ VIR_ENUM_IMPL(virAccessPermDomain,
               "init_control", "inject_nmi", "send_input", "send_signal",
               "fs_trim", "block_read", "block_write", "mem_read",
               "open_graphics", "open_device", "screenshot",
-              "open_namespace");
+              "open_namespace", "fs_freeze");
 
 VIR_ENUM_IMPL(virAccessPermInterface,
               VIR_ACCESS_PERM_INTERFACE_LAST,
diff --git a/src/access/viraccessperm.h b/src/access/viraccessperm.h
index fdc461b..ab569ec 100644
--- a/src/access/viraccessperm.h
+++ b/src/access/viraccessperm.h
@@ -242,6 +242,12 @@ typedef enum {
      */
     VIR_ACCESS_PERM_DOMAIN_FS_TRIM,      /* Issue TRIM to guest filesystems */
 
+    /**
+     * @desc: Fsfreeze and thaw domain filesystems
+     * @message: Freezing and thawing domain filesystems require authorization
+     */
+    VIR_ACCESS_PERM_DOMAIN_FS_FREEZE,    /* Freeze/thaw guest filesystems */
+
     /* Peeking at guest */
 
     /**
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 7181949..fa12174 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -7013,6 +7013,8 @@ static virDriver remote_driver = {
     .domainMigrateFinish3Params = remoteDomainMigrateFinish3Params, /* 1.1.0 */
     .domainMigrateConfirm3Params = remoteDomainMigrateConfirm3Params, /* 1.1.0 */
     .connectGetCPUModelNames = remoteConnectGetCPUModelNames, /* 1.1.3 */
+    .domainFSFreeze = remoteDomainFSFreeze, /* 1.1.5 */
+    .domainFSThaw = remoteDomainFSThaw, /* 1.1.5 */
 };
 
 static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index f942670..584440d 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -2849,6 +2849,18 @@ struct remote_connect_get_cpu_model_names_ret {
     int ret;
 };
 
+struct remote_domain_fsfreeze_args {
+    remote_nonnull_domain dom;
+    remote_string mountPoint;
+    unsigned int flags;
+};
+
+struct remote_domain_fsthaw_args {
+    remote_nonnull_domain dom;
+    remote_string mountPoint;
+    unsigned int flags;
+};
+
 /*----- Protocol. -----*/
 
 /* Define the program number, protocol version and procedure numbers here. */
@@ -5018,5 +5030,17 @@ enum remote_procedure {
      * @generate: none
      * @acl: connect:read
      */
-    REMOTE_PROC_CONNECT_GET_CPU_MODEL_NAMES = 312
+    REMOTE_PROC_CONNECT_GET_CPU_MODEL_NAMES = 312,
+
+    /**
+     * @generate: both
+     * @acl: domain:fs_freeze
+     */
+    REMOTE_PROC_DOMAIN_FSFREEZE = 313,
+
+    /**
+     * @generate: both
+     * @acl: domain:fs_freeze
+     */
+    REMOTE_PROC_DOMAIN_FSTHAW = 314
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 98d2d5b..b09c9b0 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -2328,6 +2328,16 @@ struct remote_connect_get_cpu_model_names_ret {
        } models;
        int ret;
 };
+struct remote_domain_fsfreeze_args {
+       remote_nonnull_domain dom;
+       remote_string mountPoint;
+       u_int flags;
+};
+struct remote_domain_fsthaw_args {
+       remote_nonnull_domain dom;
+       remote_string mountPoint;
+       u_int flags;
+};
 enum remote_procedure {
        REMOTE_PROC_CONNECT_OPEN = 1,
        REMOTE_PROC_CONNECT_CLOSE = 2,
@@ -2641,4 +2651,6 @@ enum remote_procedure {
        REMOTE_PROC_DOMAIN_CREATE_WITH_FILES = 310,
        REMOTE_PROC_DOMAIN_EVENT_DEVICE_REMOVED = 311,
        REMOTE_PROC_CONNECT_GET_CPU_MODEL_NAMES = 312,
+       REMOTE_PROC_DOMAIN_FSFREEZE = 313,
+       REMOTE_PROC_DOMAIN_FSTHAW = 314,
 };
diff --git a/src/rpc/gendispatch.pl b/src/rpc/gendispatch.pl
index ceb1ad8..0b256f3 100755
--- a/src/rpc/gendispatch.pl
+++ b/src/rpc/gendispatch.pl
@@ -64,6 +64,8 @@ sub fixup_name {
     $name =~ s/Nmi$/NMI/;
     $name =~ s/Pm/PM/;
     $name =~ s/Fstrim$/FSTrim/;
+    $name =~ s/Fsfreeze$/FSFreeze/;
+    $name =~ s/Fsthaw$/FSThaw/;
     $name =~ s/Scsi/SCSI/;
     $name =~ s/Wwn$/WWN/;

Use qemuAgentFSFreeze() and qemuAgentFSThaw(), already implemented for
snapshot quiescing. @mountPoint must be NULL and @flags zero, because the
qemu guest agent doesn't support these arguments so far.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
---
 src/qemu/qemu_driver.c | 142 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ef1359c..4e7cdcc 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15705,6 +15705,146 @@ qemuConnectGetCPUModelNames(virConnectPtr conn,
 }
 
 
+static int
+qemuDomainFSFreeze(virDomainPtr dom,
+                   const char *mountPoint,
+                   unsigned int flags)
+{
+    virQEMUDriverPtr driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    int ret = -1;
+    qemuDomainObjPrivatePtr priv;
+
+    virCheckFlags(0, -1);
+
+    if (mountPoint) {
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+                       _("Specifying mount point "
+                         "is not supported for now"));
+        return -1;
+    }
+
+    if (!(vm = qemuDomObjFromDomain(dom)))
+        goto cleanup;
+
+    priv = vm->privateData;
+
+    if (virDomainFSFreezeEnsureACL(dom->conn, vm->def) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto cleanup;
+    }
+
+    if (!priv->agent) {
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+                       _("QEMU guest agent is not configured"));
+        goto cleanup;
+    }
+
+    if (priv->agentError) {
+        virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
+                       _("QEMU guest agent is not "
+                         "available due to an error"));
+        goto cleanup;
+    }
+
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto endjob;
+    }
+
+    qemuDomainObjEnterAgent(vm);
+    ret = qemuAgentFSFreeze(priv->agent);
+    qemuDomainObjExitAgent(vm);
+
+endjob:
+    if (!qemuDomainObjEndJob(driver, vm))
+        vm = NULL;
+
+cleanup:
+    if (vm)
+        virObjectUnlock(vm);
+    return ret;
+}
+
+
+static int
+qemuDomainFSThaw(virDomainPtr dom,
+                 const char *mountPoint,
+                 unsigned int flags)
+{
+    virQEMUDriverPtr driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    int ret = -1;
+    qemuDomainObjPrivatePtr priv;
+
+    virCheckFlags(0, -1);
+
+    if (mountPoint) {
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+                       _("Specifying mount point "
+                         "is not supported for now"));
+        return -1;
+    }
+
+    if (!(vm = qemuDomObjFromDomain(dom)))
+        goto cleanup;
+
+    priv = vm->privateData;
+
+    if (virDomainFSFreezeEnsureACL(dom->conn, vm->def) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto cleanup;
+    }
+
+    if (!priv->agent) {
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+                       _("QEMU guest agent is not configured"));
+        goto cleanup;
+    }
+
+    if (priv->agentError) {
+        virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
+                       _("QEMU guest agent is not "
+                         "available due to an error"));
+        goto cleanup;
+    }
+
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto endjob;
+    }
+
+    qemuDomainObjEnterAgent(vm);
+    ret = qemuAgentFSThaw(priv->agent);
+    qemuDomainObjExitAgent(vm);
+
+endjob:
+    if (!qemuDomainObjEndJob(driver, vm))
+        vm = NULL;
+
+cleanup:
+    if (vm)
+        virObjectUnlock(vm);
+    return ret;
+}
+
+
 static virDriver qemuDriver = {
     .no = VIR_DRV_QEMU,
     .name = QEMU_DRIVER_NAME,
@@ -15892,6 +16032,8 @@ static virDriver qemuDriver = {
     .domainMigrateFinish3Params = qemuDomainMigrateFinish3Params, /* 1.1.0 */
     .domainMigrateConfirm3Params = qemuDomainMigrateConfirm3Params, /* 1.1.0 */
     .connectGetCPUModelNames = qemuConnectGetCPUModelNames, /* 1.1.3 */
+    .domainFSFreeze = qemuDomainFSFreeze, /* 1.1.5 */
+    .domainFSThaw = qemuDomainFSThaw, /* 1.1.5 */
 };

These are exposed as the domfsfreeze and domfsthaw commands. Although the
API doesn't support specifying a mount point yet, expose the option anyway.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
---
 tools/virsh-domain.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/virsh.pod      |  17 ++++++++
 2 files changed, 125 insertions(+)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 60abd3d..e28ac8f 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10451,6 +10451,102 @@ cleanup:
     return ret;
 }
 
+static const vshCmdInfo info_domfsfreeze[] = {
+    {.name = "help",
+     .data = N_("Freeze domain's mounted filesystems.")
+    },
+    {.name = "desc",
+     .data = N_("Freeze domain's mounted filesystems.")
+    },
+    {.name = NULL}
+};
+
+static const vshCmdOptDef opts_domfsfreeze[] = {
+    {.name = "domain",
+     .type = VSH_OT_DATA,
+     .flags = VSH_OFLAG_REQ,
+     .help = N_("domain name, id or uuid")
+    },
+    {.name = "mountpoint",
+     .type = VSH_OT_DATA,
+     .help = N_("which mount point to fsfreeze")
+    },
+    {.name = NULL}
+};
+
+static bool
+cmdDomFSFreeze(vshControl *ctl, const vshCmd *cmd)
+{
+    virDomainPtr dom = NULL;
+    bool ret = false;
+    const char *mountPoint = NULL;
+    unsigned int flags = 0;
+
+    if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+        return ret;
+
+    if (vshCommandOptStringReq(ctl, cmd, "mountpoint", &mountPoint) < 0)
+        goto cleanup;
+
+    if (virDomainFSFreeze(dom, mountPoint, flags) < 0) {
+        vshError(ctl, _("Unable to freeze filesystems"));
+        goto cleanup;
+    }
+
+    ret = true;
+
+cleanup:
+    virDomainFree(dom);
+    return ret;
+}
+
+static const vshCmdInfo info_domfsthaw[] = {
+    {.name = "help",
+     .data = N_("Thaw domain's mounted filesystems.")
+    },
+    {.name = "desc",
+     .data = N_("Thaw domain's mounted filesystems.")
+    },
+    {.name = NULL}
+};
+
+static const vshCmdOptDef opts_domfsthaw[] = {
+    {.name = "domain",
+     .type = VSH_OT_DATA,
+     .flags = VSH_OFLAG_REQ,
+     .help = N_("domain name, id or uuid")
+    },
+    {.name = "mountpoint",
+     .type = VSH_OT_DATA,
+     .help = N_("which mount point to thaw")
+    },
+    {.name = NULL}
+};
+
+static bool
+cmdDomFSThaw(vshControl *ctl, const vshCmd *cmd)
+{
+    virDomainPtr dom = NULL;
+    bool ret = false;
+    const char *mountPoint = NULL;
+    unsigned int flags = 0;
+
+    if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+        return ret;
+
+    if (vshCommandOptStringReq(ctl, cmd, "mountpoint", &mountPoint) < 0)
+        goto cleanup;
+
+    if (virDomainFSThaw(dom, mountPoint, flags) < 0) {
+        vshError(ctl, _("Unable to thaw filesystems"));
+        goto cleanup;
+    }
+
+    ret = true;
+
+cleanup:
+    virDomainFree(dom);
+    return ret;
+}
+
 const vshCmdDef domManagementCmds[] = {
     {.name = "attach-device",
      .handler = cmdAttachDevice,
@@ -10598,6 +10694,18 @@ const vshCmdDef domManagementCmds[] = {
      .info = info_domdisplay,
      .flags = 0
     },
+    {.name = "domfsfreeze",
+     .handler = cmdDomFSFreeze,
+     .opts = opts_domfsfreeze,
+     .info = info_domfsfreeze,
+     .flags = 0
+    },
+    {.name = "domfsthaw",
+     .handler = cmdDomFSThaw,
+     .opts = opts_domfsthaw,
+     .info = info_domfsthaw,
+     .flags = 0
+    },
     {.name = "domfstrim",
      .handler = cmdDomFSTrim,
      .opts = opts_domfstrim,
diff --git a/tools/virsh.pod b/tools/virsh.pod
index dac9a08..2835696 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -915,6 +915,23 @@ Output a URI which can be used to connect to the graphical display of the
 domain via VNC, SPICE or RDP. If I<--include-password> is specified, the
 SPICE channel password will be included in the URI.
 
+=item B<domfsfreeze> I<domain> [I<--mountpoint mountPoint>]
+
+Freeze all mounted filesystems within a running domain to prepare for
+consistent snapshots. If I<--mountpoint> parameter is specified,
+only one mount point is frozen.
+
+Note that B<snapshot-create> command has a I<--quiesce> option to freeze
+and thaw the filesystems automatically to keep snapshots consistent.
+B<domfsfreeze> command is only needed when a user wants to utilize the
+native snapshot features of storage devices not supported by libvirt yet.
+
+=item B<domfsthaw> I<domain> [I<--mountpoint mountPoint>]
+
+Thaw all mounted filesystems within a running domain, which are frozen
+by domfsfreeze command. If I<--mountpoint> parameter is specified,
+only one mount point is thawed.
+
 =item B<domfstrim> I<domain> [I<--minimum> B<bytes>]
 [I<--mountpoint mountPoint>]

Any comments?

On 11/18/13 11:38, "Tomoki Sekiyama" <tomoki.sekiyama@hds.com> wrote:
Currently FSFreeze and FSThaw are supported by qemu guest agent and they are used internally in snapshot-create command with --quiesce option. However, when users want to utilize the native snapshot feature of storage devices (such as LVM over iSCSI, various enterprise storage systems, etc.), they need to issue fsfreeze command separately from libvirt-driven snapshots. (OpenStack cinder provides these storages' snapshot feature, but it cannot quiesce the guest filesystems automatically for now.)
Although the virDomainQemuGuestAgent() API could be used for this purpose, it depends too much on a specific hypervisor implementation.
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
The APIs have mountPoint and flags arguments that are currently unsupported and reserved for future extension, like the virDomainFSTrim() API. A duplicated FSFreeze results in an error from the qemu guest agent.
---
Tomoki Sekiyama (4):
  Introduce virDomainFSFreeze() public API
  remote: Implement virDomainFSFreeze and virDomainFSThaw
  qemu: Implement virDomainFSFreeze
  virsh: Expose new virDomainFSFreeze and virDomainFSThaw API
 include/libvirt/libvirt.h.in |   8 ++
 src/access/viraccessperm.c   |   2 -
 src/access/viraccessperm.h   |   6 ++
 src/driver.h                 |  12 ++++
 src/libvirt.c                |  92 +++++++++++++++++++++++++++
 src/libvirt_public.syms      |   6 ++
 src/qemu/qemu_driver.c       | 142 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_driver.c   |   2 +
 src/remote/remote_protocol.x |  26 +++++++-
 src/remote_protocol-structs  |  12 ++++
 src/rpc/gendispatch.pl       |   2 +
 tools/virsh-domain.c         | 108 ++++++++++++++++++++++++++++++++
 tools/virsh.pod              |  17 +++++
 13 files changed, 433 insertions(+), 2 deletions(-)

On 11/18/2013 09:38 AM, Tomoki Sekiyama wrote:
Currently FSFreeze and FSThaw are supported by qemu guest agent and they are used internally in snapshot-create command with --quiesce option. However, when users want to utilize the native snapshot feature of storage devices (such as LVM over iSCSI, various enterprise storage systems, etc.), they need to issue fsfreeze command separately from libvirt-driven snapshots. (OpenStack cinder provides these storages' snapshot feature, but it cannot quiesce the guest filesystems automatically for now.)
Although the virDomainQemuGuestAgent() API could be used for this purpose, it depends too much on a specific hypervisor implementation.
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
The APIs have mountPoint and flags arguments that are currently unsupported and reserved for future extension, like the virDomainFSTrim() API. A duplicated FSFreeze results in an error from the qemu guest agent.
Hmm, I just realized this hasn't seen any response in a couple of months. I still haven't looked closely at the thread, but I definitely think we need to add this (or something like it).

Please feel free to ping the list every week or two if you don't seem to be getting a response, rather than letting it languish for a quarter of a year!

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 3/3/14 17:14, "Eric Blake" <eblake@redhat.com> wrote:
On 11/18/2013 09:38 AM, Tomoki Sekiyama wrote:
Currently FSFreeze and FSThaw are supported by qemu guest agent and they are used internally in snapshot-create command with --quiesce option. However, when users want to utilize the native snapshot feature of storage devices (such as LVM over iSCSI, various enterprise storage systems, etc.), they need to issue fsfreeze command separately from libvirt-driven snapshots. (OpenStack cinder provides these storages' snapshot feature, but it cannot quiesce the guest filesystems automatically for now.)
Although the virDomainQemuGuestAgent() API could be used for this purpose, it depends too much on a specific hypervisor implementation.
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
The APIs have mountPoint and flags arguments that are currently unsupported and reserved for future extension, like the virDomainFSTrim() API. A duplicated FSFreeze results in an error from the qemu guest agent.
Hmm, I just realized this hasn't seen any response in a couple of months. I still haven't looked closely at the thread, but I definitely think we need to add this (or something like it).
Thanks. Based on the previous discussion last November, I'm now getting ready to post a "virDomainQuiesce" version, which does fsfreeze and fsthaw in a single API, with a callback event to notify a client that the guest filesystems are frozen, so that the client can register a custom event handler to create a snapshot. It will use the qemu async job mechanism to manage the quiesced state and to exclude the other APIs.
Please feel free to ping the list every week or two if you don't seem to be getting a response, rather than letting it languish for a quarter of a year!
--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
Thanks,
--
Tomoki Sekiyama

On 03/04/2014 08:39 AM, Tomoki Sekiyama wrote:
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
Thanks. Based on the previous discussion last November, I'm now getting ready to post a "virDomainQuiesce" version, which does fsfreeze and fsthaw in a single API,
No - the discussion back in November was questioning whether a callback would work, and the decision was that it was too complicated. Go with two APIs (freeze and thaw) and merely make sure that we lock out any other command that won't work while a freeze is in effect.

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On Tue, Mar 04, 2014 at 10:11:39AM -0700, Eric Blake wrote:
On 03/04/2014 08:39 AM, Tomoki Sekiyama wrote:
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
Thanks. Based on the previous discussion last November, I'm now getting ready to post a "virDomainQuiesce" version, which does fsfreeze and fsthaw in a single API,
No - the discussion back in November was questioning whether a callback would work, and the decision was that it was too complicated. Go with two APIs (freeze and thaw) and merely make sure that we lock out any other command that won't work while a freeze is in effect.
Yep, I don't believe a single API will work, and we should have separate calls. We just need to track the sanity internally to the QEMU driver.

Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

On 3/4/14 12:13, "Daniel P. Berrange" <berrange@redhat.com> wrote:
On Tue, Mar 04, 2014 at 10:11:39AM -0700, Eric Blake wrote:
On 03/04/2014 08:39 AM, Tomoki Sekiyama wrote:
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh domfsfreeze/domfsthaw commands to enable users to freeze and thaw a domain's filesystems cleanly.
Thanks. Based on the previous discussion last November, I'm now getting ready to post a "virDomainQuiesce" version, which does fsfreeze and fsthaw in a single API,
No - the discussion back in November was questioning whether a callback would work, and the decision was that it was too complicated. Go with two APIs (freeze and thaw) and merely make sure that we lock out any other command that won't work while a freeze is in effect.
Yep, I don't believe a single API will work, and we should have separate calls. We just need to track the sanity internally to the QEMU driver.
OK, I will add sanity checking in the FSFreeze and FSThaw APIs.

Thanks,
--
Tomoki Sekiyama
participants (3)
- Daniel P. Berrange
- Eric Blake
- Tomoki Sekiyama