[libvirt] [PATCH] storage: Deactivate lv before removing it

This is to address BZ# https://bugzilla.redhat.com/show_bug.cgi?id=702260. Even with this patch the user might still see an error like "Unable to deactivate logical volume", but it fixes the problem when the lv is referred to by other existing LVs, allowing the user to remove the lv successfully without seeing an error like "Can't remove open logical volume".

For the error "Unable to deactivate logical volume", libvirt can't do more; it's a problem of lvm, see BZ#: https://bugzilla.redhat.com/show_bug.cgi?id=570359

And the patch applied to upstream lvm to fix it:
https://www.redhat.com/archives/lvm-devel/2011-May/msg00025.html
---
 configure.ac                          |  4 ++++
 src/storage/storage_backend_logical.c | 30 ++++++++++++++++++++++++------
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/configure.ac b/configure.ac
index 7982e21..5c2eeb8 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1666,6 +1666,7 @@ if test "$with_storage_lvm" = "yes" || test "$with_storage_lvm" = "check"; then
   AC_PATH_PROG([PVREMOVE], [pvremove], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([VGREMOVE], [vgremove], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([LVREMOVE], [lvremove], [], [$PATH:/sbin:/usr/sbin])
+  AC_PATH_PROG([LVCHANGE], [lvchange], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([VGCHANGE], [vgchange], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([VGSCAN], [vgscan], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([PVS], [pvs], [], [$PATH:/sbin:/usr/sbin])
@@ -1679,6 +1680,7 @@ if test "$with_storage_lvm" = "yes" || test "$with_storage_lvm" = "check"; then
   if test -z "$PVREMOVE" ; then AC_MSG_ERROR([We need pvremove for LVM storage driver]) ; fi
   if test -z "$VGREMOVE" ; then AC_MSG_ERROR([We need vgremove for LVM storage driver]) ; fi
   if test -z "$LVREMOVE" ; then AC_MSG_ERROR([We need lvremove for LVM storage driver]) ; fi
+  if test -z "$LVCHANGE" ; then AC_MSG_ERROR([We need lvchange for LVM storage driver]) ; fi
   if test -z "$VGCHANGE" ; then AC_MSG_ERROR([We need vgchange for LVM storage driver]) ; fi
   if test -z "$VGSCAN" ; then AC_MSG_ERROR([We need vgscan for LVM storage driver]) ; fi
   if test -z "$PVS" ; then AC_MSG_ERROR([We need pvs for LVM storage driver]) ; fi
@@ -1691,6 +1693,7 @@ if test "$with_storage_lvm" = "yes" || test "$with_storage_lvm" = "check"; then
   if test -z "$PVREMOVE" ; then with_storage_lvm=no ; fi
   if test -z "$VGREMOVE" ; then with_storage_lvm=no ; fi
   if test -z "$LVREMOVE" ; then with_storage_lvm=no ; fi
+  if test -z "$LVCHANGE" ; then with_storage_lvm=no ; fi
   if test -z "$VGCHANGE" ; then with_storage_lvm=no ; fi
   if test -z "$VGSCAN" ; then with_storage_lvm=no ; fi
   if test -z "$PVS" ; then with_storage_lvm=no ; fi
@@ -1708,6 +1711,7 @@ if test "$with_storage_lvm" = "yes" || test "$with_storage_lvm" = "check"; then
   AC_DEFINE_UNQUOTED([PVREMOVE],["$PVREMOVE"],[Location of pvremove program])
   AC_DEFINE_UNQUOTED([VGREMOVE],["$VGREMOVE"],[Location of vgremove program])
   AC_DEFINE_UNQUOTED([LVREMOVE],["$LVREMOVE"],[Location of lvremove program])
+  AC_DEFINE_UNQUOTED([LVCHANGE],["$LVCHANGE"],[Location of lvchange program])
   AC_DEFINE_UNQUOTED([VGCHANGE],["$VGCHANGE"],[Location of vgchange program])
   AC_DEFINE_UNQUOTED([VGSCAN],["$VGSCAN"],[Location of vgscan program])
   AC_DEFINE_UNQUOTED([PVS],["$PVS"],[Location of pvs program])

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index 4de5442..03d7321 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -667,14 +667,32 @@
 virStorageBackendLogicalDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
                                   virStorageVolDefPtr vol,
                                   unsigned int flags ATTRIBUTE_UNUSED)
 {
-    const char *cmdargv[] = {
-        LVREMOVE, "-f", vol->target.path, NULL
-    };
+    int ret = -1;
+    virCommandPtr lvchange_cmd = NULL;
+    virCommandPtr lvremove_cmd = NULL;

-    if (virRun(cmdargv, NULL) < 0)
-        return -1;
+    lvchange_cmd = virCommandNewArgList(LVCHANGE,
+                                        "-an",
+                                        vol->target.path,
+                                        NULL);

-    return 0;
+    if (virCommandRun(lvchange_cmd, NULL) < 0)
+        goto cleanup;
+
+    lvremove_cmd = virCommandNewArgList(LVREMOVE,
+                                        "-f",
+                                        vol->target.path,
+                                        NULL);
+
+    if (virCommandRun(lvremove_cmd, NULL) < 0)
+        goto cleanup;
+
+    ret = 0;
+
+cleanup:
+    virCommandFree(lvchange_cmd);
+    virCommandFree(lvremove_cmd);
+    return ret;
 }
-- 
1.7.4
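[Editorial note: for readers skimming the archive, here is a minimal standalone sketch of the deactivate-then-remove sequence the patch implements. It is illustrative only: the volume path is made up, system(3) stands in for libvirt's virCommandRun(), and error handling is reduced to exit codes.]

/* Sketch: deactivate an LV, then remove it, mirroring the patch's logic.
 * Compile with: cc -o lvdel lvdel.c (needs lvchange/lvremove in PATH). */
#include <stdio.h>
#include <stdlib.h>

static int run(const char *cmd)
{
    int status = system(cmd);     /* the patch uses virCommandRun() instead */
    if (status != 0)
        fprintf(stderr, "command failed: %s\n", cmd);
    return status;
}

int main(void)
{
    /* /dev/vg_pool/lv_vol is a hypothetical path, not from the patch. */
    if (run("lvchange -an /dev/vg_pool/lv_vol") != 0)
        return EXIT_FAILURE;      /* could not deactivate: LV still open */
    return run("lvremove -f /dev/vg_pool/lv_vol") == 0 ? EXIT_SUCCESS
                                                       : EXIT_FAILURE;
}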

Hi

On 06/08/11 17:01, Osier Yang wrote:
This is to address BZ# https://bugzilla.redhat.com/show_bug.cgi?id=702260. Even with this patch the user might still see an error like "Unable to deactivate logical volume",
Can you try the attached patch? (in addition to the upstream lvm fix you mentioned below)

Recent distributions generate udev events *after* the use of devices, through "watch" rules. As a result, lvremove/lvchange can race with them and fail to remove/deactivate the volume.

I haven't tested the patch, but I fixed a similar problem with lvremove by putting 'udevadm settle' before it, so I think it's worth trying.
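[Editorial note: to make the race concrete, here is a hedged sketch of the trigger described above. udev "watch" rules use inotify close-after-write notifications, so merely opening an LV read-write and closing it fires a "change" event, and an lvremove/lvchange racing with the resulting udev processing can find the device still open. The device path is hypothetical.]

/* Sketch: any close of a read-write fd on the LV (e.g. when a VM exits)
 * trips udev's "watch" rule (inotify IN_CLOSE_WRITE), which queues a
 * "change" event; udev workers may then hold the device open briefly. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/vg_test/lv_test", O_RDWR);  /* hypothetical path */
    if (fd < 0)
        return 1;
    close(fd);  /* -> IN_CLOSE_WRITE -> udev "change" event -> race window */
    return 0;
}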
but it fixes the problem when the lv is referred to by other existing LVs, allowing the user to remove the lv successfully without seeing an error like "Can't remove open logical volume".
For the error "Unable to deactivate logical volume", libvirt can't do more; it's a problem of lvm, see BZ#: https://bugzilla.redhat.com/show_bug.cgi?id=570359
And the patch applied to upstream lvm to fix it: https://www.redhat.com/archives/lvm-devel/2011-May/msg00025.html
This lvm patch fixes only the case where lvremove itself generates udev events. You'll still see the problem, as the udev events will be generated when a VM finishes using the volume, for example.

-- 
Jun'ichi Nomura, NEC Corporation

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index 7d5adf9..2a86f43 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -671,6 +671,8 @@ virStorageBackendLogicalDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
         LVREMOVE, "-f", vol->target.path, NULL
     };
 
+    virFileWaitForDevices();
+
     if (virRun(cmdargv, NULL) < 0)
         return -1;
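[Editorial note: virFileWaitForDevices() in libvirt's util code is, to the best of my reading, essentially a wrapper that runs 'udevadm settle' (or the older udevsettle) and ignores failures. A rough standalone approximation, with that assumption made explicit:]

/* Approximation of virFileWaitForDevices(): block until udev has drained
 * its event queue so device nodes are in a settled state. The real libvirt
 * helper differs in detail; this only illustrates the idea. */
#include <stdlib.h>

static void wait_for_devices(void)
{
    /* Best-effort: udevadm may be absent on old systems; ignore errors. */
    (void)system("udevadm settle");
}

int main(void)
{
    wait_for_devices();   /* call before lvremove/lvchange in real code */
    return 0;
}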

On 06/09/2011 08:33 AM, Jun'ichi Nomura wrote:
Hi
On 06/08/11 17:01, Osier Yang wrote:
This is to address BZ# https://bugzilla.redhat.com/show_bug.cgi?id=702260. Even with this patch the user might still see an error like "Unable to deactivate logical volume",
Can you try the attached patch? (in addition to the upstream lvm fix you mentioned below)
Recent distributions generate udev events *after* the use of devices, through "watch" rules. As a result, lvremove/lvchange can race with them and fail to remove/deactivate the volume.
I haven't tested the patch, but I fixed a similar problem with lvremove by putting 'udevadm settle' before it, so I think it's worth trying.
but it fixes the problem when the lv is referred to by other existing LVs, allowing the user to remove the lv successfully without seeing an error like "Can't remove open logical volume".
For the error "Unable to deactivate logical volume", libvirt can't do more; it's a problem of lvm, see BZ#: https://bugzilla.redhat.com/show_bug.cgi?id=570359
And the patch applied to upstream lvm to fix it: https://www.redhat.com/archives/lvm-devel/2011-May/msg00025.html
This lvm patch fixes only the case where lvremove itself generates udev events. You'll still see the problem, as the udev events will be generated when a VM finishes using the volume, for example.
The patch looks better than using "lvchange -an"; I will try it first. Thanks.

Regards,
Osier

On 06/09/2011 08:33 AM, Jun'ichi Nomura wrote:
Hi
On 06/08/11 17:01, Osier Yang wrote:
This is to address BZ# https://bugzilla.redhat.com/show_bug.cgi?id=702260. Even with this patch the user might still see an error like "Unable to deactivate logical volume",
Can you try the attached patch? (in addition to the upstream lvm fix you mentioned below)
Recent distributions generate udev events *after* the use of devices, through "watch" rules. As a result, lvremove/lvchange can race with them and fail to remove/deactivate the volume.
I haven't tested the patch, but I fixed a similar problem with lvremove by putting 'udevadm settle' before it, so I think it's worth trying.
but it fixes the problem when the lv is referred to by other existing LVs, allowing the user to remove the lv successfully without seeing an error like "Can't remove open logical volume".
For the error "Unable to deactivate logical volume", libvirt can't do more; it's a problem of lvm, see BZ#: https://bugzilla.redhat.com/show_bug.cgi?id=570359
And the patch applied to upstream lvm to fix it: https://www.redhat.com/archives/lvm-devel/2011-May/msg00025.html
This lvm patch fixes only the case where lvremove itself generates udev events. You'll still see the problem, as the udev events will be generated when a VM finishes using the volume, for example.
I'm worried that using "udevadm settle" will introduce some delays, but it looks to me like this is the only workable fix for the bug. As for "lvchange -an $vol; lvremove $vol", it could encounter a problem similar to the one "lvremove -f" has, since you can still get udev processing events while doing "lvchange -an".

Does anybody have thoughts on this? If not, I think Jun'ichi's patch is the right fix, and I will push it.

Thanks,
Osier
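[Editorial note: for clarity, this is how virStorageBackendLogicalDeleteVol() in src/storage/storage_backend_logical.c would read with Jun'ichi's patch applied, reconstructed from the diffs above; the "static int" return type is assumed from surrounding libvirt convention.]

static int
virStorageBackendLogicalDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
                                  virStorageVolDefPtr vol,
                                  unsigned int flags ATTRIBUTE_UNUSED)
{
    const char *cmdargv[] = {
        LVREMOVE, "-f", vol->target.path, NULL
    };

    /* Flush pending udev events first, so the LV is no longer held open
     * by udev workers triggered by "watch" rules. */
    virFileWaitForDevices();

    if (virRun(cmdargv, NULL) < 0)
        return -1;

    return 0;
}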