Hi
On 06/08/11 17:01, Osier Yang wrote:
> This is to address BZ#
> https://bugzilla.redhat.com/show_bug.cgi?id=702260,
> though even with this patch the user might still see an error like
> "Unable to deactivate logical volume",
Can you try the attached patch?
(in addition to the upstream lvm fix you mentioned below)
Recent distributions generate udev events *after* the use of
devices, through "watch" rules.
As a result, lvremove/lvchange can race with those events and fail to
remove/deactivate the volume.
I haven't tested the patch, but I was able to fix a similar
lvremove problem by running 'udevadm settle' before it.
So I think it's worth trying.
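
For illustration, the manual workaround amounts to something like the
sketch below (not the attached patch itself): flush the udev event queue
first and only then call lvremove, so lvremove no longer races with udev
workers triggered by "watch" rules. The VG/LV path is a made-up
placeholder and error handling is kept to a minimum.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Let udev finish processing any pending events first ... */
    if (system("udevadm settle") != 0)
        fprintf(stderr, "warning: udevadm settle failed, continuing anyway\n");

    /* ... then the LV should no longer be held open by a udev worker. */
    if (system("lvremove -f /dev/storage_vg/test_lv") != 0) {
        fprintf(stderr, "lvremove failed\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
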
> it could fix the problem if the LV is referred to by other existing
> LVs, allowing the user to remove the LV successfully without seeing an
> error like "Can't remove open logical volume".
> For the error "Unable to deactivate logical volume", libvirt can't do
> more; it's a problem in lvm, see BZ#
> https://bugzilla.redhat.com/show_bug.cgi?id=570359
>
> and the patch applied to upstream lvm to fix it:
> https://www.redhat.com/archives/lvm-devel/2011-May/msg00025.html
This lvm patch fixes only the case where lvremove itself generates the
udev events. You'll still see the problem when the events are generated
elsewhere, for example when a VM finishes using the volume.
--
Jun'ichi Nomura, NEC Corporation
diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index 7d5adf9..2a86f43 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -671,6 +671,8 @@ virStorageBackendLogicalDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
         LVREMOVE, "-f", vol->target.path, NULL
     };
+    virFileWaitForDevices();
+
     if (virRun(cmdargv, NULL) < 0)
         return -1;
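
For context, virFileWaitForDevices() is libvirt's existing helper for
waiting on udev; roughly, it is expected to amount to the self-contained
sketch below, i.e. a best-effort "udevadm settle" before lvremove runs,
which is exactly the idea described above. The function name and the
udevadm path here are assumptions for illustration; the real helper uses
the configure-detected udevadm/udevsettle binary and libvirt's virRun.

#include <stdlib.h>
#include <unistd.h>

void wait_for_devices(void)
{
    /* Assumed path; the real helper uses the configure-detected binary. */
    const char *udevadm = "/sbin/udevadm";

    /* If udevadm is not available, skip the wait silently. */
    if (access(udevadm, X_OK) != 0)
        return;

    /*
     * Best effort only: flush the udev event queue so that events triggered
     * by "watch" rules (e.g. after a VM closes the volume) are processed
     * before lvremove touches the device.  Errors are ignored on purpose.
     */
    (void) system("/sbin/udevadm settle");
}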