[libvirt] [PATCH] logical: Allow [no]overwrite option for poolBuild

https://bugzilla.redhat.com/show_bug.cgi?id=1373711

When using pool-create --build, or pool-define plus pool-build, to create/define and then build/initialize a logical pool, the 'pvcreate' command can fail if some previous attempt had already been made on the source path. So add support for the --overwrite option in order to force usage of the pvcreate override option "-ff", along with "-y" to answer any prompts with yes.

NB: Part of the process of building the logical pool already involves wiping out the first 512 bytes of each device being added (e.g. the partition table), because pvcreate requires that. So, to a degree, the device being added is already altered. I suspect the knowledge of whether the disk was already in a vg is kept outside the range of those first 512 bytes.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index ca05fe1..9c76156 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -682,13 +682,18 @@ virStorageBackendLogicalBuildPool(virConnectPtr conn ATTRIBUTE_UNUSED,
                                   virStoragePoolObjPtr pool,
                                   unsigned int flags)
 {
-    virCommandPtr vgcmd;
+    virCommandPtr vgcmd = NULL;
     int fd;
     char zeros[PV_BLANK_SECTOR_SIZE];
     int ret = -1;
     size_t i;

-    virCheckFlags(0, -1);
+    virCheckFlags(VIR_STORAGE_POOL_BUILD_OVERWRITE |
+                  VIR_STORAGE_POOL_BUILD_NO_OVERWRITE, ret);
+
+    VIR_EXCLUSIVE_FLAGS_GOTO(VIR_STORAGE_POOL_BUILD_OVERWRITE,
+                             VIR_STORAGE_POOL_BUILD_NO_OVERWRITE,
+                             cleanup);

     memset(zeros, 0, sizeof(zeros));

@@ -731,10 +736,21 @@ virStorageBackendLogicalBuildPool(virConnectPtr conn ATTRIBUTE_UNUSED,
     /*
      * Initialize the physical volume because vgcreate is not
      * clever enough todo this for us :-(
+     *
+     * If this API is called twice for the same device, then because
+     * vgcmd adds the device to the volume group, the pvcreate will
+     * fail since the pv is already part of the vg. Allow use of the
+     * override option in order to overrule!
      */
-    pvcmd = virCommandNewArgList(PVCREATE,
-                                 pool->def->source.devices[i].path,
-                                 NULL);
+    if (flags & VIR_STORAGE_POOL_BUILD_OVERWRITE)
+        pvcmd = virCommandNewArgList(PVCREATE,
+                                     "-ff", "-y",
+                                     pool->def->source.devices[i].path,
+                                     NULL);
+    else
+        pvcmd = virCommandNewArgList(PVCREATE,
+                                     pool->def->source.devices[i].path,
+                                     NULL);
     if (virCommandRun(pvcmd, NULL) < 0) {
         virCommandFree(pvcmd);
         goto cleanup;
--
2.7.4

On Tue, Nov 15, 2016 at 04:55:10PM -0500, John Ferlan wrote:
NACK.

I'm not really convinced that we should do this. I'm not very familiar with LVM, so I tried a couple of use cases. Since a forceful re-creation of a physical volume on a device with pvcreate permanently wipes the volume group metadata, once you're done with the volume group you cannot remove it in a straightforward manner:

[root@f23-a ~]# pvcreate /dev/vdc
  Physical volume "/dev/vdc" successfully created
[root@f23-a ~]# vgcreate myvg /dev/vdb /dev/vdc
  Volume group "myvg" successfully created
[root@f23-a ~]# pvcreate /dev/vdc -ff -y
  WARNING: Forcing physical volume creation on /dev/vdc of volume group "myvg"
  Physical volume "/dev/vdc" successfully created
[root@f23-a ~]# vgremove myvg
  WARNING: Device for PV zConQ1-eSy3-xvQu-5mWv-kVdT-v3yI-LwZBk3 not found or rejected by a filter.
  Volume group "myvg" not found, is inconsistent or has PVs missing.

Now, I also tried removing it forcefully (-f), which worked and is also something pool-delete does, but to me it just doesn't seem right to let the user corrupt the volume group by allowing --overwrite, and thus -ff, to pvcreate. Another thing: if you run pvcreate while the volume group is active, what you get is:

[root@f23-a ~]# pvcreate /dev/vdc -ff -y
  Can't open /dev/vdc exclusively.  Mounted filesystem?

So even -ff isn't always enough. Anyway, since you were rather skeptical about this BZ in your comment, I suggest we close it as WONTFIX, and the test case should either be adjusted or dropped completely, because, as you say, there basically isn't any practical use for this and the only thing you get out of it is a corrupted LVM anyway.

Erik

On 11/16/2016 07:42 AM, Erik Skultety wrote:
Fair enough -

Still, the 'Build' process is doing, in a loop over all source devices:

    dd if=/dev/zero of=/dev/sdN bs=512 count=1
    pvcreate /dev/sdN

Then, once done, vgcreate all the devices:

    vgcreate $name /dev/sdN [/dev/sdX]

The 'dd' is always done, and it's the vgcreate that fails for this bug. So I wonder if the build code should be augmented to get a list of devices already in the vg and either skip, or error on, an attempt to add one again. We have code in virStorageBackendLogicalGetPoolSources to get a list of pv's for all vg's, so we could take that other approach... In fact, if the vg already exists, one would think we shouldn't be building it again. Adding a pv to an existing vg is something left "outside" of libvirt too, IIRC.

John

On Wed, Nov 16, 2016 at 08:58:50AM -0500, John Ferlan wrote:
> Fair enough -
>
> Still the 'Build' process is doing, in a loop for all source devices:
>
>     dd if=/dev/zero of=/dev/sdN bs=512 count=1
>     pvcreate /dev/sdN
Wow, didn't know that. Anyway, I tried partitioning a dummy device and it turned out that lvm only has issues when there are existing partitions on the device, rather than with the existence of a partition table itself, as the commentary in LogicalPoolBuild suggests. So this is also something we should focus on in order to fix it properly.
> Then once done, vgcreate all the devices:
>
>     vgcreate $name /dev/sdN [/dev/sdX]
>
> Where the 'dd' is always done and the vgcreate fails for this bug.
>
> So I wonder if the build code should be augmented to get a list of devices already in the vg and either skip or error if there's an attempt to add it again.
Well, I think an error should occur on an attempt to add a device that is already in a vg to another vg, which we may achieve either by ourselves or by somehow leaving it to the lvm toolset - I don't have a strong preference here.

Erik
> We have code virStorageBackendLogicalGetPoolSources to get a list of pv's for all vg's, so we could take the other approach... In fact, if the vg already exists, one would think we shouldn't be building it again. Adding a pv to an existing vg is something left "outside" of libvirt too IIRC.
>
> John
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list