[libvirt] [PATCH v2] storage: Attempt to refresh volume after successful wipe volume

https://bugzilla.redhat.com/show_bug.cgi?id=1270709

When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize. For a raw file volume, a wipe could truncate the file, and a
follow-up volume resize of the capacity may fail because the volume target
allocation isn't updated to reflect the wipe activity.

Since the documentation doesn't mention this side effect of the zero
algorithm for a raw file volume, update the various documentation to
describe the results.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
v1: http://www.redhat.com/archives/libvir-list/2015-December/msg00085.html

Changes since v1:
 * Use the preferred call format from review
 * Cause error if refreshVol fails
 * Add wording to docs regarding zero wipe algorithm and truncating the
   raw file.

 src/libvirt-storage.c        | 9 +++++++--
 src/storage/storage_driver.c | 9 ++++++++-
 tools/virsh.pod              | 5 +++++
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/src/libvirt-storage.c b/src/libvirt-storage.c
index 66dd9f0..62bb999 100644
--- a/src/libvirt-storage.c
+++ b/src/libvirt-storage.c
@@ -1725,7 +1725,9 @@ virStorageVolDelete(virStorageVolPtr vol,
  * @vol: pointer to storage volume
  * @flags: extra flags; not used yet, so callers should always pass 0
  *
- * Ensure data previously on a volume is not accessible to future reads
+ * Ensure data previously on a volume is not accessible to future reads.
+ * Use of this function is equivalent to calling virStorageVolWipePattern
+ * with the VIR_STORAGE_VOL_WIPE_ALG_ZERO for the algorithm.
  *
  * Returns 0 on success, or -1 on error
  */
@@ -1766,7 +1768,10 @@ virStorageVolWipe(virStorageVolPtr vol,
  * @flags: future flags, use 0 for now
  *
  * Similar to virStorageVolWipe, but one can choose
- * between different wiping algorithms.
+ * between different wiping algorithms. When choosing the
+ * zero algorithm (VIR_STORAGE_VOL_WIPE_ALG_ZERO), it is
+ * possible the target storage backend will truncate the
+ * file rather than writing a stream of zeroes.
  *
  * Returns 0 on success, or -1 on error.
  */
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index bbf21f6..2531a88 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -2436,7 +2436,14 @@ storageVolWipePattern(virStorageVolPtr obj,
         goto cleanup;
     }

-    ret = backend->wipeVol(obj->conn, pool, vol, algorithm, flags);
+    if (backend->wipeVol(obj->conn, pool, vol, algorithm, flags) < 0)
+        goto cleanup;
+
+    if (backend->refreshVol &&
+        backend->refreshVol(obj->conn, pool, vol) < 0)
+        goto cleanup;
+
+    ret = 0;

 cleanup:
     virStoragePoolObjUnlock(pool);
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 21ae4a3..7f6dc8d 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -3509,6 +3509,11 @@ B<Supported algorithms>

 B<Note>: The availability of algorithms may be limited by the version of
 the C<scrub> binary installed on the host.

+B<Note>: Use of the zero algorithm for some storage backends may result
+in the truncation of the target file instead of writing all zeroes to the
+file. The Storage Driver will refresh the volume allocation and capacity
+after successful volume wipe completion.
+
 =item B<vol-dumpxml> [I<--pool> I<pool-or-uuid>] I<vol-name-or-key-or-path>

 Output the volume information as an XML dump to stdout.
-- 
2.5.0

On Tue, Dec 15, 2015 at 03:03:33PM -0500, John Ferlan wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1270709
>
> When a volume wipe is successful, perform a volume refresh afterwards to
> update any volume data that may be used in future volume commands, such as
> volume resize. For a raw file volume, a wipe could truncate the file, and a
> follow-up volume resize of the capacity may fail because the volume target
> allocation isn't updated to reflect the wipe activity.
>
> Since the documentation doesn't mention this side effect of the zero
> algorithm for a raw file volume, update the various documentation to
> describe the results.
Oh yes, side effects, we have many of those regarding volume wiping. For
this one, I think you chose the best solution, so even though there are
some other hidden things, at least this is cleaned up a bit now.

ACK
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

On Wed, Dec 16, 2015 at 14:28:03 +0100, Martin Kletzander wrote:
> On Tue, Dec 15, 2015 at 03:03:33PM -0500, John Ferlan wrote:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1270709
> >
> > When a volume wipe is successful, perform a volume refresh afterwards to
> > update any volume data that may be used in future volume commands, such
> > as volume resize. For a raw file volume, a wipe could truncate the file,
> > and a follow-up volume resize of the capacity may fail because the volume
> > target allocation isn't updated to reflect the wipe activity.
> >
> > Since the documentation doesn't mention this side effect of the zero
> > algorithm for a raw file volume, update the various documentation to
> > describe the results.
Since the volume format is not really considered while wiping, this will
also have various effects on qcow and other storage formats.
> Oh yes, side effects, we have many of those regarding volume wiping. For
> this one, I think you chose the best solution, so even though there are
> some other hidden things, at least this is cleaned up a bit now.
Similarly, a sparse file will be filled up to the maximum once you use one
of the different algorithms that overwrite the file. Additionally, the
storage format will change in that case.

Peter

On Tue, Dec 15, 2015 at 03:03:33PM -0500, John Ferlan wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1270709
>
> When a volume wipe is successful, perform a volume refresh afterwards to
> update any volume data that may be used in future volume commands, such as
> volume resize. For a raw file volume, a wipe could truncate the file, and a
> follow-up volume resize of the capacity may fail because the volume target
> allocation isn't updated to reflect the wipe activity.
>
> Since the documentation doesn't mention this side effect of the zero
> algorithm for a raw file volume, update the various documentation to
> describe the results.
The documentation does not belong in this patch. Also, we could intentionally keep it vague so that we don't have to commit to that behavior.
> Signed-off-by: John Ferlan <jferlan@redhat.com>
> ---
> v1: http://www.redhat.com/archives/libvir-list/2015-December/msg00085.html
>
> Changes since v1:
>  * Use the preferred call format from review
>  * Cause error if refreshVol fails
If my patch adjusting the return value gets pushed before this one:
https://www.redhat.com/archives/libvir-list/2015-December/msg00467.html
that change is just cosmetic. Otherwise, I don't think a patch adding
refreshVol should be changing the return value.

Jan

On 12/16/2015 08:48 AM, Ján Tomko wrote:
> On Tue, Dec 15, 2015 at 03:03:33PM -0500, John Ferlan wrote:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1270709
> >
> > When a volume wipe is successful, perform a volume refresh afterwards to
> > update any volume data that may be used in future volume commands, such
> > as volume resize. For a raw file volume, a wipe could truncate the file,
> > and a follow-up volume resize of the capacity may fail because the volume
> > target allocation isn't updated to reflect the wipe activity.
> >
> > Since the documentation doesn't mention this side effect of the zero
> > algorithm for a raw file volume, update the various documentation to
> > describe the results.
>
> The documentation does not belong in this patch. Also, we could
> intentionally keep it vague so that we don't have to commit to that
> behavior.
Adding the documentation was a reaction to your review of v1:
http://www.redhat.com/archives/libvir-list/2015-December/msg00464.html
where you queried whether we should update the documentation "to reflect
that there might not be any pass over the old data".

Whether the doc changes are kept or not then becomes, I suppose,
"consensus based". I see value in describing what's done, and I see value
in being vague enough that we could change to do something else in the
future. The question thus remains: should the current truncate be
considered a bug, or a happy consequence/feature?
> > Signed-off-by: John Ferlan <jferlan@redhat.com>
> > ---
> > v1: http://www.redhat.com/archives/libvir-list/2015-December/msg00085.html
> >
> > Changes since v1:
> >  * Use the preferred call format from review
> >  * Cause error if refreshVol fails
>
> If my patch adjusting the return value gets pushed before this one:
> https://www.redhat.com/archives/libvir-list/2015-December/msg00467.html
> that change is just cosmetic.
yep.
> Otherwise, I don't think a patch adding refreshVol should be changing the
> return value.
So again, your initial review says:

    More readable as: if (func() < 0) goto cleanup;

which is what I followed. Other than the value of ret being -errno on
fdatasync, is the objection based solely on wanting a separate patch to
handle ret differently for the wipeVol call, and then a patch for
refreshVol? It seemed more straightforward to do it all at once, but if
you want two patches, that's not a problem. I just want to be clear
first...

John

On Wed, Dec 16, 2015 at 09:19:07AM -0500, John Ferlan wrote:
> On 12/16/2015 08:48 AM, Ján Tomko wrote:
> > On Tue, Dec 15, 2015 at 03:03:33PM -0500, John Ferlan wrote:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1270709
> > >
> > > When a volume wipe is successful, perform a volume refresh afterwards
> > > to update any volume data that may be used in future volume commands,
> > > such as volume resize. For a raw file volume, a wipe could truncate
> > > the file, and a follow-up volume resize of the capacity may fail
> > > because the volume target allocation isn't updated to reflect the
> > > wipe activity.
> > >
> > > Since the documentation doesn't mention this side effect of the zero
> > > algorithm for a raw file volume, update the various documentation to
> > > describe the results.
> >
> > The documentation does not belong in this patch. Also, we could
> > intentionally keep it vague so that we don't have to commit to that
> > behavior.
>
> Adding the documentation was a reaction to your review of v1:
> http://www.redhat.com/archives/libvir-list/2015-December/msg00464.html
> where you queried whether we should update the documentation "to reflect
> that there might not be any pass over the old data".
I was thinking out loud and I did not mean that the documentation changes should be a part of this patch. Sorry if I was not clear enough.
> Whether the doc changes are kept or not I suppose then becomes "consensus
> based". I see value in describing what's done and I see value in being
> vague enough so that we could change to do something else in the future.
>
> The question thus remains, should the current truncate be considered a
> bug? Or a happy consequence/feature?
I have a slight preference for treating it as a feature and not adjusting the docs.
> > > Signed-off-by: John Ferlan <jferlan@redhat.com>
> > > ---
> > > v1: http://www.redhat.com/archives/libvir-list/2015-December/msg00085.html
> > >
> > > Changes since v1:
> > >  * Use the preferred call format from review
> > >  * Cause error if refreshVol fails
> >
> > If my patch adjusting the return value gets pushed before this one:
> > https://www.redhat.com/archives/libvir-list/2015-December/msg00467.html
> > that change is just cosmetic.
>
> yep.
>
> > Otherwise, I don't think a patch adding refreshVol should be changing
> > the return value.
>
> So again, your initial review says:
>
>     More readable as: if (func() < 0) goto cleanup;
>
> which is what I followed. Other than the value of ret being -errno on
> fdatasync, is the objection based solely on wanting a separate patch to
> handle ret differently for the wipeVol call, and then a patch for
> refreshVol? It seemed more straightforward to do it all at once, but if
> you want two patches, that's not a problem. I just want to be clear
> first...
The patch I linked adjusts the return value for the only code path that
did not follow the 0/-1 convention. So: ACK to the non-documentation hunk
of this patch, in its entirety. I would prefer if you could wait for my
patch adjusting the return values before pushing this one, but that's not
a hard prerequisite.

Jan

On 12/15/2015 03:03 PM, John Ferlan wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1270709
>
> When a volume wipe is successful, perform a volume refresh afterwards to
> update any volume data that may be used in future volume commands, such as
> volume resize. For a raw file volume, a wipe could truncate the file, and a
> follow-up volume resize of the capacity may fail because the volume target
> allocation isn't updated to reflect the wipe activity.
>
> Since the documentation doesn't mention this side effect of the zero
> algorithm for a raw file volume, update the various documentation to
> describe the results.
As requested, I reviewed Jan's series, saw his push, adjusted this to be
just the storage_driver.c changes, and pushed.

Tks,
John
participants (4):
- John Ferlan
- Ján Tomko
- Martin Kletzander
- Peter Krempa