[libvirt] [PATCH 0/2] Error out on unsupported formats for vol-wipe

Historically, vol-wipe was created to rewrite a volume with either zeros or some algorithm. However, for non-raw file-based images, rewriting them, or erasing and recreating them, destroys their format. The solution seemed easy -- just delete and rebuild the volume. However, we don't store the flags used during creation, so we can't safely ensure that the new volume will be the same as the original one. Moreover, since commit eec91958b4b0abdbb7ab769f540bcfa72a107c9b we document that it's probably not what users are trying to do. Instead of screwing up the format, let's just disable all the formats that are not supported now.

Martin Kletzander (2):
  storage: Don't delete Ploop volumes twice
  storage: Forbid wiping formatted volume types that are not supported

 src/storage/storage_backend.c | 59 +++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 22 deletions(-)

--
2.9.0

When reinitializing Ploop volumes we also went through the routine of
the normal wipe, effectively removing the root.hds file twice.  Since
we'll hopefully add support for other formats as well, split the
function with a switch into which we can cleanly add formats in the
future.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/storage/storage_backend.c | 49 ++++++++++++++++++++++++-------------------
 1 file changed, 27 insertions(+), 22 deletions(-)

diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c
index 4b0b19c45ca5..eff6a2f581a1 100644
--- a/src/storage/storage_backend.c
+++ b/src/storage/storage_backend.c
@@ -2245,32 +2245,19 @@ virStorageBackendVolWipePloop(virStorageVolDefPtr vol)
     return ret;
 }
 
-int
-virStorageBackendVolWipeLocal(virConnectPtr conn ATTRIBUTE_UNUSED,
-                              virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
-                              virStorageVolDefPtr vol,
-                              unsigned int algorithm,
-                              unsigned int flags)
+static int
+virStorageBackendVolWipeLocalDefault(virStorageVolDefPtr vol,
+                                     unsigned int algorithm)
 {
     int ret = -1, fd = -1;
     const char *alg_char = NULL;
     struct stat st;
     virCommandPtr cmd = NULL;
-    char *path = NULL;
-    char *target_path = vol->target.path;
-
-    virCheckFlags(0, -1);
 
     VIR_DEBUG("Wiping volume with path '%s' and algorithm %u",
               vol->target.path, algorithm);
 
-    if (vol->target.format == VIR_STORAGE_FILE_PLOOP) {
-        if (virAsprintf(&path, "%s/root.hds", vol->target.path) < 0)
-            goto cleanup;
-        target_path = path;
-    }
-
-    fd = open(target_path, O_RDWR);
+    fd = open(vol->target.path, O_RDWR);
     if (fd == -1) {
         virReportSystemError(errno,
                              _("Failed to open storage volume with path '%s'"),
@@ -2327,7 +2314,7 @@ virStorageBackendVolWipeLocal(virConnectPtr conn ATTRIBUTE_UNUSED,
     if (algorithm != VIR_STORAGE_VOL_WIPE_ALG_ZERO) {
         cmd = virCommandNew(SCRUB);
         virCommandAddArgList(cmd, "-f", "-p", alg_char,
-                             target_path, NULL);
+                             vol->target.path, NULL);
 
         if (virCommandRun(cmd, NULL) < 0)
             goto cleanup;
@@ -2346,17 +2333,35 @@ virStorageBackendVolWipeLocal(virConnectPtr conn ATTRIBUTE_UNUSED,
         goto cleanup;
     }
 
-    if (vol->target.format == VIR_STORAGE_FILE_PLOOP)
-        ret = virStorageBackendVolWipePloop(vol);
-
 cleanup:
     virCommandFree(cmd);
-    VIR_FREE(path);
     VIR_FORCE_CLOSE(fd);
     return ret;
 }
 
+int
+virStorageBackendVolWipeLocal(virConnectPtr conn ATTRIBUTE_UNUSED,
+                              virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                              virStorageVolDefPtr vol,
+                              unsigned int algorithm,
+                              unsigned int flags)
+{
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    VIR_DEBUG("Wiping volume with path '%s'", vol->target.path);
+
+    if (vol->target.format == VIR_STORAGE_FILE_PLOOP)
+        ret = virStorageBackendVolWipePloop(vol);
+    else
+        ret = virStorageBackendVolWipeLocalDefault(vol, algorithm);
+
+    return ret;
+}
+
+
 #ifdef GLUSTER_CLI
 int
 virStorageBackendFindGlusterPoolSources(const char *host,
--
2.9.0

On Thu, Jul 14, 2016 at 02:27:40PM +0200, Martin Kletzander wrote:
When reinitializing Ploop volumes we also went through the routine of the normal wipe, effectively removing the root.hds file twice.
The file was wiped with the selected algorithm first (without deletion), then reinitialized to make sure you can delete it via libvirt later. We could get rid of the reinitialization if we make sure libvirt can operate on the volume (after wiping, pretty much only delete makes sense), but removing the actual wiping is wrong.

Jan
Since we'll hopefully add support for other formats as well, split the function with a switch into which we can cleanly add formats in the future.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/storage/storage_backend.c | 49 ++++++++++++++++++++++++------------------- 1 file changed, 27 insertions(+), 22 deletions(-)

On Fri, Jul 15, 2016 at 09:41:11AM +0200, Ján Tomko wrote:
On Thu, Jul 14, 2016 at 02:27:40PM +0200, Martin Kletzander wrote:
When reinitializing Ploop volumes we also went through the routine of the normal wipe, effectively removing the root.hds file twice.
The file was wiped with the selected algorithm first (without deletion), then reinitialized to make sure you can delete it via libvirt later.
You're right, I missed that what I was describing only happened with VIR_STORAGE_VOL_WIPE_ALG_ZERO. Anyway, since the description for vol-wipe is:

"Ensure data previously on a volume is not accessible to future reads"

a wiping algorithm does not really make sense for file-based storage. That's kind of the whole point of this series.
We could get rid of the reinitialization if we make sure libvirt can operate on the volume (after wiping, pretty much only delete makes sense), but removing the actual wiping is wrong.
Jan
Since we'll hopefully add support for other formats as well, split the function with a switch into which we can cleanly add formats in the future.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/storage/storage_backend.c | 49 ++++++++++++++++++++++++------------------- 1 file changed, 27 insertions(+), 22 deletions(-)

On 15/07/16 11:37, Martin Kletzander wrote:
On Fri, Jul 15, 2016 at 09:41:11AM +0200, Ján Tomko wrote:
On Thu, Jul 14, 2016 at 02:27:40PM +0200, Martin Kletzander wrote:
When reinitializing Ploop volumes we also went through the routine of the normal wipe, effectively removing the root.hds file twice.
The file was wiped with the selected algorithm first (without deletion), then reinitialized to make sure you can delete it via libvirt later.
You're right, I missed that what I was describing only happened with VIR_STORAGE_VOL_WIPE_ALG_ZERO. Anyway since the description for vol-wipe is:
"Ensure data previously on a volume is not accessible to future reads"
wiping algorithm does not really make sense for file-based storage. That's kind of the whole point of this series.

Actually, virStorageBackendVolWipePloop only deletes root.hds and DiskDescriptor.xml, so the data on the block device can still be accessible. To prevent this we used a little path manipulation: we wiped root.hds first and only then called virStorageBackendVolWipePloop. It is incorrect to call only this function for ploop.
We could get rid of the reinitialization if we make sure libvirt can operate on the volume (after wiping, pretty much only delete makes sense), but removing the actual wiping is wrong.
Jan
Since we'll hopefully add support for other formats as well, split the function with a switch into which we can cleanly add formats in the future.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/storage/storage_backend.c | 49 ++++++++++++++++++++++++------------------- 1 file changed, 27 insertions(+), 22 deletions(-)
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

On Fri, Jul 15, 2016 at 02:51:21PM +0300, Olga Krishtal wrote:
On 15/07/16 11:37, Martin Kletzander wrote:
On Fri, Jul 15, 2016 at 09:41:11AM +0200, Ján Tomko wrote:
On Thu, Jul 14, 2016 at 02:27:40PM +0200, Martin Kletzander wrote:
When reinitializing Ploop volumes we also went through the routine of the normal wipe, effectively removing the root.hds file twice.
The file was wiped with the selected algorithm first (without deletion), then reinitialized to make sure you can delete it via libvirt later.
You're right, I missed that what I was describing only happened with VIR_STORAGE_VOL_WIPE_ALG_ZERO. Anyway since the description for vol-wipe is:
"Ensure data previously on a volume is not accessible to future reads"
wiping algorithm does not really make sense for file-based storage. That's kind of the whole point of this series.

Actually, virStorageBackendVolWipePloop only deletes root.hds and DiskDescriptor.xml, so the data on the block device can still be accessible. To prevent this we used a little path manipulation: we wiped root.hds first and only then called virStorageBackendVolWipePloop. It is incorrect to call only this function for ploop.
We could get rid of the reinitialization if we make sure libvirt can operate on the volume (after wiping, pretty much only delete makes sense), but removing the actual wiping is wrong.
Oh, I totally misunderstood how the volume is stored, then. Thanks for the info; I'll try to repost this so that it does not just fix this bug but, hopefully, suits most people as well.

Have a nice day,
Martin

Until now we allowed that to happen; however, the only thing we
supported was either rewriting the file or truncating it.  That,
however, doesn't keep the format of that file, so QCOWs, VDIs and all
the others just became RAW with arbitrary size.  Not to mention any
domain using such a volume could not start anymore.  Instead of
dealing with the recreation of every single possible file format that
we have (and possibly failing due to create_tool capabilities), just
forbid it for now.  We even state in our documentation that it has no
value for file-backed volumes.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=868771

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/storage/storage_backend.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c
index eff6a2f581a1..600539967430 100644
--- a/src/storage/storage_backend.c
+++ b/src/storage/storage_backend.c
@@ -2353,6 +2353,16 @@ virStorageBackendVolWipeLocal(virConnectPtr conn ATTRIBUTE_UNUSED,
 
     VIR_DEBUG("Wiping volume with path '%s'", vol->target.path);
 
+    if (vol->type == VIR_STORAGE_VOL_FILE &&
+        vol->target.format != VIR_STORAGE_FILE_PLOOP &&
+        vol->target.format != VIR_STORAGE_FILE_RAW) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+                       _("Wiping file volume with format '%s' is unsupported, "
+                         "consider deleting and re-creating the volume"),
+                       virStorageFileFormatTypeToString(vol->target.format));
+        return -1;
+    }
+
     if (vol->target.format == VIR_STORAGE_FILE_PLOOP)
         ret = virStorageBackendVolWipePloop(vol);
     else
--
2.9.0

On Thu, Jul 14, 2016 at 02:27:41PM +0200, Martin Kletzander wrote:
Until now we allowed that to happen, however the only thing we supported was either rewriting the file or truncating it. That however doesn't keep the format of that file, so QCOWs, VDIs and all others just became RAW with arbitrary size.
Yes, wiping wipes the format as well. Nothing wrong with that.
Not to mention any domain using such a volume could not start anymore. Instead of dealing with the recreation of every single possible file that we have (and possibly failing due to create_tool capabilities), just forbid it for now.
We even state in our documentation that it has no value for file-backed volumes.
Where?

Also note, that depending on the actual volume representation, this call may not really overwrite the physical location of the volume. For instance, files stored journaled, log structured, copy-on-write, versioned, and network file systems are known to be problematic.

http://libvirt.org/html/libvirt-libvirt-storage.html#virStorageVolWipe

This only says that it might not work, not that it's completely useless. I think we have a precedent for supporting marginally useful features by still supporting qcow encryption.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=868771
NACK to breaking functionality in order to resolve a 4-year-old synthetic QE-filed bug. I suggest WONTFIX or NOTABUG.

Jan

On Fri, Jul 15, 2016 at 09:46:50AM +0200, Ján Tomko wrote:
On Thu, Jul 14, 2016 at 02:27:41PM +0200, Martin Kletzander wrote:
Until now we allowed that to happen, however the only thing we supported was either rewriting the file or truncating it. That however doesn't keep the format of that file, so QCOWs, VDIs and all others just became RAW with arbitrary size.
Yes, wiping wipes the format as well. Nothing wrong with that.
It is not?  Even though QEMU will not start?  Also consider this:

  $ virsh vol-info asdf.img default
  Name:           asdf.img
  Type:           file
  Capacity:       10.00 GiB
  Allocation:     196.00 KiB

  $ virsh vol-wipe asdf.img default
  Vol asdf.img wiped

  $ virsh vol-info asdf.img default
  Name:           asdf.img
  Type:           file
  Capacity:       196.00 KiB
  Allocation:     196.00 KiB

Does that seem right?
Not to mention any domain using such a volume could not start anymore. Instead of dealing with the recreation of every single possible file that we have (and possibly failing due to create_tool capabilities), just forbid it for now.
We even state in our documentation that it has no value for file-backed volumes.
Where?
OK, my bad, can't find it. Anyway, it should be there.

"Ensure data previously on a volume is not accessible to future reads."

for me, personally, means that you cannot get any data back by issuing read() on the volume. So if it is a file, I think:

  truncate(fd, 0);
  truncate(fd, size);

is enough. A wiping algorithm makes sense for partitions and other block (or basically non-file-backed) storage.
Also note, that depending on the actual volume representation, this call may not really overwrite the physical location of the volume. For instance, files stored journaled, log structured, copy-on-write, versioned, and network file systems are known to be problematic.
http://libvirt.org/html/libvirt-libvirt-storage.html#virStorageVolWipe
This only says that it might not work, not that it's completely useless.
I think we have a precedent for supporting marginally useful features by still supporting qcow encryption.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=868771
NACK to breaking functionality in order to resolve a 4-year-old synthetic QE-filed bug.
How about the fact that it boils my blood (I have no idea how to say "pisses me off" politely) when I see that we have an API not doing its one job correctly and we can fix it with an almost trivial patch? Could you elaborate on what functionality is being broken here?
I suggest WONTFIX or NOTABUG.
No, it is a bug and I don't see a reason why we wouldn't fix it.
Jan
participants (3):
- Ján Tomko
- Martin Kletzander
- Olga Krishtal