On 05/26/2015 07:15 AM, Ján Tomko wrote:
On Sat, May 23, 2015 at 09:39:58AM -0400, John Ferlan wrote:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1224018
>
> The disk pool recalculates the pool allocation, capacity, and available
> values each time it processes a newly created disk partition. However,
> there were two issues with doing so. The general process is to read the
> partition table via virStorageBackendDiskReadPartitions and process the
> volume(s). This is called from both the create and refresh backend APIs;
> the only difference is that create passes a specific volume to be
> processed, while refresh processes all volumes (passing NULL for the
> volume value).
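For anyone following along, the call chains in question are roughly as
follows (simplified from memory; error handling and the other work those
functions do is omitted, so the exact shape may differ):

    virStorageBackendDiskRefreshPool(conn, pool)          /* pool refresh */
      -> virStorageBackendDiskReadPartitions(pool, NULL)
           -> virStorageBackendDiskMakeVol(...)           /* every partition */

    virStorageBackendDiskCreateVol(conn, pool, vol)       /* volume create */
      -> virStorageBackendDiskReadPartitions(pool, vol)
           -> virStorageBackendDiskMakeVol(...)           /* partitions not
                                                              matching vol are
                                                              skipped */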
>
> The first issue, via virStorageBackendDiskCreateVol, was that the pool's
> available value was cleared, so when virStorageBackendDiskMakeVol
> processed the (new) volume it ignored any other partitions on the
> disk; thus, after returning, the pool allocation would only include the
> newly added volume.
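The clearing in question is the assignment at the top of
virStorageBackendDiskReadPartitions, visible further down in the diff:

    pool->def->allocation = pool->def->capacity = pool->def->available = 0;

so on the create path only the freshly created partition gets summed back
into the pool totals.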
>
> The second issue is that, while processing a partition, the adjustment
> of the available value depends on the type of partition. For "primary"
> and "logical" partitions, the available value was adjusted; however, for
> "extended" partitions, it wasn't. Since the calling functions
> storageVolCreateXML[From] only adjust the pool available value when
> they determine that the backend code didn't, this resulted in the
> available value being incorrectly adjusted by that code. If a pool
> refresh was executed, the "correct" value showed up.
>
> To fix the first issue, only clear the available value when we're
> called from the refresh path (vol will be NULL). To fix the
> second issue, increment available by 1 so that the calling function
> won't adjust it after we're done. That could leave the appearance of a
> single byte being used by a pool that only has an extended partition,
> but that's better than having it show up as the size of the partition
> until someone refreshes.
It is marginally better because '1' is obviously wrong, but not
adjusting the value at all would be the real fix.
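That is, assuming the caller-side adjustment is dealt with (more on that
below), I'd expect the hunk to simply be something like this (untested
sketch):

    if (STRNEQ(groups[2], "metadata"))
        pool->def->allocation += vol->target.allocation;
    /* an "extended" partition consumes no space itself, so there is
     * nothing to add for it */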
> The refresh path will also adjust by the same
> value, so it'll be consistent.
Two issues deserve two separate patches, especially if the explanation
is that long.
That's fine - looks like an adjustment from a previous patch/bug will
need to be made though... Although if wordiness were a criterion for
the number of patches to generate - I think Laine and I would become the
patch submission leaders really quickly ;-)
>
> Signed-off-by: John Ferlan <jferlan(a)redhat.com>
> ---
> src/storage/storage_backend_disk.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/src/storage/storage_backend_disk.c b/src/storage/storage_backend_disk.c
> index 6394dac..bc14aab 100644
> --- a/src/storage/storage_backend_disk.c
> +++ b/src/storage/storage_backend_disk.c
> @@ -164,8 +164,18 @@ virStorageBackendDiskMakeDataVol(virStoragePoolObjPtr pool,
> return -1;
> }
>
> + /* For "data", adjust the pool's allocation by the size of the volume
> + * For "metadata" (aka "extended" volume), we haven't yet used the space;
> + * however, the calling createVol path will adjust the pool allocation
> + * by the volume allocation if it determines a storage backend doesn't
> + * adjust the pool's allocation value. So bump the allocation to avoid
> + * having the calling code misrepresent the value. The refresh path
> + * would resolve/change the value.
I think the calling code should not be interpreting the value at all -
either it should be the backend's responsibility to update the values
and the caller should not touch it at all, or the caller should only
update them for those backends that are known not to do it themselves.
That way it's clear which code should be responsible for updating the
values.
Sure that's another way around the issue - although I think the disk
backend is the only one affected... I do note sheepdog has some
specific size checks for refresh, but must not need that for the create
and delete paths.
So that means adding checks in Create, CreateXML, and Delete - essentially
a single check to skip the update for the disk backend, since we "know"
it'll do the pool updates itself. The one concern I have there is that
those adjustments were part of commit id '2ac0e647b' and were meant to be
"more generic" (even though they were specific to a disk backend problem),
as opposed to specific check(s) for each (or any) backend which
could/should update the pool values itself on create/delete, since the
"real" size is known at volume create/delete time.
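Something along these lines in the storage driver is what I have in mind
(rough, untested sketch; 'voldef' stands in for whatever the volume
definition variable is called at that point):

    /* Only adjust the pool values for backends that don't maintain
     * them on their own; the disk backend recalculates them while
     * (re)reading the partition table.
     */
    if (pool->def->type != VIR_STORAGE_POOL_DISK) {
        pool->def->allocation += voldef->target.allocation;
        pool->def->available -= voldef->target.allocation;
    }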
> + */
> if (STRNEQ(groups[2], "metadata"))
> pool->def->allocation += vol->target.allocation;
> + else
> + pool->def->allocation++;
> if (vol->source.extents[0].end > pool->def->capacity)
> pool->def->capacity = vol->source.extents[0].end;
>
> @@ -309,7 +319,13 @@ virStorageBackendDiskReadPartitions(virStoragePoolObjPtr pool,
> pool->def->source.devices[0].path,
> NULL);
>
> - pool->def->allocation = pool->def->capacity = pool->def->available = 0;
> + /* The reload/restart/refresh path will recalculate everything;
> + * otherwise, keep allocation as is as capacity and available
> + * will be adjusted
> + */
'refresh path' is ambiguous here - technically we're refreshing a
volume if one was passed. And it's not clear to me from the comment if
everything will be recalculated by virStorageBackendDiskMakeVol, or by
some other function called after this function on the (pool) refresh path.
/* If a volume is passed, virStorageBackendDiskMakeVol only updates the pool
 * allocation for that single volume.
 */
Sure - that works for me too.
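With your comment wording, I'd expect the hunk to end up looking roughly
like this (untested):

    /* If a volume is passed, virStorageBackendDiskMakeVol only updates the pool
     * allocation for that single volume.
     */
    if (vol == NULL)
        pool->def->allocation = 0;
    pool->def->capacity = pool->def->available = 0;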
John