[libvirt] [PATCH v3 0/4] Some logical pool/volume changes

v2: http://www.redhat.com/archives/libvir-list/2016-January/msg01249.html

Adjustments here:

Patches 1-3 are based off of review comments from patch 2 of v2. The changes
for patch 2 of v2 are dropped.

Patch 4 is a combination of patches 3 and 6 (mostly)... A couple of minor
adjustments to utilize "attrs[0] == 'V'" and checking whether the 'devices'
field to be parsed is neither NULL nor an empty string. Patches 4 & 5 from
v2 are dropped.

The resulting output will then be:

  <volume type='block'>
    <name>lv_test_thin</name>
    ...
    <source>
    </source>
    ...

John Ferlan (4):
  logical: Use VIR_APPEND_ELEMENT instead of VIR_REALLOC_N
  logical: Use 'stripes' value for mirror/raid segtype
  logical: Clean up allocation when building regex on the fly
  logical: Display thin lv's found in a libvirt managed volume group

 src/conf/storage_conf.h               |  2 +-
 src/storage/storage_backend_logical.c | 60 ++++++++++++++++++++++-------------
 2 files changed, 39 insertions(+), 23 deletions(-)

-- 
2.5.0

Rather than preallocating a set number of elements, then walking through
the extents and adjusting the specific element in place, use the APPEND
macros to handle that chore.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/conf/storage_conf.h               |  2 +-
 src/storage/storage_backend_logical.c | 20 ++++++++++----------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/src/conf/storage_conf.h b/src/conf/storage_conf.h
index f1dc62b..31b45be 100644
--- a/src/conf/storage_conf.h
+++ b/src/conf/storage_conf.h
@@ -50,7 +50,7 @@ struct _virStorageVolSourceExtent {
 typedef struct _virStorageVolSource virStorageVolSource;
 typedef virStorageVolSource *virStorageVolSourcePtr;
 struct _virStorageVolSource {
-    int nextent;
+    size_t nextent;
     virStorageVolSourceExtentPtr extents;
 
     int partType; /* virStorageVolTypeDisk, only used by disk
diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index 7c05b6a..bb02d8a 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -84,6 +84,9 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
     size_t i;
     int err, nvars;
     unsigned long long offset, size, length;
+    virStorageVolSourceExtent extent;
+
+    memset(&extent, 0, sizeof(extent));
 
     nextents = 1;
     if (STREQ(groups[4], VIR_STORAGE_VOL_LOGICAL_SEGTYPE_STRIPED)) {
@@ -94,11 +97,6 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
         }
     }
 
-    /* Allocate and fill in extents information */
-    if (VIR_REALLOC_N(vol->source.extents,
-                      vol->source.nextent + nextents) < 0)
-        goto cleanup;
-
     if (virStrToLong_ull(groups[6], NULL, 10, &length) < 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("malformed volume extent length value"));
@@ -162,7 +160,7 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
             len = vars[j].rm_eo - vars[j].rm_so;
             p[vars[j].rm_eo] = '\0';
 
-            if (VIR_STRNDUP(vol->source.extents[vol->source.nextent].path,
+            if (VIR_STRNDUP(extent.path,
                             p + vars[j].rm_so, len) < 0)
                 goto cleanup;
 
@@ -176,12 +174,13 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
                 VIR_FREE(offset_str);
                 goto cleanup;
             }
-            VIR_FREE(offset_str);
+            extent.start = offset * size;
+            extent.end = (offset * size) + length;
 
-            vol->source.extents[vol->source.nextent].start = offset * size;
-            vol->source.extents[vol->source.nextent].end = (offset * size) + length;
-            vol->source.nextent++;
+            if (VIR_APPEND_ELEMENT(vol->source.extents, vol->source.nextent,
+                                   extent) < 0)
+                goto cleanup;
         }
 
     ret = 0;
@@ -190,6 +189,7 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
     VIR_FREE(regex);
     VIR_FREE(reg);
     VIR_FREE(vars);
+    VIR_FREE(extent.path);
 
     return ret;
 }
-- 
2.5.0
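
As background for readers outside the libvirt tree, below is a minimal
standalone sketch of the append-element pattern this patch switches to. The
simplified append_extent() helper stands in for libvirt's VIR_APPEND_ELEMENT
macro (it is not the real implementation); the point is the ownership
transfer: the staged element is moved into the array and the staging
variable is cleared, which is what makes the VIR_FREE(extent.path) added to
the cleanup label safe.

/* Sketch of the append-element pattern: grow the array by one, move the
 * staged element into the new slot, and clear the staging variable so
 * ownership of extent.path transfers to the array. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct extent {
    char *path;
    unsigned long long start;
    unsigned long long end;
};

static int
append_extent(struct extent **arr, size_t *count, struct extent *elem)
{
    struct extent *tmp = realloc(*arr, (*count + 1) * sizeof(**arr));
    if (!tmp)
        return -1;
    *arr = tmp;
    (*arr)[(*count)++] = *elem;        /* shallow copy into the new slot */
    memset(elem, 0, sizeof(*elem));    /* caller's copy no longer owns path */
    return 0;
}

int
main(void)
{
    struct extent *extents = NULL;
    size_t nextents = 0;
    struct extent extent = { strdup("/dev/sda3"), 0, 1048576 };

    if (append_extent(&extents, &nextents, &extent) < 0)
        return 1;

    /* extent.path is now NULL, so freeing it here is a harmless no-op,
     * mirroring the VIR_FREE(extent.path) in the patch's cleanup label. */
    free(extent.path);

    printf("%zu extent(s), first path %s\n", nextents, extents[0].path);
    free(extents[0].path);
    free(extents);
    return 0;
}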

On Mon, Feb 01, 2016 at 03:29:50PM -0500, John Ferlan wrote:
Rather than preallocating a set number of elements, then walking through the extents and adjusting the specific element in place, use the APPEND macros to handle that chore.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/conf/storage_conf.h               |  2 +-
 src/storage/storage_backend_logical.c | 20 ++++++++++----------
 2 files changed, 11 insertions(+), 11 deletions(-)
ACK

The 'stripes' value is described as the "Number of stripes or mirrors in
a logical volume". So add "mirror" and anything that starts with "raid"
to the list of segtypes that can have an 'nextents' value greater than one.

Use of raid segtypes (raid1, raid4, raid5*, raid6*, and raid10) is favored
over mirror in more recent lvm code.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index bb02d8a..cd0fec0 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -65,6 +65,8 @@ virStorageBackendLogicalSetActive(virStoragePoolObjPtr pool,
 
 
 #define VIR_STORAGE_VOL_LOGICAL_SEGTYPE_STRIPED "striped"
+#define VIR_STORAGE_VOL_LOGICAL_SEGTYPE_MIRROR "mirror"
+#define VIR_STORAGE_VOL_LOGICAL_SEGTYPE_RAID "raid"
 
 struct virStorageBackendLogicalPoolVolData {
     virStoragePoolObjPtr pool;
@@ -88,8 +90,17 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
 
     memset(&extent, 0, sizeof(extent));
 
+    /* Assume 1 extent (the regex for 'devices' is "(\\S+)") and only
+     * check the 'stripes' field if we have a striped, mirror, or one of
+     * the raid (raid1, raid4, raid5*, raid6*, or raid10) segtypes in which
+     * case the stripes field will denote the number of lv's within the
+     * 'devices' field in order to generate the proper regex to decode
+     * the field
+     */
     nextents = 1;
-    if (STREQ(groups[4], VIR_STORAGE_VOL_LOGICAL_SEGTYPE_STRIPED)) {
+    if (STREQ(groups[4], VIR_STORAGE_VOL_LOGICAL_SEGTYPE_STRIPED) ||
+        STREQ(groups[4], VIR_STORAGE_VOL_LOGICAL_SEGTYPE_MIRROR) ||
+        STRPREFIX(groups[4], VIR_STORAGE_VOL_LOGICAL_SEGTYPE_RAID)) {
         if (virStrToLong_i(groups[5], NULL, 10, &nextents) < 0) {
             virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                            _("malformed volume extent stripes value"));
-- 
2.5.0
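
A small self-contained illustration of the widened segtype test, with plain
strcmp/strncmp spelled out in place of libvirt's STREQ/STRPREFIX macros; the
sample segtype strings are just examples:

/* Treat "striped", "mirror" and anything beginning with "raid" (raid1,
 * raid10, ...) as segment types whose 'stripes' column gives the number
 * of devices; everything else is assumed to have a single extent. */
#include <stdio.h>
#include <string.h>

static int
uses_stripes_field(const char *segtype)
{
    return strcmp(segtype, "striped") == 0 ||
           strcmp(segtype, "mirror") == 0 ||
           strncmp(segtype, "raid", strlen("raid")) == 0;
}

int
main(void)
{
    const char *samples[] = { "striped", "mirror", "raid10", "linear", "thin" };
    size_t i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-8s -> %s\n", samples[i],
               uses_stripes_field(samples[i]) ? "read 'stripes'"
                                              : "assume 1 extent");
    return 0;
}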

On Mon, Feb 01, 2016 at 03:29:51PM -0500, John Ferlan wrote:
The 'stripes' value is described as the "Number of stripes or mirrors in a logical volume". So add "mirror" and anything that starts with "raid" to the list of segtypes that can have an 'nextents' value greater than one. Use of raid segtypes (raid1, raid4, raid5*, raid6*, and raid10) is favored over mirror in more recent lvm code.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
ACK

Rather than a loop reallocating space to build the regex, just allocate it
once up front, then if there's more than 1 nextent, append a comma and
another regex_unit string.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index cd0fec0..eb22fd0 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -120,12 +120,11 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
         goto cleanup;
     }
 
-    if (VIR_STRDUP(regex, regex_unit) < 0)
+    /* Allocate space for 'nextents' regex_unit strings plus a comma for each */
+    if (VIR_ALLOC_N(regex, nextents * (strlen(regex_unit) + 1) + 1) < 0)
         goto cleanup;
-
+    strncat(regex, regex_unit, strlen(regex_unit));
     for (i = 1; i < nextents; i++) {
-        if (VIR_REALLOC_N(regex, strlen(regex) + strlen(regex_unit) + 2) < 0)
-            goto cleanup;
         /* "," is the separator of "devices" field */
         strcat(regex, ",");
         strncat(regex, regex_unit, strlen(regex_unit));
-- 
2.5.0
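
A standalone sketch of the sizing arithmetic, using an illustrative
regex_unit value rather than the exact string from storage_backend_logical.c:
nextents copies of regex_unit plus one extra byte per copy covers the
nextents-1 comma separators and the trailing NUL, and a zero-filled buffer
(calloc here, mirroring VIR_ALLOC_N) lets the first concatenation start from
an empty string.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    /* Illustrative per-extent unit; each devices entry looks like "path(offset)". */
    const char *regex_unit = "(\\S+)\\((\\S+)\\)";
    size_t nextents = 3;
    size_t i;

    /* Room for nextents units, a "," after all but the last, and the NUL. */
    char *regex = calloc(1, nextents * (strlen(regex_unit) + 1) + 1);
    if (!regex)
        return 1;

    strcat(regex, regex_unit);
    for (i = 1; i < nextents; i++) {
        strcat(regex, ",");          /* "," separates entries in the 'devices' field */
        strcat(regex, regex_unit);
    }

    printf("%s\n", regex);   /* (\S+)\((\S+)\),(\S+)\((\S+)\),(\S+)\((\S+)\) */
    free(regex);
    return 0;
}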

On Mon, Feb 01, 2016 at 03:29:52PM -0500, John Ferlan wrote:
Rather than a loop reallocating space to build the regex, just allocate it once up front, then if there's more than 1 nextent, append a comma and another regex_unit string.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
ACK

Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g.,
1 or more) to "(\\S*)" (e.g., zero or more). Then for any "thin" lv's found,
mark the volume as a sparse volume so that the volume wipe algorithm doesn't
work. Since a "thin" segtype has no devices, this will result in any "thin"
lv part of some thin-pool within a volume group used as a libvirt pool to
be displayed as a possible volume to use.

NB: Based on a proposal authored by Joe Harvell <joe.harvell@tekcomms.com>,
but with much intervening rework, the resulting patch is changed from the
original concept. About all that remains is changing the regex and checking
for NULL/empty field during parse.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index eb22fd0..601a896 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -88,9 +88,13 @@ virStorageBackendLogicalParseVolExtents(virStorageVolDefPtr vol,
     unsigned long long offset, size, length;
     virStorageVolSourceExtent extent;
 
+    /* If the devices field is NULL or empty, then there's nothing to do */
+    if (!groups[3] || !*groups[3])
+        return 0;
+
     memset(&extent, 0, sizeof(extent));
 
-    /* Assume 1 extent (the regex for 'devices' is "(\\S+)") and only
+    /* Assume 1 extent (since we checked for NULL or empty above) and only
      * check the 'stripes' field if we have a striped, mirror, or one of
      * the raid (raid1, raid4, raid5*, raid6*, or raid10) segtypes in which
      * case the stripes field will denote the number of lv's within the
@@ -257,13 +261,15 @@ virStorageBackendLogicalMakeVol(char **const groups,
         goto cleanup;
     }
 
-    /* Mark the (s) sparse/snapshot lv, e.g. the lv created using
-     * the --virtualsize/-V option. We've already ignored the (t)hin
-     * pool definition. In the manner libvirt defines these, the
-     * thin pool is hidden to the lvs output, except as the name
-     * in brackets [] described for the groups[1] (backingStore).
+    /* Mark the (s) sparse/snapshot or the (V) thin/thin-pool member lv,
+     * e.g. the lv created using the --virtualsize/-V option to ensure
+     * the volume wipe algorithm doesn't overwrite sparse/thin volumes.
+     * We've already ignored the (t)hin pool definition. In the manner
+     * libvirt defines these, the thin pool is hidden to the lvs output,
+     * except as the name in brackets [] described for the groups[1]
+     * (backingStore).
      */
-    if (attrs[0] == 's')
+    if (attrs[0] == 's' || attrs[0] == 'V')
         vol->target.sparse = true;
 
     /* Skips the backingStore of lv created with "--virtualsize",
@@ -342,7 +348,7 @@ virStorageBackendLogicalFindLVs(virStoragePoolObjPtr pool,
      * striped, so "," is not a suitable separator either (rhbz 727474).
      */
     const char *regexes[] = {
-        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S+)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
+        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S*)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
     };
     int vars[] = {
         10
-- 
2.5.0
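
To see why relaxing the 'devices' group matters, here is a small POSIX-regex
sketch. The lvs-style input line is made up for illustration, and "\S" is
spelled [^[:space:]] so that plain regcomp(REG_EXTENDED) accepts it; the
point is that a thin LV line with an empty devices field still matches the
10-group pattern once that group is "zero or more".

#include <regex.h>
#include <stdio.h>

int
main(void)
{
    /* Same 10 capture groups as the patched regex; group 4 ('devices')
     * is now allowed to be empty. */
    const char *re =
        "^[[:space:]]*([^[:space:]]+)#([^[:space:]]*)#([^[:space:]]+)#"
        "([^[:space:]]*)#([^[:space:]]+)#([0-9]+)#([^[:space:]]+)#"
        "([0-9]+)#([0-9]+)#([^[:space:]]+)#?[[:space:]]*$";

    /* Made-up lvs-style line for a thin LV: origin and devices empty,
     * segtype "thin", lv_attr starting with 'V'. */
    const char *thin_line =
        "  lv_test_thin##ABCDEF##thin#0#1073741824#4194304#1073741824#Vwi-a-tz--#";

    regex_t reg;
    regmatch_t m[11];

    if (regcomp(&reg, re, REG_EXTENDED) != 0)
        return 1;

    if (regexec(&reg, thin_line, 11, m, 0) == 0)
        printf("matched; devices group is %s\n",
               m[4].rm_so == m[4].rm_eo ? "empty" : "non-empty");
    else
        printf("no match\n");

    regfree(&reg);
    return 0;
}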

On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
NB: Based on a proposal authored by Joe Harvell <joe.harvell@tekcomms.com>, but with much intervening rework, the resulting patch is changed from the original concept. About all that remains is changing the regex and checking for NULL/empty field during parse.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)
@@ -342,7 +348,7 @@ virStorageBackendLogicalFindLVs(virStoragePoolObjPtr pool,
      * striped, so "," is not a suitable separator either (rhbz 727474).
      */
     const char *regexes[] = {
-        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S+)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
+        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S*)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
This regex and changes to it would be much more readable split into multiple lines, like VIR_LOG_REGEX.

Jan

On 02/02/2016 03:11 AM, Ján Tomko wrote:
On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
NB: Based on a proposal authored by Joe Harvell <joe.harvell@tekcomms.com>, but with much intervening rework, the resulting patch is changed from the original concept. About all that remains is changing the regex and checking for NULL/empty field during parse.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/storage/storage_backend_logical.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)
@@ -342,7 +348,7 @@ virStorageBackendLogicalFindLVs(virStoragePoolObjPtr pool,
      * striped, so "," is not a suitable separator either (rhbz 727474).
      */
     const char *regexes[] = {
-        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S+)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
+        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S*)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
This regex and changes to it would be much more readable split into multiple lines, like VIR_LOG_REGEX.
OK - so if I create a 5th patch with something like the following, is that what you'd like to see:

+#define VIR_STORAGE_VOL_LOGICAL_PREFIX_REGEX "^\\s*"
+#define VIR_STORAGE_VOL_LOGICAL_LV_NAME_REGEX "(\\S+)#"
+#define VIR_STORAGE_VOL_LOGICAL_ORIGIN_REGEX "(\\S*)#"
+#define VIR_STORAGE_VOL_LOGICAL_UUID_REGEX "(\\S+)#"
+#define VIR_STORAGE_VOL_LOGICAL_DEVICES_REGEX "(\\S*)#"
+#define VIR_STORAGE_VOL_LOGICAL_SEGTYPE_REGEX "(\\S+)#"
+#define VIR_STORAGE_VOL_LOGICAL_STRIPES_REGEX "([0-9]+)#"
+#define VIR_STORAGE_VOL_LOGICAL_SEG_SIZE_REGEX "(\\S+)#"
+#define VIR_STORAGE_VOL_LOGICAL_VG_EXTENT_SIZE_REGEX "([0-9]+)#"
+#define VIR_STORAGE_VOL_LOGICAL_SIZE_REGEX "([0-9]+)#"
+#define VIR_STORAGE_VOL_LOGICAL_LV_ATTR_REGEX "(\\S+)#"
+#define VIR_STORAGE_VOL_LOGICAL_SUFFIX_REGEX "?\\s*$"
+
+#define VIR_STORAGE_VOL_LOGICAL_REGEX_COUNT 10
+#define VIR_STORAGE_VOL_LOGICAL_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_PREFIX_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_LV_NAME_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_ORIGIN_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_UUID_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_DEVICES_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_SEGTYPE_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_STRIPES_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_SEG_SIZE_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_VG_EXTENT_SIZE_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_SIZE_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_LV_ATTR_REGEX \
+    VIR_STORAGE_VOL_LOGICAL_SUFFIX_REGEX
+

and

     const char *regexes[] = {
-        "^\\s*(\\S+)#(\\S*)#(\\S+)#(\\S*)#(\\S+)#([0-9]+)#(\\S+)#([0-9]+)#([0-9]+)#(\\S+)#?\\s*$"
+        VIR_STORAGE_VOL_LOGICAL_REGEX
     };
     int vars[] = {
-        10
+        VIR_STORAGE_VOL_LOGICAL_REGEX_COUNT
     };
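
For what it's worth, a tiny compile-time illustration of why the
macro-per-field layout costs nothing at runtime (the shortened macro names
below are used only for the example): adjacent string literals are
concatenated by the compiler, so the composed macro is byte-for-byte the
same single regex string as before; only the source layout changes.

#include <stdio.h>
#include <string.h>

#define PREFIX_RE  "^\\s*"
#define LV_NAME_RE "(\\S+)#"
#define ORIGIN_RE  "(\\S*)#"

int
main(void)
{
    /* Adjacent literals concatenate at translation time. */
    const char *composed = PREFIX_RE LV_NAME_RE ORIGIN_RE;

    printf("%s\n", composed);                                       /* ^\s*(\S+)#(\S*)# */
    printf("%d\n", strcmp(composed, "^\\s*(\\S+)#(\\S*)#") == 0);   /* 1 */
    return 0;
}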

On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.

Pavel

On 02/02/2016 08:30 AM, Pavel Hrdina wrote:
On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.
If LVM allows it, then who are we to say it cannot or shouldn't work or how it should be configured?

Thin pools were introduced in LVM2 (2.02.89) according to what I've found. That's a few releases behind the 2.02.132 I have on my f23 laptop. Nothing in the description of creating a thin-pool requires its own volume group, although sure, some "best practices" indicate creating vg's as the container for a thin-pool.

I can certainly understand and agree in principle to creating a "strong preference" to have a thin-pool be a libvirt pool because it simplifies things such as allocated/capacity management and physical device source listing.

So should I assume then the desire from the review viewpoint is to not support thin lv's at this point in time? And that would be because we want them to be configured a specific way? FWIW: if someone did that - we still won't be able to support it without this patch.

Any thoughts on the VIR_STORAGE_VOL_LOGICAL*_REGEX's? (depending on the answer to the prior question with the obvious change of using S+ instead of S* for devices)?

John

On Tue, Feb 02, 2016 at 11:14:01AM -0500, John Ferlan wrote:
On 02/02/2016 08:30 AM, Pavel Hrdina wrote:
On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.
If LVM allows it, then who are we to say it cannot or shouldn't work or how it should be configured?
Even if they are mixed in the same VG, we could show the "thick" LVs in the type='logical' pool and the thin ones in type='thin'.
Thin pools were introduced in LVM2 (2.02.89) according to what I've found. That's a few releases behind the 2.02.132 I have on my f23 laptop. Nothing in the description of creating a thin-pool requires its own volume group, although sure, some "best practices" indicate creating vg's as the container for a thin-pool.
I can certainly understand and agree in principle to creating a "strong preference" to have a thin-pool be a libvirt pool because it simplifies things such as allocated/capacity management and physical device source listing.
This is my reasoning - with all the thick and thin volumes grouped together, having free space in one does not mean there is free space for another.
So should I assume then the desire from the review viewpoint is to not support thin lv's at this point in time?
s/time/code/

But I cannot speak for the silent majority.
And that would be because we want them to be configured a specific way? FWIW: if someone did that - we still won't be able to support it without this patch.
How so? This patch displays the thin ('V') volumes in type='logical' pools. With a new pool type, the 'logical' pool could keep ignoring the thin pools and volumes, just like we skip directories in 'fs' pools.
Any thoughts on the VIR_STORAGE_VOL_LOGICAL*_REGEX's? (depending on the answer to the prior question with the obvious change of using S+ instead of S* for devices)?
They do look more readable to me.

Jan

On 02/03/2016 03:23 AM, Ján Tomko wrote:
On Tue, Feb 02, 2016 at 11:14:01AM -0500, John Ferlan wrote:
On 02/02/2016 08:30 AM, Pavel Hrdina wrote:
On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
On Mon, Feb 01, 2016 at 03:29:53PM -0500, John Ferlan wrote:
Modify the regex for the 'devices' (a/k/a 'extents') from "(\\S+)" (e.g., 1 or more) to "(\\S*)" (e.g., zero or more).
Then for any "thin" lv's found, mark the volume as a sparse volume so that the volume wipe algorithm doesn't work.
Since a "thin" segtype has no devices, this will result in any "thin" lv part of some thin-pool within a volume group used as a libvirt pool to be displayed as a possible volume to use.
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.
If LVM allows it, then who are we to say it cannot or shouldn't work or how it should be configured?
Even if they are mixed in the same VG, we could show the "thick" LVs in the type='logical' pool and the thin ones in type='thin'.
We could if our logic to create/build the pool was overhauled or perhaps support was added to have two volume groups use the same source device.

Since it's possible to place a thin_pool in the same vg as a snapshot and a "thick" lv, should we be in the business of splitting hairs and trying to define how things should be viewed?

Also libvirt doesn't provide the tools to expand the thin pool or add more devices/extents for the volume group. The management of that is done via the storage management software - we are hands off. So I think with respect to defining how a vg can be populated, we should remain neutral as well.
Thin pools were introduced in LVM2 (2.02.89) according to what I've found. That's a few releases behind the 2.02.132 I have on my f23 laptop. Nothing in the description of creating a thin-pool requires its own volume group, although sure, some "best practices" indicate creating vg's as the container for a thin-pool.
I can certainly understand and agree in principle to creating a "strong preference" to have a thin-pool be a libvirt pool because it simplifies things such as allocated/capacity management and physical device source listing.
This is my reasoning - with all the thick and thin volumes grouped together, having free space in one does not mean there is free space for another.
I think we allow LVM to take care of space management. We can request creation of a specific set of sizes and then display what lvs tells us. A thin volume reports its 'data_percent' (how full it is), so we could calculate 'available' based on that. In fact that field is also useful for snapshot lv's... I think we may be displaying it wrong now.

For capacity, the libvirt output shows a calculated size based on an lseek of the target device path. The lvs output shows essentially the virtualsize. Perhaps what is not logical (sorry) between the 'pool-info' and 'vol-list --details' output is that the sum of all capacity values in a vol-list can be more than the capacity reported by pool-info.
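
A rough sketch (made-up numbers) of the accounting suggested above: report the
thin LV's virtual size as capacity and derive allocation from the lvs
'data_percent' column.

#include <stdio.h>

int
main(void)
{
    unsigned long long capacity = 10737418240ULL; /* 10 GiB virtual size */
    double data_percent = 25.0;                   /* e.g. from the lvs data_percent column */

    unsigned long long allocation =
        (unsigned long long)(capacity * (data_percent / 100.0));

    printf("capacity=%llu allocation=%llu\n", capacity, allocation);
    return 0;
}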
So should I assume then the desire from the review viewpoint is to not support thin lv's at this point in time?
s/time/code/ But I cannot speak for the silent majority.
If a tree falls in the forest, the only ones that hear it are the ones listening. So, how do we know who is paying attention to this upstream discussion if they don't speak? In this case, is silence golden? Does it mean being complicit or does it mean indifference? I'm all for getting a solution, but it doesn't necessarily help being silent until patches are posted or pushed.
And that would be because we want them to be configured a specific way? FWIW: if someone did that - we still won't be able to support it without this patch.
How so? This patch displays the thin ('V') volumes in type='logical' pools. With a new pool type, the 'logical' pool could keep ignoring the thin pools and volumes, just like we skip directories in 'fs' pools.
Perhaps a too literal comment - in order to display thin lv's we'd need the "(\\S*)" if the same exact regex was used to find all lv's in the pool. Alternatively one could not care about devices in a different/new regex, but would still need to filter based on "pool_lv".

I don't see the benefit behind requiring the user to decide whether to have a libvirt pool view either thin lv's or non-thin lv's of their vg or requiring their vg to essentially be one thin-pool in order to support thin lv's, when we could support whatever configuration they've already developed.

If more work is done in LV pools - are we going to create new pool types for other lvm segtypes ("mirror", "stripe", "raid", etc.)? In order to display/fetch interesting or specific things about them?

Although there is an RFE/BZ to support pools of thin lvm (1060287 open since 1/31/14) - it doesn't appear it's been deemed important enough to get into any recent release. Also rather than clutter this discussion up (and to not lose some current thoughts), I updated the bz with lots of thoughts.
Any thoughts on the VIR_STORAGE_VOL_LOGICAL*_REGEX's? (depending on the answer to the prior question with the obvious change of using S+ instead of S* for devices)?
They do look more readable to me.
I'll post a patch in a bit...

John

On Wed, Feb 03, 2016 at 03:51:30PM -0500, John Ferlan wrote:
On 02/03/2016 03:23 AM, Ján Tomko wrote:
On Tue, Feb 02, 2016 at 11:14:01AM -0500, John Ferlan wrote:
On 02/02/2016 08:30 AM, Pavel Hrdina wrote:
On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.
If LVM allows it, then who are we to say it cannot or shouldn't work or how it should be configured?
Even if they are mixed in the same VG, we could show the "thick" LVs in the type='logical' pool and the thin ones in type='thin'.
We could if our logic to create/build the pool was overhauled or perhaps support was added to have two volume groups use the same source device.
Since it's possible to place a thin_pool in the same vg as a snapshot and a "thick" lv, should we be in the business of splitting hairs and trying to define how things should be viewed?
We already are. E.g. in the 'fs' pool, we ignore fifos and subdirectories.
I don't see the benefit behind requiring the user to decide whether to have a libvirt pool view either thin lv's or non-thin lv's of their vg or requiring their vg to essentially be one thin-pool in order to support thin lv's, when we could support whatever configuration they've already developed.
The thin pool would view the thin volumes in a specific thin pool, not all thin pools in that VG.
If more work is done in LV pools - are we going to create new pool types for other lvm segtypes ("mirror", "stripe", "raid", etc.)? In order to display/fetch interesting or specific things about them?
AFAIK none of these create another pool on top of the VG.

Jan

On 02/04/2016 04:53 AM, Ján Tomko wrote:
On Wed, Feb 03, 2016 at 03:51:30PM -0500, John Ferlan wrote:
On 02/03/2016 03:23 AM, Ján Tomko wrote:
On Tue, Feb 02, 2016 at 11:14:01AM -0500, John Ferlan wrote:
On 02/02/2016 08:30 AM, Pavel Hrdina wrote:
On Tue, Feb 02, 2016 at 09:11:51AM +0100, Ján Tomko wrote:
A thin pool is another layer on top of some of the LVs in the VG. I think it deserves a separate pool type.
I must agree with Jan, and I also already wrote the same for the v2 patch. The thin pool and thin LV shouldn't be mixed with the normal logical pool even if they share the same VG.
If LVM allows it, then who are we to say it cannot or shouldn't work or how it should be configured?
Even if they are mixed in the same VG, we could show the "thick" LVs in the type='logical' pool and the thin ones in type='thin'.
We could if our logic to create/build the pool was overhauled or perhaps support was added to have two volume groups use the same source device.
Since it's possible to place a thin_pool in the same vg as a snapshot and a "thick" lv, should we be in the business of splitting hairs and trying to define how things should be viewed?
We already are. E.g. in the 'fs' pool, we ignore fifos and subdirectories.
True, we don't follow any subdir for an 'fs' pool (or 'dir' pool); however, a 'thin-pool' and a 'thin' lv are both in a VG. The hierarchy is managed by LVM. If you looked at the on-disk structure you'd only find the 'thin' subdir, but there is no 'thin-pool' subdir.
I don't see the benefit behind requiring the user to decide whether to have a libvirt pool view either thin lv's or non-thin lv's of their vg or requiring their vg to essentially be one thin-pool in order to support thin lv's, when we could support whatever configuration they've already developed.
The thin pool would view the thin volumes in a specific thin pool, not all thin pools in that VG.
Sure, I understand the goal - a 1-to-1 relationship between a thin-pool in a volume group and the libvirt pool, then a 1-to-n relationship between the pool and its thin lv's. If a VG had two thin-pools, then each would have to be its own libvirt pool.

Configuring the libvirt-specific pool is an extra step or two. The libvirt pool would list the thin-pool size as capacity and the allocation could be based on the data_percent. That is perhaps the one benefit I can see from this model. I do see a lot of extra code, documentation, and configuration steps.
If more work is done in LV pools - are we going to create new pool types for other lvm segtypes ("mirror", "stripe", "raid", etc.)? In order to display/fetch interesting or specific things about them?
AFAIK none of these create another pool on top of the VG.
True - not in the same manner as a thin-pool and thin lv have an overt relationship. However, thin lv's and thin-pools are listed within a VG, not on top of a VG. The thin-pool is just the mechanism used to define the physical storage extents from which the virtual 'thin' extents can be drawn.

If the thin-pool runs out of extents, the admin must come along and extend it. That is - libvirt is not involved. No different than if a VG (without a thin-pool) ran out of space; the admin would need to extend it via some LVM command. Again libvirt is hands off.

I see creating a separate pool as a needless hierarchical step for the primary benefit of being able to display the capacity of the thin-pool. I think it's easier to describe that a logical pool that contains thin lv's can appear to be over-subscribed. Management of the thin-pools from which those thin lv's draw is left to LVM, similar to whatever configuration steps are required to create a device path for an 'fs' pool to use.

John