[libvirt] [PATCH v2 0/1] storage: vstorage support

The patch supports pool and volume management using Virtuozzo Storage (vstorage) as a backend.

To define a pool, use:

  virsh -c qemu+unix:///system pool-define-as --name VZ --type vstorage \
        --source-name vz7-vzstorage --target /vzstorage_pool

The resulting XML:

<pool type='vstorage'>
  <name>VZ</name>
  <uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
  <capacity unit='bytes'>107374182400</capacity>
  <allocation unit='bytes'>1441144832</allocation>
  <available unit='bytes'>105933037568</available>
  <source>
    <name>vz7-vzstorage</name>
  </source>
  <target>
    <path>/vzstorage_pool</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

For the vstorage pool the only mandatory parameter is --source-name, which stores the cluster name.

v2:
 - maximum code reuse
 - fixed naming issue: we use "vstorage"
 - simplified findPoolSources

Olga Krishtal (1):
  storage: vz storage pool support

 configure.ac                      |  28 ++++++++++
 docs/schemas/storagepool.rng      |  13 +++++
 include/libvirt/libvirt-storage.h |   1 +
 src/conf/storage_conf.c           |  16 +++++-
 src/conf/storage_conf.h           |   4 +-
 src/storage/storage_backend.c     |   3 +
 src/storage/storage_backend_fs.c  | 114 ++++++++++++++++++++++++++++++++++++--
 src/storage/storage_backend_fs.h  |   3 +
 src/storage/storage_driver.c      |   2 +
 tools/virsh-pool.c                |   2 +
 tools/virsh.c                     |   3 +
 11 files changed, 181 insertions(+), 8 deletions(-)

-- 
1.8.3.1

This patch supports pool and volume management within Virtuozzo Storage.
Virtuozzo Storage is a highly available, distributed, software-defined
storage solution with built-in replication and disaster recovery. From the
client's point of view it looks like network-attached storage (NFS or
GlusterFS). More information about vstorage can be found here:
https://openvz.org/Virtuozzo_Storage

It supports the same volume formats as the directory, netfs, and similar
pools. The default volume format is raw.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
---
 configure.ac                      |  31 +++++++++++
 docs/schemas/storagepool.rng      |  29 ++++++++++
 include/libvirt/libvirt-storage.h |   1 +
 src/conf/storage_conf.c           |  16 +++++-
 src/conf/storage_conf.h           |   4 +-
 src/storage/storage_backend.c     |   3 +
 src/storage/storage_backend_fs.c  | 112 +++++++++++++++++++++++++++++++++++++-
 src/storage/storage_backend_fs.h  |   3 +
 src/storage/storage_driver.c      |   2 +
 tools/virsh-pool.c                |   2 +
 tools/virsh.c                     |   3 +
 11 files changed, 203 insertions(+), 3 deletions(-)

diff --git a/configure.ac b/configure.ac
index 2c81c95..2ee8b8c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1692,6 +1692,10 @@ AC_ARG_WITH([storage-zfs],
   [AS_HELP_STRING([--with-storage-zfs],
     [with ZFS backend for the storage driver @<:@default=check@:>@])],
   [],[with_storage_zfs=check])
+AC_ARG_WITH([storage-vstorage],
+  [AS_HELP_STRING([--with-storage-vstorage],
+    [with Virtuozzo Storage backend for the storage driver @<:@default=check@:>@])],
+  [],[with_storage_vstorage=check])
 
 if test "$with_libvirtd" = "no"; then
   with_storage_dir=no
@@ -1705,6 +1709,7 @@ if test "$with_libvirtd" = "no"; then
   with_storage_sheepdog=no
   with_storage_gluster=no
   with_storage_zfs=no
+  with_storage_vstorage=no
 fi
 if test "$with_storage_dir" = "yes" ; then
   AC_DEFINE_UNQUOTED([WITH_STORAGE_DIR], 1, [whether directory backend for storage driver is enabled])
@@ -1963,6 +1968,31 @@ if test "$with_storage_fs" = "yes" ||
   fi
 fi
 
+if test "$with_storage_vstorage" = "yes" ||
+   test "$with_storage_vstorage" = "check"; then
+  AC_PATH_PROG([VSTORAGE], [vstorage], [], [$PATH:/sbin:/usr/bin])
+  AC_PATH_PROG([VSTORAGE_MOUNT], [vstorage-mount], [], [$PATH:/sbin:/usr/bin])
+  if test "$with_storage_vstorage" = "yes"; then
+    if test -z "$VSTORAGE" ; then AC_MSG_ERROR([We need vstorage tools for the Virtuozzo Storage driver]); fi
+    if test -z "$VSTORAGE_MOUNT" ; then AC_MSG_ERROR([We need the vstorage-mount tool for the Virtuozzo Storage driver]); fi
+    if test "$with_storage_fs" = "no"; then AC_MSG_ERROR([We need the fs backend for the vstorage pool]); fi
+  else
+    if test -z "$VSTORAGE" ; then with_storage_vstorage=no; fi
+    if test -z "$VSTORAGE_MOUNT" ; then with_storage_vstorage=no; fi
+    if test "$with_storage_fs" = "no"; then with_storage_vstorage=no; fi
+
+    if test "$with_storage_vstorage" = "check" ; then with_storage_vstorage=yes ; fi
+  fi
+  if test "$with_storage_vstorage" = "yes"; then
+    AC_DEFINE_UNQUOTED([WITH_STORAGE_VSTORAGE], 1, [whether Virtuozzo Storage backend for storage driver is enabled])
+    AC_DEFINE_UNQUOTED([VSTORAGE], ["$VSTORAGE"], [Location or name of the vstorage program])
+    AC_DEFINE_UNQUOTED([VSTORAGE_MOUNT], ["$VSTORAGE_MOUNT"], [Location or name of the vstorage-mount program])
+  fi
+fi
+AM_CONDITIONAL([WITH_STORAGE_VSTORAGE], [test "$with_storage_vstorage" = "yes"])
+
 LIBPARTED_CFLAGS=
 LIBPARTED_LIBS=
 if test "$with_storage_disk" = "yes" ||
@@ -2759,6 +2789,7 @@ AC_MSG_NOTICE([     RBD: $with_storage_rbd])
 AC_MSG_NOTICE([Sheepdog: $with_storage_sheepdog])
 AC_MSG_NOTICE([ Gluster: $with_storage_gluster])
 AC_MSG_NOTICE([     ZFS: $with_storage_zfs])
+AC_MSG_NOTICE([Vstorage: $with_storage_vstorage])
 AC_MSG_NOTICE([])
 AC_MSG_NOTICE([Security Drivers])
 AC_MSG_NOTICE([])
diff --git a/docs/schemas/storagepool.rng b/docs/schemas/storagepool.rng
index 49d212f..8ad5616 100644
--- a/docs/schemas/storagepool.rng
+++ b/docs/schemas/storagepool.rng
@@ -24,6 +24,7 @@
         <ref name='poolsheepdog'/>
         <ref name='poolgluster'/>
         <ref name='poolzfs'/>
+        <ref name='poolvstorage'/>
       </choice>
     </element>
   </define>
@@ -173,6 +174,18 @@
     </interleave>
   </define>
 
+  <define name='poolvstorage'>
+    <attribute name='type'>
+      <value>vstorage</value>
+    </attribute>
+    <interleave>
+      <ref name='commonmetadata'/>
+      <ref name='sizing'/>
+      <ref name='sourcevstorage'/>
+      <ref name='target'/>
+    </interleave>
+  </define>
+
   <define name='sourceinfovendor'>
     <interleave>
       <optional>
@@ -609,6 +622,22 @@
     </element>
   </define>
 
+  <define name='clustername'>
+    <interleave>
+      <element name='name'>
+        <ref name='genericName'/>
+      </element>
+    </interleave>
+  </define>
+
+  <define name='sourcevstorage'>
+    <element name='source'>
+      <interleave>
+        <ref name='clustername'/>
+      </interleave>
+    </element>
+  </define>
+
   <define name='IscsiQualifiedName'>
     <data type='string'>
       <param name="pattern">iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-zA-Z0-9\.\-]+(:.+)?</param>
diff --git a/include/libvirt/libvirt-storage.h b/include/libvirt/libvirt-storage.h
index 0974f6e..28babf7 100644
--- a/include/libvirt/libvirt-storage.h
+++ b/include/libvirt/libvirt-storage.h
@@ -232,6 +232,7 @@ typedef enum {
     VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG = 1 << 15,
     VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER  = 1 << 16,
     VIR_CONNECT_LIST_STORAGE_POOLS_ZFS      = 1 << 17,
+    VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE = 1 << 18,
 } virConnectListAllStoragePoolsFlags;
 
 int                     virConnectListAllStoragePools(virConnectPtr conn,
diff --git a/src/conf/storage_conf.c b/src/conf/storage_conf.c
index 05a1a49..e3c8ac1 100644
--- a/src/conf/storage_conf.c
+++ b/src/conf/storage_conf.c
@@ -60,7 +60,7 @@ VIR_ENUM_IMPL(virStoragePool,
               "dir", "fs", "netfs",
               "logical", "disk", "iscsi",
               "scsi", "mpath", "rbd",
-              "sheepdog", "gluster", "zfs")
+              "sheepdog", "gluster", "zfs", "vstorage")
 
 VIR_ENUM_IMPL(virStoragePoolFormatFileSystem,
               VIR_STORAGE_POOL_FS_LAST,
@@ -274,6 +274,16 @@ static virStoragePoolTypeInfo poolTypeInfo[] = {
          .defaultFormat = VIR_STORAGE_FILE_RAW,
      },
     },
+    {.poolType = VIR_STORAGE_POOL_VSTORAGE,
+     .poolOptions = {
+         .flags = VIR_STORAGE_POOL_SOURCE_NAME,
+     },
+     .volOptions = {
+         .defaultFormat = VIR_STORAGE_FILE_RAW,
+         .formatFromString = virStorageVolumeFormatFromString,
+         .formatToString = virStorageFileFormatTypeToString,
+     },
+    },
 };
 
 
@@ -2588,6 +2598,10 @@ virStoragePoolSourceFindDuplicate(virConnectPtr conn,
         /* Only one mpath pool is valid per host */
         matchpool = pool;
         break;
+    case VIR_STORAGE_POOL_VSTORAGE:
+        if (STREQ(pool->def->source.name, def->source.name))
+            matchpool = pool;
+        break;
     case VIR_STORAGE_POOL_RBD:
     case VIR_STORAGE_POOL_LAST:
         break;
diff --git a/src/conf/storage_conf.h b/src/conf/storage_conf.h
index 185ae5e..bf33b5f 100644
--- a/src/conf/storage_conf.h
+++ b/src/conf/storage_conf.h
@@ -95,6 +95,7 @@ typedef enum {
     VIR_STORAGE_POOL_SHEEPDOG, /* Sheepdog device */
     VIR_STORAGE_POOL_GLUSTER,  /* Gluster device */
     VIR_STORAGE_POOL_ZFS,      /* ZFS */
+    VIR_STORAGE_POOL_VSTORAGE, /* Virtuozzo Storage */
 
     VIR_STORAGE_POOL_LAST,
 } virStoragePoolType;
@@ -545,7 +546,8 @@ VIR_ENUM_DECL(virStoragePartedFs)
                  VIR_CONNECT_LIST_STORAGE_POOLS_RBD      | \
                  VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG | \
                  VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER  | \
-                 VIR_CONNECT_LIST_STORAGE_POOLS_ZFS)
+                 VIR_CONNECT_LIST_STORAGE_POOLS_ZFS      | \
+                 VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE)
 
 # define VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_ALL \
                 (VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_ACTIVE | \
diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c
index 97f6ffe..181d3f5 100644
--- a/src/storage/storage_backend.c
+++ b/src/storage/storage_backend.c
@@ -135,6 +135,9 @@ static virStorageBackendPtr backends[] = {
 #if WITH_STORAGE_ZFS
     &virStorageBackendZFS,
 #endif
+#if WITH_STORAGE_VSTORAGE
+    &virStorageBackendVstorage,
+#endif
     NULL
 };
 
diff --git a/src/storage/storage_backend_fs.c b/src/storage/storage_backend_fs.c
index 0a12ecb..b89b3ae 100644
--- a/src/storage/storage_backend_fs.c
+++ b/src/storage/storage_backend_fs.c
@@ -371,7 +371,13 @@ virStorageBackendFileSystemIsValid(virStoragePoolObjPtr pool)
                            "%s", _("missing source path"));
             return -1;
         }
-    } else {
+    } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) {
+        if (!pool->def->source.name) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           "%s", _("missing source cluster name"));
+            return -1;
+        }
+    } else {
         if (pool->def->source.ndevice != 1) {
             if (pool->def->source.ndevice == 0)
                 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -411,6 +417,9 @@ virStorageBackendFileSystemGetPoolSource(virStoragePoolObjPtr pool)
                         pool->def->source.dir) < 0)
             return NULL;
         }
+    } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) {
+        if (virAsprintf(&src, "vstorage://%s", pool->def->source.name) < 0)
+            return NULL;
     } else {
         if (VIR_STRDUP(src, pool->def->source.devices[0].path) < 0)
             return NULL;
@@ -546,7 +555,9 @@ virStorageBackendFileSystemMount(virStoragePoolObjPtr pool)
     VIR_FREE(src);
     return ret;
 }
+#endif /* WITH_STORAGE_FS */
 
+#if WITH_STORAGE_FS
 /**
  * @pool storage pool to unmount
  *
@@ -1654,3 +1665,102 @@ virStorageFileBackend virStorageFileBackendDir = {
 };
 
 #endif /* WITH_STORAGE_FS */
+
+#if WITH_STORAGE_VSTORAGE
+static int
+virStorageBackendVzStart(virConnectPtr conn ATTRIBUTE_UNUSED,
+                         virStoragePoolObjPtr pool)
+{
+    int ret = -1;
+    virCommandPtr cmd = NULL;
+
+    cmd = virCommandNewArgList(VSTORAGE_MOUNT, "-c", pool->def->source.name,
+                               pool->def->target.path, NULL);
+    if (virCommandRun(cmd, NULL) < 0)
+        goto cleanup;
+    ret = 0;
+
+ cleanup:
+    virCommandFree(cmd);
+    return ret;
+}
+
+static char *
+virStorageBackendVzfindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                   const char *srcSpec ATTRIBUTE_UNUSED,
+                                   unsigned int flags)
+{
+    virCommandPtr cmd = NULL;
+    char *buf = NULL;
+    char *ret = NULL;
+    char **clusters = NULL;
+    size_t clusters_num = 0;
+    size_t i = 0;
+    virStoragePoolSourceList vzcluster_list = {
+        .type = VIR_STORAGE_POOL_VSTORAGE,
+        .nsources = 0,
+        .sources = NULL
+    };
+
+    virCheckFlags(0, NULL);
+
+    cmd = virCommandNewArgList(VSTORAGE, "discover", NULL);
+    virCommandSetOutputBuffer(cmd, &buf);
+
+    if (virCommandRun(cmd, NULL) < 0)
+        goto cleanup;
+
+    if (!(clusters = virStringSplitCount(buf, "\n", 0, &clusters_num)))
+        goto cleanup;
+
+    if (clusters_num > 0) {
+        /* The output ends with a newline, so the last element is empty */
+        vzcluster_list.nsources = clusters_num - 1;
+
+        if (VIR_ALLOC_N(vzcluster_list.sources, vzcluster_list.nsources) < 0)
+            goto cleanup;
+
+        for (; i < vzcluster_list.nsources; i++) {
+            if (VIR_ALLOC(vzcluster_list.sources[i].devices) < 0)
+                goto cleanup;
+            if (VIR_STRDUP(vzcluster_list.sources[i].name, clusters[i]) < 0)
+                goto cleanup;
+        }
+    }
+
+    if (!(ret = virStoragePoolSourceListFormat(&vzcluster_list)))
+        goto cleanup;
+
+ cleanup:
+    for (i = 0; i < vzcluster_list.nsources; i++) {
+        virStoragePoolSourceClear(&vzcluster_list.sources[i]);
+        VIR_FREE(clusters[i]);
+    }
+    VIR_FREE(vzcluster_list.sources);
+    VIR_FREE(buf);
+    virCommandFree(cmd);
+    return ret;
+}
+
+virStorageBackend virStorageBackendVstorage = {
+    .type = VIR_STORAGE_POOL_VSTORAGE,
+
+    .startPool = virStorageBackendVzStart,
+    .checkPool = virStorageBackendFileSystemCheck,
+    .stopPool = virStorageBackendFileSystemStop,
+    .findPoolSources = virStorageBackendVzfindPoolSources,
+    .buildPool = virStorageBackendFileSystemBuild,
+    .deletePool = virStorageBackendFileSystemDelete,
+    .refreshPool = virStorageBackendFileSystemRefresh,
+    .buildVol = virStorageBackendFileSystemVolBuild,
+    .buildVolFrom = virStorageBackendFileSystemVolBuildFrom,
+    .createVol = virStorageBackendFileSystemVolCreate,
+    .refreshVol = virStorageBackendFileSystemVolRefresh,
+    .deleteVol = virStorageBackendFileSystemVolDelete,
+    .resizeVol = virStorageBackendFileSystemVolResize,
+    .uploadVol = virStorageBackendVolUploadLocal,
+    .downloadVol = virStorageBackendVolDownloadLocal,
+    .wipeVol = virStorageBackendVolWipeLocal,
+};
+#endif /* WITH_STORAGE_VSTORAGE */
diff --git a/src/storage/storage_backend_fs.h b/src/storage/storage_backend_fs.h
index 347ea9b..23cff66 100644
--- a/src/storage/storage_backend_fs.h
+++ b/src/storage/storage_backend_fs.h
@@ -30,6 +30,9 @@ extern virStorageBackend virStorageBackendFileSystem;
 extern virStorageBackend virStorageBackendNetFileSystem;
 # endif
 
+#if WITH_STORAGE_VSTORAGE
+extern virStorageBackend virStorageBackendVstorage;
+#endif
 
 typedef enum {
     FILESYSTEM_PROBE_FOUND,
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index cb9d578..3b6c7b4 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -1618,6 +1618,7 @@ storageVolLookupByPath(virConnectPtr conn,
         case VIR_STORAGE_POOL_ISCSI:
         case VIR_STORAGE_POOL_SCSI:
         case VIR_STORAGE_POOL_MPATH:
+        case VIR_STORAGE_POOL_VSTORAGE:
             stable_path = virStorageBackendStablePath(pool,
                                                       cleanpath,
                                                       false);
@@ -3485,6 +3486,7 @@ virStorageTranslateDiskSourcePool(virConnectPtr conn,
     case VIR_STORAGE_POOL_DISK:
     case VIR_STORAGE_POOL_SCSI:
     case VIR_STORAGE_POOL_ZFS:
+    case VIR_STORAGE_POOL_VSTORAGE:
         if (!(def->src->path = virStorageVolGetPath(vol)))
             goto cleanup;
diff --git a/tools/virsh-pool.c b/tools/virsh-pool.c
index 6045331..6ea2424 100644
--- a/tools/virsh-pool.c
+++ b/tools/virsh-pool.c
@@ -1166,6 +1166,8 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
             case VIR_STORAGE_POOL_ZFS:
                 flags |= VIR_CONNECT_LIST_STORAGE_POOLS_ZFS;
                 break;
+            case VIR_STORAGE_POOL_VSTORAGE:
+                flags |= VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE;
+                break;
             case VIR_STORAGE_POOL_LAST:
                 break;
         }
diff --git a/tools/virsh.c b/tools/virsh.c
index 5dc482d..61c8e2f 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -682,6 +682,9 @@ virshShowVersion(vshControl *ctl ATTRIBUTE_UNUSED)
 #ifdef WITH_STORAGE_ZFS
     vshPrint(ctl, " ZFS");
 #endif
+#ifdef WITH_STORAGE_VSTORAGE
+    vshPrint(ctl, " Vstorage");
+#endif
     vshPrint(ctl, "\n");
 
     vshPrint(ctl, "%s", _(" Miscellaneous:"));
-- 
1.8.3.1

On 07/14/2016 01:13 PM, Olga Krishtal wrote:
This patch supports pool and volume management within Virtuozzo Storage. Virtuozzo Storage is a highly-available distributed software defined storage with built-in replication and disaster recovery. From client's point of view it looks like network attached storage (NFS or GlusterFS). More information about vzstorage can be found here: https://openvz.org/Virtuozzo_Storage It supports the same volume formats as directory, nfs, etc. Default format is raw.
In an effort to go through older on-list bugs, I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?

Tks -

John

FWIW: It compiled, but failed syntax-check. Also, the name "vstorage" seems a bit too generic: I saw it and thought "virtual storage", not Virtuozzo Storage.

On 20/09/16 23:30, John Ferlan wrote:
In an effort to go through older on list bugs I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?
Tks -
John
FWIW: It compiled, but failed syntax-check. Also the name "vstorage" would seemingly be a bit too generic. I saw and thought virtual storage, not Virtuozzo Storage.

Hello! No, this won't be dropped: we want Virtuozzo Storage to be one of the storage pool backends. As for the name, at the moment we use "vstorage" while working with the storage; this is likely to change, and I will update the series as soon as the name is completely settled.
diff --git a/configure.ac b/configure.ac index 2c81c95..2ee8b8c 100644 --- a/configure.ac +++ b/configure.ac @@ -1692,6 +1692,10 @@ AC_ARG_WITH([storage-zfs], [AS_HELP_STRING([--with-storage-zfs], [with ZFS backend for the storage driver @<:@default=check@:>@])], [],[with_storage_zfs=check]) +AC_ARG_WITH([storage-vstorage], + [AS_HELP_STRING([--with-storage-vstorage], + [with VZ backend for the storage driver @<:@default=check@:>@])], + [],[with_storage_vstorage=check])
if test "$with_libvirtd" = "no"; then with_storage_dir=no @@ -1705,6 +1709,7 @@ if test "$with_libvirtd" = "no"; then with_storage_sheepdog=no with_storage_gluster=no with_storage_zfs=no + with_storage_vstorage=no fi if test "$with_storage_dir" = "yes" ; then AC_DEFINE_UNQUOTED([WITH_STORAGE_DIR], 1, [whether directory backend for storage driver is enabled]) @@ -1963,6 +1968,31 @@ if test "$with_storage_fs" = "yes" || fi fi
+if test "$with_storage_vstorage" = "yes" || test "$with_storage_vstorage" = "check"; then + AC_PATH_PROG([VSTORAGE], [vstorage], [], [$PATH:/sbin:/usr/bin]) + AC_PATH_PROG([VSTORAGE_MOUNT], [vstorage-mount], [], [$PATH:/sbin/usr/bin]) + if test "$with_storage_vstorage" = "yes"; then + if test -z "$VSTORAGE" ; then AC_MSG_ERROR([We need vstorage tools for virtuozzo storage driver]); fi + if test -z "$VSTORAGE_MOUNT" ; then AC_MSG_ERROR([We need vstorage mount tool for virtuozzo storage driver]); fi + if test "$with_storage_fs" = "no"; then AC_MSG_ERROR([We need fs backend for vstorage pool]); fi + else + if test -z "$VSTORAGE" ; then with_storage_vstorage=no; fi + if test -z "$VSTORAGE" ; then with_storage_vstorage=no; fi + if test "$with_storage_fs"= "no"; then with_storage_vstorage=no; fi + + if test "$with_storage_vstorage" = "check" ; then with_storage_vstorage=yes ; fi + fi + if test "$with_storage_vstorage" = "yes"; then + AC_DEFINE_UNQUOTED([WITH_STORAGE_VSTORAGE], 1, [whether virtuozzo storage backend for storage driver is enabled]) + AC_DEFINE_UNQUOTED([VSTORAGE], ["$VSTORAGE"], [Location or name of vstorage program]) + AC_DEFINE_UNQUOTED([VSTORAGE_MOUNT], ["$VSTORAGE_MOUNT"], [Location or name of vstorage-mount program]) + AC_DEFINE_UNQUOTED([with_storage_fs], ["$VSTORAGE_MOUNT"], [must be on]) + fi + + if test "$with_storage_vstorage" = "check" ; then with_storage_vstorage=yes ; fi +fi +AM_CONDITIONAL([WITH_STORAGE_VSTORAGE], [test "$with_storage_vstorage" = "yes"]) + LIBPARTED_CFLAGS= LIBPARTED_LIBS= if test "$with_storage_disk" = "yes" || @@ -2759,6 +2789,7 @@ AC_MSG_NOTICE([ RBD: $with_storage_rbd]) AC_MSG_NOTICE([Sheepdog: $with_storage_sheepdog]) AC_MSG_NOTICE([ Gluster: $with_storage_gluster]) AC_MSG_NOTICE([ ZFS: $with_storage_zfs]) +AC_MSG_NOTICE([Vstorage: $with_storage_vstorage]) AC_MSG_NOTICE([]) AC_MSG_NOTICE([Security Drivers]) AC_MSG_NOTICE([]) diff --git a/docs/schemas/storagepool.rng b/docs/schemas/storagepool.rng index 49d212f..8ad5616 
100644 --- a/docs/schemas/storagepool.rng +++ b/docs/schemas/storagepool.rng @@ -24,6 +24,7 @@ <ref name='poolsheepdog'/> <ref name='poolgluster'/> <ref name='poolzfs'/> + <ref name='poolvstorage'/> </choice> </element> </define> @@ -173,6 +174,18 @@ </interleave> </define>
+ <define name='poolvstorage'> + <attribute name='type'> + <value>vstorage</value> + </attribute> + <interleave> + <ref name='commonmetadata'/> + <ref name='sizing'/> + <ref name='sourcevstorage'/> + <ref name='target'/> + </interleave> + </define> + <define name='sourceinfovendor'> <interleave> <optional> @@ -609,6 +622,22 @@ </element> </define>
+ <define name='clustername'> + <interleave> + <element name='name'> + <ref name='genericName'/> + </element> + </interleave> + </define> + + <define name='sourcevstorage'> + <element name='source'> + <interleave> + <ref name='clustername'/> + </interleave> + </element> + </define> + <define name='IscsiQualifiedName'> <data type='string'> <param name="pattern">iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-zA-Z0-9\.\-]+(:.+)?</param> diff --git a/include/libvirt/libvirt-storage.h b/include/libvirt/libvirt-storage.h index 0974f6e..28babf7 100644 --- a/include/libvirt/libvirt-storage.h +++ b/include/libvirt/libvirt-storage.h @@ -232,6 +232,7 @@ typedef enum { VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG = 1 << 15, VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER = 1 << 16, VIR_CONNECT_LIST_STORAGE_POOLS_ZFS = 1 << 17, + VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE = 1 << 18, } virConnectListAllStoragePoolsFlags;
int virConnectListAllStoragePools(virConnectPtr conn, diff --git a/src/conf/storage_conf.c b/src/conf/storage_conf.c index 05a1a49..e3c8ac1 100644 --- a/src/conf/storage_conf.c +++ b/src/conf/storage_conf.c @@ -60,7 +60,7 @@ VIR_ENUM_IMPL(virStoragePool, "dir", "fs", "netfs", "logical", "disk", "iscsi", "scsi", "mpath", "rbd", - "sheepdog", "gluster", "zfs") + "sheepdog", "gluster", "zfs", "vstorage")
VIR_ENUM_IMPL(virStoragePoolFormatFileSystem, VIR_STORAGE_POOL_FS_LAST, @@ -274,6 +274,16 @@ static virStoragePoolTypeInfo poolTypeInfo[] = { .defaultFormat = VIR_STORAGE_FILE_RAW, }, }, + {.poolType = VIR_STORAGE_POOL_VSTORAGE, + .poolOptions = { + .flags = VIR_STORAGE_POOL_SOURCE_NAME, + }, + .volOptions = { + .defaultFormat = VIR_STORAGE_FILE_RAW, + .formatFromString = virStorageVolumeFormatFromString, + .formatToString = virStorageFileFormatTypeToString, + }, + }, };
@@ -2588,6 +2598,10 @@ virStoragePoolSourceFindDuplicate(virConnectPtr conn, /* Only one mpath pool is valid per host */ matchpool = pool; break; + case VIR_STORAGE_POOL_VSTORAGE: + if (STREQ(pool->def->source.name, def->source.name)) + matchpool = pool; + break; case VIR_STORAGE_POOL_RBD: case VIR_STORAGE_POOL_LAST: break; diff --git a/src/conf/storage_conf.h b/src/conf/storage_conf.h index 185ae5e..bf33b5f 100644 --- a/src/conf/storage_conf.h +++ b/src/conf/storage_conf.h @@ -95,6 +95,7 @@ typedef enum { VIR_STORAGE_POOL_SHEEPDOG, /* Sheepdog device */ VIR_STORAGE_POOL_GLUSTER, /* Gluster device */ VIR_STORAGE_POOL_ZFS, /* ZFS */ + VIR_STORAGE_POOL_VSTORAGE, /* Virtuozzo Storage */
VIR_STORAGE_POOL_LAST, } virStoragePoolType; @@ -545,7 +546,8 @@ VIR_ENUM_DECL(virStoragePartedFs) VIR_CONNECT_LIST_STORAGE_POOLS_RBD | \ VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG | \ VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER | \ - VIR_CONNECT_LIST_STORAGE_POOLS_ZFS) + VIR_CONNECT_LIST_STORAGE_POOLS_ZFS | \ + VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE)
# define VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_ALL \ (VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_ACTIVE | \ diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c index 97f6ffe..181d3f5 100644 --- a/src/storage/storage_backend.c +++ b/src/storage/storage_backend.c @@ -135,6 +135,9 @@ static virStorageBackendPtr backends[] = { #if WITH_STORAGE_ZFS &virStorageBackendZFS, #endif +#if WITH_STORAGE_VSTORAGE + &virStorageBackendVstorage, +#endif NULL };
diff --git a/src/storage/storage_backend_fs.c b/src/storage/storage_backend_fs.c index 0a12ecb..b89b3ae 100644 --- a/src/storage/storage_backend_fs.c +++ b/src/storage/storage_backend_fs.c @@ -371,7 +371,13 @@ virStorageBackendFileSystemIsValid(virStoragePoolObjPtr pool) "%s", _("missing source path")); return -1; } - } else { + } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) { + if (!pool->def->source.name) { + virReportError(VIR_ERR_INTERNAL_ERROR, + "%s", _("missing source cluster name")); + return -1; + } + } else { if (pool->def->source.ndevice != 1) { if (pool->def->source.ndevice == 0) virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -411,6 +417,9 @@ virStorageBackendFileSystemGetPoolSource(virStoragePoolObjPtr pool) pool->def->source.dir) < 0) return NULL; } + } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) { + if (virAsprintf(&src, "vstorage://%s", pool->def->source.name) < 0) + return NULL; } else { if (VIR_STRDUP(src, pool->def->source.devices[0].path) < 0) return NULL; @@ -546,7 +555,9 @@ virStorageBackendFileSystemMount(virStoragePoolObjPtr pool) VIR_FREE(src); return ret; } +#endif /* WITH_STORAGE_FS */
+#if WITH_STORAGE_FS /** * @pool storage pool to unmount * @@ -1654,3 +1665,102 @@ virStorageFileBackend virStorageFileBackendDir = { };
#endif /* WITH_STORAGE_FS */ + +#if WITH_STORAGE_VSTORAGE +static int +virStorageBackendVzStart(virConnectPtr conn ATTRIBUTE_UNUSED, + virStoragePoolObjPtr pool) +{ + int ret = -1; + virCommandPtr cmd = NULL; + + cmd = virCommandNewArgList(VSTORAGE_MOUNT, "-c", pool->def->source.name, + pool->def->target.path, NULL); + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + ret = 0; + + cleanup: + virCommandFree(cmd); + return ret; +} + +static char* +virStorageBackendVzfindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED, + const char *srcSpec ATTRIBUTE_UNUSED, + unsigned int flags) +{ + + virCommandPtr cmd = NULL; + char *buf = NULL; + char *ret = NULL; + char **clusters = NULL; + size_t clusters_num = 0; + size_t i = 0; + virStoragePoolSourceList vzcluster_list = { + .type = VIR_STORAGE_POOL_VSTORAGE, + .nsources = 0, + .sources = NULL + }; + + virCheckFlags(0, NULL); + + cmd = virCommandNewArgList(VSTORAGE, "discover", NULL); + virCommandSetOutputBuffer(cmd, &buf); + + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + + if (!(clusters = virStringSplitCount(buf, "\n", 0, &clusters_num))) + goto cleanup; + + if (clusters_num > 0) { + vzcluster_list.nsources = clusters_num - 1; + + if (VIR_ALLOC_N(vzcluster_list.sources, vzcluster_list.nsources) < 0) + goto cleanup; + + for (; i < vzcluster_list.nsources; i++) { + if (VIR_ALLOC(vzcluster_list.sources[i].devices) < 0) + goto cleanup; + if (VIR_STRDUP(vzcluster_list.sources[i].name, clusters[i]) < 0) + goto cleanup; + } + } + + if (!(ret = virStoragePoolSourceListFormat(&vzcluster_list))) + goto cleanup; + + cleanup: + for (i = 0; i < vzcluster_list.nsources; i++) { + virStoragePoolSourceClear(&vzcluster_list.sources[i]); + VIR_FREE(clusters[i]); + } + VIR_FREE(vzcluster_list.sources); + VIR_FREE(buf); + virCommandFree(cmd); + return ret; + +} +virStorageBackend virStorageBackendVstorage = { + .type = VIR_STORAGE_POOL_VSTORAGE, + + .startPool = virStorageBackendVzStart, + .checkPool = virStorageBackendFileSystemCheck, 
+ .stopPool = virStorageBackendFileSystemStop, + .findPoolSources = virStorageBackendVzfindPoolSources, + .buildPool = virStorageBackendFileSystemBuild, + .deletePool = virStorageBackendFileSystemDelete, + .refreshPool = virStorageBackendFileSystemRefresh, + .buildVol = virStorageBackendFileSystemVolBuild, + .buildVolFrom = virStorageBackendFileSystemVolBuildFrom, + .createVol = virStorageBackendFileSystemVolCreate, + .refreshVol = virStorageBackendFileSystemVolRefresh, + .deleteVol = virStorageBackendFileSystemVolDelete, + .resizeVol = virStorageBackendFileSystemVolResize, + .uploadVol = virStorageBackendVolUploadLocal, + .downloadVol = virStorageBackendVolDownloadLocal, + .wipeVol = virStorageBackendVolWipeLocal, + +}; +#endif /* WITH_STORAGE_VSTORAGE */ diff --git a/src/storage/storage_backend_fs.h b/src/storage/storage_backend_fs.h index 347ea9b..23cff66 100644 --- a/src/storage/storage_backend_fs.h +++ b/src/storage/storage_backend_fs.h @@ -30,6 +30,9 @@ extern virStorageBackend virStorageBackendFileSystem; extern virStorageBackend virStorageBackendNetFileSystem; # endif +#if WITH_STORAGE_VSTORAGE +extern virStorageBackend virStorageBackendVstorage; +#endif
typedef enum { FILESYSTEM_PROBE_FOUND, diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c index cb9d578..3b6c7b4 100644 --- a/src/storage/storage_driver.c +++ b/src/storage/storage_driver.c @@ -1618,6 +1618,7 @@ storageVolLookupByPath(virConnectPtr conn, case VIR_STORAGE_POOL_ISCSI: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_MPATH: + case VIR_STORAGE_POOL_VSTORAGE: stable_path = virStorageBackendStablePath(pool, cleanpath, false); @@ -3485,6 +3486,7 @@ virStorageTranslateDiskSourcePool(virConnectPtr conn, case VIR_STORAGE_POOL_DISK: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_ZFS: + case VIR_STORAGE_POOL_VSTORAGE: if (!(def->src->path = virStorageVolGetPath(vol))) goto cleanup;
diff --git a/tools/virsh-pool.c b/tools/virsh-pool.c index 6045331..6ea2424 100644 --- a/tools/virsh-pool.c +++ b/tools/virsh-pool.c @@ -1166,6 +1166,9 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) case VIR_STORAGE_POOL_ZFS: flags |= VIR_CONNECT_LIST_STORAGE_POOLS_ZFS; break; + case VIR_STORAGE_POOL_VSTORAGE: + flags |= VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE; + break; case VIR_STORAGE_POOL_LAST: break; } diff --git a/tools/virsh.c b/tools/virsh.c index 5dc482d..61c8e2f 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -682,6 +682,9 @@ virshShowVersion(vshControl *ctl ATTRIBUTE_UNUSED) #ifdef WITH_STORAGE_ZFS vshPrint(ctl, " ZFS"); #endif +#ifdef WITH_STORAGE_VSTORAGE + vshPrint(ctl, " Vstorage"); +#endif vshPrint(ctl, "\n");
vshPrint(ctl, "%s", _(" Miscellaneous:"));

On 20/09/16 23:30, John Ferlan wrote:
On 07/14/2016 01:13 PM, Olga Krishtal wrote:
This patch supports pool and volume management within Virtuozzo Storage. Virtuozzo Storage is a highly available, distributed, software-defined storage system with built-in replication and disaster recovery. From the client's point of view it looks like network-attached storage (NFS or GlusterFS). More information about vzstorage can be found here: https://openvz.org/Virtuozzo_Storage It supports the same volume formats as the directory, nfs, and similar backends; the default format is raw.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com> --- configure.ac | 31 +++++++++++ docs/schemas/storagepool.rng | 29 ++++++++++ include/libvirt/libvirt-storage.h | 1 + src/conf/storage_conf.c | 16 +++++- src/conf/storage_conf.h | 4 +- src/storage/storage_backend.c | 3 + src/storage/storage_backend_fs.c | 112 +++++++++++++++++++++++++++++++++++++- src/storage/storage_backend_fs.h | 3 + src/storage/storage_driver.c | 2 + tools/virsh-pool.c | 2 + tools/virsh.c | 3 + 11 files changed, 203 insertions(+), 3 deletions(-)
In an effort to go through older on-list bugs, I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?
Tks -
No, we will not drop it. We want to have Virtuozzo Storage as backend for storage pool.
John
FWIW: It compiled, but failed syntax-check.
I have checked it once again, and in my case syntax-check passes. Can you tell me what is wrong in your case? (The patch is OK against upstream.)
Also, the name "vstorage" seems a bit too generic. When I saw it, I thought "virtual storage", not Virtuozzo Storage.
All the tools we use to manage Virtuozzo Storage start with vstorage*. However, I can use vzstorage instead.
-- Best regards, Olga

On 12/02/2016 12:09 PM, Olga Krishtal wrote:
On 20/09/16 23:30, John Ferlan wrote:
On 07/14/2016 01:13 PM, Olga Krishtal wrote:
This patch supports pool and volume management within Virtuozzo Storage. Virtuozzo Storage is a highly available, distributed, software-defined storage system with built-in replication and disaster recovery. From the client's point of view it looks like network-attached storage (NFS or GlusterFS). More information about vzstorage can be found here: https://openvz.org/Virtuozzo_Storage It supports the same volume formats as the directory, nfs, and similar backends; the default format is raw.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com> --- configure.ac | 31 +++++++++++ docs/schemas/storagepool.rng | 29 ++++++++++ include/libvirt/libvirt-storage.h | 1 + src/conf/storage_conf.c | 16 +++++- src/conf/storage_conf.h | 4 +- src/storage/storage_backend.c | 3 + src/storage/storage_backend_fs.c | 112 +++++++++++++++++++++++++++++++++++++- src/storage/storage_backend_fs.h | 3 + src/storage/storage_driver.c | 2 + tools/virsh-pool.c | 2 + tools/virsh.c | 3 + 11 files changed, 203 insertions(+), 3 deletions(-)
In an effort to go through older on-list bugs, I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?
Tks -
No, we will not drop it. We want to have Virtuozzo Storage as backend for storage pool.
To be more specific, this code is getting wired in with the 'fs' backend rather than creating its own, and I don't like that. Similar to what I recently reviewed for the Veritas/vxfs changes, there needs to be a separate storage_backend_vstorage.c to handle the vstorage pool. Don't lump it in with the "fs" pool/backend; it's no different from what is required for rbd, gluster, mpath, logical, disk, scsi, etc. Don't forget things need to be as separable as possible: add the pool code, add the storage conf code, add the virsh code, etc. One long patch is just tough to review.
John
FWIW: It compiled, but failed syntax-check.
I have checked it once again, and in my case syntax-check passes. Can you tell me what is wrong in your case? (The patch is OK against upstream.)
Digging around I found my old review branch (it has been far too long to remember); it was 571 commits old, but still able to rebase without merges (that's good)... My make syntax-check returns:

preprocessor_indentation
cppi: src/storage/storage_backend_fs.h: line 33: not properly indented
cppi: src/storage/storage_backend_fs.h: line 35: not properly indented
maint.mk: incorrect preprocessor indentation
cfg.mk:684: recipe for target 'sc_preprocessor_indentation' failed
make: *** [sc_preprocessor_indentation] Error 1

This is something you would see on your system because:

#if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
#endif

needs to change to:

# if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
# endif

But it doesn't belong there anyway.
Also, the name "vstorage" seems a bit too generic. When I saw it, I thought "virtual storage", not Virtuozzo Storage.
All the tools we use to manage Virtuozzo Storage start with vstorage*. However, I can use vzstorage instead.
I suppose vstorage would be OK - I'm letting you know what I thought when I saw it. John
WITH_STORAGE_FS */ +#if WITH_STORAGE_FS /** * @pool storage pool to unmount * @@ -1654,3 +1665,102 @@ virStorageFileBackend virStorageFileBackendDir = { }; #endif /* WITH_STORAGE_FS */ + +#if WITH_STORAGE_VSTORAGE +static int +virStorageBackendVzStart(virConnectPtr conn ATTRIBUTE_UNUSED, + virStoragePoolObjPtr pool) +{ + int ret = -1; + virCommandPtr cmd = NULL; + + cmd = virCommandNewArgList(VSTORAGE_MOUNT, "-c", pool->def->source.name, + pool->def->target.path, NULL); + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + ret = 0; + + cleanup: + virCommandFree(cmd); + return ret; +} + +static char* +virStorageBackendVzfindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED, + const char *srcSpec ATTRIBUTE_UNUSED, + unsigned int flags) +{ + + virCommandPtr cmd = NULL; + char *buf = NULL; + char *ret = NULL; + char **clusters = NULL; + size_t clusters_num = 0; + size_t i = 0; + virStoragePoolSourceList vzcluster_list = { + .type = VIR_STORAGE_POOL_VSTORAGE, + .nsources = 0, + .sources = NULL + }; + + virCheckFlags(0, NULL); + + cmd = virCommandNewArgList(VSTORAGE, "discover", NULL); + virCommandSetOutputBuffer(cmd, &buf); + + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + + if (!(clusters = virStringSplitCount(buf, "\n", 0, &clusters_num))) + goto cleanup; + + if (clusters_num > 0) { + vzcluster_list.nsources = clusters_num - 1; + + if (VIR_ALLOC_N(vzcluster_list.sources, vzcluster_list.nsources) < 0) + goto cleanup; + + for (; i < vzcluster_list.nsources; i++) { + if (VIR_ALLOC(vzcluster_list.sources[i].devices) < 0) + goto cleanup; + if (VIR_STRDUP(vzcluster_list.sources[i].name, clusters[i]) < 0) + goto cleanup; + } + } + + if (!(ret = virStoragePoolSourceListFormat(&vzcluster_list))) + goto cleanup; + + cleanup: + for (i = 0; i < vzcluster_list.nsources; i++) { + virStoragePoolSourceClear(&vzcluster_list.sources[i]); + VIR_FREE(clusters[i]); + } + VIR_FREE(vzcluster_list.sources); + VIR_FREE(buf); + virCommandFree(cmd); + return ret; + +} +virStorageBackend 
virStorageBackendVstorage = { + .type = VIR_STORAGE_POOL_VSTORAGE, + + .startPool = virStorageBackendVzStart, + .checkPool = virStorageBackendFileSystemCheck, + .stopPool = virStorageBackendFileSystemStop, + .findPoolSources = virStorageBackendVzfindPoolSources, + .buildPool = virStorageBackendFileSystemBuild, + .deletePool = virStorageBackendFileSystemDelete, + .refreshPool = virStorageBackendFileSystemRefresh, + .buildVol = virStorageBackendFileSystemVolBuild, + .buildVolFrom = virStorageBackendFileSystemVolBuildFrom, + .createVol = virStorageBackendFileSystemVolCreate, + .refreshVol = virStorageBackendFileSystemVolRefresh, + .deleteVol = virStorageBackendFileSystemVolDelete, + .resizeVol = virStorageBackendFileSystemVolResize, + .uploadVol = virStorageBackendVolUploadLocal, + .downloadVol = virStorageBackendVolDownloadLocal, + .wipeVol = virStorageBackendVolWipeLocal, + +}; +#endif /* WITH_STORAGE_VSTORAGE */ diff --git a/src/storage/storage_backend_fs.h b/src/storage/storage_backend_fs.h index 347ea9b..23cff66 100644 --- a/src/storage/storage_backend_fs.h +++ b/src/storage/storage_backend_fs.h @@ -30,6 +30,9 @@ extern virStorageBackend virStorageBackendFileSystem; extern virStorageBackend virStorageBackendNetFileSystem; # endif +#if WITH_STORAGE_VSTORAGE +extern virStorageBackend virStorageBackendVstorage; +#endif typedef enum { FILESYSTEM_PROBE_FOUND, diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c index cb9d578..3b6c7b4 100644 --- a/src/storage/storage_driver.c +++ b/src/storage/storage_driver.c @@ -1618,6 +1618,7 @@ storageVolLookupByPath(virConnectPtr conn, case VIR_STORAGE_POOL_ISCSI: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_MPATH: + case VIR_STORAGE_POOL_VSTORAGE: stable_path = virStorageBackendStablePath(pool, cleanpath, false); @@ -3485,6 +3486,7 @@ virStorageTranslateDiskSourcePool(virConnectPtr conn, case VIR_STORAGE_POOL_DISK: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_ZFS: + case VIR_STORAGE_POOL_VSTORAGE: if 
(!(def->src->path = virStorageVolGetPath(vol))) goto cleanup; diff --git a/tools/virsh-pool.c b/tools/virsh-pool.c index 6045331..6ea2424 100644 --- a/tools/virsh-pool.c +++ b/tools/virsh-pool.c @@ -1166,6 +1166,8 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) case VIR_STORAGE_POOL_ZFS: flags |= VIR_CONNECT_LIST_STORAGE_POOLS_ZFS; break; + case VIR_STORAGE_POOL_VSTORAGE: + flags |= VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE; case VIR_STORAGE_POOL_LAST: break; } diff --git a/tools/virsh.c b/tools/virsh.c index 5dc482d..61c8e2f 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -682,6 +682,9 @@ virshShowVersion(vshControl *ctl ATTRIBUTE_UNUSED) #ifdef WITH_STORAGE_ZFS vshPrint(ctl, " ZFS"); #endif +#ifdef WITH_STORAGE_VSTORAGE + vshPrint(ctl, " Vstorage"); +#endif vshPrint(ctl, "\n"); vshPrint(ctl, "%s", _(" Miscellaneous:"));

On 06/12/16 02:59, John Ferlan wrote:
On 12/02/2016 12:09 PM, Olga Krishtal wrote:
On 20/09/16 23:30, John Ferlan wrote:
On 07/14/2016 01:13 PM, Olga Krishtal wrote:
This patch supports pool and volume management within Virtuozzo Storage. Virtuozzo Storage is a highly-available distributed software defined storage with built-in replication and disaster recovery. From client's point of view it looks like network attached storage (NFS or GlusterFS). More information about vzstorage can be found here: https://openvz.org/Virtuozzo_Storage It supports the same volume formats as directory, nfs, etc. Default format is raw.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com> --- configure.ac | 31 +++++++++++ docs/schemas/storagepool.rng | 29 ++++++++++ include/libvirt/libvirt-storage.h | 1 + src/conf/storage_conf.c | 16 +++++- src/conf/storage_conf.h | 4 +- src/storage/storage_backend.c | 3 + src/storage/storage_backend_fs.c | 112 +++++++++++++++++++++++++++++++++++++- src/storage/storage_backend_fs.h | 3 + src/storage/storage_driver.c | 2 + tools/virsh-pool.c | 2 + tools/virsh.c | 3 + 11 files changed, 203 insertions(+), 3 deletions(-)
In an effort to go through older on-list bugs, I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?
Tks -

No, we will not drop it. We want to have Virtuozzo Storage as a backend for the storage pool.
To be more specific, this code is getting wired in with the 'fs' backend rather than creating its own... Don't like that.
Similar to what I recently reviewed for the Veritas/vxfs changes - there needs to be a separate storage_backend_vstorage.c in order to handle the vstorage pool. Don't lump it in with the "fs" pool/backend. No different than what is required for rbd, gluster, mpath, logical, disk, scsi, etc.
I thought of that first; however, after implementing part of the functionality I saw that the vstorage backend only needs two callbacks of its own: virStorageBackendVzStart and virStorageBackendVzfindPoolSources. All volume types and operations are the same as in the pool/fs backend.

Basically, to use the storage we first need to find it: vstorage discover finds all clusters or domains using DNS, zeroconf, etc. Then we need to mount the cluster (I assume that authorization is done before pool creation): vstorage-mount -c cluster-name mount_point. After that we have access to the cluster as a filesystem and can upload/download image files, etc.

To avoid copy/pasting all the vol-ops and nearly half of the pool-ops, I chose the pool/fs backend. Do you still think a separate backend is necessary?
Don't forget things need to be as separable as possible. Add the pool code, add the storage conf code, add the virsh code, etc. One long patch is just tough to review.
I will split the patch up.
John
FWIW: It compiled, but failed syntax-check.

I have checked it once again, and in my case syntax-check passes. Can you tell me what is wrong in your case? (The patch is OK for upstream.)
Digging around I found my old review branch (been far too long to remember) - it was 571 commits old, but still able to rebase without merges (that's good)... My make syntax-check returns:
preprocessor_indentation
cppi: src/storage/storage_backend_fs.h: line 33: not properly indented
cppi: src/storage/storage_backend_fs.h: line 35: not properly indented
maint.mk: incorrect preprocessor indentation
cfg.mk:684: recipe for target 'sc_preprocessor_indentation' failed
make: *** [sc_preprocessor_indentation] Error 1
This is something you would see on your system because:
#if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
#endif
needs to change to
# if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
# endif
But it doesn't belong there anyway
Thanks a lot.
Also the name "vstorage" would seemingly be a bit too generic. I saw and thought virtual storage not Virtuozzo Storage
All the tools we use to manage Virtuozzo Storage start with vstorage*. However, I can use vzstorage instead.
I suppose vstorage would be OK - I'm letting you know what I thought when I saw it.
John
-- Best regards, Olga

I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get WITH_VSTORAGE set to 1, a separate file is in order. Not sure they're comparable, but zfs has its own. Keeping vstorage separate reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa). I think you should consider the *_fs.c code to be the "default" of sorts - that is, the default file/dir structure with netfs added in. vstorage may just be some file system, but it's not something (yet) on "every" distribution.

Also, I forgot to mention yesterday - you need to update docs/formatstorage.html.in at the very least, and also the storage driver page docs/storage.html.in. In addition, there are storagepool tests (xml2xml) that would need to be updated to validate the new storage pool type. The tests would "show" how the pool XML would appear and validate whatever parsing has been done.

John
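An input file for the xml2xml tests mentioned here would presumably look much like the pool XML from the cover letter, trimmed to the defining elements. (Note that in this version the RNG schema accepts type='vz' while the driver registers the pool-type string "vstorage"; the sketch below assumes the latter is what the schema should say.)

```xml
<pool type='vstorage'>
  <name>VZ</name>
  <uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
  <source>
    <name>vz7-vzstorage</name>
  </source>
  <target>
    <path>/vzstorage_pool</path>
  </target>
</pool>
```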
(VIR_CONNECT_LIST_STORAGE_POOLS_FILTERS_ACTIVE | \ diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c index 97f6ffe..181d3f5 100644 --- a/src/storage/storage_backend.c +++ b/src/storage/storage_backend.c @@ -135,6 +135,9 @@ static virStorageBackendPtr backends[] = { #if WITH_STORAGE_ZFS &virStorageBackendZFS, #endif +#if WITH_STORAGE_VSTORAGE + &virStorageBackendVstorage, +#endif NULL }; diff --git a/src/storage/storage_backend_fs.c b/src/storage/storage_backend_fs.c index 0a12ecb..b89b3ae 100644 --- a/src/storage/storage_backend_fs.c +++ b/src/storage/storage_backend_fs.c @@ -371,7 +371,13 @@ virStorageBackendFileSystemIsValid(virStoragePoolObjPtr pool) "%s", _("missing source path")); return -1; } - } else { + } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) { + if (!pool->def->source.name) { + virReportError(VIR_ERR_INTERNAL_ERROR, + "%s", _("missing source cluster name")); + return -1; + } + } else{ if (pool->def->source.ndevice != 1) { if (pool->def->source.ndevice == 0) virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -411,6 +417,9 @@ virStorageBackendFileSystemGetPoolSource(virStoragePoolObjPtr pool) pool->def->source.dir) < 0) return NULL; } + } else if (pool->def->type == VIR_STORAGE_POOL_VSTORAGE) { + if (virAsprintf(&src, "vstorage://%s", pool->def->source.name) < 0) + return NULL; } else { if (VIR_STRDUP(src, pool->def->source.devices[0].path) < 0) return NULL; @@ -546,7 +555,9 @@ virStorageBackendFileSystemMount(virStoragePoolObjPtr pool) VIR_FREE(src); return ret; } +#endif /* WITH_STORAGE_FS */ +#if WITH_STORAGE_FS /** * @pool storage pool to unmount * @@ -1654,3 +1665,102 @@ virStorageFileBackend virStorageFileBackendDir = { }; #endif /* WITH_STORAGE_FS */ + +#if WITH_STORAGE_VSTORAGE +static int +virStorageBackendVzStart(virConnectPtr conn ATTRIBUTE_UNUSED, + virStoragePoolObjPtr pool) +{ + int ret = -1; + virCommandPtr cmd = NULL; + + cmd = virCommandNewArgList(VSTORAGE_MOUNT, "-c", pool->def->source.name, + 
pool->def->target.path, NULL); + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + ret = 0; + + cleanup: + virCommandFree(cmd); + return ret; +} + +static char* +virStorageBackendVzfindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED, + const char *srcSpec ATTRIBUTE_UNUSED, + unsigned int flags) +{ + + virCommandPtr cmd = NULL; + char *buf = NULL; + char *ret = NULL; + char **clusters = NULL; + size_t clusters_num = 0; + size_t i = 0; + virStoragePoolSourceList vzcluster_list = { + .type = VIR_STORAGE_POOL_VSTORAGE, + .nsources = 0, + .sources = NULL + }; + + virCheckFlags(0, NULL); + + cmd = virCommandNewArgList(VSTORAGE, "discover", NULL); + virCommandSetOutputBuffer(cmd, &buf); + + if (virCommandRun(cmd, NULL) < 0) + goto cleanup; + + if (!(clusters = virStringSplitCount(buf, "\n", 0, &clusters_num))) + goto cleanup; + + if (clusters_num > 0) { + vzcluster_list.nsources = clusters_num - 1; + + if (VIR_ALLOC_N(vzcluster_list.sources, vzcluster_list.nsources) < 0) + goto cleanup; + + for (; i < vzcluster_list.nsources; i++) { + if (VIR_ALLOC(vzcluster_list.sources[i].devices) < 0) + goto cleanup; + if (VIR_STRDUP(vzcluster_list.sources[i].name, clusters[i]) < 0) + goto cleanup; + } + } + + if (!(ret = virStoragePoolSourceListFormat(&vzcluster_list))) + goto cleanup; + + cleanup: + for (i = 0; i < vzcluster_list.nsources; i++) { + virStoragePoolSourceClear(&vzcluster_list.sources[i]); + VIR_FREE(clusters[i]); + } + VIR_FREE(vzcluster_list.sources); + VIR_FREE(buf); + virCommandFree(cmd); + return ret; + +} +virStorageBackend virStorageBackendVstorage = { + .type = VIR_STORAGE_POOL_VSTORAGE, + + .startPool = virStorageBackendVzStart, + .checkPool = virStorageBackendFileSystemCheck, + .stopPool = virStorageBackendFileSystemStop, + .findPoolSources = virStorageBackendVzfindPoolSources, + .buildPool = virStorageBackendFileSystemBuild, + .deletePool = virStorageBackendFileSystemDelete, + .refreshPool = virStorageBackendFileSystemRefresh, + .buildVol = 
virStorageBackendFileSystemVolBuild, + .buildVolFrom = virStorageBackendFileSystemVolBuildFrom, + .createVol = virStorageBackendFileSystemVolCreate, + .refreshVol = virStorageBackendFileSystemVolRefresh, + .deleteVol = virStorageBackendFileSystemVolDelete, + .resizeVol = virStorageBackendFileSystemVolResize, + .uploadVol = virStorageBackendVolUploadLocal, + .downloadVol = virStorageBackendVolDownloadLocal, + .wipeVol = virStorageBackendVolWipeLocal, + +}; +#endif /* WITH_STORAGE_VSTORAGE */ diff --git a/src/storage/storage_backend_fs.h b/src/storage/storage_backend_fs.h index 347ea9b..23cff66 100644 --- a/src/storage/storage_backend_fs.h +++ b/src/storage/storage_backend_fs.h @@ -30,6 +30,9 @@ extern virStorageBackend virStorageBackendFileSystem; extern virStorageBackend virStorageBackendNetFileSystem; # endif +#if WITH_STORAGE_VSTORAGE +extern virStorageBackend virStorageBackendVstorage; +#endif typedef enum { FILESYSTEM_PROBE_FOUND, diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c index cb9d578..3b6c7b4 100644 --- a/src/storage/storage_driver.c +++ b/src/storage/storage_driver.c @@ -1618,6 +1618,7 @@ storageVolLookupByPath(virConnectPtr conn, case VIR_STORAGE_POOL_ISCSI: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_MPATH: + case VIR_STORAGE_POOL_VSTORAGE: stable_path = virStorageBackendStablePath(pool,
cleanpath, false); @@ -3485,6 +3486,7 @@ virStorageTranslateDiskSourcePool(virConnectPtr conn, case VIR_STORAGE_POOL_DISK: case VIR_STORAGE_POOL_SCSI: case VIR_STORAGE_POOL_ZFS: + case VIR_STORAGE_POOL_VSTORAGE: if (!(def->src->path = virStorageVolGetPath(vol))) goto cleanup; diff --git a/tools/virsh-pool.c b/tools/virsh-pool.c index 6045331..6ea2424 100644 --- a/tools/virsh-pool.c +++ b/tools/virsh-pool.c @@ -1166,6 +1166,8 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) case VIR_STORAGE_POOL_ZFS: flags |= VIR_CONNECT_LIST_STORAGE_POOLS_ZFS; break; + case VIR_STORAGE_POOL_VSTORAGE: + flags |= VIR_CONNECT_LIST_STORAGE_POOLS_VSTORAGE; case VIR_STORAGE_POOL_LAST: break; } diff --git a/tools/virsh.c b/tools/virsh.c index 5dc482d..61c8e2f 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -682,6 +682,9 @@ virshShowVersion(vshControl *ctl ATTRIBUTE_UNUSED) #ifdef WITH_STORAGE_ZFS vshPrint(ctl, " ZFS"); #endif +#ifdef WITH_STORAGE_VSTORAGE + vshPrint(ctl, " Vstorage"); +#endif vshPrint(ctl, "\n"); vshPrint(ctl, "%s", _(" Miscellaneous:"));

On 06/12/16 22:26, John Ferlan wrote:
On 12/06/2016 06:10 AM, Olga Krishtal wrote:
On 12/02/2016 12:09 PM, Olga Krishtal wrote:
On 20/09/16 23:30, John Ferlan wrote:
On 07/14/2016 01:13 PM, Olga Krishtal wrote:
This patch supports pool and volume management within Virtuozzo Storage. Virtuozzo Storage is a highly-available distributed software defined storage with built-in replication and disaster recovery. From client's point of view it looks like network attached storage (NFS or GlusterFS). More information about vzstorage can be found here: https://openvz.org/Virtuozzo_Storage It supports the same volume formats as directory, nfs, etc. Default format is raw.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com> --- configure.ac | 31 +++++++++++ docs/schemas/storagepool.rng | 29 ++++++++++ include/libvirt/libvirt-storage.h | 1 + src/conf/storage_conf.c | 16 +++++- src/conf/storage_conf.h | 4 +- src/storage/storage_backend.c | 3 + src/storage/storage_backend_fs.c | 112 +++++++++++++++++++++++++++++++++++++- src/storage/storage_backend_fs.h | 3 + src/storage/storage_driver.c | 2 + tools/virsh-pool.c | 2 + tools/virsh.c | 3 + 11 files changed, 203 insertions(+), 3 deletions(-)
In an effort to go through older on-list bugs I'm wondering about the relative importance of this compared to the fspool series. Shall I assume this is dropped in favor of that?
Tks - No, we will not drop it. We want to have Virtuozzo Storage as a backend for the storage pool.
To be more specific this code is getting wired in with the 'fs' backend rather than creating its own... Don't like that.
Similar to what I recently reviewed for the Veritas/vxfs changes - there needs to be a separate storage_backend_vstorage.c in order to handle the vstorage pool. Don't lump it in with the "fs" pool/backend. No different than what is required for rbd, gluster, mpath, logical, disk, scsi, etc. I thought of that first; however, after implementing part of the functionality I saw that the vstorage backend needs only 2 callbacks of its own: virStorageBackendVzStart and virStorageBackendVzfindPoolSources. All volume types and operations are the same as in the pool/fs backend. Basically, what we need to use the storage is to find it: "vstorage discover" finds all clusters or domains using DNS, zeroconf, etc. Then we need to mount this cluster (I assume that the authorization will be done before pool creation): vstorage-mount -c cluster-name mount_point. Then we have access to this cluster as a filesystem; we can upload/download image files, etc. To avoid copy/paste of all vol-ops and nearly half of the pool-ops I've chosen the pool/fs backend. Do you still think that it is necessary to have a separate backend?
On 06/12/16 02:59, John Ferlan wrote:
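The manual flow described above (discover a cluster, mount it, then treat the mount point as ordinary storage) might look roughly like the sketch below; the cluster name and target path are the illustrative ones from the example pool XML, not real defaults:

```shell
# Sketch of the manual vstorage workflow the backend automates.
# "vz7-vzstorage" and "/vzstorage_pool" are illustrative names.
CLUSTER=vz7-vzstorage
TARGET=/vzstorage_pool

# 1. discover reachable clusters (one name per line)
discover_cmd="vstorage discover"

# 2. mount the chosen cluster at the pool target path -
#    the same command line virStorageBackendVzStart builds
mount_cmd="vstorage-mount -c $CLUSTER $TARGET"

# Only print the command lines here; actually running them needs
# the vstorage tools installed and a reachable cluster.
echo "$discover_cmd"
echo "$mount_cmd"
```

After step 2 the pool target behaves like any locally mounted filesystem, which is why the patch reuses the fs backend's volume operations.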
I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get the WITH_VSTORAGE to be set to 1, then a separate file is in order.
Not sure they're comparable, but zfs has its own. Having separated vstorage reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa).
Ok. I will try.
I think you should consider the *_fs.c code to be the "default" of sorts. That is default file/dir structure with netfs added in. The vstorage may just be some file system, but it's not something (yet) on "every" distribution.
Actually, I did not understand what you mean by "be the "default" of sorts". As I understand it, what I need to do is create backend_vstorage.c with all the create/delete/* functionality.
Also I forgot to mention yesterday - you need to update the docs/formatstorage.html.in at the very least and also the storage driver page docs/storage.html.in. In addition there are storagepool tests (xml2xml) that would need to be updated to validate the new storage pool type. The tests would "show" how the pool XML would appear and validate whatever parsing has been done.
I know. Will fix.
John
Don't forget things need to be as separable as possible. Add the pool code, add the storage conf code, add the virsh code, etc. One long patch is just tough to review. I will split it up.
John
FWIW: It compiled, but failed syntax-check. I have checked it once again, and in my case syntax-check passes. Can you tell me what is wrong in your case? (The patch is OK for upstream.)
Digging around I found my old review branch (been far too long to remember) - it was 571 commits old, but still able to rebase without merges (that's good)... My make syntax-check returns:
preprocessor_indentation
cppi: src/storage/storage_backend_fs.h: line 33: not properly indented
cppi: src/storage/storage_backend_fs.h: line 35: not properly indented
maint.mk: incorrect preprocessor indentation
cfg.mk:684: recipe for target 'sc_preprocessor_indentation' failed
make: *** [sc_preprocessor_indentation] Error 1
This is something you would see on your system because:
#if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
#endif
needs to change to
# if WITH_STORAGE_VSTORAGE
extern virStorageBackend virStorageBackendVstorage;
# endif
But it doesn't belong there anyway. Thanks a lot.
Also the name "vstorage" would seemingly be a bit too generic. I saw it and thought "virtual storage", not Virtuozzo Storage.
All tools we use to manage Virtuozzo Storage start with vstorage*. However, I can use vzstorage instead.
I suppose vstorage would be OK - I'm letting you know what I thought when I saw it.
John
-- Best regards, Olga

[...]
I did not understand actually, what you mean "be the "default" of sorts." As I have understood - what I need to do is to create backend_vstorage.c with all create/delete/* functionality.
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pools are defaults of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files. "ls" and "dir" are inherent to the OS, while in this case the vstorage commands are installed separately. When you create a vstorage "file", is that done via touch, by editing some path, or by some other "common" OS command? Or is there a vstorage command that needs to be used? If the former, then using the common storage_backend APIs should be possible. John
[...]

08-Dec-16 02:22, John Ferlan wrote:
[...]
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pool are default of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files.
"ls" and "dir" are inherent to the OS, while in this case vstorage commands are installed separately.
Once you have mounted your vstorage cluster at a local mount point, you can also "ls" it. Thus, I can't see much difference from nfs here.
When you create a vstorage "file" is that done via touch? or edit some path or some other "common" OS command? Or is there a vstorage command that needs to be used. If the former, then using the common storage_backend API's should be possible.
vstorage is just another "remote filesystem" type of distributed software defined storage. In terms of starting to use it, it doesn't differ from nfs - you mount it and then you can use any POSIX calls to control the files and directories residing on it. Maxim
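Maxim's point can be sketched concretely; a temporary directory stands in for the mounted cluster so the example runs without the vstorage tools installed:

```shell
# Once the cluster is mounted, volume management is plain POSIX
# file handling - nothing vstorage-specific. A temp directory
# stands in for the mounted cluster here.
POOL=$(mktemp -d)

# create a 1 MiB raw volume, exactly as on a local or NFS pool
truncate -s 1M "$POOL/vol.raw"

# ordinary tools see an ordinary file
size=$(wc -c < "$POOL/vol.raw" | tr -d ' ')
echo "volume size: $size bytes"

rm -r "$POOL"
```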
John
[...]

On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan wrote:
[...]
Once you mounted your vstorage cluster to a local filesystem you can also "ls" it. Thus, I can't see much difference from nfs here.
So if it's more like NFS, then how does one ensure that the local userid X is the same as the remote userid X? NFS has a root-squashing concept that results in numerous, shall we say, "interesting" issues. Check out the virFileOpen*, virDirCreate, and virFileRemove... Also what about virFileIsSharedFSType? And the security_selinux.c code for NFS? If you use cscope, just search on NFS.

In virStorageBackendVzStart, I see:

VSTORAGE_MOUNT -c $pool.source.name $pool.target.path

where VSTORAGE_MOUNT is a build (configure.ac) definition that is the "Location or name of vstorage-mount program", which would only be set if the proper package was installed.

In virStorageBackendVzfindPoolSources, I see:

VSTORAGE discover

which I assume generates some list of remote "services" (for lack of a better term) which can be used as/for pool.source.name in order to be mounted by the VSTORAGE_MOUNT program.

Compare that to NFS, which uses mount, which is included in essentially every distro I can think of. That's a big difference. Also, let's face it, NFS has been the essential de facto go-to tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
When you create a vstorage "file" is that done via touch? or edit some path or some other "common" OS command? Or is there a vstorage command that needs to be used. If the former, then using the common storage_backend API's should be possible.
vstorage is just another "remote filesystem" type of distributed software defined storage. In terms of starting to use it, it doesn't differ from nfs - you should mount it and then you can use any POSIX calls to control files and directories resided on it.
Here's a sample nfs pool xml I have - I changed the source/target path and didn't define a host.

<pool type='netfs'>
  <name>nfs</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='localhost'/>
    <dir path='/path/to/source'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/path/to/target</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

That is vastly different than what was in the cover:

<pool type='vstorage'>
  <name>VZ</name>
  <uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
  <capacity unit='bytes'>107374182400</capacity>
  <allocation unit='bytes'>1441144832</allocation>
  <available unit='bytes'>105933037568</available>
  <source>
    <name>vz7-vzstorage</name>
  </source>
  <target>
    <path>/vzstorage_pool</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

What causes "vz7-vzstorage" to be defined? Something from the 'VSTORAGE' command. I would think that is essentially similar to how glusterfs, rbd, or sheepdog uses a source <name> field. Note that each of those includes a <host name='$host' [port='#']/> definition, although this vstorage XML doesn't.

Thus it seems vzstorage is really not a "local" filesystem, correct? If so, then should one really be allowing "local" things like chown, chmod, etc. to be executed? What kind of "configuration and/or validation of trust" takes place via vstorage-provided tools in order to allow a user on the local host to access the storage on the remote host?

John
Maxim
John
Also I forgot to mention yesterday - you need to update the docs/formatstorage.html.in at the very least and also the storage driver page docs/storage.html.in. In addition there are storagepool tests (xml2xml) that would need to be updated to validate the new storage pool type. The tests would "show" how the pool XML would appear and validate whatever parsing has been done. I know. Will fix.
[...]

08-Dec-16 15:17, John Ferlan пишет:
On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan пишет:
[...]
I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get the WITH_VSTORAGE to be set to 1, then a separate file is in order.
Not sure they're comparable, but zfs has its own. Having separated vstorage reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa). Ok. I will try.
I think you should consider the *_fs.c code to be the "default" of sorts. That is default file/dir structure with netfs added in. The vstorage may just be some file system, but it's not something (yet) on "every" distribution. I did not understand actually, what you mean "be the "default" of sorts." As I have understood - what I need to do is to create backend_vstorage.c with all create/delete/* functionality.
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pool are default of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files.
"ls" and "dir" are inherent to the OS, while in this case vstorage commands are installed separately. Once you mounted your vstorage cluster to a local filesystem you can also "ls" it. Thus, I can't see much difference from nfs here.
So if it's more like NFS, then how does one ensure that the local userid X is the same as the remote userid X? NFS has a root-squashing concept that results in numerous shall we say "interesting" issues.
Vstorage doesn't have a concept of users. Authentication is done with a password, once per node. If authentication passes, a key is stored in /etc/vstorage/clusters/CLUSTER_NAME/auth_digest.key. Then, permissions are set on the mount point during mounting with the -u USER -g GROUP -m MODE options provided to the vstorage-mount command.
Check out the virFileOpen*, virDirCreate, and virFileRemove...
Also what about virFileIsSharedFSType? And the security_selinux.c code for NFS? If you use cscope, just search on NFS.
In the virStorageBackendVzStart, I see:
VSTORAGE_MOUNT -c $pool.source.name $pool.target.path
This call certainly lacks user/group/mode parameters and should be fixed in the next series.
where VSTORAGE_MOUNT is a build (configure.ac) definition that is the "Location or name of vstorage-mount program" which would only be set if the proper package was installed.
In the virStorageBackendVzfindPoolSources, I see:
VSTORAGE discover
which I assume generates some list of remote "services" (for lack of a better term) which can be used as/for pool.source.name in order to be well mounted by the VSTORAGE_MOUNT program.
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?

Maxim

[snip]

Hi all, I see here a lot of discussions and explanations about virtuozzo storage, so I will just add some information. Initially I have planned that the admin or user has properly configured vstorage somewhere over the network (it can be done via DNS records or zeroconf). At the host where the pool will be stored, the user/admin does cluster discovery and authorization. As Maxim has mentioned before, vstorage does not use the concept of users and groups providing specific users and groups with access to specific parts of a cluster; so anyone authorized to access a cluster can access all its data. However, you can use additional parameters during mounting to define the mount owner user name, group name and access mode (so you can mount a cluster as read-only). This means that performing chown/chmod, etc. will have the same effect as in the nfs case. Of course, to perform these operations you need vstorage-client to be installed on the host.

On 08/12/16 16:47, Maxim Nestratov wrote:
08-Dec-16 15:17, John Ferlan пишет:
On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan пишет:
[...]
I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get the WITH_VSTORAGE to be set to 1, then a separate file is in order.
Not sure they're comparable, but zfs has its own. Having separated vstorage reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa). Ok. I will try.
I think you should consider the *_fs.c code to be the "default" of sorts. That is default file/dir structure with netfs added in. The vstorage may just be some file system, but it's not something (yet) on "every" distribution. I did not understand actually, what you mean "be the "default" of sorts." As I have understood - what I need to do is to create backend_vstorage.c with all create/delete/* functionality.
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pool are default of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files.
"ls" and "dir" are inherent to the OS, while in this case vstorage commands are installed separately. Once you mounted your vstorage cluster to a local filesystem you can also "ls" it. Thus, I can't see much difference from nfs here.
So if it's more like NFS, then how does one ensure that the local userid X is the same as the remote userid X? NFS has a root-squashing concept that results in numerous shall we say "interesting" issues.
Vstorage doesn't have users concept. Authentication is made by a password per node just once. If authentication passes, a key is stored in /etc/vstorage/clusters/CLUSTER_NAME/auth_digest.key Then, permissions are set to a mount point during mounting with -u USER -g GROUP -m MODE options provided to vstorage-mount command.
Check out the virFileOpen*, virDirCreate, and virFileRemove...
Also what about virFileIsSharedFSType? And the security_selinux.c code for NFS? If you use cscope, just search on NFS.
In the virStorageBackendVzStart, I see:
VSTORAGE_MOUNT -c $pool.source.name $pool.target.path
This call certainly lacks user/group/mode parameters and should be fixed in the next series.

Do you mean fix the default behavior for non-root users?
where VSTORAGE_MOUNT is a build (configure.ac) definition that is the "Location or name of vstorage-mount program" which would only be set if the proper package was installed.
In the virStorageBackendVzfindPoolSources, I see:
VSTORAGE discover
which I assume generates some list of remote "services" (for lack of a better term) which can be used as/for pool.source.name in order to be well mounted by the VSTORAGE_MOUNT program.
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?
Maxim
[snip]
-- Best regards, Olga

On 08/12/16 19:39, Olga Krishtal wrote:
Hi all, I see here a lot of discussions and explanations about virtuozzo storage, so I will just add some information. Initially I have planned that the admin or user has properly configured vstorage somewhere over the network (it can be done via DNS records or zeroconf). At the host where the pool will be stored, the user/admin does cluster discovery and authorization. As Maxim has mentioned before, vstorage does not use the concept of users and groups providing specific users and groups with access to specific parts of a cluster; so anyone authorized to access a cluster can access all its data. However, you can use additional parameters during mounting to define the mount owner user name, group name and access mode (so you can mount a cluster as read-only). Mistake: performing chown/chmod, etc. actually does nothing (not, as I wrote earlier, "the same effect as in the nfs case").
Of course to perform this operations you need vstorage-client to be installed on the host.
On 08/12/16 16:47, Maxim Nestratov wrote:
08-Dec-16 15:17, John Ferlan пишет:
On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan пишет:
[...]
I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get the WITH_VSTORAGE to be set to 1, then a separate file is in order.

Not sure they're comparable, but zfs has its own. Having separated vstorage reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa).

Ok. I will try.

I think you should consider the *_fs.c code to be the "default" of sorts. That is default file/dir structure with netfs added in. The vstorage may just be some file system, but it's not something (yet) on "every" distribution.

I did not understand actually, what you mean "be the 'default' of sorts." As I have understood - what I need to do is to create backend_vstorage.c with all create/delete/* functionality.
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pool are default of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files.
"ls" and "dir" are inherent to the OS, while in this case vstorage commands are installed separately. Once you mounted your vstorage cluster to a local filesystem you can also "ls" it. Thus, I can't see much difference from nfs here.
So if it's more like NFS, then how does one ensure that the local userid X is the same as the remote userid X? NFS has a root-squashing concept that results in numerous shall we say "interesting" issues.
Vstorage doesn't have users concept. Authentication is made by a password per node just once. If authentication passes, a key is stored in /etc/vstorage/clusters/CLUSTER_NAME/auth_digest.key Then, permissions are set to a mount point during mounting with -u USER -g GROUP -m MODE options provided to vstorage-mount command.
Check out the virFileOpen*, virDirCreate, and virFileRemove...
Also what about virFileIsSharedFSType? And the security_selinux.c code for NFS? If you use cscope, just search on NFS.
In the virStorageBackendVzStart, I see:
VSTORAGE_MOUNT -c $pool.source.name $pool.target.path
This call certainly lacks user/group/mode parameters and should be fixed in the next series. Do you mean fix the default behavior for non-root users?
where VSTORAGE_MOUNT is a build (configure.ac) definition that is the "Location or name of vstorage-mount program" which would only be set if the proper package was installed.
In the virStorageBackendVzfindPoolSources, I see:
VSTORAGE discover
which I assume generates some list of remote "services" (for lack of a better term) which can be used as/for pool.source.name in order to be well mounted by the VSTORAGE_MOUNT program.
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?
Maxim
[snip]
-- Best regards, Olga

Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?
Not sure what you're inferring by copy and paste - other than perhaps having to decide for the vstorage backend which storage backend API's it needs or should support. The list of API's as I see from the patch are:

+    .startPool = virStorageBackendVzStart,
+    .checkPool = virStorageBackendFileSystemCheck,
+    .stopPool = virStorageBackendFileSystemStop,
+    .findPoolSources = virStorageBackendVzfindPoolSources,
+    .buildPool = virStorageBackendFileSystemBuild,
+    .deletePool = virStorageBackendFileSystemDelete,
+    .refreshPool = virStorageBackendFileSystemRefresh,
+    .buildVol = virStorageBackendFileSystemVolBuild,
+    .buildVolFrom = virStorageBackendFileSystemVolBuildFrom,
+    .createVol = virStorageBackendFileSystemVolCreate,
+    .refreshVol = virStorageBackendFileSystemVolRefresh,
+    .deleteVol = virStorageBackendFileSystemVolDelete,
+    .resizeVol = virStorageBackendFileSystemVolResize,
+    .uploadVol = virStorageBackendVolUploadLocal,
+    .downloadVol = virStorageBackendVolDownloadLocal,
+    .wipeVol = virStorageBackendVolWipeLocal,

Other than startPool and findPoolSources, the code will reuse/call the virStorageBackendFileSystem* API's - so the only copy/paste would be copyrights, some #include's that would be required only for vstorage, the VIR_FROM_THIS definition, VIR_LOG_INIT... Nothing any other backend wouldn't have to do.

Although I do question "checkPool" - I would think for vstorage that should check if the environment is "available" somehow *if* you want pool autostart.

Also for stopPool the code will essentially call unmount. So is that "expected" for vstorage?

Going through each API callout is how you'll be able to tell me/us that taking that path will work for the vstorage environment.

John

On 08/12/16 22:56, John Ferlan wrote:

Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto go-to tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.

To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?

Not sure what you're inferring by copy and paste - other than perhaps having to decide for the vstorage backend which storage backend API's it needs or should support.

The list of API's as I see from the patch are:

+    .startPool = virStorageBackendVzStart,
+    .checkPool = virStorageBackendFileSystemCheck,
+    .stopPool = virStorageBackendFileSystemStop,
+    .findPoolSources = virStorageBackendVzfindPoolSources,
+    .buildPool = virStorageBackendFileSystemBuild,
+    .deletePool = virStorageBackendFileSystemDelete,
+    .refreshPool = virStorageBackendFileSystemRefresh,
+    .buildVol = virStorageBackendFileSystemVolBuild,
+    .buildVolFrom = virStorageBackendFileSystemVolBuildFrom,
+    .createVol = virStorageBackendFileSystemVolCreate,
+    .refreshVol = virStorageBackendFileSystemVolRefresh,
+    .deleteVol = virStorageBackendFileSystemVolDelete,
+    .resizeVol = virStorageBackendFileSystemVolResize,
+    .uploadVol = virStorageBackendVolUploadLocal,
+    .downloadVol = virStorageBackendVolDownloadLocal,
+    .wipeVol = virStorageBackendVolWipeLocal,

Other than startPool and findPoolSources, the code will reuse/call the virStorageBackendFileSystem* API's - so the only copy/paste would be copyrights, some #include's that would be required only for vstorage, the VIR_FROM_THIS definition, VIR_LOG_INIT... Nothing any other backend wouldn't have to do.

By reusing you mean to export virStorageBackendFileSystem* and call them from storage_backend_vstorage? Otherwise, maybe I could do vstorage stuff only under #ifdef without touching any of virStorageBackendFileSystem*?
Although I do question "checkPool" - I would think for vstorage that should check if the environment is "available" somehow *if* you want pool autostart.

Thanks, you are right.

Also for stopPool the code will essentially call unmount. So is that "expected" for vstorage?

Yes, just umount.
Going through each API callout is how you'll be able to tell me/us that taking that path will work for the vstorage environment.
John
-- Best regards, Olga

On 12/16/2016 06:14 AM, Olga Krishtal wrote:
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is. To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?
Not sure what you're inferring by copy and paste - other than perhaps having to decide for the vstorage backend which storage backend API's it needs or should support.
The list of API's as I see from the patch are:
+    .startPool = virStorageBackendVzStart,
+    .checkPool = virStorageBackendFileSystemCheck,
+    .stopPool = virStorageBackendFileSystemStop,
+    .findPoolSources = virStorageBackendVzfindPoolSources,
+    .buildPool = virStorageBackendFileSystemBuild,
+    .deletePool = virStorageBackendFileSystemDelete,
+    .refreshPool = virStorageBackendFileSystemRefresh,
+    .buildVol = virStorageBackendFileSystemVolBuild,
+    .buildVolFrom = virStorageBackendFileSystemVolBuildFrom,
+    .createVol = virStorageBackendFileSystemVolCreate,
+    .refreshVol = virStorageBackendFileSystemVolRefresh,
+    .deleteVol = virStorageBackendFileSystemVolDelete,
+    .resizeVol = virStorageBackendFileSystemVolResize,
+    .uploadVol = virStorageBackendVolUploadLocal,
+    .downloadVol = virStorageBackendVolDownloadLocal,
+    .wipeVol = virStorageBackendVolWipeLocal,
Other than startPool and findPoolSources, the code will reuse/call the virStorageBackendFileSystem* API's - so the only copy/paste would be copyrights, some #include's that would be required only for vstorage, the VIR_FROM_THIS definition, VIR_LOG_INIT... Nothing any other backend wouldn't have to do.

By reusing you mean to export virStorageBackendFileSystem* and call them from storage_backend_vstorage? Otherwise, maybe I could do vstorage stuff only under #ifdef without touching any of virStorageBackendFileSystem*?
I can see why the desire is to use storage_backend_fs because it already has a bunch of things you'll need, so I understand why it's desired to be included there.

From the other POV though - the more #ifdef code in storage_backend_fs, the 'harder' it is to separate things (conceptually).

Finally, I also have the fspool adjustments floating around and wonder what kind of overlap exists.

Still, seeing progress is good too, and unless I or someone else volunteers to split up storage_backend_fs.c into multiple backends (fs, nfs, dir, etc.) I guess we're stuck with what we have.

John
Although I do question "checkPool" - I would think for vstorage that should check if the environment is "available" somehow *if* you want pool autostart.

Thanks, you are right.

Also for stopPool the code will essentially call unmount. So is that "expected" for vstorage?

Yes, just umount.
Going through each API callout is how you'll be able to tell me/us that taking that path will work for the vstorage environment.
John

On 20/12/16 20:55, John Ferlan wrote:
On 12/16/2016 06:14 AM, Olga Krishtal wrote:
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is. To sum this up, you still think that copy and paste isn't a problem here and will create more value than do any harm, right?
Not sure what you're inferring by copy and paste - other than perhaps having to decide for the vstorage backend which storage backend API's it needs or should support.
The list of API's as I see from the patch are:
+    .startPool = virStorageBackendVzStart,
+    .checkPool = virStorageBackendFileSystemCheck,
+    .stopPool = virStorageBackendFileSystemStop,
+    .findPoolSources = virStorageBackendVzfindPoolSources,
+    .buildPool = virStorageBackendFileSystemBuild,
+    .deletePool = virStorageBackendFileSystemDelete,
+    .refreshPool = virStorageBackendFileSystemRefresh,
+    .buildVol = virStorageBackendFileSystemVolBuild,
+    .buildVolFrom = virStorageBackendFileSystemVolBuildFrom,
+    .createVol = virStorageBackendFileSystemVolCreate,
+    .refreshVol = virStorageBackendFileSystemVolRefresh,
+    .deleteVol = virStorageBackendFileSystemVolDelete,
+    .resizeVol = virStorageBackendFileSystemVolResize,
+    .uploadVol = virStorageBackendVolUploadLocal,
+    .downloadVol = virStorageBackendVolDownloadLocal,
+    .wipeVol = virStorageBackendVolWipeLocal,
Other than startPool and findPoolSources, the code will reuse/call the virStorageBackendFileSystem* API's - so the only copy/paste would be copyrights, some #include's that would be required only for vstorage, the VIR_FROM_THIS definition, VIR_LOG_INIT... Nothing any other backend wouldn't have to do.

By reusing you mean to export virStorageBackendFileSystem* and call them from storage_backend_vstorage? Otherwise, maybe I could do vstorage stuff only under #ifdef without touching any of virStorageBackendFileSystem*?

I can see why the desire is to use storage_backend_fs because it already has a bunch of things you'll need, so I understand why it's desired to be included there.
From the other POV though - the more #ifdef code in storage_backend_fs the 'harder' it is to separate things (conceptually).
Then I will move the vstorage backend to a separate file with a copy of the virStorageBackendFileSystem* functions, etc. And if the fs backend code is split up or otherwise reworked, we will change the vstorage backend accordingly.
Finally, I also have the fspool adjustments floating around and wonder what kind of overlap exists.
Let's not mix the fspool driver and this backend for storage pools. There is no overlap. At the moment I want vstorage to be a backend for storage pools, so we can store images in there (raw, ploop, qcow, etc).
Still seeing progress is good too and I guess unless I or someone else volunteers to split up storage_backend_fs.c into multiple backends (fs, nfs, dir, etc.) I guess we're stuck with what we have.
John
Although I do question "checkPool" - I would think for vstorage that should check if the environment is "available" somehow *if* you want pool autostart.

Thanks, you are right.

Also for stopPool the code will essentially call unmount. So is that "expected" for vstorage?

Yes, just umount.

Going through each API callout is how you'll be able to tell me/us that taking that path will work for the vstorage environment.
John
-- Best regards, Olga

On 08.12.2016 15:17, John Ferlan wrote:
On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan пишет:
[...]
I see what you mean; however, IMO vstorage should be separate. Maybe there's another opinion out there, but since you're requiring "something" else to be installed in order to get the WITH_VSTORAGE to be set to 1, then a separate file is in order.
Not sure they're comparable, but zfs has its own. Having separated vstorage reduces the chance that some day some incompatible logic is added/altered in the *fs.c (or vice versa). Ok. I will try.
I think you should consider the *_fs.c code to be the "default" of sorts. That is default file/dir structure with netfs added in. The vstorage may just be some file system, but it's not something (yet) on "every" distribution. I did not understand actually, what you mean "be the "default" of sorts." As I have understood - what I need to do is to create backend_vstorage.c with all create/delete/* functionality.
Sorry - I was trying to think of a better way to explain... The 'fs' and 'nfs' pool are default of sorts because one can "ls" (on UNIX/Linux) or "dir" (on Windows) and get a list of files.
"ls" and "dir" are inherent to the OS, while in this case vstorage commands are installed separately.
Once you mounted your vstorage cluster to a local filesystem you can also "ls" it. Thus, I can't see much difference from nfs here.
So if it's more like NFS, then how does one ensure that the local userid X is the same as the remote userid X? NFS has a root-squashing concept that results in numerous shall we say "interesting" issues.
Check out the virFileOpen*, virDirCreate, and virFileRemove...
Also what about virFileIsSharedFSType? And the security_selinux.c code for NFS? If you use cscope, just search on NFS.
In the virStorageBackendVzStart, I see:
VSTORAGE_MOUNT -c $pool.source.name $pool.target.path
where VSTORAGE_MOUNT is a build (configure.ac) definition that is the "Location or name of vstorage-mount program" which would only be set if the proper package was installed.
In the virStorageBackendVzfindPoolSources, I see:
VSTORAGE discover
which I assume generates some list of remote "services" (for lack of a better term) which can be used as/for pool.source.name in order to be well mounted by the VSTORAGE_MOUNT program.
Compare that to NFS, which uses mount which is included in well every distro I can think of. That's a big difference. Also let's face it, NFS has been the essential de facto goto tool to access remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
But the netfs pool type is not only for NFS; it also can be used to provide cifs and FUSE glusterfs mounts. These three, just as vstorage, have very little difference from local filesystems from the pool POV after they are mounted; that's why, I guess, they are implemented so tightly together.
When you create a vstorage "file" is that done via touch? or edit some path or some other "common" OS command? Or is there a vstorage command that needs to be used. If the former, then using the common storage_backend API's should be possible.
vstorage is just another "remote filesystem" type of distributed software defined storage. In terms of starting to use it, it doesn't differ from nfs - you should mount it and then you can use any POSIX calls to control files and directories resided on it.
Here's a sample nfs pool xml I have - I changed the source/target path and didn't define a host.
<pool type='netfs'>
  <name>nfs</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='localhost'/>
    <dir path='/path/to/source'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/path/to/target</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
That is vastly different than what was in the cover:
<pool type='vstorage'>
  <name>VZ</name>
  <uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
  <capacity unit='bytes'>107374182400</capacity>
  <allocation unit='bytes'>1441144832</allocation>
  <available unit='bytes'>105933037568</available>
  <source>
    <name>vz7-vzstorage</name>
  </source>
  <target>
    <path>/vzstorage_pool</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
What causes "vz7-vzstorage" to be defined? Something from the 'VSTORAGE' command? I would think that is essentially similar to how glusterfs, rbd, or sheepdog use a source <name> field. Note that each of those includes a <host name='$host' [port='#']/> definition, although this vstorage XML doesn't.
Thus it seems vzstorage is really not a "local" filesystem, correct? If so, then should one really be allowing "local" operations like chown, chmod, etc. to be executed? What kind of configuration and/or validation of trust takes place via the vstorage-provided tools in order to allow a user on the local host to access the storage on the remote host?
John
Maxim
John
Also, I forgot to mention yesterday - you need to update docs/formatstorage.html.in at the very least, and also the storage driver page docs/storage.html.in. In addition, there are storage pool tests (xml2xml) that would need to be updated to validate the new storage pool type. The tests would "show" how the pool XML would appear and validate whatever parsing has been done.

I know. Will fix.
[...]

[...]
Compare that to NFS, which uses mount, a tool included in just about every distro I can think of. That's a big difference. Also, let's face it, NFS has been the de facto go-to tool for accessing remote storage for a long time. Personally, I'd rather see the NFS code split out of the *_fs.c backend, but I don't have the desire/time to do it - so it stays as is.
But the netfs pool type is not only for NFS; it can also be used to provide cifs and FUSE glusterfs mounts. These three, just like vstorage, differ very little from local filesystems from the pool's point of view once they are mounted - that's why, I guess, they are implemented so tightly together.
Sure, and they all pass through virStorageBackendFileSystemMount in order to mount something, essentially via the mount command plus the appropriate qualifiers. Which differs from using VSTORAGE_MOUNT, a utility not provided on every distro AFAIK.

John
participants (4)
- John Ferlan
- Maxim Nestratov
- Nikolay Shirokovskiy
- Olga Krishtal