[libvirt] [PATCH] storage pool discovery

Hi - The attached patch implements virConnectDiscoverStoragePools mostly as specified by Daniel Berrange in: http://www.redhat.com/archives/libvir-list/2008-February/msg00107.html

Daniel wasn't happy with the interface for this function because it wasn't sufficiently general, so it currently doesn't exist. I've attempted to generalize his proposed interface by changing the "hostname" argument to a "srcSpec" argument, which is intended to be an XML storage pool <source> element. Note that srcSpec is not necessary for some (local) pool types. For example, to discover existing LVM volume groups, there's no need to specify a source:

virsh # pool-discover logical

While discovering nfs shares requires a <source> document to specify the host to query:

virsh # pool-discover netfs "<source><host name='share' /></source>"

Currently (i.e., in this patch) I've implemented discovery support only for the "logical" and "netfs" pools. I plan on covering the other types once we've agreed upon the API. While this patch is rather large, the vast majority of it is "generic" (not specific to any particular type of storage pool) code, which I mostly took from Daniel's submission. I could easily break the storage-pool-type-specific pieces into separate patches, but those changes are already segregated into the appropriate storage_backend_ files.

* Known Issues *

I made the virsh interface take an XML string rather than a filename. I found it convenient, but AFAIK the rest of the commands that take XML take it from a file, so perhaps that should be changed (just let me know what you'd prefer).

I realize the description of srcSpec is kind of fuzzy. For instance, for netfs discovery, the <source> must have a <host> element (to specify the host to query), but shouldn't have a <dir> element, since the exported dirs are exactly what we're trying to discover in this case. So this really needs to be documented accurately for each storage pool type. (Where?)
Finally, there's an underspecification issue. The XML pool descriptions returned are supposed to be usable as valid input to virStoragePoolDefineXML. But these descriptions include some data (pool name, target path) that isn't specified by the discover input or the discovered resources. For now, I'm making up a somewhat arbitrary pool name (logical: VolGroup name; netfs: last component of export path). And I don't even specify <target><path> in the "netfs" discovery output (which means it's not valid input to virStoragePoolDefineXML until a target path is added).

Dave

On Wed, Jul 30, 2008 at 04:30:01PM -0400, David Lively wrote:
For example, to discover existing LVM volume groups, there's no need to specify a source: virsh # pool-discover logical
While discovering nfs shares requires a <source> document to specify the host to query: virsh # pool-discover netfs "<source><host name='share' /></source>"
Currently (i.e., in this patch) I've implemented discovery support only for the "logical" and "netfs" pools. I plan on covering the other types once we've agreed upon the API. While this patch is rather large, the vast majority of it is "generic" (not specific to any particular type of storage pool) code, which I mostly took from Daniel's submission. I could easily break the storage-pool-type-specific pieces into separate patches, but those changes are already segregated into the appropriate storage_backend_ files.
* Known Issues *
I made the virsh interface take an XML string rather than a filename. I found it convenient, but AFAIK the rest of the commands that take XML take it from a file, so perhaps that should be changed (just let me know what you'd prefer).
I'd rather this took a filename, and then we add a separate pool-discover-as command which takes a list of flags and turns them into XML, e.g. so you could do

virsh pool-discover-as --host share netfs

and thus avoid the user needing to deal with XML at all here.
I realize the description of srcSpec is kind of fuzzy. For instance, for netfs discovery, the <source> must have a <host> element (to specify the host to query), but shouldn't have a <dir> element since the exported dirs are exactly what we're trying to discover in this case. So this really needs to be documented accurately for each storage pool type. (Where?)
We've got some documentation on the XML required for each type of storage pool here, so it's natural to add more info in this page: http://libvirt.org/storage.html It could also be detailed in the virsh manpage for the 'pool-discover' command.
Finally, there's an underspecification issue. The XML pool descriptions returned are supposed to be usable as valid input to virStoragePoolDefineXML. But these descriptions include some data (pool name, target path) that isn't specified by the discover input or the discovered resources. For now, I'm making up a somewhat arbitrary pool name (logical: VolGroup name, netfs: last component of export path). And I don't even specify <target><path> in the "netfs" discovery output (which means it's not valid input to virStoragePoolDefineXML until a target path is added).
Yep, my original intention was that you could send the XML straight back into the PoolDefineXML API. One alternative I've thought of, though, is that perhaps it might be nicer to return all the discovered pools as one XML doc:

<poolList type='netfs'>
  <source>
    <host name='someserver'/>
    <dir path='/exports/home'/>
  </source>
  <source>
    <host name='someserver'/>
    <dir path='/exports/web'/>
  </source>
  <source>
    <host name='someserver'/>
    <dir path='/exports/ftp'/>
  </source>
</poolList>

And just let the caller decide how they want to use this - they may not even want to define pools from them immediately; instead they might just want to display a list of NFS exports to the user. It'd be easier than having to make up data for the <name> and <target> elements, which is really something the client app wants to decide upon.
diff --git a/include/libvirt/libvirt.h b/include/libvirt/libvirt.h
index 8980bc2..463cf2b 100644
--- a/include/libvirt/libvirt.h
+++ b/include/libvirt/libvirt.h
@@ -890,6 +890,15 @@ int virConnectListDefinedStoragePools(virConnectPtr conn, int maxnames);
 /*
+ * Query a host for storage pools of a particular type
+ */
+int virConnectDiscoverStoragePools(virConnectPtr conn,
+                                   const char *type,
+                                   const char *srcSpec,
+                                   unsigned int flags,
+                                   char ***xmlDesc);
+
+/*
  * Lookup pool by name or uuid
  */
 virStoragePoolPtr virStoragePoolLookupByName (virConnectPtr conn,
@@ -979,6 +988,13 @@ char * virStorageVolGetXMLDesc (virStorageVolPtr pool,
char * virStorageVolGetPath (virStorageVolPtr vol);
+typedef struct _virStringList virStringList;
+
+struct _virStringList {
+    char *val;
+    struct _virStringList *next;
+};
This typedef doesn't appear to be needed in the public API, so can be moved internally.
diff --git a/python/generator.py b/python/generator.py
index 1514c02..66cb3cc 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -313,6 +313,7 @@ skip_impl = (
     'virStorageVolGetInfo',
     'virStoragePoolGetAutostart',
     'virStoragePoolListVolumes',
+    'virConnectDiscoverStoragePools'
 )
Ok, guess we'll need to code this up in Python manually. Although if we changed to only returning one XML doc with all discovered pools, the API contract would be simpler:

char *virStoragePoolDiscover(virConnectPtr conn, char *type, char *srcSpec)

and that would be able to work with the auto-generator no trouble.
diff --git a/qemud/remote.c b/qemud/remote.c
index b5a6ec9..f2814e9 100644
--- a/qemud/remote.c
+++ b/qemud/remote.c
@@ -2958,6 +2958,41 @@ remoteDispatchListStoragePools (struct qemud_server *server ATTRIBUTE_UNUSED, }
 static int
+remoteDispatchDiscoverStoragePools (struct qemud_server *server ATTRIBUTE_UNUSED,
+                                    struct qemud_client *client,
+                                    remote_message_header *req,
+                                    remote_discover_storage_pools_args *args,
+                                    remote_discover_storage_pools_ret *ret)
+{
+    CHECK_CONN(client);
+    char **xmlDesc = NULL;
+
+    /* Allocate return buffer. */
+    ret->xml.xml_len =
+        virConnectDiscoverStoragePools (client->conn,
+                                        args->type,
+                                        args->srcSpec ? *args->srcSpec : NULL,
+                                        args->flags,
+                                        &xmlDesc);
+    if (ret->xml.xml_len == -1) return -1;
+
+    if (ret->xml.xml_len > REMOTE_STORAGE_POOL_NAME_LIST_MAX) {
+        int i;
+        for (i = 0 ; i < ret->xml.xml_len ; i++)
+            free(xmlDesc[i]);
+        free(xmlDesc);
Recently we've switched to using VIR_FREE instead - see memory.h or HACKING for more details.
diff --git a/src/storage_backend_fs.c b/src/storage_backend_fs.c
index f195034..75013e8 100644
--- a/src/storage_backend_fs.c
+++ b/src/storage_backend_fs.c
@@ -35,12 +35,18 @@
 #include <byteswap.h>
 #include <mntent.h>
 #include <string.h>
+#include "c-ctype.h"
+
+#include <libxml/parser.h>
+#include <libxml/tree.h>
+#include <libxml/xpath.h>
 #include "internal.h"
 #include "storage_backend_fs.h"
 #include "storage_conf.h"
 #include "util.h"
 #include "memory.h"
+#include "xml.h"
 enum {
     VIR_STORAGE_POOL_FS_AUTO = 0,
@@ -442,6 +448,149 @@ static int virStorageBackendProbeFile(virConnectPtr conn, }
 #if WITH_STORAGE_FS
+struct _virNetfsDiscoverState {
+    const char *host;
+    virStringList *list;
+};
+
+typedef struct _virNetfsDiscoverState virNetfsDiscoverState;
+
+static int
+virStorageBackendFileSystemNetDiscoverPoolsFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                                virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                                char **const groups,
+                                                void *data)
+{
+    virNetfsDiscoverState *state = data;
+    virStringList *newItem;
+    const char *name, *path;
+    int len;
+
+    path = groups[0];
+
+    name = rindex(path, '/');
+    if (name == NULL) {
+        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, "%s", _("no / in path?"));
+        return -1;
+    }
There are some emacs rules in the HACKING file which will make sure it does the correct indentation for you. BTW, anyone fancy adding vim equivalents...
+    name += 1;
+
+    /* Append new XML desc to list */
+
+    if (VIR_ALLOC(newItem) != 0) {
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("new xml desc"));
+        return -1;
+    }
+
+
+    /* regexp pattern is too greedy, so we may have trailing spaces to trim.
Does the regex engine allow .*? to specify non-greedy matches?
+static int
+virStorageBackendFileSystemNetDiscoverPools(virConnectPtr conn,
+                                            const char *srcSpec,
+                                            unsigned int flags ATTRIBUTE_UNUSED,
+                                            char ***xmlDescs)
+{
+    /*
+     * # showmount --no-headers -e HOSTNAME
+     * /tmp   *
+     * /A dir demo1.foo.bar,demo2.foo.bar
+     *
+     * Extract directory name (including possible interior spaces ...).
+     */
+
+    const char *regexes[] = {
+        "^(/.*) +\\S+$"
+    };
+    int vars[] = {
+        1
+    };
+    xmlDocPtr doc = NULL;
+    xmlXPathContextPtr xpath_ctxt = NULL;
+    virNetfsDiscoverState state = { .host = NULL, .list = NULL };
+    const char *prog[] = { SHOWMOUNT, "--no-headers", "--exports", NULL, NULL };
+    int exitstatus, n_descs = -1;
+    virStringList *p;
+
+    doc = xmlReadDoc((const xmlChar *)srcSpec, "srcSpec.xml", NULL,
+                     XML_PARSE_NOENT | XML_PARSE_NONET |
+                     XML_PARSE_NOERROR | XML_PARSE_NOWARNING);
+    if (doc == NULL) {
+        virStorageReportError(conn, VIR_ERR_XML_ERROR, "%s", _("bad <source> spec"));
+        goto cleanup;
Whitespace issue - and others following - I won't list them all. In general the patch looks good to me. The impl for LVM and NFS were nice & straightforward - easier than I expected in fact.

Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

Daniel,

When this gets in, would it become easier to look up a file -> storage pool, in order to make the alternative configuration I proposed for domains work? (Basically providing pool/volume instead of filename.)

Stefan

On Fri, Aug 01, 2008 at 11:46:14AM +0200, Stefan de Konink wrote:
Daniel,
When this gets in, would it become easier to look up a file -> storage pool, in order to make the alternative configuration I proposed for domains work? (Basically providing pool/volume instead of filename.)
I don't think it'll make any significant difference - we can do the alternate XML syntax you proposed already. I think it might be nice to use a different approach to the Xen impl of it though. Previously we had done the pool/volume -> filename conversion in libvirt before passing the data to XenD. This means that when you dump the XML you get the filename instead of pool/volume details, and it doesn't deal with potentially different filenames upon migration. So I'd like to see if it's possible to write a custom script for Xen to go in /etc/xen/scripts, so we can pass in a pool/volume to XenD and have it auto-lookup the file at time of hotplug.

Daniel

On Fri, 1 Aug 2008, Daniel P. Berrange wrote:
On Fri, Aug 01, 2008 at 11:46:14AM +0200, Stefan de Konink wrote:
Daniel,
When this gets in, would it become easier to look up a file -> storage pool, in order to make the alternative configuration I proposed for domains work? (Basically providing pool/volume instead of filename.)
I don't think it'll make any significant difference - we can do the alternate XML syntax you proposed already. I think it might be nice to use a different approach to the Xen impl of it though. Previously we had done the pool/volume -> filename conversion in libvirt before passing the data to XenD. This means that when you dump the XML you get the filename instead of pool/volume details, and it doesn't deal with potentially different filenames upon migration. So I'd like to see if it's possible to write a custom script for Xen to go in /etc/xen/scripts, so we can pass in a pool/volume to XenD and have it auto-lookup the file at time of hotplug.
As you know I wrote such a thing for iSCSI/Solaris and iSCSI/Netapp, so if you want such a script I can fabricate something. Maybe it would be interesting not to create a script, but instead a binary that links to libvirt and is able to look up the connection URL and create the xenstore stuff. But as you look at it now, this gets very close to what XenD is doing, so if we implement it in this way, why not do everything that is required ;)

Stefan

Daniel,

Stefan de Konink wrote:
As you know I wrote such a thing for iSCSI/Solaris and iSCSI/Netapp, so if you want such a script I can fabricate something. Maybe it would be interesting not to create a script, but instead a binary that links to libvirt and is able to look up the connection URL and create the xenstore stuff.
I have some code now that basically parses the XENBUS_PATH/param and looks it up in libvirt. Now I have some questions:

- Is it possible to reuse the hypervisor connection from within libvirt? I would like to make this work: xenUnifiedPrivatePtr priv = (xenUnifiedPrivatePtr) conn->privateData;

- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?

Stefan

On Sun, Aug 03, 2008 at 04:20:50AM +0200, Stefan de Konink wrote:
Daniel,
Stefan de Konink wrote:
As you know I wrote such a thing for iSCSI/Solaris and iSCSI/Netapp, so if you want such a script I can fabricate something. Maybe it would be interesting not to create a script, but instead a binary that links to libvirt and is able to look up the connection URL and create the xenstore stuff.
I have some code now that basically parses the XENBUS_PATH/param and looks it up in libvirt. Now I have some questions:
- Is it possible to reuse the hypervisor connection from within libvirt? I would like to make this work: xenUnifiedPrivatePtr priv = (xenUnifiedPrivatePtr) conn->privateData;
You are only allowed to cast to xenUnifiedPrivatePtr if you are running within the Xen driver code. It is forbidden in all other places.
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?
My idea was to have a script in /etc/xen/scripts/

Daniel

On Mon, 4 Aug 2008, Daniel P. Berrange wrote:
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?
My idea was to have a script in /etc/xen/scripts/
Me too, but in order to 'fetch' the actual configuration it is required to contact libvirt and query the pool/volume location. In this way it would actually be a 'redirection' to blktap, or adding a device path. So this script is now written in plain C, but I want to know how you imagine the driver selection based on the connection URI.

Stefan

On Mon, Aug 04, 2008 at 11:02:33AM +0200, Stefan de Konink wrote:
On Mon, 4 Aug 2008, Daniel P. Berrange wrote:
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?
My idea was to have a script in /etc/xen/scripts/
Me too, but in order to 'fetch' the actual configuration it is required to contact libvirt and query the pool/volume location. In this way it would actually be a 'redirection' to blktap, or adding a device path.
So this script is now written in plain C, but I want to know how you imagine the driver selection based on the connection URI.

You can simply use xen:/// as the URI. There is no need for configurable URIs since this is a Xen-specific script.
Daniel

Daniel P. Berrange wrote:
On Mon, Aug 04, 2008 at 11:02:33AM +0200, Stefan de Konink wrote:
On Mon, 4 Aug 2008, Daniel P. Berrange wrote:
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?

My idea was to have a script in /etc/xen/scripts/

Me too, but in order to 'fetch' the actual configuration it is required to contact libvirt and query the pool/volume location. In this way it would actually be a 'redirection' to blktap, or adding a device path.
So this script is now written in plain C, but I want to know how you imagine the driver selection based on the connection URI.

You can simply use xen:/// as the URI. There is no need for configurable URIs since this is a Xen-specific script.
You don't get the issue. In order to run a specific script for, for example, a block device, it should have a unique prefix. That determines the executable that is called, so virt:// will call /etc/xen/scripts/block-virt as the script. As you might notice here, the common file://, tap:aio:// or phy:// is not present, so if a pool is file-based or device-based it should somehow inform this 'script' how to redirect the parameters to the appropriate script.

Stefan

On Mon, Aug 04, 2008 at 11:19:25AM +0200, Stefan de Konink wrote:
Daniel P. Berrange wrote:
On Mon, Aug 04, 2008 at 11:02:33AM +0200, Stefan de Konink wrote:
On Mon, 4 Aug 2008, Daniel P. Berrange wrote:
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?

My idea was to have a script in /etc/xen/scripts/

Me too, but in order to 'fetch' the actual configuration it is required to contact libvirt and query the pool/volume location. In this way it would actually be a 'redirection' to blktap, or adding a device path.
So this script is now written in plain C, but I want to know how you imagine the driver selection based on the connection URI.

You can simply use xen:/// as the URI. There is no need for configurable URIs since this is a Xen-specific script.
You don't get the issue. In order to run a specific script for, for example, a block device, it should have a unique prefix. That determines the executable that is called, so virt:// will call /etc/xen/scripts/block-virt as the script.
As you might notice here, the common file://, tap:aio:// or phy:// is not present, so if a pool is file-based or device-based it should somehow inform this 'script' how to redirect the parameters to the appropriate script.
This is what the <driver> tag in the XML can be used for. It can include a driver name (file, phys, tap) and subtype (raw, aio, qcow, etc). You just need to decide how to encode this info when passing it to Xend.

Daniel
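For reference, a sketch of a disk element carrying this information in libvirt's domain XML (the paths and device names here are illustrative, not from this thread):

```xml
<disk type='file' device='disk'>
  <!-- name selects the backend (file, phys, tap);
       type selects the subtype/format (raw, aio, qcow, ...) -->
  <driver name='tap' type='aio'/>
  <source file='/var/lib/xen/images/demo.img'/>
  <target dev='xvda' bus='xen'/>
</disk>
```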

Daniel P. Berrange wrote:
On Mon, Aug 04, 2008 at 11:19:25AM +0200, Stefan de Konink wrote:
Daniel P. Berrange wrote:
On Mon, Aug 04, 2008 at 11:02:33AM +0200, Stefan de Konink wrote:
On Mon, 4 Aug 2008, Daniel P. Berrange wrote:
- For iSCSI and related stuff everything was relatively easy, because this would just mean writing the right /dev/blabla to the xenstore. What is your idea to get different drivers working via virt://pool/volume (so basically blktap vs file vs disk)?

My idea was to have a script in /etc/xen/scripts/

Me too, but in order to 'fetch' the actual configuration it is required to contact libvirt and query the pool/volume location. In this way it would actually be a 'redirection' to blktap, or adding a device path.
So this script is now written in plain C, but I want to know how you imagine the driver selection based on the connection URI.

You can simply use xen:/// as the URI. There is no need for configurable URIs since this is a Xen-specific script.

You don't get the issue. In order to run a specific script for, for example, a block device, it should have a unique prefix. That determines the executable that is called, so virt:// will call /etc/xen/scripts/block-virt as the script.
As you might notice here, the common file://, tap:aio:// or phy:// is not present, so if a pool is file-based or device-based it should somehow inform this 'script' how to redirect the parameters to the appropriate script.
This is what the <driver> tag in the XML can be used for. It can include a driver name (file, phys, tap) and subtype (raw, aio, qcow, etc). You just need to decide how to encode this info when passing it to Xend.
Yes... and this was exactly my question ;) So if we have a pool and want to send it to Xen it would look something like virt://driver/pool/volume. Now the 'tiny' problem is that I don't want to reimplement every possible driver. So I'll do some research in order to pass a connection URL exported by libvirt to the 'default' scripts, while maintaining the virt:// in Xen.

Stefan

On Fri, 2008-08-01 at 10:40 +0100, Daniel P. Berrange wrote:
On Wed, Jul 30, 2008 at 04:30:01PM -0400, David Lively wrote:
Finally, there's an underspecification issue. The XML pool descriptions returned are supposed to be usable as valid input to virStoragePoolDefineXML. But these descriptions include some data (pool name, target path) that isn't specified by the discover input or the discovered resources. For now, I'm making up a somewhat arbitrary pool name (logical: VolGroup name, netfs: last component of export path). And I don't even specify <target><path> in the "netfs" discovery output (which means it's not valid input to virStoragePoolDefineXML until a target path is added).
Yep, my original intention was that you could send the XML straight back into the PoolDefineXML API. One alternative I've thought of, though, is that perhaps it might be nicer to return all the discovered pools as one XML doc
<poolList type='netfs'>
  <source>
    <host name='someserver'/>
    <dir path='/exports/home'/>
  </source>
  <source>
    <host name='someserver'/>
    <dir path='/exports/web'/>
  </source>
  <source>
    <host name='someserver'/>
    <dir path='/exports/ftp'/>
  </source>
</poolList>
And just let the caller decide how they want to use this - they may not even want to define pools from them immediately; instead they might just want to display a list of NFS exports to the user. It'd be easier than having to make up data for the <name> and <target> elements, which is really something the client app wants to decide upon.
This is really two suggestions rolled into one:

(1) just return <source> xml (not whole <pool> xml), and
(2) instead of returning an array of strings, return a single (xml) string wrapped with a <poolList> element

Either (1) or (2) could be done independently of the other, and I think they're both worth considering. I like the fact that (2) lets us auto-generate the python binding, but I don't have strong feelings either way.

I like (1) a lot, but it doesn't really work correctly for logical storage pools in their current incarnation. The <source> element for a logical pool currently consists only of the device paths to the physical volumes backing the volume group. In particular, it doesn't specify the volume group name (which seems to be assumed to be the same as the libvirt pool name??). This could be fixed by allowing the <source> element for an existing volume group to specify (only) the volume group name (which might be different from the pool name). In this way, logical pool discovery can return <source> elements that fully specify existing pools. And the client app can decide on pool names and target paths as you propose.

[Note I'm proposing *extending* logical pool creation, so existing XML should continue to work. Specifically I'm proposing we extend <source> with an optional <name> element which the logical storage backend will interpret as the volume group name. For back-compatibility, the backend will default the source name to the pool name when the source name isn't specified. The source name is used to instantiate existing volume groups as well as to name new volume groups (whose <source> must also specify the backing physical devices).]

Does this sound reasonable?
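A sketch of what the proposed extension might look like for a logical pool. This is only an illustration of the proposal above; the pool and volume group names are made up:

```xml
<pool type='logical'>
  <name>my_pool</name>
  <source>
    <!-- proposed optional element: the volume group name,
         which may differ from the libvirt pool name -->
    <name>VolGroup00</name>
  </source>
  <target>
    <path>/dev/VolGroup00</path>
  </target>
</pool>
```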
There are some emacs rules in the HACKING file which will make sure it does the correct indentation for you. BTW, anyone fancy adding vim equivalents...
Sorry, I'm an emacs guy :-)

[What we *really* need is an emacs mode ("indent-assimilation-mode") that will figure out the indentation settings for each file from the code that's already there ... (assuming it's consistent ...). I suppose it wouldn't be that hard to at least get c-basic-offset, c-indent-level, and indent-tabs-mode right. (I know you can embed emacs directives in source comments, but that's kind of ugly ...)]

In any case, thanks much for your comments. I'm finally getting back to working on this, so I should be able to submit my next take soon.

Thanks,
Dave

Hi David,

I spotted a few things that would be good to change:

David Lively <dlively@virtualiron.com> wrote:
diff --git a/configure.in b/configure.in
index 8e04f14..5ef0384 100644
--- a/configure.in
+++ b/configure.in
@@ -660,6 +660,10 @@ if test "$with_storage_fs" = "yes" -o "$with_storage_fs" = "check"; then
 fi
 fi
 AM_CONDITIONAL([WITH_STORAGE_FS], [test "$with_storage_fs" = "yes"])
+if test "$with_storage_fs" = "yes"; then
+  AC_PATH_PROG([SHOWMOUNT], [showmount], [], [$PATH:/sbin:/usr/sbin])
+  AC_DEFINE_UNQUOTED([SHOWMOUNT], ["$SHOWMOUNT"], [Location or name of the showmount program])
Please split long lines:

AC_DEFINE_UNQUOTED([SHOWMOUNT], ["$SHOWMOUNT"],
  [Location or name of the showmount program])

...
diff --git a/src/storage_backend_fs.c b/src/storage_backend_fs.c
...
+static int
+virStorageBackendFileSystemNetDiscoverPoolsFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                                virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                                char **const groups,
+                                                void *data)
+{
+    virNetfsDiscoverState *state = data;
+    virStringList *newItem;
+    const char *name, *path;
+    int len;
+
+    path = groups[0];
+
+    name = rindex(path, '/');
rindex is deprecated. Using it causes portability problems. Use strrchr instead.
+    if (name == NULL) {
+        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, "%s", _("no / in path?"));
If you include the offending string in the diagnostic and add a word of description, it'll be more useful:

virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR,
                      _("invalid netfs path (no slash): %s"), path);
+        return -1;
+    }
+    name += 1;
+
+    /* Append new XML desc to list */
+
+    if (VIR_ALLOC(newItem) != 0) {
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("new xml desc"));
+        return -1;
+    }
+
+
+    /* regexp pattern is too greedy, so we may have trailing spaces to trim.
+     * NB showmount output ambiguous for (evil) exports with names w/trailing whitespace
Too long.
+     */
+    len = 0;
+    while (name[len])
+        len++;
This is equivalent to the three lines above:

len = strlen (name);
+    while (c_isspace(name[len-1]))
+        len--;
It is customary to trim spaces and TABs (but less so CR, FF, NL), so c_isblank might be better than c_isspace here. ...
+static int
+virStorageBackendFileSystemNetDiscoverPools(virConnectPtr conn,
...
+  cleanup:
+    if (doc)
+        xmlFreeDoc(doc);
+    if (xpath_ctxt)
You can remove this "if" test. It's unnecessary. BTW, "make syntax-check" should spot this and fail because of it.
+        xmlXPathFreeContext(xpath_ctxt);
+    VIR_FREE(state.host);
+    while (state.list) {
+        p = state.list->next;
+        VIR_FREE(state.list);
+        state.list = p;
+    }
+
+    return n_descs;
+}
...
diff --git a/src/storage_backend_logical.c b/src/storage_backend_logical.c
index 9a0c27f..3c16d20 100644
--- a/src/storage_backend_logical.c
+++ b/src/storage_backend_logical.c
...
@@ -257,6 +258,90 @@ virStorageBackendLogicalRefreshPoolFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
 static int
+virStorageBackendLogicalDiscoverPoolsFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                          virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                          size_t n_tokens,
+                                          char **const tokens,
+                                          void *data)
+{
+    virStringList **rest = data;
+    virStringList *newItem;
+    const char *name;
+    int len;
+
+    /* Append new XML desc to list */
+
+    if (VIR_ALLOC(newItem) != 0) {
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("new xml desc"));
+        return -1;
+    }
+
+    if (n_tokens != 1) {
+        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, "%s", _("n_tokens should be 1"));
+        return -1;
+    }
+
+
+    name = tokens[0];
+    virSkipSpaces(&name);
Is it necessary to skip leading backslashes and carriage returns here?
+    len = 0;
+    while (name[len])
+        len++;
+    while (c_isspace(name[len-1]))
+        len--;
A zero-length or all-"space" name would lead to referencing name[-1]. You can use this instead:

    while (len && c_isspace(name[len-1]))
        len--;

The similar code above isn't vulnerable like this due to its guarantee that there is a "/".

Thanks much for your comments, Jim. They all look reasonable (though I may be intentionally trimming a NL in one of these cases -- I'm checking). And I'll start following the HACKING file recommendations before submitting my next attempt :-) (Though note I'm not yet submitting tests since we haven't really finished hashing out the API - at least some crucial XML details ...)

Dave

On Mon, 2008-08-04 at 08:25 +0200, Jim Meyering wrote:
Hi David,
I spotted a few things that would be good to change:
David Lively <dlively@virtualiron.com> wrote:
diff --git a/configure.in b/configure.in
index 8e04f14..5ef0384 100644
--- a/configure.in
+++ b/configure.in
@@ -660,6 +660,10 @@ if test "$with_storage_fs" = "yes" -o "$with_storage_fs" = "check"; then
     fi
   fi
 AM_CONDITIONAL([WITH_STORAGE_FS], [test "$with_storage_fs" = "yes"])
+if test "$with_storage_fs" = "yes"; then
+  AC_PATH_PROG([SHOWMOUNT], [showmount], [], [$PATH:/sbin:/usr/sbin])
+  AC_DEFINE_UNQUOTED([SHOWMOUNT], ["$SHOWMOUNT"], [Location or name of the showmount program])
Please split long lines:
AC_DEFINE_UNQUOTED([SHOWMOUNT], ["$SHOWMOUNT"], [Location or name of the showmount program])
...
diff --git a/src/storage_backend_fs.c b/src/storage_backend_fs.c
...
+static int
+virStorageBackendFileSystemNetDiscoverPoolsFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                                virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                                char **const groups,
+                                                void *data)
+{
+    virNetfsDiscoverState *state = data;
+    virStringList *newItem;
+    const char *name, *path;
+    int len;
+
+    path = groups[0];
+
+    name = rindex(path, '/');
rindex is deprecated. Using it causes portability problems. Use strrchr instead.
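To illustrate Jim's point: `strrchr` is the standard C replacement for the legacy BSD `rindex`, with the same behavior (pointer to the last occurrence of a character, or NULL). A minimal sketch, not part of the patch -- `basename_of` is a hypothetical helper name used only for this example:

```c
#include <string.h>
#include <stddef.h>

/* Return the component after the last '/' in PATH, or PATH itself
 * if there is no slash. strrchr(path, '/') does exactly what the
 * deprecated rindex(path, '/') did, but is portable (ISO C / POSIX). */
static const char *
basename_of(const char *path)
{
    const char *slash = strrchr(path, '/');
    return slash ? slash + 1 : path;
}
```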
+    if (name == NULL) {
+        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, "%s", _("no / in path?"));
If you include the offending string in the diagnostic and add a word of description, it'll be more useful:
virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, _("invalid netfs path (no slash): %s"), path);
+        return -1;
+    }
+    name += 1;
+
+    /* Append new XML desc to list */
+
+    if (VIR_ALLOC(newItem) != 0) {
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("new xml desc"));
+        return -1;
+    }
+
+
+    /* regexp pattern is too greedy, so we may have trailing spaces to trim.
+     * NB showmount output ambiguous for (evil) exports with names w/trailing whitespace
Too long.
+     */
+    len = 0;
+    while (name[len])
+        len++;
This is equivalent to the three lines above: len = strlen (name);
+    while (c_isspace(name[len-1]))
+        len--;
It is customary to trim spaces and TABs (but less so CR, FF, NL), so c_isblank might be better than c_isspace here.
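A sketch of the whole suggestion combined (strlen instead of the counting loop, and blank-only trimming). This is illustrative code, not the patch: it uses the standard ctype `isblank` in place of gnulib's `c_isblank`, and `trim_trailing_blanks` is a hypothetical helper name:

```c
#include <ctype.h>
#include <string.h>

/* Trim trailing blanks (space and TAB) from NAME in place.
 * isblank() matches only ' ' and '\t', unlike isspace(), which
 * also matches NL, CR, FF, and VT -- so a trailing newline that
 * was trimmed intentionally elsewhere would survive this loop. */
static void
trim_trailing_blanks(char *name)
{
    size_t len = strlen(name);   /* replaces the manual counting loop */
    while (len > 0 && isblank((unsigned char) name[len - 1]))
        len--;
    name[len] = '\0';
}
```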
...
+static int
+virStorageBackendFileSystemNetDiscoverPools(virConnectPtr conn,
...
+ cleanup:
+    if (doc)
+        xmlFreeDoc(doc);
+    if (xpath_ctxt)
You can remove this "if" test. It's unnecessary. BTW, "make syntax-check" should spot this and fail because of it.
+        xmlXPathFreeContext(xpath_ctxt);
+    VIR_FREE(state.host);
+    while (state.list) {
+        p = state.list->next;
+        VIR_FREE(state.list);
+        state.list = p;
+    }
+
+    return n_descs;
+}
...
diff --git a/src/storage_backend_logical.c b/src/storage_backend_logical.c
index 9a0c27f..3c16d20 100644
--- a/src/storage_backend_logical.c
+++ b/src/storage_backend_logical.c
...
@@ -257,6 +258,90 @@ virStorageBackendLogicalRefreshPoolFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
 static int
+virStorageBackendLogicalDiscoverPoolsFunc(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                          virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                          size_t n_tokens,
+                                          char **const tokens,
+                                          void *data)
+{
+    virStringList **rest = data;
+    virStringList *newItem;
+    const char *name;
+    int len;
+
+    /* Append new XML desc to list */
+
+    if (VIR_ALLOC(newItem) != 0) {
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("new xml desc"));
+        return -1;
+    }
+
+    if (n_tokens != 1) {
+        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR, "%s", _("n_tokens should be 1"));
+        return -1;
+    }
+
+
+    name = tokens[0];
+    virSkipSpaces(&name);
Is it necessary to skip leading backslashes and carriage returns here?
+    len = 0;
+    while (name[len])
+        len++;
+    while (c_isspace(name[len-1]))
+        len--;
A zero-length or all-"space" name would lead to referencing name[-1]. You can use this instead:
    while (len && c_isspace(name[len-1]))
        len--;
The similar code above isn't vulnerable like this due to its guarantee that there is a "/".
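To make the underflow concrete: with an empty or all-whitespace `name`, the unguarded loop reads `name[-1]`. A minimal sketch of the guarded version, using standard `isspace` in place of gnulib's `c_isspace` -- `trimmed_len` is a hypothetical helper name for illustration only:

```c
#include <ctype.h>
#include <string.h>

/* Length of NAME with trailing whitespace ignored. The `len &&`
 * guard stops the loop at index 0, so an empty or all-whitespace
 * string returns 0 instead of reading name[-1]. */
static size_t
trimmed_len(const char *name)
{
    size_t len = strlen(name);
    while (len && isspace((unsigned char) name[len - 1]))
        len--;
    return len;
}
```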