On 11/12/2013 09:58 AM, Daniel P. Berrange wrote:
On Mon, Nov 11, 2013 at 09:19:30PM -0700, Eric Blake wrote:
> Actually put gfapi to use, by allowing the creation of a gluster
> pool. Right now, all volumes are treated as raw; further patches
> will allow peering into files to allow for qcow2 files and backing
> chains, and reporting proper volume allocation.
>
> + /* FIXME: Currently hard-coded to tcp transport; XML needs to be
> + * extended to allow alternate transport */
> + if (VIR_ALLOC(ret->uri) < 0 ||
> + VIR_STRDUP(ret->uri->scheme, "gluster") < 0 ||
> + VIR_STRDUP(ret->uri->server, pool->def->source.hosts[0].name) < 0 ||
> + virAsprintf(&ret->uri->path, "%s%s",
> + *pool->def->source.name == '/' ? "" : "/",
> + pool->def->source.name) < 0)
> + goto error;
I'm not a huge fan of chaining together statements like this,
since some gdb versions are pretty awful at telling you which
one of these 4 statements has the problem on segv.
Easy enough to split; so consider it done.
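For the record, here's roughly how the split version reads (same four
calls, one statement per check; the error label comes from the
surrounding function, so treat this as a sketch rather than the final
diff):

    if (VIR_ALLOC(ret->uri) < 0)
        goto error;
    /* one check per call, so a debugger points at the exact failure */
    if (VIR_STRDUP(ret->uri->scheme, "gluster") < 0)
        goto error;
    if (VIR_STRDUP(ret->uri->server, pool->def->source.hosts[0].name) < 0)
        goto error;
    if (virAsprintf(&ret->uri->path, "%s%s",
                    *pool->def->source.name == '/' ? "" : "/",
                    pool->def->source.name) < 0)
        goto error;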
> + /* Silently skip directories, including '.' and '..'. FIXME:
> + * should non-'.' subdirectories be listed as type dir? */
What behaviour does the 'fs' pool have for non-'.' subdirs ?
Seems we perhaps ought to match that.
Peter asked the same question - guess I'm stuck doing that :)
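Something like this in the per-entry loop is what I'm picturing if we
copy the fs pool and report subdirectories as dir volumes (untested
sketch; the variable names are approximations of what the refresh loop
will end up using):

    if (S_ISDIR(st->st_mode)) {
        if (STREQ(name, ".") || STREQ(name, ".."))
            continue;  /* silently skip the implicit entries */
        /* match the fs pool: expose other subdirectories as dir volumes */
        vol->type = VIR_STORAGE_VOL_DIR;
        vol->target.format = VIR_STORAGE_FILE_DIR;
    } else {
        vol->type = VIR_STORAGE_VOL_NETWORK;
        vol->target.format = VIR_STORAGE_FILE_RAW;
    }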
> +
> + /* FIXME - must open files to determine if they are non-raw */
> + vol->type = VIR_STORAGE_VOL_NETWORK;
> + vol->target.format = VIR_STORAGE_FILE_RAW;
> + vol->capacity = vol->allocation = st->st_size;
So gluster has no concept of sparse volumes ?
Gluster has sparse support; I need to flesh this out a bit more, based
on how directory pools do it (both pools have a struct stat to work
with); I also need to populate ownership details.
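Concretely, I'm thinking of mirroring what the directory pool derives
from its struct stat, roughly as follows (field names per the current
patch; the exact perms plumbing is still to be worked out):

    vol->type = VIR_STORAGE_VOL_NETWORK;
    vol->target.format = VIR_STORAGE_FILE_RAW;
    vol->capacity = st->st_size;
    /* sparse files: allocation counts blocks actually in use, not size */
    vol->allocation = st->st_blocks * 512;
    /* TODO: also record st_mode/st_uid/st_gid in vol->target.perms,
     * the same way the directory pool populates ownership */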
> +
> + if (!(state = virStorageBackendGlusterOpen(pool)))
> + goto cleanup;
What kind of overhead is there in opening / closing gluster connections ?
Is it something we should just do in the pool start/stop methods, instead
of on every operation like refresh?
I was just copying after rbd. In gdb, I see several threads spawned for
every refresh operation; but I can certainly try to rework things to
open at pool start instead. Should I do it on this patch, or save it
for a followup?
The other consideration is that eventually, the
src/util/virstoragefile.c code will need to call into the
src/storage/storage_driver.c code (indirectly, via the virConnectPtr)
when doing recursive backing chain checking; in that case, I'm
envisioning a new callback (internal-only, no public API needed) that
roughly says: "given this URI for a backing file, tell me if you have a
backend for parsing the file, and if so, grab the header from that
file"; which means my ultimate goal is to have a reusable method that
both the pool XML (given host and name, [or host, volume, subdir, based
on review of 1/4]) and the qemu URI (given a string
"gluster://host/vol/..."). So while pool start may be better than pool
refresh for opening a glfs connection, the ability to open a glfs
connection also needs to be modular enough to reuse for one-off parsing
of a URI.
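To make that concrete, I'm imagining a low-level open helper shaped
something like the following (the name, and the idea of feeding it just
host + volume, are invented here to illustrate the layering; the gfapi
calls are the same ones the current patch already uses):

    #include <glusterfs/api/glfs.h>

    /* hypothetical helper: open a gluster connection from the pieces
     * that both a pool definition and a gluster:// URI can supply */
    static glfs_t *
    virStorageBackendGlusterOpenRaw(const char *host, const char *volname)
    {
        glfs_t *fs = glfs_new(volname);

        if (!fs)
            return NULL;
        /* transport hard-coded to tcp for now, matching the FIXME above */
        if (glfs_set_volfile_server(fs, "tcp", host, 24007) < 0 ||
            glfs_init(fs) < 0) {
            glfs_fini(fs);
            return NULL;
        }
        return fs;
    }

The pool code would call it with pool->def->source.hosts[0].name and
source.name, while a future URI parser would call it with whatever it
extracted from a "gluster://host/vol/..." string.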
> + if (glfs_statvfs(state->vol, state->dir, &sb) < 0) {
> + virReportSystemError(errno, _("cannot statvfs path '%s'"), pool->def->source.name);
Seems you lost the line break here.
Yep, Peter caught it too.
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org