On 10/18/2013 11:30 AM, Eric Blake wrote:
This addresses the review of RFCv1, which asked for the glfs checks
to be split apart from the storage backend checks:
https://www.redhat.com/archives/libvir-list/2013-October/msg00645.html
I'm still just at the RFC stage; I want to make sure the sample XML
documentation is correct before actually coding up the use of
<glfs.h> from storage_backend_gluster.c.
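To make the discussion concrete, the pool XML I have in mind looks
roughly like this (the pool, host, and volume names here are invented,
and the exact element layout is precisely the part I'd like review on):

  <pool type='gluster'>
    <name>myglusterpool</name>
    <source>
      <host name='localhost'/>
      <name>volname</name>
    </source>
  </pool>

where each file inside that one gluster volume would then show up as a
storage volume of the pool.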
Note that qemu's block/gluster.c documents that, when using glfs to
bypass the file system, qemu expects the image to always be
specified as:
* file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
which means that there is no way to pass a direct gluster volume
as a raw block device to qemu (the image is always a file embedded
within the gluster volume); hence my choice to make a single gluster
volume act as the storage pool.
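For example (server, volume, and image names made up here), handing
such a file to qemu would look something like:

  qemu-system-x86_64 ... \
    -drive file=gluster://localhost:24007/volname/guest.qcow2,format=qcow2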
In contrast, it might someday be nice to have a storage pool that wraps
commands such as 'gluster volume list', where the overall gluster
namespace is the pool, and where the libvirt volume creation API would
then map to the underlying commands that piece together bricks to form
new gluster volumes. It's just that I don't see qemu currently being
able to use an entire raw gluster volume as a disk image (qemu just
uses files within the filesystem that gluster already imposes within
its volumes).
If we ever do that, then maybe a 'gluster' pool is the way to build
gluster volumes by wrapping the 'gluster volume' CLI, while _this_ patch
should be named the 'glusterfs' pool to expose the filesystem within a
single gluster volume via library calls through glfs.h. I'm okay with
renaming s/gluster/glusterfs/ for this particular pool, if that makes
more sense.
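For reference, here's a rough standalone sketch (not the actual patch;
error handling is minimal and the host/volume names are placeholders)
of the sort of glfs.h calls storage_backend_gluster.c would end up
making in order to list the files of one gluster volume:

/* rough sketch only; link with -lgfapi */
#include <stdio.h>
#include <dirent.h>
#include <glusterfs/api/glfs.h>   /* the installed <glfs.h> */

int
main(void)
{
    glfs_t *fs = glfs_new("volname");
    glfs_fd_t *dir;
    struct dirent ent;
    struct dirent *res;

    if (!fs)
        return 1;
    /* transport/host/port correspond to the gluster://[server[:port]]
     * pieces of the URI that qemu documents */
    if (glfs_set_volfile_server(fs, "tcp", "localhost", 24007) < 0 ||
        glfs_init(fs) < 0)
        return 1;

    /* each file at the top level of the gluster volume would map to
     * a libvirt storage volume of the pool */
    if (!(dir = glfs_opendir(fs, "/")))
        return 1;
    while (glfs_readdir_r(dir, &ent, &res) == 0 && res)
        printf("%s\n", res->d_name);

    glfs_closedir(dir);
    glfs_fini(fs);
    return 0;
}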
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org