On 11/12/13 05:19, Eric Blake wrote:
Add support for a new <pool type='gluster'>, similar to RBD and
Sheepdog. Terminology-wise, a gluster volume forms a libvirt storage
pool; within the gluster volume, individual files are treated as
libvirt storage volumes.
* docs/schemas/storagepool.rng (poolgluster): New pool type.
* docs/formatstorage.html.in: Document gluster.
* docs/storage.html.in: Likewise, and contrast it with netfs.
* tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
* tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
* tests/storagepoolxml2xmltest.c (mymain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
docs/formatstorage.html.in | 11 ++--
docs/schemas/storagepool.rng | 21 +++++++
docs/storage.html.in | 90 +++++++++++++++++++++++++++-
tests/storagepoolxml2xmlin/pool-gluster.xml | 8 +++
tests/storagepoolxml2xmlout/pool-gluster.xml | 11 ++++
tests/storagepoolxml2xmltest.c | 1 +
6 files changed, 136 insertions(+), 6 deletions(-)
create mode 100644 tests/storagepoolxml2xmlin/pool-gluster.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-gluster.xml
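
For context: the "native access" mentioned in the docs below is the
same libgfapi transport that qemu already uses for <disk
type='network'> gluster disks; if memory serves, that looks something
like the following on the domain side (sketch only; server, volume,
and image names are placeholders, not from this patch):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volname/image.img'>
      <host name='example.com'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>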
diff --git a/docs/formatstorage.html.in b/docs/formatstorage.html.in
index 90eeaa3..e74ad27 100644
--- a/docs/formatstorage.html.in
+++ b/docs/formatstorage.html.in
@@ -21,8 +21,10 @@
       <code>iscsi</code>, <code>logical</code>, <code>scsi</code>
       (all <span class="since">since 0.4.1</span>), <code>mpath</code>
       (<span class="since">since 0.7.1</span>), <code>rbd</code>
-      (<span class="since">since 0.9.13</span>), or <code>sheepdog</code>
-      (<span class="since">since 0.10.0</span>). This corresponds to the
+      (<span class="since">since 0.9.13</span>), <code>sheepdog</code>
+      (<span class="since">since 0.10.0</span>),
+      or <code>gluster</code> (<span class="since">since
+      1.1.4</span>). This corresponds to the

Now 1.1.5.

       storage backend drivers listed further along in this document.
     </p>

     <h3><a name="StoragePoolFirst">General metadata</a></h3>
...
diff --git a/docs/schemas/storagepool.rng b/docs/schemas/storagepool.rng
index 66d3c22..17a3ae8 100644
--- a/docs/schemas/storagepool.rng
+++ b/docs/schemas/storagepool.rng
@@ -21,6 +21,7 @@
<ref name='poolmpath'/>
<ref name='poolrbd'/>
<ref name='poolsheepdog'/>
+ <ref name='poolgluster'/>
</choice>
</element>
</define>
@@ -145,6 +146,17 @@
</interleave>
</define>
+ <define name='poolgluster'>
+ <attribute name='type'>
+ <value>gluster</value>
+ </attribute>
+ <interleave>
+ <ref name='commonmetadata'/>
+ <ref name='sizing'/>
+ <ref name='sourcegluster'/>
+ </interleave>
+ </define>
+
<define name='sourceinfovendor'>
<interleave>
<optional>
@@ -555,6 +567,15 @@
</element>
</define>
+ <define name='sourcegluster'>
+ <element name='source'>
+ <interleave>
+ <ref name='sourceinfohost'/>
+ <ref name='sourceinfoname'/>
+ </interleave>
+ </element>
+ </define>
+
<define name='IscsiQualifiedName'>
<data type='string'>
<param name="pattern">iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-zA-Z0-9\.\-]+(:.+)?</param>
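
If I'm reading the new poolgluster and sourcegluster defines right, a
minimal pool document validating against this schema would be
something like (sketch only; pool and volume names here are made-up
placeholders, not taken from the patch's test files):

  <pool type='gluster'>
    <name>myglusterpool</name>
    <source>
      <name>volname</name>
      <host name='localhost'/>
    </source>
  </pool>

Note there is no <target> element, in line with the RBD and Sheepdog
pools the commit message compares this to.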
diff --git a/docs/storage.html.in b/docs/storage.html.in
index 1181444..339759d 100644
--- a/docs/storage.html.in
+++ b/docs/storage.html.in
@@ -114,6 +114,9 @@
<li>
<a href="#StorageBackendSheepdog">Sheepdog backend</a>
</li>
+ <li>
+ <a href="#StorageBackendGluster">Gluster backend</a>
+ </li>
</ul>
<h2><a name="StorageBackendDir">Directory pool</a></h2>
@@ -275,10 +278,12 @@
<code>nfs</code>
</li>
<li>
- <code>glusterfs</code>
+ <code>glusterfs</code> - use the glusterfs FUSE file system
+ (to bypass the file system completely, see
+ the <a href="#StorageBackendGluster">gluster</a> pool).
</li>
<li>
- <code>cifs</code>
+ <code>cifs</code> - use the SMB (samba) or CIFS file system
</li>
</ul>
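
As a side note for readers comparing the two approaches: mounting the
same gluster volume through FUSE would use the existing netfs pool
type, along the lines of (sketch; names are placeholders):

  <pool type='netfs'>
    <name>myglusterfusepool</name>
    <source>
      <host name='localhost'/>
      <dir path='/volname'/>
      <format type='glusterfs'/>
    </source>
    <target>
      <path>/mnt/gluster</path>
    </target>
  </pool>

whereas the new gluster pool type skips the mount entirely.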
@@ -647,5 +652,86 @@
The Sheepdog pool does not use the volume format type element.
</p>
+    <h2><a name="StorageBackendGluster">Gluster pools</a></h2>
+    <p>
+      This provides a pool based on native Gluster access. Gluster is
+      a distributed file system that can be exposed to the user via
+      FUSE, NFS or SMB (see the <a href="#StorageBackendNetfs">netfs</a>
+      pool for that usage); but for minimal overhead, the ideal access
+      is via native access (only possible for QEMU/KVM compiled with
+      libgfapi support).
+
+      The cluster and storage volume must already be running, and it
+      is recommended that the volume be configured with <code>gluster
+      volume set $volname storage.owner-uid=$uid</code>
+      and <code>gluster volume set $volname
+      storage.owner-gid=$gid</code> for the uid and gid that qemu will
+      be run as. It may also be necessary to
+      set <code>rpc-auth-allow-insecure on</code> for the glusterd
+      service, as well as <code>gluster set $volname
+      server.allow-insecure on</code>, to allow access to the gluster
+      volume.
+
+      <span class="since">Since 1.1.4</span>

1.1.5 here as well.

+    </p>
+
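
One more nit while I'm here: the text above spells the last command
"gluster set"; the full CLI spelling is "gluster volume set VOLNAME
KEY VALUE". Pulled together, the server-side setup amounts to
something like this (untested sketch; uid/gid 107 is only an example
for a typical qemu user):

  # let qemu's uid/gid own files on the volume
  gluster volume set volname storage.owner-uid 107
  gluster volume set volname storage.owner-gid 107
  # allow clients connecting from unprivileged ports, such as qemu
  gluster volume set volname server.allow-insecure on
  # plus "option rpc-auth-allow-insecure on" in glusterd.vol,
  # then restart glusterd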
ACK with release number changed.
Peter