[libvirt] [RFC PATCH v1 0/2] Qemu/Gluster support in Libvirt

This patchset provides support for Gluster protocol based network disks. It is
based on the proposed gluster support in Qemu on qemu-devel:
http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html

TODO:
- Add support for IPv6 format based server addr
- Support for transport types other than socket.

Harsh Prateek Bora (2):
  Qemu/Gluster: Add Gluster protocol as supported network disk formats.
  tests: Add tests for gluster protocol based network disks support

 docs/schemas/domaincommon.rng                      |   8 ++
 src/conf/domain_conf.c                             |  14 ++-
 src/conf/domain_conf.h                             |   3 +-
 src/qemu/qemu_command.c                            | 123 +++++++++++++++++++++
 tests/qemuargv2xmltest.c                           |   1 +
 .../qemuxml2argv-disk-drive-network-gluster.args   |   1 +
 .../qemuxml2argv-disk-drive-network-gluster.xml    |  33 ++++++
 tests/qemuxml2argvtest.c                           |   2 +
 8 files changed, 182 insertions(+), 3 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml

-- 
1.7.11.2

Qemu accepts gluster protocol as supported storage backend beside others.
This patch allows users to specify disks on gluster backends like this:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volume/image'>
      <host name='example.org' port='6000' transport='socket'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

Note: In the <host> element above, transport is an optional attribute.
Valid transport types for a network based disk can be socket, unix or rdma.

TODO:
- Add support for IPv6 format based server addr
- Support for transport types other than socket.

Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 docs/schemas/domaincommon.rng |   8 +++
 src/conf/domain_conf.c        |  14 ++++-
 src/conf/domain_conf.h        |   3 +-
 src/qemu/qemu_command.c       | 123 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 145 insertions(+), 3 deletions(-)

diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 145caf7..30c0d8c 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -1029,6 +1029,7 @@
             <value>nbd</value>
             <value>rbd</value>
             <value>sheepdog</value>
+            <value>gluster</value>
           </choice>
         </attribute>
         <optional>
@@ -1042,6 +1043,13 @@
           <attribute name="port">
             <ref name="unsignedInt"/>
           </attribute>
+          <attribute name="transport">
+            <choice>
+              <value>socket</value>
+              <value>unix</value>
+              <value>rdma</value>
+            </choice>
+          </attribute>
         </element>
       </zeroOrMore>
       <empty/>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 419088c..c89035e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -200,7 +200,8 @@ VIR_ENUM_IMPL(virDomainDiskErrorPolicy, VIR_DOMAIN_DISK_ERROR_POLICY_LAST,
 VIR_ENUM_IMPL(virDomainDiskProtocol, VIR_DOMAIN_DISK_PROTOCOL_LAST,
               "nbd",
               "rbd",
-              "sheepdog")
+              "sheepdog",
+              "gluster")

 VIR_ENUM_IMPL(virDomainDiskSecretType, VIR_DOMAIN_DISK_SECRET_TYPE_LAST,
               "none",
@@ -994,6 +995,7 @@ void virDomainDiskHostDefFree(virDomainDiskHostDefPtr def)
     VIR_FREE(def->name);
     VIR_FREE(def->port);
+    VIR_FREE(def->transport);
 }

 void virDomainControllerDefFree(virDomainControllerDefPtr def)
@@ -3489,6 +3491,7 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                     }
                     hosts[nhosts].name = NULL;
                     hosts[nhosts].port = NULL;
+                    hosts[nhosts].transport = NULL;
                     nhosts++;

                     hosts[nhosts - 1].name = virXMLPropString(child, "name");
@@ -3503,6 +3506,8 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                                        "%s", _("missing port for host"));
                         goto error;
                     }
+                    /* transport can be socket, unix, rdma, etc. */
+                    hosts[nhosts - 1].transport = virXMLPropString(child, "transport");
                 }
                 child = child->next;
             }
@@ -11479,8 +11484,13 @@ virDomainDiskDefFormat(virBufferPtr buf,
             for (i = 0; i < def->nhosts; i++) {
                 virBufferEscapeString(buf, "        <host name='%s'",
                                       def->hosts[i].name);
-                virBufferEscapeString(buf, " port='%s'/>\n",
+                virBufferEscapeString(buf, " port='%s'",
                                       def->hosts[i].port);
+                if (def->hosts[i].transport) {
+                    virBufferEscapeString(buf, " transport='%s'",
+                                     def->hosts[i].transport);
+                }
+                virBufferAddLit(buf, "/>\n");
             }
             virBufferAddLit(buf, "    </source>\n");
         }
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 0c3824e..67e023f 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -442,7 +442,7 @@ enum virDomainDiskProtocol {
     VIR_DOMAIN_DISK_PROTOCOL_NBD,
     VIR_DOMAIN_DISK_PROTOCOL_RBD,
     VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG,
-
+    VIR_DOMAIN_DISK_PROTOCOL_GLUSTER,
     VIR_DOMAIN_DISK_PROTOCOL_LAST
 };

@@ -467,6 +467,7 @@ typedef virDomainDiskHostDef *virDomainDiskHostDefPtr;
 struct _virDomainDiskHostDef {
     char *name;
     char *port;
+    char *transport;
 };

 enum virDomainDiskIo {
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ca62f0c..c8a0f27 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -2068,6 +2068,86 @@ no_memory:
     return -1;
 }

+static int qemuParseGlusterString(virDomainDiskDefPtr def)
+{
+    char *port, *volimg, *transp, *marker;
+
+    marker = strchr(def->src, ':');
+    if (marker) {
+        /* port found */
+        port = marker;
+        *port++ = '\0';
+        marker = port;
+    } else {
+        /* port not given, assume port = 0 */
+        port = NULL;
+        marker = def->src;
+    }
+
+    volimg = strchr(marker, '/');
+    if (!volimg) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("cannot parse gluster filename '%s'"), def->src);
+        return -1;
+    }
+    *volimg++ = '\0';
+    transp = strchr(volimg, '?');
+    if (transp) {
+        *transp++ = '\0';
+        transp = strchr(transp, '=');
+        transp++;
+    }
+    if (VIR_ALLOC(def->hosts) < 0) {
+        virReportOOMError();
+        return -1;
+    }
+    def->nhosts = 1;
+    def->hosts->name = def->src;
+    if (port) {
+        def->hosts->port = strdup(port);
+    } else {
+        def->hosts->port = strdup("0");
+    }
+    if (transp) {
+        def->hosts->transport = strdup(transp);
+        if (!def->hosts->transport) {
+            virReportOOMError();
+            return -1;
+        }
+    } else {
+        def->hosts->transport = NULL;
+    }
+    if (!def->hosts->port) {
+        virReportOOMError();
+        return -1;
+    }
+    def->src = strdup(volimg);
+    if (!def->src) {
+        virReportOOMError();
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+qemuBuildGlusterString(virDomainDiskDefPtr disk, virBufferPtr opt)
+{
+    int ret = 0;
+    virBufferAddLit(opt, "file=");
+    if (disk->nhosts != 1) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("gluster accepts only one host"));
+        ret = -1;
+    } else {
+        virBufferAsprintf(opt, "gluster://%s:%s/%s",
+                          disk->hosts->name, disk->hosts->port, disk->src);
+        if (disk->hosts->transport)
+            virBufferAsprintf(opt, "?transport=%s", disk->hosts->transport);
+    }
+    return ret;
+}
+
 char *
 qemuBuildDriveStr(virConnectPtr conn ATTRIBUTE_UNUSED,
                   virDomainDiskDefPtr disk,
@@ -2209,6 +2289,12 @@ qemuBuildDriveStr(virConnectPtr conn ATTRIBUTE_UNUSED,
                 goto error;
             virBufferAddChar(&opt, ',');
             break;
+        case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+            if (qemuBuildGlusterString(disk, &opt) < 0)
+                goto error;
+            virBufferAddChar(&opt, ',');
+            break;
+
         case VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG:
             if (disk->nhosts == 0) {
                 virBufferEscape(&opt, ',', ",", "file=sheepdog:%s,",
@@ -5135,6 +5221,18 @@ qemuBuildCommandLine(virConnectPtr conn,
                     file = virBufferContentAndReset(&opt);
                 }
                 break;
+            case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+                {
+                    virBuffer opt = VIR_BUFFER_INITIALIZER;
+                    if (qemuBuildGlusterString(disk, &opt) < 0)
+                        goto error;
+                    if (virBufferError(&opt)) {
+                        virReportOOMError();
+                        goto error;
+                    }
+                    file = virBufferContentAndReset(&opt);
+                }
+                break;
             case VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG:
                 if (disk->nhosts == 0) {
                     if (virAsprintf(&file, "sheepdog:%s,", disk->src) < 0) {
@@ -6811,6 +6909,21 @@ qemuParseCommandLineDisk(virCapsPtr caps,
             goto cleanup;

         VIR_FREE(p);
+    } else if (STRPREFIX(def->src, "gluster://")) {
+        char *p = def->src;
+
+        def->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
+        def->protocol = VIR_DOMAIN_DISK_PROTOCOL_GLUSTER;
+        def->src = strdup(p + strlen("gluster://"));
+        if (!def->src) {
+            virReportOOMError();
+            goto cleanup;
+        }
+
+        if (qemuParseGlusterString(def) < 0)
+            goto cleanup;
+
+        VIR_FREE(p);
     } else if (STRPREFIX(def->src, "sheepdog:")) {
         char *p = def->src;
         char *port, *vdi;
@@ -7976,6 +8089,10 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
                 disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
                 disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_RBD;
                 val += strlen("rbd:");
+            } else if (STRPREFIX(val, "gluster://")) {
+                disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
+                disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_GLUSTER;
+                val += strlen("gluster://");
             } else if (STRPREFIX(val, "sheepdog:")) {
                 disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
                 disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG;
@@ -8061,6 +8178,12 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
                     goto no_memory;
                 }
                 break;
+            case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+
+                if (qemuParseGlusterString(disk) < 0)
+                    goto error;
+
+                break;
             }
         }
-- 
1.7.11.2

On Thu, Aug 23, 2012 at 16:31:51 +0530, Harsh Prateek Bora wrote:
Qemu accepts gluster protocol as supported storage backend beside others. This patch allows users to specify disks on gluster backends like this:
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volume/image'>
      <host name='example.org' port='6000' transport='socket'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>
Note: In the <host> element above, transport is an optional attribute. Valid transport types for a network based disk can be socket, unix or rdma.
TODO:
- Add support for IPv6 format based server addr
- Support for transport types other than socket.
Overall, this patch set looks fine. See my comments inline.
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 docs/schemas/domaincommon.rng |   8 +++
 src/conf/domain_conf.c        |  14 ++++-
 src/conf/domain_conf.h        |   3 +-
 src/qemu/qemu_command.c       | 123 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 145 insertions(+), 3 deletions(-)
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 145caf7..30c0d8c 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -1029,6 +1029,7 @@
             <value>nbd</value>
             <value>rbd</value>
             <value>sheepdog</value>
+            <value>gluster</value>
           </choice>
         </attribute>
         <optional>
@@ -1042,6 +1043,13 @@
           <attribute name="port">
             <ref name="unsignedInt"/>
           </attribute>
+          <attribute name="transport">
+            <choice>
+              <value>socket</value>
+              <value>unix</value>
+              <value>rdma</value>
This could be a bit confusing, as "socket" is too generic; after all, unix is also a socket. Could we change the values to "tcp", "unix", "rdma" or something similar, depending on what "socket" was supposed to mean?
+            </choice>
+          </attribute>
         </element>
       </zeroOrMore>
       <empty/>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 419088c..c89035e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -200,7 +200,8 @@ VIR_ENUM_IMPL(virDomainDiskErrorPolicy, VIR_DOMAIN_DISK_ERROR_POLICY_LAST,
 VIR_ENUM_IMPL(virDomainDiskProtocol, VIR_DOMAIN_DISK_PROTOCOL_LAST,
               "nbd",
               "rbd",
-              "sheepdog")
+              "sheepdog",
+              "gluster")
We want to define a new enum for the transport attribute, in the same way we have one for protocol, and use that enum instead of any free-form string we parse from the XML or the qemu command line.
 VIR_ENUM_IMPL(virDomainDiskSecretType, VIR_DOMAIN_DISK_SECRET_TYPE_LAST,
               "none",
@@ -994,6 +995,7 @@ void virDomainDiskHostDefFree(virDomainDiskHostDefPtr def)
     VIR_FREE(def->name);
     VIR_FREE(def->port);
+    VIR_FREE(def->transport);
Then, there's no need to free it here.
}
 void virDomainControllerDefFree(virDomainControllerDefPtr def)
@@ -3489,6 +3491,7 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                     }
                     hosts[nhosts].name = NULL;
                     hosts[nhosts].port = NULL;
+                    hosts[nhosts].transport = NULL;
                     nhosts++;

                     hosts[nhosts - 1].name = virXMLPropString(child, "name");
@@ -3503,6 +3506,8 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                                        "%s", _("missing port for host"));
                         goto error;
                     }
+                    /* transport can be socket, unix, rdma, etc. */
+                    hosts[nhosts - 1].transport = virXMLPropString(child, "transport");
We would need to change this into calling the appropriate TypeFromString().
                 }
                 child = child->next;
             }
@@ -11479,8 +11484,13 @@ virDomainDiskDefFormat(virBufferPtr buf,
             for (i = 0; i < def->nhosts; i++) {
                 virBufferEscapeString(buf, "        <host name='%s'",
                                       def->hosts[i].name);
-                virBufferEscapeString(buf, " port='%s'/>\n",
+                virBufferEscapeString(buf, " port='%s'",
                                       def->hosts[i].port);
+                if (def->hosts[i].transport) {
+                    virBufferEscapeString(buf, " transport='%s'",
+                                     def->hosts[i].transport);
Call the appropriate TypeToString(def->hosts[i].transport) instead and align it with the first character after "virBufferEscapeString(".
+                }
+                virBufferAddLit(buf, "/>\n");
             }
             virBufferAddLit(buf, "    </source>\n");
         }
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 0c3824e..67e023f 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -442,7 +442,7 @@ enum virDomainDiskProtocol {
     VIR_DOMAIN_DISK_PROTOCOL_NBD,
     VIR_DOMAIN_DISK_PROTOCOL_RBD,
     VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG,
-
+    VIR_DOMAIN_DISK_PROTOCOL_GLUSTER,
     VIR_DOMAIN_DISK_PROTOCOL_LAST
 };
Do not remove the empty line above *PROTOCOL_LAST. Just add the new item above it.
@@ -467,6 +467,7 @@ typedef virDomainDiskHostDef *virDomainDiskHostDefPtr;
 struct _virDomainDiskHostDef {
     char *name;
     char *port;
+    char *transport;
This would be int rather than char *.
};
 enum virDomainDiskIo {
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ca62f0c..c8a0f27 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -2068,6 +2068,86 @@ no_memory:
     return -1;
 }

+static int qemuParseGlusterString(virDomainDiskDefPtr def)
+{
+    char *port, *volimg, *transp, *marker;
+
+    marker = strchr(def->src, ':');
+    if (marker) {
+        /* port found */
+        port = marker;
+        *port++ = '\0';
+        marker = port;
+    } else {
+        /* port not given, assume port = 0 */
+        port = NULL;
+        marker = def->src;
+    }
+
+    volimg = strchr(marker, '/');
+    if (!volimg) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("cannot parse gluster filename '%s'"), def->src);
+        return -1;
+    }
+    *volimg++ = '\0';
+    transp = strchr(volimg, '?');
+    if (transp) {
+        *transp++ = '\0';
+        transp = strchr(transp, '=');
+        transp++;
+    }
+    if (VIR_ALLOC(def->hosts) < 0) {
+        virReportOOMError();
+        return -1;
+    }
+    def->nhosts = 1;
+    def->hosts->name = def->src;
+    if (port) {
+        def->hosts->port = strdup(port);
+    } else {
+        def->hosts->port = strdup("0");
+    }
+    if (transp) {
+        def->hosts->transport = strdup(transp);
+        if (!def->hosts->transport) {
+            virReportOOMError();
+            return -1;
+        }
+    } else {
+        def->hosts->transport = NULL;
+    }
Again, call the right TypeFromString() instead of just copying what we got.
+    if (!def->hosts->port) {
+        virReportOOMError();
+        return -1;
+    }
Also this check for non-NULL port should go above your new code.
+    def->src = strdup(volimg);
+    if (!def->src) {
+        virReportOOMError();
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+qemuBuildGlusterString(virDomainDiskDefPtr disk, virBufferPtr opt)
+{
+    int ret = 0;
+    virBufferAddLit(opt, "file=");
+    if (disk->nhosts != 1) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("gluster accepts only one host"));
+        ret = -1;
+    } else {
+        virBufferAsprintf(opt, "gluster://%s:%s/%s",
+                          disk->hosts->name, disk->hosts->port, disk->src);
+        if (disk->hosts->transport)
+            virBufferAsprintf(opt, "?transport=%s", disk->hosts->transport);
*TypeToString(disk->hosts->transport)
+    }
+    return ret;
+}
+
 char *
 qemuBuildDriveStr(virConnectPtr conn ATTRIBUTE_UNUSED,
                   virDomainDiskDefPtr disk,
@@ -2209,6 +2289,12 @@ qemuBuildDriveStr(virConnectPtr conn ATTRIBUTE_UNUSED,
                 goto error;
             virBufferAddChar(&opt, ',');
             break;
+        case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+            if (qemuBuildGlusterString(disk, &opt) < 0)
+                goto error;
+            virBufferAddChar(&opt, ',');
+            break;
+
         case VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG:
             if (disk->nhosts == 0) {
                 virBufferEscape(&opt, ',', ",", "file=sheepdog:%s,",
@@ -5135,6 +5221,18 @@ qemuBuildCommandLine(virConnectPtr conn,
                     file = virBufferContentAndReset(&opt);
                 }
                 break;
+            case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+                {
+                    virBuffer opt = VIR_BUFFER_INITIALIZER;
+                    if (qemuBuildGlusterString(disk, &opt) < 0)
+                        goto error;
+                    if (virBufferError(&opt)) {
+                        virReportOOMError();
+                        goto error;
+                    }
+                    file = virBufferContentAndReset(&opt);
+                }
+                break;
             case VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG:
                 if (disk->nhosts == 0) {
                     if (virAsprintf(&file, "sheepdog:%s,", disk->src) < 0) {
@@ -6811,6 +6909,21 @@ qemuParseCommandLineDisk(virCapsPtr caps,
             goto cleanup;

         VIR_FREE(p);
+    } else if (STRPREFIX(def->src, "gluster://")) {
+        char *p = def->src;
+
+        def->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
+        def->protocol = VIR_DOMAIN_DISK_PROTOCOL_GLUSTER;
+        def->src = strdup(p + strlen("gluster://"));
+        if (!def->src) {
+            virReportOOMError();
+            goto cleanup;
+        }
+
+        if (qemuParseGlusterString(def) < 0)
+            goto cleanup;
+
+        VIR_FREE(p);
     } else if (STRPREFIX(def->src, "sheepdog:")) {
         char *p = def->src;
         char *port, *vdi;
@@ -7976,6 +8089,10 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
                 disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
                 disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_RBD;
                 val += strlen("rbd:");
+            } else if (STRPREFIX(val, "gluster://")) {
+                disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
+                disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_GLUSTER;
+                val += strlen("gluster://");
             } else if (STRPREFIX(val, "sheepdog:")) {
                 disk->type = VIR_DOMAIN_DISK_TYPE_NETWORK;
                 disk->protocol = VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG;
@@ -8061,6 +8178,12 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
                     goto no_memory;
                 }
                 break;
+            case VIR_DOMAIN_DISK_PROTOCOL_GLUSTER:
+
Remove this empty line here.
+                if (qemuParseGlusterString(disk) < 0)
+                    goto error;
+
+                break;
             }
         }
Hopefully this gluster functionality will be committed to qemu soon after the 1.2 release so that we can include this set in libvirt 0.10.2.

Jirka

On Wed, Sep 5, 2012 at 7:03 PM, Jiri Denemark <jdenemar@redhat.com> wrote:
@@ -1042,6 +1043,13 @@
           <attribute name="port">
             <ref name="unsignedInt"/>
           </attribute>
+          <attribute name="transport">
+            <choice>
+              <value>socket</value>
+              <value>unix</value>
+              <value>rdma</value>
This could be a bit confusing as socket is too generic, after all unix is also a socket. Could we change the values "tcp", "unix", "rdma" or something similar depending on what "socket" was supposed to mean?
That is how gluster calls it, and hence I am using the same in QEMU; the same is true here too. It is for the gluster developers to decide whether they want to change "socket" to something more specific like "tcp", as you suggest.

Regards,
Bharata.

On 09/05/2012 09:08 AM, Bharata B Rao wrote:
On Wed, Sep 5, 2012 at 7:03 PM, Jiri Denemark <jdenemar@redhat.com> wrote:
@@ -1042,6 +1043,13 @@
           <attribute name="port">
             <ref name="unsignedInt"/>
           </attribute>
+          <attribute name="transport">
+            <choice>
+              <value>socket</value>
+              <value>unix</value>
+              <value>rdma</value>
This could be a bit confusing as socket is too generic, after all unix is also a socket. Could we change the values "tcp", "unix", "rdma" or something similar depending on what "socket" was supposed to mean?
That is how gluster calls it and hence I am using the same in QEMU and the same is true here too. This is something for gluster developers to decide if they want to change socket to something more specific like tcp as you suggest.
Just because gluster calls it a confusing name does not mean we have to repeat the confusion in libvirt - it is feasible to have a mapping where we name it 'tcp' in the XML but map that to 'socket' in the command line that eventually reaches gluster. The question then becomes whether using sensible naming in libvirt, but no longer directly mapped to the underlying gluster naming, will be the cause of its own set of headaches.

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On Wed, Sep 5, 2012 at 8:45 PM, Eric Blake <eblake@redhat.com> wrote:
On 09/05/2012 09:08 AM, Bharata B Rao wrote:
On Wed, Sep 5, 2012 at 7:03 PM, Jiri Denemark <jdenemar@redhat.com> wrote:
@@ -1042,6 +1043,13 @@
           <attribute name="port">
             <ref name="unsignedInt"/>
           </attribute>
+          <attribute name="transport">
+            <choice>
+              <value>socket</value>
+              <value>unix</value>
+              <value>rdma</value>
This could be a bit confusing as socket is too generic, after all unix is also a socket. Could we change the values "tcp", "unix", "rdma" or something similar depending on what "socket" was supposed to mean?
That is how gluster calls it and hence I am using the same in QEMU and the same is true here too. This is something for gluster developers to decide if they want to change socket to something more specific like tcp as you suggest.
Just because gluster calls it a confusing name does not mean we have to repeat the confusion in libvirt - it is feasible to have a mapping where we name it 'tcp' in the XML but map that to 'socket' in the command line that eventually reaches gluster. The question then becomes whether using sensible naming in libvirt, but no longer directly mapped to the underlying gluster naming, will be the cause of its own set of headaches.
Vijay - would really like to have your inputs here...

- While the transport-type for a volume is shown as tcp in "gluster volume info", libgfapi forces me to use transport=socket to access the same volume from QEMU. So does "socket" really mean "tcp"? If so, should I just switch over to using transport=tcp from QEMU? If not, can you explain a bit about the difference between the socket and tcp transport types?

- Also, apart from socket (or tcp?), rdma and unix, are there any other transport options that QEMU should care about?

- Are the rdma and unix transport types operational at the moment? If not, do you see them being used in gluster any time in the future? The reason for asking is to check whether we are spending effort defining semantics in QEMU for a transport type that is never going to be used in gluster. Also, I see that "gluster volume create" supports tcp and rdma but doesn't list unix as an option.

Regards,
Bharata.

On 09/06/2012 08:54 PM, Bharata B Rao wrote:
Vijay - would really like to have your inputs here...
- While the transport-type for a volume is shown as tcp in "gluster volume info", libgfapi forces me to use transport=socket to access the same volume from QEMU. So does "socket" mean "tcp" really ? If so, should I just switch over to using transport=tcp from QEMU ? If not, can you explain a bit about the difference b/n socket and tcp transport types ?
I suggest that we switch over to using transport=tcp. "socket" is a generic abstraction that is used by various transport types - tcp, rdma and unix. This needs to change in libgfapi as well, and that should happen shortly.
- Also apart from socket (or tcp ?), rdma and unix, are there any other transport options that QEMU should care about ?
The ones you enumerate should be good enough.
- Are rdma and unix transport types operational at the moment ? If not, do you see them being used in gluster any time in the future ? The reason behind asking this is to check if we are spending effort in defining semantics in QEMU for a transport type that is never going to be used in gluster. Also I see that "gluster volume create" supports tcp and rdma but doesn't list unix as an option.
Yes, both rdma and unix transport types are operational at the moment. From a volume perspective, only tcp and rdma are valid types right now. The unix transport is used for communication between related gluster processes on the same node. There could be cases where the need to talk to glusterd on the localhost through the "unix" transport type might arise. Hence we can define semantics for all three types: tcp, rdma and unix.

Thanks,
Vijay

On 09/05/2012 07:03 PM, Jiri Denemark wrote:
On Thu, Aug 23, 2012 at 16:31:51 +0530, Harsh Prateek Bora wrote:
Qemu accepts gluster protocol as supported storage backend beside others. This patch allows users to specify disks on gluster backends like this:
<disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='gluster' name='volume/image'> <host name='example.org' port='6000' transport='socket'/> </source> <target dev='vda' bus='virtio'/> </disk>
Note: In the <host> element above, transport is an optional attribute. Valid transport types for a network based disk can be socket, unix or rdma.
TODO: - Add support for IPv6 format based server addr - Support for transport types other than socket.
Overall, this patch set looks fine. See my comments inline.
Hi Jiri, thanks for the early review. I will address your comments in the next version.

regards,
Harsh

Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 tests/qemuargv2xmltest.c                           |  1 +
 .../qemuxml2argv-disk-drive-network-gluster.args   |  1 +
 .../qemuxml2argv-disk-drive-network-gluster.xml    | 33 ++++++++++++++++++++++
 tests/qemuxml2argvtest.c                           |  2 ++
 4 files changed, 37 insertions(+)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml

diff --git a/tests/qemuargv2xmltest.c b/tests/qemuargv2xmltest.c
index 439218e..2bcec49 100644
--- a/tests/qemuargv2xmltest.c
+++ b/tests/qemuargv2xmltest.c
@@ -177,6 +177,7 @@ mymain(void)
     DO_TEST("disk-drive-cache-directsync");
     DO_TEST("disk-drive-cache-unsafe");
     DO_TEST("disk-drive-network-nbd");
+    DO_TEST("disk-drive-network-gluster");
     DO_TEST("disk-drive-network-rbd");
     /* older format using CEPH_ARGS env var */
     DO_TEST("disk-drive-network-rbd-ceph-env");
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
new file mode 100644
index 0000000..a374d93
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
@@ -0,0 +1 @@
+LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -no-kqemu -m 214 -smp 1 -nographic -monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c -drive file=/dev/HostVG/QEMUGuest1,if=ide,bus=0,unit=0 -drive file=gluster://example.org:6000/Volume/Image?transport=socket,if=virtio,format=raw -net none -serial none -parallel none -usb
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
new file mode 100644
index 0000000..0a0d8e8
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
@@ -0,0 +1,33 @@
+<domain type='qemu'>
+  <name>QEMUGuest1</name>
+  <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+  <memory unit='KiB'>219136</memory>
+  <currentMemory unit='KiB'>219136</currentMemory>
+  <vcpu placement='static'>1</vcpu>
+  <os>
+    <type arch='i686' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu</emulator>
+    <disk type='block' device='disk'>
+      <source dev='/dev/HostVG/QEMUGuest1'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+    </disk>
+    <disk type='network' device='disk'>
+      <driver name='qemu' type='raw'/>
+      <source protocol='gluster' name='Volume/Image'>
+        <host name='example.org' port='6000' transport='socket'/>
+      </source>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+    <controller type='usb' index='0'/>
+    <controller type='ide' index='0'/>
+    <memballoon model='virtio'/>
+  </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 71513fb..9d05200 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -446,6 +446,8 @@ mymain(void)
             QEMU_CAPS_DRIVE_CACHE_UNSAFE, QEMU_CAPS_DRIVE_FORMAT);
     DO_TEST("disk-drive-network-nbd",
             QEMU_CAPS_DRIVE, QEMU_CAPS_DRIVE_FORMAT);
+    DO_TEST("disk-drive-network-gluster", false,
+            QEMU_CAPS_DRIVE, QEMU_CAPS_DRIVE_FORMAT);
     DO_TEST("disk-drive-network-rbd",
             QEMU_CAPS_DRIVE, QEMU_CAPS_DRIVE_FORMAT);
     DO_TEST("disk-drive-network-sheepdog",
-- 
1.7.11.2

On Thu, Aug 23, 2012 at 16:31:52 +0530, Harsh Prateek Bora wrote:
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 tests/qemuargv2xmltest.c                           |  1 +
 .../qemuxml2argv-disk-drive-network-gluster.args   |  1 +
 .../qemuxml2argv-disk-drive-network-gluster.xml    | 33 ++++++++++++++++++++++
 tests/qemuxml2argvtest.c                           |  2 ++
 4 files changed, 37 insertions(+)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
Excellent. However, some changes may be needed if we end up with the "socket" transport being renamed to something else.

Jirka

On 09/05/2012 07:05 PM, Jiri Denemark wrote:
On Thu, Aug 23, 2012 at 16:31:52 +0530, Harsh Prateek Bora wrote:
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com> --- tests/qemuargv2xmltest.c | 1 + .../qemuxml2argv-disk-drive-network-gluster.args | 1 + .../qemuxml2argv-disk-drive-network-gluster.xml | 33 ++++++++++++++++++++++ tests/qemuxml2argvtest.c | 2 ++ 4 files changed, 37 insertions(+) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
Excellent. However, some changes may be needed if we end up with "socket" transport being renamed to something else.
Thanks. Sure, will do. Harsh
Jirka

On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek Bora wrote:
This patchset provides support for Gluster protocol based network disks. It is based on the proposed gluster support in Qemu on qemu-devel: http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
Just to be clear, that qemu feature didn't make the deadline for 1.2, right? I don't think we can add support at the libvirt level until the patches are committed in QEmu, but that doesn't prevent reviewing them in advance. Right now we are in freeze for 0.10.0,

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

On 08/24/2012 12:05 PM, Daniel Veillard wrote:
On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek Bora wrote:
This patchset provides support for Gluster protocol based network disks. It is based on the proposed gluster support in Qemu on qemu-devel: http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
Just to be clear, that qemu feature didn't make the deadline for 1.2, right ? I don't think we can add support at the libvirt level until the patches are commited in QEmu, but that doesn't prevent reviewing them in advance . Right now we are in freeze for 0.10.0,
Hi DV, Yeh, I completely understand that and I have posted patches for review purpose only. Thanks, Harsh
Daniel

On 08/24/2012 12:22 PM, Harsh Bora wrote:
On 08/24/2012 12:05 PM, Daniel Veillard wrote:
On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek Bora wrote:
This patchset provides support for Gluster protocol based network disks. It is based on the proposed gluster support in Qemu on qemu-devel: http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
Just to be clear, that qemu feature didn't make the deadline for 1.2, right ? I don't think we can add support at the libvirt level until the patches are commited in QEmu, but that doesn't prevent reviewing them in advance . Right now we are in freeze for 0.10.0,
I am working on enabling oVirt/VDSM to exploit this, using Harsh's RFC patches. VDSM patch: http://gerrit.ovirt.org/#/c/6856/

Early feedback would help me, especially on the XML spec posted here, which my VDSM patch depends on.

thanx, deepak

Hi, Harsh:

I've tried your patch, but can't boot the VM.

[root@yinyin qemu-glusterfs]# virsh create gluster-libvirt.xml
error: Failed to create domain from gluster-libvirt.xml
error: Unable to read from monitor: Connection reset by peer

(The two error lines above were originally printed in Chinese.)

libvirt builds the qemu/gluster command correctly and qemu-kvm tries to run, but it fails after a while, which causes the libvirt monitor connection to fail. The /var/libvirt/qemu/gluster-vm.log follows:

2012-08-30 01:03:08.418+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid f65bd812-45fb-cc2d-75fd-84206248e026 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device usb-tablet,id=input0 -spice port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
2012-08-30 01:03:08.423+0000: 4452: debug : virCommandHook:2041 : Run hook 0x48f160 0x7f433ba0e570
2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2475 : Obtaining domain lock
2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:123 : plugin=0x7f43300b7980 dom=0x7f43240022b0 withResources=1
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerNew:291 : plugin=0x7f43300b7980 type=0 nparams=4 params=0x7f433ba0d9d0 flags=0
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:98 : key=uuid type=uuid value=f65bd812-45fb-cc2d-75fd-84206248e026
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:94 : key=name type=string value=gluster-vm
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:82 : key=id type=uint value=1
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:82 : key=pid type=uint value=4452
2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:135 : Adding leases
2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:140 : Adding disks
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerAcquire:337 : lock=0x7f4324001ba0 state='(null)' flags=3 fd=0x7f433ba0db3c
2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerFree:374 : lock=0x7f4324001ba0
2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2500 : Moving process to cgroup
2012-08-30 01:03:08.423+0000: 4452: debug : virCgroupNew:603 : New group /libvirt/qemu/gluster-vm
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 0:cpu at /cgroup/cpu in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 1:cpuacct at /cgroup/cpuacct in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 2:cpuset at /cgroup/cpuset in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 3:memory at /cgroup/memory in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 4:devices at /cgroup/devices in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 5:freezer at /cgroup/freezer in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected mount/mapping 6:blkio at /cgroup/blkio in
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:524 : Make group /libvirt/qemu/gluster-vm
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/cpu/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/cpuacct/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/cpuset/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/memory/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/devices/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/freezer/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make controller /cgroup/blkio/libvirt/qemu/gluster-vm/
2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/cpu/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.426+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/cpuacct/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.429+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/cpuset/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.432+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/memory/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.435+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/devices/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.437+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/freezer/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.439+0000: 4452: debug : virCgroupSetValueStr:320 : Set value '/cgroup/blkio/libvirt/qemu/gluster-vm/tasks' to '4452'
2012-08-30 01:03:08.442+0000: 4452: debug : qemuProcessInitCpuAffinity:1731 : Setting CPU affinity
2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessInitCpuAffinity:1760 : Set CPU affinity with specified cpuset
2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2512 : Setting up security labelling
2012-08-30 01:03:08.443+0000: 4452: debug :
virSecurityDACSetProcessLabel:637 : Dropping privileges of DEF to 107:107 2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2519 : Hook complete ret=0 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2043 : Done hook 0 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2056 : Notifying parent for handshake start on 24 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2077 : Waiting on parent for handshake complete on 25 2012-08-30 01:03:08.495+0000: 4452: debug : virCommandHook:2093 : Hook is done 0
Gluster connection failed for server=10.1.81.111 port=24007 volume=dht image=windows7-32-DoubCards-iotest-qcow2.img transport=socket
qemu-kvm: -drive file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native: could not open disk image gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img: No data available
2012-08-30 01:03:11.565+0000: shutting down

I can boot the VM with the command:

LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid f65bd812-45fb-cc2d-75fd-84206248e026 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device usb-tablet,id=input0 -spice port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

my question: 1. What's the libvirt hook function? Could it affect the qemu-kvm command? 2. It's hard to debug the qemu-kvm process from libvirt; I tried hanging glusterd for a moment and then attaching gdb to qemu-kvm. Do you have better methods?

Best Regards, Yin Yin

On Fri, Aug 24, 2012 at 5:44 PM, Deepak C Shetty <deepakcs@linux.vnet.ibm.com> wrote:
On 08/24/2012 12:22 PM, Harsh Bora wrote:
On 08/24/2012 12:05 PM, Daniel Veillard wrote:
On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek Bora wrote:
This patchset provides support for Gluster protocol based network disks. It is based on the proposed gluster support in Qemu on qemu-devel: http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
Just to be clear, that qemu feature didn't make the deadline for 1.2, right? I don't think we can add support at the libvirt level until the patches are committed in QEMU, but that doesn't prevent reviewing them in advance. Right now we are in freeze for 0.10.0,
I am working on enabling oVirt/VDSM to be able to exploit this, using Harsh's RFC patches. VDSM patch @ http://gerrit.ovirt.org/#/c/6856/
Early feedback would help me, especially on the XML spec posted here. My VDSM patch depends on it.
thanx, deepak
_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel

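The failing drive string in the log above follows the pattern gluster://server:port/volume/image, built from the `<source protocol='gluster' name='volume/image'>` and `<host>` fields in the domain XML. A minimal sketch of that mapping (a hypothetical helper for illustration, not the actual code from libvirt's qemu_command.c; the gluster+transport scheme variants for non-socket transports are an assumption based on the qemu RFC):

```python
def qemu_gluster_uri(server, volume, image, port=None, transport=None):
    """Build a qemu gluster drive URI from libvirt <source>/<host> fields.

    Sketch only: transport 'socket' (the default) maps to plain
    gluster://; other transports are assumed to use a
    gluster+<transport> scheme (e.g. gluster+unix, gluster+rdma).
    """
    scheme = "gluster"
    if transport and transport != "socket":
        scheme += "+" + transport  # assumed scheme form for other transports
    netloc = server if port is None else "%s:%s" % (server, port)
    return "%s://%s/%s/%s" % (scheme, netloc, volume, image)

# The drive from the log above:
print(qemu_gluster_uri("10.1.81.111", "dht",
                       "windows7-32-DoubCards-iotest-qcow2.img", 24007))
# gluster://10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img
```

The spaces that appear inside `gluster:// 10.1.81.111` in the quoted logs are mail line-wrap artifacts; the actual string qemu receives has no space.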
Hi, Harsh: I set some breakpoints in glusterd, and can gdb the qemu-kvm forked from libvirtd.

break in glusterd:

(gdb) i b
Num Type           Disp Enb Address            What
1   breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at glusterd-handshake.c:122
2   breakpoint     keep y   0x00000034f4607070 in rpcsvc_program_actor at rpcsvc.c:137
        breakpoint already hit 2 times
3   breakpoint     keep y   0x00007f903ef199f0 in glusterd_set_clnt_mgmt_program at glusterd-handshake.c:359
4   breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at glusterd-handshake.c:122

In the rpcsvc_handle_rpc_call function, it calls rpcsvc_program_actor and returns correctly:

(gdb) p *actor
$13 = {procname = "GETSPEC", '\000' <repeats 24 times>, procnum = 2, actor = 0x7f903ef1a0a0 <server_getspec>, vector_sizer = 0, unprivileged = _gf_false}

but in

    if (0 == svc->allow_insecure && unprivileged && !actor->unprivileged) {
        /* Non-privileged user, fail request */
        gf_log ("glusterd", GF_LOG_ERROR,
                "Request received from non-"
                "privileged port. Failing request");
        rpcsvc_request_destroy (req);
        return -1;
    }

the condition is true, so server_getspec on the server is not called, which causes the qemu-kvm process to fail.

my question: 1. In (0 == svc->allow_insecure && unprivileged && !actor->unprivileged), which term is wrong here?

Best Regards, Yin Yin

On 08/30/2012 08:27 AM, Yin Yin wrote:
Yin, IIUC, you need to set this option to True on your gluster volume to get past this error. Gluster experts can provide more info here.

Option: nfs.ports-insecure
Default Value: (null)
Description: Allow client connections from unprivileged ports. By default only privileged ports are allowed. Use this option to enable or disable insecure ports for a specific subvolume and to override the global setting set by the previous option.

volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>

eg: gluster volume set <volname> nfs.ports-insecure on

something like that.
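Deepak's suggestion amounts to a CLI sequence along these lines (shown for the volume name `dht` from the log; treat the exact option names as assumptions, since on some GlusterFS versions the relevant knobs are `server.allow-insecure` on the volume plus `rpc-auth-allow-insecure` in glusterd.vol — check `gluster volume set help` on your version):

```
# Allow client connections (e.g. qemu's gluster client) from
# unprivileged ports on the volume:
gluster volume set dht server.allow-insecure on

# On some versions the management daemon also needs this line in
# /etc/glusterfs/glusterd.vol, followed by a glusterd restart:
#   option rpc-auth-allow-insecure on
service glusterd restart
```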

On 08/30/2012 08:27 AM, Yin Yin wrote:
I set some breakpoints in glusterd, and can gdb the qemu-kvm forked from libvirtd. In rpcsvc_handle_rpc_call, rpcsvc_program_actor returns the GETSPEC actor correctly, but in

    if (0 == svc->allow_insecure && unprivileged && !actor->unprivileged) {
        /* Non-privileged user, fail request */
        gf_log ("glusterd", GF_LOG_ERROR,
                "Request received from non-"
                "privileged port. Failing request");
        rpcsvc_request_destroy (req);
        return -1;
    }

the condition is true, so server_getspec on the server is not called, which causes the qemu-kvm process to fail.

my question: 1. In (0 == svc->allow_insecure && unprivileged && !actor->unprivileged), which term is wrong here?
You should be able to check this: step through in gdb and print the value of each variable (to check which one is false). However, I think it's more about configuring glusterd correctly and less about the libvirt/qemu part of it. I am willing to be corrected on this. Let us know if Deepak's suggestion to set the nfs.ports-insecure option (which might affect svc->allow_insecure in the above statement) on the gluster volume works for you. Thanks for testing my patch, though!

regards, Harsh
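The check Yin hit can be modeled roughly as follows (a sketch in Python, not glusterd's actual code; "unprivileged" in the RPC layer means the client's source port is above the privileged range, which only root can bind):

```python
PRIVILEGED_PORT_MAX = 1023  # ports 1-1023 require root to bind

def request_allowed(allow_insecure, client_port, actor_unprivileged):
    """Model of glusterd's rpcsvc port check (illustrative sketch).

    A request is rejected only when insecure ports are disallowed,
    the client connected from an unprivileged port, AND the actor
    (here GETSPEC, whose unprivileged flag is _gf_false) is not
    marked as callable from unprivileged ports.
    """
    unprivileged = client_port > PRIVILEGED_PORT_MAX
    if not allow_insecure and unprivileged and not actor_unprivileged:
        return False  # "Request received from non-privileged port. Failing request"
    return True

# qemu runs unprivileged under libvirt (note the log line "Dropping
# privileges of DEF to 107:107"), so it connects from a high port and
# GETSPEC is rejected unless allow_insecure is enabled:
print(request_allowed(allow_insecure=False, client_port=49152,
                      actor_unprivileged=False))  # False
print(request_allowed(allow_insecure=True, client_port=49152,
                      actor_unprivileged=False))  # True
```

This also explains why the same qemu-kvm command works when run by hand as root: the client then binds a privileged port and the check passes.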
participants (9)
- Bharata B Rao
- Daniel Veillard
- Deepak C Shetty
- Eric Blake
- Harsh Bora
- Harsh Prateek Bora
- Jiri Denemark
- Vijay Bellur
- Yin Yin