[libvirt] [PATCH] Changes to support Veritas HyperScale (VxHS) block device protocol with qemu-kvm

Sample XML for a vxhs vdisk is as follows: <disk type='network' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> <host name='192.168.0.1' port='9999'/> </source> <backingStore/> <target dev='vda' bus='virtio'/> <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com> --- docs/formatdomain.html.in | 12 ++++++-- docs/schemas/domaincommon.rng | 1 + src/qemu/qemu_command.c | 4 +++ src/qemu/qemu_driver.c | 3 ++ src/qemu/qemu_parse_command.c | 25 ++++++++++++++++ src/util/virstoragefile.c | 4 ++- src/util/virstoragefile.h | 1 + .../qemuxml2argv-disk-drive-network-vxhs.args | 23 +++++++++++++++ .../qemuxml2argv-disk-drive-network-vxhs.xml | 34 ++++++++++++++++++++++ tests/qemuxml2argvtest.c | 1 + 10 files changed, 104 insertions(+), 4 deletions(-) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 11b3330..ade35a3 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2276,9 +2276,9 @@ <dd> The <code>protocol</code> attribute specifies the protocol to access to the requested image. Possible values are "nbd", - "iscsi", "rbd", "sheepdog" or "gluster". If the - <code>protocol</code> attribute is "rbd", "sheepdog" or - "gluster", an additional attribute <code>name</code> is + "iscsi", "rbd", "sheepdog", "gluster" or "vxhs". If the + <code>protocol</code> attribute is "rbd", "sheepdog", "gluster" + or "vxhs", an additional attribute <code>name</code> is mandatory to specify which volume/image will be used. For "nbd", the <code>name</code> attribute is optional. 
For "iscsi" (<span class="since">since 1.0.4</span>), the <code>name</code> @@ -2388,6 +2388,12 @@ <td> one or more (<span class="since">Since 2.1.0</span>), just one prior to that </td> <td> 24007 </td> </tr> + <tr> + <td> vxhs </td> + <td> a server running Veritas HyperScale daemon </td> + <td> only one </td> + <td> 9999 </td> + </tr> </table> <p> gluster supports "tcp", "rdma", "unix" as valid values for the diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 19d45fd..e569e0a 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -1464,6 +1464,7 @@ <value>ftp</value> <value>ftps</value> <value>tftp</value> + <value>vxhs</value> </choice> </attribute> <optional> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a5fce3..c7b95aa 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -491,6 +491,9 @@ qemuNetworkDriveGetPort(int protocol, /* no default port specified */ return 0; + case VIR_STORAGE_NET_PROTOCOL_VXHS: + return 9999; + case VIR_STORAGE_NET_PROTOCOL_RBD: case VIR_STORAGE_NET_PROTOCOL_LAST: case VIR_STORAGE_NET_PROTOCOL_NONE: @@ -1034,6 +1037,7 @@ qemuBuildNetworkDriveStr(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: + case VIR_STORAGE_NET_PROTOCOL_VXHS: ret = qemuBuildNetworkDriveURI(src, secinfo); break; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a82e58b..4910004 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13724,6 +13724,7 @@ qemuDomainSnapshotPrepareDiskExternalBackingInactive(virDomainDiskDefPtr disk) case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external inactive snapshots are not supported on " @@ -13787,6 +13788,7 @@ qemuDomainSnapshotPrepareDiskExternalOverlayActive(virDomainSnapshotDiskDefPtr d case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external active snapshots are not supported on " @@ -13932,6 +13934,7 @@ qemuDomainSnapshotPrepareDiskInternal(virConnectPtr conn, case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("internal inactive snapshots are not supported on " diff --git a/src/qemu/qemu_parse_command.c b/src/qemu/qemu_parse_command.c index c3b27aa..f2edc28 100644 --- a/src/qemu/qemu_parse_command.c +++ b/src/qemu/qemu_parse_command.c @@ -263,6 +263,16 @@ qemuParseNBDString(virDomainDiskDefPtr disk) return -1; } +static int +qemuParseVxHSString(virDomainDiskDefPtr def) +{ + virURIPtr uri = NULL; + + if (!(uri = virURIParse(def->src->path))) + return -1; + + return qemuParseDriveURIString(def, uri, "vxhs"); +} /* * This method takes a string representing a QEMU command line ARGV set @@ -737,6 +747,12 @@ qemuParseCommandLineDisk(virDomainXMLOptionPtr xmlopt, if (VIR_STRDUP(def->src->path, vdi) < 0) goto error; } + } else if (STRPREFIX(def->src->path, "vxhs:")) { + def->src->type = VIR_STORAGE_TYPE_NETWORK; + def->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + + if (qemuParseVxHSString(def) < 0) + goto error; } else { 
def->src->type = VIR_STORAGE_TYPE_FILE; } @@ -1933,6 +1949,10 @@ qemuParseCommandLine(virCapsPtr caps, disk->src->type = VIR_STORAGE_TYPE_NETWORK; disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_SHEEPDOG; val += strlen("sheepdog:"); + } else if (STRPREFIX(val, "vxhs:")) { + disk->src->type = VIR_STORAGE_TYPE_NETWORK; + disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + val += strlen("vxhs:"); } else { disk->src->type = VIR_STORAGE_TYPE_FILE; } @@ -2009,6 +2029,11 @@ qemuParseCommandLine(virCapsPtr caps, goto error; break; + case VIR_STORAGE_NET_PROTOCOL_VXHS: + if (qemuParseVxHSString(disk) < 0) + goto error; + + break; case VIR_STORAGE_NET_PROTOCOL_HTTP: case VIR_STORAGE_NET_PROTOCOL_HTTPS: case VIR_STORAGE_NET_PROTOCOL_FTP: diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c index 272db67..a730a01 100644 --- a/src/util/virstoragefile.c +++ b/src/util/virstoragefile.c @@ -86,7 +86,8 @@ VIR_ENUM_IMPL(virStorageNetProtocol, VIR_STORAGE_NET_PROTOCOL_LAST, "ftp", "ftps", "tftp", - "ssh") + "ssh", + "vxhs") VIR_ENUM_IMPL(virStorageNetHostTransport, VIR_STORAGE_NET_HOST_TRANS_LAST, "tcp", @@ -2634,6 +2635,7 @@ virStorageSourceParseBackingColon(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: virReportError(VIR_ERR_INTERNAL_ERROR, _("malformed backing store path for protocol %s"), protocol); diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h index 3d09468..88dff36 100644 --- a/src/util/virstoragefile.h +++ b/src/util/virstoragefile.h @@ -132,6 +132,7 @@ typedef enum { VIR_STORAGE_NET_PROTOCOL_FTPS, VIR_STORAGE_NET_PROTOCOL_TFTP, VIR_STORAGE_NET_PROTOCOL_SSH, + VIR_STORAGE_NET_PROTOCOL_VXHS, VIR_STORAGE_NET_PROTOCOL_LAST } virStorageNetProtocol; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args new file mode 100644 index 0000000..3d37b00 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args @@ -0,0 +1,23 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/libexec/qemu-kvm \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-cpu qemu32 \ +-m 214 \ +-smp 1,sockets=1,cores=1,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot c \ +-usb \ +-drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ +format=raw,if=none,id=drive-virtio-disk0,cache=none \ +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml new file mode 100644 index 0000000..45c807f --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml @@ -0,0 +1,34 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <os> + <type arch='i686' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/libexec/qemu-kvm</emulator> + <disk type='network' device='disk'> + <driver 
name='qemu' type='raw' cache='none'/> + <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> + <host name='192.168.0.1' port='9999'/> + </source> + <backingStore/> + <target dev='vda' bus='virtio'/> + <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <controller type='usb' index='0'/> + <controller type='pci' index='0' model='pci-root'/> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index 1ee8402..d94e3f2 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -869,6 +869,7 @@ mymain(void) # endif DO_TEST("disk-drive-network-rbd-ipv6", NONE); DO_TEST_FAILURE("disk-drive-network-rbd-no-colon", NONE); + DO_TEST("disk-drive-network-vxhs", NONE); DO_TEST("disk-drive-no-boot", QEMU_CAPS_BOOTINDEX); DO_TEST_PARSE_ERROR("disk-device-lun-type-invalid", -- 2.5.5

$SUBJ
s/Change to support/Add support for
s/ with qemu-kvm//

On 11/12/2016 01:19 AM, Ashish Mittal wrote:
Sample XML for a vxhs vdisk is as follows:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='192.168.0.1' port='9999'/>
  </source>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
---
 docs/formatdomain.html.in                          | 12 ++++++--
 docs/schemas/domaincommon.rng                      |  1 +
 src/qemu/qemu_command.c                            |  4 +++
 src/qemu/qemu_driver.c                             |  3 ++
 src/qemu/qemu_parse_command.c                      | 25 ++++++++++++++++
 src/util/virstoragefile.c                          |  4 ++-
 src/util/virstoragefile.h                          |  1 +
 .../qemuxml2argv-disk-drive-network-vxhs.args      | 23 +++++++++++++++
 .../qemuxml2argv-disk-drive-network-vxhs.xml       | 34 ++++++++++++++++++++++
 tests/qemuxml2argvtest.c                           |  1 +
 10 files changed, 104 insertions(+), 4 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
Missing changes for src/libxl/libxl_conf.c (libxlMakeNetworkDiskSrcStr) and src/xenconfig/xen_xl.c (xenFormatXLDiskSrcNet) - need to add a "case VIR_STORAGE_NET_PROTOCOL_VXHS:" for the "switch (virStorageNetProtocol) src->protocol".

Also running 'make syntax-check' would show that tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args has a too long line at the end:

  GEN      test-wrap-argv
--- tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args   2016-11-14 06:16:40.739509847 -0500
+++ -   2016-11-14 06:23:59.661334157 -0500
@@ -20,4 +20,5 @@
 -usb \
 -drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\
 format=raw,if=none,id=drive-virtio-disk0,cache=none \
--device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
+-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\
+id=virtio-disk0
Incorrect line wrapping in tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
Use test-wrap-argv.pl to wrap test data files
cfg.mk:1075: recipe for target 'test-wrap-argv' failed
make: *** [test-wrap-argv] Error 1

All those are easily fixable; however, you will also need to modify the virStorageSourceJSONDriverParser in src/util/virstoragefile.c in order to do the JSON parsing properly as well as tests/virstoragetest.c to add test case(s). If you use 'gitk' or some other git log -p browser, starting with commit id 'e91f767c743' and going forward through Peter's series you can see how this was done for other various protocols.

From what I see from the QEMU list that'll work for the json you provided in your example:

Sample command line using the JSON syntax:
./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on 'json:{"driver":"vxhs","vdisk-id":"c3e9095a-a5ee-4dce-afeb-2a59fb387410", "server":{"host":"172.172.17.4","port":"9999"}}'

Other questions/concerns... These may have been answered previously, but I haven't followed closely along.

1. It appears to me that the 'name' field is a UUID for the VxHS volume, but it's not clear that's all it can be. You'll note in virDomainDiskSourceParse a specific gluster check. So that we don't get surprised at some point in the future - are there any specific things in the name that need to be addressed?

Beyond that, how does one know what to provide for that 'name=' field? For VxHS, it seems to me that the path must be a UUID of some sort. So would anyone know which UUID to magically pick in order to add the disk to a guest? How does one get a list?

2. Does/will VxHS ever require authentication similar to iSCSI and RBD?

3. Will there be (or is there) support for LUKS volumes on the VxHS server?

4. There's no VxHS Storage Pool support in this patch (OK actually an additional set of patches in order to support). That would be expected especially for a networked storage environment. You'll note there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage iSCSI, Gluster, and RBD protocol specific things. For starters, create, delete, and refresh - especially things that a stat() wouldn't necessarily return (capacity, allocation, type a/k/a target.format). Perhaps even the ability to upload/download and wipe volumes in the pool. Having a pool is a bit more work, but look at the genesis of existing storage_backend_*.c files to get a sense for what needs to change.

5. Along the same lines - virStorageFileBackendForTypeInternal handles getting details/metadata about the file format in order to determine block sizes for inactive domains. See the path qemuDomainGetBlockInfo calling into qemuStorageLimitsRefresh which calls virStorageFileInitAs. These would be for the backend of a 'virsh domblkinfo' type command when the domain isn't running. The gluster backend has a set of them (search on _virStorageFileBackend and virStorageFileBackendGluster).

6. Use something like cscope to be sure to check all the places where VIR_STORAGE_NET_PROTOCOL_ is used in the code and made sure there isn't a special case for VXHS (RBD, GLUSTER, and ISCSI have a few). Similarly check usage of VIR_STORAGE_TYPE_NETWORK. Some checks are not within switch/case statements and some involve specific disk configuration options that are(n't) supported.

What about any need for migration? (see qemuMigrationIsSafe)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 11b3330..ade35a3 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2276,9 +2276,9 @@ <dd> The <code>protocol</code> attribute specifies the protocol to access to the requested image. Possible values are "nbd", - "iscsi", "rbd", "sheepdog" or "gluster". If the - <code>protocol</code> attribute is "rbd", "sheepdog" or - "gluster", an additional attribute <code>name</code> is + "iscsi", "rbd", "sheepdog", "gluster" or "vxhs". If the + <code>protocol</code> attribute is "rbd", "sheepdog", "gluster" + or "vxhs", an additional attribute <code>name</code> is mandatory to specify which volume/image will be used. For "nbd", the <code>name</code> attribute is optional. For "iscsi" (<span class="since">since 1.0.4</span>), the <code>name</code>
Just before the closing </dd> add something to describe what the 'name' will be for vxhs like is done for iscsi, such as:

For "vxhs" (<span class="since">since 2.5.0</span>), the <code>name</code> is ... [it looks like a UUID to me - although there's nothing that validates that hypothesis].
@@ -2388,6 +2388,12 @@ <td> one or more (<span class="since">Since 2.1.0</span>), just one prior to that </td> <td> 24007 </td> </tr> + <tr> + <td> vxhs </td> + <td> a server running Veritas HyperScale daemon </td> + <td> only one </td> + <td> 9999 </td> + </tr> </table> <p> gluster supports "tcp", "rdma", "unix" as valid values for the diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 19d45fd..e569e0a 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -1464,6 +1464,7 @@ <value>ftp</value> <value>ftps</value> <value>tftp</value> + <value>vxhs</value> </choice> </attribute> <optional> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a5fce3..c7b95aa 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -491,6 +491,9 @@ qemuNetworkDriveGetPort(int protocol, /* no default port specified */ return 0;
+ case VIR_STORAGE_NET_PROTOCOL_VXHS: + return 9999; + case VIR_STORAGE_NET_PROTOCOL_RBD: case VIR_STORAGE_NET_PROTOCOL_LAST: case VIR_STORAGE_NET_PROTOCOL_NONE: @@ -1034,6 +1037,7 @@ qemuBuildNetworkDriveStr(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: + case VIR_STORAGE_NET_PROTOCOL_VXHS: ret = qemuBuildNetworkDriveURI(src, secinfo); break;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a82e58b..4910004 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13724,6 +13724,7 @@ qemuDomainSnapshotPrepareDiskExternalBackingInactive(virDomainDiskDefPtr disk) case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external inactive snapshots are not supported on " @@ -13787,6 +13788,7 @@ qemuDomainSnapshotPrepareDiskExternalOverlayActive(virDomainSnapshotDiskDefPtr d case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external active snapshots are not supported on " @@ -13932,6 +13934,7 @@ qemuDomainSnapshotPrepareDiskInternal(virConnectPtr conn, case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("internal inactive snapshots are not supported on " diff --git a/src/qemu/qemu_parse_command.c b/src/qemu/qemu_parse_command.c index c3b27aa..f2edc28 100644 --- a/src/qemu/qemu_parse_command.c +++ b/src/qemu/qemu_parse_command.c @@ -263,6 +263,16 @@ qemuParseNBDString(virDomainDiskDefPtr disk) return -1; }
+static int
+qemuParseVxHSString(virDomainDiskDefPtr def)
+{
+    virURIPtr uri = NULL;
+
+    if (!(uri = virURIParse(def->src->path)))
+        return -1;
+
Should there be validation of the name? Something like a call to virUUIDIsValid?

This is code for the path of parsing a qemu command line of a guest not started by libvirt in order to add it to libvirt (virsh qemu-attach), which is used "less often", so I'm less concerned about this, but mainly curious.

John
+    return qemuParseDriveURIString(def, uri, "vxhs");
+}
/* * This method takes a string representing a QEMU command line ARGV set @@ -737,6 +747,12 @@ qemuParseCommandLineDisk(virDomainXMLOptionPtr xmlopt, if (VIR_STRDUP(def->src->path, vdi) < 0) goto error; } + } else if (STRPREFIX(def->src->path, "vxhs:")) { + def->src->type = VIR_STORAGE_TYPE_NETWORK; + def->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + + if (qemuParseVxHSString(def) < 0) + goto error; } else { def->src->type = VIR_STORAGE_TYPE_FILE; } @@ -1933,6 +1949,10 @@ qemuParseCommandLine(virCapsPtr caps, disk->src->type = VIR_STORAGE_TYPE_NETWORK; disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_SHEEPDOG; val += strlen("sheepdog:"); + } else if (STRPREFIX(val, "vxhs:")) { + disk->src->type = VIR_STORAGE_TYPE_NETWORK; + disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + val += strlen("vxhs:"); } else { disk->src->type = VIR_STORAGE_TYPE_FILE; } @@ -2009,6 +2029,11 @@ qemuParseCommandLine(virCapsPtr caps, goto error;
break; + case VIR_STORAGE_NET_PROTOCOL_VXHS: + if (qemuParseVxHSString(disk) < 0) + goto error; + + break; case VIR_STORAGE_NET_PROTOCOL_HTTP: case VIR_STORAGE_NET_PROTOCOL_HTTPS: case VIR_STORAGE_NET_PROTOCOL_FTP: diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c index 272db67..a730a01 100644 --- a/src/util/virstoragefile.c +++ b/src/util/virstoragefile.c @@ -86,7 +86,8 @@ VIR_ENUM_IMPL(virStorageNetProtocol, VIR_STORAGE_NET_PROTOCOL_LAST, "ftp", "ftps", "tftp", - "ssh") + "ssh", + "vxhs")
VIR_ENUM_IMPL(virStorageNetHostTransport, VIR_STORAGE_NET_HOST_TRANS_LAST, "tcp", @@ -2634,6 +2635,7 @@ virStorageSourceParseBackingColon(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: virReportError(VIR_ERR_INTERNAL_ERROR, _("malformed backing store path for protocol %s"), protocol); diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h index 3d09468..88dff36 100644 --- a/src/util/virstoragefile.h +++ b/src/util/virstoragefile.h @@ -132,6 +132,7 @@ typedef enum { VIR_STORAGE_NET_PROTOCOL_FTPS, VIR_STORAGE_NET_PROTOCOL_TFTP, VIR_STORAGE_NET_PROTOCOL_SSH, + VIR_STORAGE_NET_PROTOCOL_VXHS,
VIR_STORAGE_NET_PROTOCOL_LAST } virStorageNetProtocol; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args new file mode 100644 index 0000000..3d37b00 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args @@ -0,0 +1,23 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/libexec/qemu-kvm \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-cpu qemu32 \ +-m 214 \ +-smp 1,sockets=1,cores=1,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot c \ +-usb \ +-drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ +format=raw,if=none,id=drive-virtio-disk0,cache=none \ +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml new file mode 100644 index 0000000..45c807f --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml @@ -0,0 +1,34 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <os> + <type arch='i686' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/libexec/qemu-kvm</emulator> + <disk type='network' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> + <host name='192.168.0.1' port='9999'/> + </source> + <backingStore/> + <target dev='vda' bus='virtio'/> + <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <controller type='usb' index='0'/> + <controller type='pci' index='0' model='pci-root'/> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index 1ee8402..d94e3f2 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -869,6 +869,7 @@ mymain(void) # endif DO_TEST("disk-drive-network-rbd-ipv6", NONE); DO_TEST_FAILURE("disk-drive-network-rbd-no-colon", NONE); + DO_TEST("disk-drive-network-vxhs", NONE); DO_TEST("disk-drive-no-boot", QEMU_CAPS_BOOTINDEX); DO_TEST_PARSE_ERROR("disk-device-lun-type-invalid",

Hi John, Thanks for the review and your comments. Sorry about the delay in reply. Our qemu patch has been undergoing a lot of changes, and we were hoping to finalize some things before the next libvirt patch. On Mon, Nov 14, 2016 at 12:44 PM, John Ferlan <jferlan@redhat.com> wrote:
$SUBJ
s/Change to support/Add support for
s/ with qemu-kvm//
On 11/12/2016 01:19 AM, Ashish Mittal wrote:
Sample XML for a vxhs vdisk is as follows:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='192.168.0.1' port='9999'/>
  </source>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
---
 docs/formatdomain.html.in                          | 12 ++++++--
 docs/schemas/domaincommon.rng                      |  1 +
 src/qemu/qemu_command.c                            |  4 +++
 src/qemu/qemu_driver.c                             |  3 ++
 src/qemu/qemu_parse_command.c                      | 25 ++++++++++++++++
 src/util/virstoragefile.c                          |  4 ++-
 src/util/virstoragefile.h                          |  1 +
 .../qemuxml2argv-disk-drive-network-vxhs.args      | 23 +++++++++++++++
 .../qemuxml2argv-disk-drive-network-vxhs.xml       | 34 ++++++++++++++++++++++
 tests/qemuxml2argvtest.c                           |  1 +
 10 files changed, 104 insertions(+), 4 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
Missing changes for src/libxl/libxl_conf.c (libxlMakeNetworkDiskSrcStr) and src/xenconfig/xen_xl.c (xenFormatXLDiskSrcNet) - need to add a "case VIR_STORAGE_NET_PROTOCOL_VXHS:" for the "switch (virStorageNetProtocol) src->protocol".
Also running 'make syntax-check' would show that tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args has a too long line at the end:
GEN test-wrap-argv --- tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args 2016-11-14 06:16:40.739509847 -0500 +++ - 2016-11-14 06:23:59.661334157 -0500 @@ -20,4 +20,5 @@ -usb \ -drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ format=raw,if=none,id=drive-virtio-disk0,cache=none \ --device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\ +id=virtio-disk0 Incorrect line wrapping in tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args Use test-wrap-argv.pl to wrap test data files cfg.mk:1075: recipe for target 'test-wrap-argv' failed make: *** [test-wrap-argv] Error 1
All those are easily fixable; however, you will also need to modify the virStorageSourceJSONDriverParser in src/util/virstoragefile.c in order to do the JSON parsing properly as well as tests/virstoragetest.c to add test case(s). If you use 'gitk' or some other git log -p browser, starting with commit id 'e91f767c743' and going forward through Peter's series you can see how this was done for other various protocols.
From what I see from the QEMU list that'll work for the json you provided in your example:
Sample command line using the JSON syntax: ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on 'json:{"driver":"vxhs","vdisk-id":"c3e9095a-a5ee-4dce-afeb-2a59fb387410", "server":{"host":"172.172.17.4","port":"9999"}}'
Will fix above issues.
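For the libxl/xen_xl switches, my understanding is that vxhs just gets added to the arm that rejects protocols those drivers cannot express, so a guest with a vxhs disk fails cleanly. A rough sketch for libxlMakeNetworkDiskSrcStr - the neighbouring case labels and the error path are my assumption of how that switch is laid out, not copied from the file:

    case VIR_STORAGE_NET_PROTOCOL_VXHS:
    case VIR_STORAGE_NET_PROTOCOL_LAST:
    case VIR_STORAGE_NET_PROTOCOL_NONE:
        /* protocols libxl cannot express in a disk source string */
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                       _("Unsupported network block protocol '%s'"),
                       virStorageNetProtocolTypeToString(src->protocol));
        goto cleanup;

xenFormatXLDiskSrcNet would get the same one-line case label added to its unsupported-protocol arm.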
Other questions/concerns... These may have been answered previously, but I haven't followed closely along.
1. It appears to me that the 'name' field is a UUID for the VxHS volume, but it's not clear that's all it can be. You'll note in virDomainDiskSourceParse a specific gluster check. So that we don't get surprised at some point in the future - are there any specific things in the name that need to be addressed?
That's correct. Name is in fact a UUID and nothing else. The "name" tag in XML name='eb90327c-8302-4725-9e1b-4e85ed4dc251' is misleading. I will change it to "vdisk-id" so there's no confusion. We don't think there is a need to validate the name/vdisk-id format. In case of incorrect vdisk ID, qemu vxhs code will fail the vdisk open.
Beyond that, how does one know what to provide for that 'name=' field? For VxHS, it seems to me that the path must be a UUID of some sort. So would anyone know which UUID to magically pick in order to add the disk to a guest? How does one get a list?
The UUID allocation and vdisk creation is done out-of-band by the VxHS server. The UUIDs for disks are populated in the XML file at the time of VM creation. User does not (need to) know what UUID to open or the list of all UUIDs.
2. Does/will VxHS ever require authentication similar to iSCSI and RBD?
This is under active discussion with the qemu community.
3. Will there be (or is there) support for LUKS volumes on the VxHS server?
We don't anticipate a need for this.
4. There's no VxHS Storage Pool support in this patch (OK actually an additional set of patches in order to support). That would be expected especially for a networked storage environment. You'll note there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage iSCSI, Gluster, and RBD protocol specific things. For starters, create, delete, and refresh - especially things that a stat() wouldn't necessarily return (capacity, allocation, type a/k/a target.format). Perhaps even the ability to upload/download and wipe volumes in the pool. Having a pool is a bit more work, but look at the genesis of existing storage_backend_*.c files to get a sense for what needs to change.
VxHS does not need the Storage Pool functionality. Do we still need to implement this?
5. Along the same lines - virStorageFileBackendForTypeInternal handles getting details/metadata about the file format in order to determine block sizes for inactive domains. See the path qemuDomainGetBlockInfo calling into qemuStorageLimitsRefresh which calls virStorageFileInitAs. These would be for the backend of a 'virsh domblkinfo' type command when the domain isn't running. The gluster backend has a set of them (search on _virStorageFileBackend and virStorageFileBackendGluster).
Would supporting the above functionality be optional? As mentioned above, vxhs disk creation happens outside of qemu/libvirt by the vxhs server in response to external messages. The XML config for the guest VM is then populated with correct UUID + server IP/port info, which is picked up by libvirt.
6. Use something like cscope to be sure to check all the places where VIR_STORAGE_NET_PROTOCOL_ is used in the code and made sure there isn't a special case for VXHS (RBD, GLUSTER, and ISCSI have a few). Similarly check usage of VIR_STORAGE_TYPE_NETWORK. Some checks are not within switch/case statements and some involve specific disk configuration options that are(n't) supported. What about any need for migration? (see qemuMigrationIsSafe)
Will check for other special cases. We do support live migration, and we add cache='none' to the config by default. Do I need to make additional changes for this?
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 11b3330..ade35a3 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2276,9 +2276,9 @@ <dd> The <code>protocol</code> attribute specifies the protocol to access to the requested image. Possible values are "nbd", - "iscsi", "rbd", "sheepdog" or "gluster". If the - <code>protocol</code> attribute is "rbd", "sheepdog" or - "gluster", an additional attribute <code>name</code> is + "iscsi", "rbd", "sheepdog", "gluster" or "vxhs". If the + <code>protocol</code> attribute is "rbd", "sheepdog", "gluster" + or "vxhs", an additional attribute <code>name</code> is mandatory to specify which volume/image will be used. For "nbd", the <code>name</code> attribute is optional. For "iscsi" (<span class="since">since 1.0.4</span>), the <code>name</code>
Just before the closing </dd> add something to describe what the 'name' will be for vxhs like is done for iscsi, such as:
For "vxhs" (<span class="since">since 2.5.0</span>), the <code>name</code> is ... [it looks like a UUID to me - although there's nothing that validates that hypothesis].
Will do.
@@ -2388,6 +2388,12 @@ <td> one or more (<span class="since">Since 2.1.0</span>), just one prior to that </td> <td> 24007 </td> </tr> + <tr> + <td> vxhs </td> + <td> a server running Veritas HyperScale daemon </td> + <td> only one </td> + <td> 9999 </td> + </tr> </table> <p> gluster supports "tcp", "rdma", "unix" as valid values for the diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 19d45fd..e569e0a 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -1464,6 +1464,7 @@ <value>ftp</value> <value>ftps</value> <value>tftp</value> + <value>vxhs</value> </choice> </attribute> <optional> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a5fce3..c7b95aa 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -491,6 +491,9 @@ qemuNetworkDriveGetPort(int protocol, /* no default port specified */ return 0;
+ case VIR_STORAGE_NET_PROTOCOL_VXHS: + return 9999; + case VIR_STORAGE_NET_PROTOCOL_RBD: case VIR_STORAGE_NET_PROTOCOL_LAST: case VIR_STORAGE_NET_PROTOCOL_NONE: @@ -1034,6 +1037,7 @@ qemuBuildNetworkDriveStr(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: + case VIR_STORAGE_NET_PROTOCOL_VXHS: ret = qemuBuildNetworkDriveURI(src, secinfo); break;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a82e58b..4910004 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13724,6 +13724,7 @@ qemuDomainSnapshotPrepareDiskExternalBackingInactive(virDomainDiskDefPtr disk) case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external inactive snapshots are not supported on " @@ -13787,6 +13788,7 @@ qemuDomainSnapshotPrepareDiskExternalOverlayActive(virDomainSnapshotDiskDefPtr d case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external active snapshots are not supported on " @@ -13932,6 +13934,7 @@ qemuDomainSnapshotPrepareDiskInternal(virConnectPtr conn, case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("internal inactive snapshots are not supported on " diff --git a/src/qemu/qemu_parse_command.c b/src/qemu/qemu_parse_command.c index c3b27aa..f2edc28 100644 --- a/src/qemu/qemu_parse_command.c +++ b/src/qemu/qemu_parse_command.c @@ -263,6 +263,16 @@ qemuParseNBDString(virDomainDiskDefPtr disk) return -1; }
+static int
+qemuParseVxHSString(virDomainDiskDefPtr def)
+{
+    virURIPtr uri = NULL;
+
+    if (!(uri = virURIParse(def->src->path)))
+        return -1;
+
Should there be validation of the name? Something like a call to virUUIDIsValid?
This is code for the path of parsing a qemu command line of a guest not started by libvirt in order to add it to libvirt (virsh qemu-attach), which is used "less often", so I'm less concerned about this, but mainly curious.
An incorrect UUID will only cause vxhs code in qemu to fail the disk open. So we don't think there is a need for additional checks. Please let us know if this sounds OK!
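If a sanity check is wanted anyway, it could go right after the URI parse in the helper. A sketch of what that would look like - assuming the vdisk ID is always a plain UUID and that virURIParse leaves the leading '/' on uri->path for vxhs://host:port/<id> URIs:

    static int
    qemuParseVxHSString(virDomainDiskDefPtr def)
    {
        virURIPtr uri = NULL;
        unsigned char uuid[VIR_UUID_BUFLEN];

        if (!(uri = virURIParse(def->src->path)))
            return -1;

        /* Hypothetical check, not in the posted patch: reject vdisk
         * IDs that do not parse as a UUID. */
        if (!uri->path || virUUIDParse(uri->path + 1, uuid) < 0) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("invalid VxHS vdisk id in '%s'"),
                           def->src->path);
            virURIFree(uri);
            return -1;
        }

        return qemuParseDriveURIString(def, uri, "vxhs");
    }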
John
+    return qemuParseDriveURIString(def, uri, "vxhs");
+}
/* * This method takes a string representing a QEMU command line ARGV set @@ -737,6 +747,12 @@ qemuParseCommandLineDisk(virDomainXMLOptionPtr xmlopt, if (VIR_STRDUP(def->src->path, vdi) < 0) goto error; } + } else if (STRPREFIX(def->src->path, "vxhs:")) { + def->src->type = VIR_STORAGE_TYPE_NETWORK; + def->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + + if (qemuParseVxHSString(def) < 0) + goto error; } else { def->src->type = VIR_STORAGE_TYPE_FILE; } @@ -1933,6 +1949,10 @@ qemuParseCommandLine(virCapsPtr caps, disk->src->type = VIR_STORAGE_TYPE_NETWORK; disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_SHEEPDOG; val += strlen("sheepdog:"); + } else if (STRPREFIX(val, "vxhs:")) { + disk->src->type = VIR_STORAGE_TYPE_NETWORK; + disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + val += strlen("vxhs:"); } else { disk->src->type = VIR_STORAGE_TYPE_FILE; } @@ -2009,6 +2029,11 @@ qemuParseCommandLine(virCapsPtr caps, goto error;
break; + case VIR_STORAGE_NET_PROTOCOL_VXHS: + if (qemuParseVxHSString(disk) < 0) + goto error; + + break; case VIR_STORAGE_NET_PROTOCOL_HTTP: case VIR_STORAGE_NET_PROTOCOL_HTTPS: case VIR_STORAGE_NET_PROTOCOL_FTP: diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c index 272db67..a730a01 100644 --- a/src/util/virstoragefile.c +++ b/src/util/virstoragefile.c @@ -86,7 +86,8 @@ VIR_ENUM_IMPL(virStorageNetProtocol, VIR_STORAGE_NET_PROTOCOL_LAST, "ftp", "ftps", "tftp", - "ssh") + "ssh", + "vxhs")
VIR_ENUM_IMPL(virStorageNetHostTransport, VIR_STORAGE_NET_HOST_TRANS_LAST, "tcp", @@ -2634,6 +2635,7 @@ virStorageSourceParseBackingColon(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: virReportError(VIR_ERR_INTERNAL_ERROR, _("malformed backing store path for protocol %s"), protocol); diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h index 3d09468..88dff36 100644 --- a/src/util/virstoragefile.h +++ b/src/util/virstoragefile.h @@ -132,6 +132,7 @@ typedef enum { VIR_STORAGE_NET_PROTOCOL_FTPS, VIR_STORAGE_NET_PROTOCOL_TFTP, VIR_STORAGE_NET_PROTOCOL_SSH, + VIR_STORAGE_NET_PROTOCOL_VXHS,
VIR_STORAGE_NET_PROTOCOL_LAST } virStorageNetProtocol; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args new file mode 100644 index 0000000..3d37b00 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args @@ -0,0 +1,23 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/libexec/qemu-kvm \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-cpu qemu32 \ +-m 214 \ +-smp 1,sockets=1,cores=1,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot c \ +-usb \ +-drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ +format=raw,if=none,id=drive-virtio-disk0,cache=none \ +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml new file mode 100644 index 0000000..45c807f --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml @@ -0,0 +1,34 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <os> + <type arch='i686' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/libexec/qemu-kvm</emulator> + <disk type='network' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> + <host name='192.168.0.1' port='9999'/> + </source> + <backingStore/> + <target dev='vda' bus='virtio'/> + <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <controller type='usb' index='0'/> + <controller type='pci' index='0' model='pci-root'/> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index 1ee8402..d94e3f2 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -869,6 +869,7 @@ mymain(void) # endif DO_TEST("disk-drive-network-rbd-ipv6", NONE); DO_TEST_FAILURE("disk-drive-network-rbd-no-colon", NONE); + DO_TEST("disk-drive-network-vxhs", NONE); DO_TEST("disk-drive-no-boot", QEMU_CAPS_BOOTINDEX); DO_TEST_PARSE_ERROR("disk-device-lun-type-invalid",
Thanks, Ashish

On 12/16/2016 10:26 PM, ashish mittal wrote:
Hi John,
Thanks for the review and your comments. Sorry about the delay in reply. Our qemu patch has been undergoing a lot of changes, and we were hoping to finalize some things before the next libvirt patch.
I've been following the qemu changes from a distance - they're on the radar, but I don't follow the details! I'm deep in the middle of other work. I think some of the qemu changes will alter how some of the details work here.
On Mon, Nov 14, 2016 at 12:44 PM, John Ferlan <jferlan@redhat.com> wrote:
$SUBJ
s/Change to support/Add support for
s/ with qemu-kvm//
On 11/12/2016 01:19 AM, Ashish Mittal wrote:
Sample XML for a vxhs vdisk is as follows:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='192.168.0.1' port='9999'/>
  </source>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
---
 docs/formatdomain.html.in                          | 12 ++++++--
 docs/schemas/domaincommon.rng                      |  1 +
 src/qemu/qemu_command.c                            |  4 +++
 src/qemu/qemu_driver.c                             |  3 ++
 src/qemu/qemu_parse_command.c                      | 25 ++++++++++++++++
 src/util/virstoragefile.c                          |  4 ++-
 src/util/virstoragefile.h                          |  1 +
 .../qemuxml2argv-disk-drive-network-vxhs.args      | 23 +++++++++++++++
 .../qemuxml2argv-disk-drive-network-vxhs.xml       | 34 ++++++++++++++++++++++
 tests/qemuxml2argvtest.c                           |  1 +
 10 files changed, 104 insertions(+), 4 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
Missing changes for src/libxl/libxl_conf.c (libxlMakeNetworkDiskSrcStr) and src/xenconfig/xen_xl.c (xenFormatXLDiskSrcNet) - need to add a "case VIR_STORAGE_NET_PROTOCOL_VXHS:" for the "switch (virStorageNetProtocol) src->protocol".
Also running 'make syntax-check' would show that tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args has a too long line at the end:
GEN test-wrap-argv --- tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args 2016-11-14 06:16:40.739509847 -0500 +++ - 2016-11-14 06:23:59.661334157 -0500 @@ -20,4 +20,5 @@ -usb \ -drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ format=raw,if=none,id=drive-virtio-disk0,cache=none \ --device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\ +id=virtio-disk0 Incorrect line wrapping in tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args Use test-wrap-argv.pl to wrap test data files cfg.mk:1075: recipe for target 'test-wrap-argv' failed make: *** [test-wrap-argv] Error 1
All those are easily fixable; however, you will also need to modify the virStorageSourceJSONDriverParser in src/util/virstoragefile.c in order to do the JSON parsing properly as well as tests/virstoragetest.c to add test case(s). If you use 'gitk' or some other git log -p browser, starting with commit id 'e91f767c743' and going forward through Peter's series you can see how this was done for other various protocols.
From what I see from the QEMU list that'll work for the json you provided in your example:
Sample command line using the JSON syntax: ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on 'json:{"driver":"vxhs","vdisk-id":"c3e9095a-a5ee-4dce-afeb-2a59fb387410", "server":{"host":"172.172.17.4","port":"9999"}}'
Will fix above issues.
Other questions/concerns... These may have been answered previously, but I haven't followed closely along.
1. It appears to me that the 'name' field is a UUID for the VxHS volume, but it's not clear that's all it can be. You'll note in virDomainDiskSourceParse a specific gluster check. So that we don't get surprised at some point in the future - are there any specific things in the name that need to be addressed?
That's correct. Name is in fact a UUID and nothing else. The "name" tag in XML name='eb90327c-8302-4725-9e1b-4e85ed4dc251' is misleading. I will change it to "vdisk-id" so there's no confusion.
We don't think there is a need to validate the name/vdisk-id format. In case of incorrect vdisk ID, qemu vxhs code will fail the vdisk open.
Beyond that, how does one know what to provide for that 'name=' field? For VxHS, it seems to me that the path must be a UUID of some sort. So would anyone know which UUID to magically pick in order to add the disk to a guest? How does one get a list?
The UUID allocation and vdisk creation is done out-of-band by the VxHS server. The UUIDs for disks are populated in the XML file at the time of VM creation. User does not (need to) know what UUID to open or the list of all UUIDs.
Now I'm confused - maybe I asked the wrong question. You show the domain XML:

  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='192.168.0.1' port='9999'/>
  </source>

How does anyone know what to put in 'name'? That's a UUID. That's what I'm looking to determine. How do I know which volume I'd be choosing? Maybe there's something I'm missing about the implementation. There has to be something to allow someone to make an intelligent choice about which vxhs volume/disk/storage is being used by the domain.

If I compare that to iSCSI:

  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
    <host name='example.com' port='3260'/>
  </source>

There are some iSCSI tools and built up knowledge that show you how to generate the value for the "name" attribute. But it's a lot easier to do this if one creates an iSCSI pool and then adds the disk/volume/lun to the domain via pool source XML:

  <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>

or

  <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/>

where 'volume' is filled in from the output of a 'virsh vol-list iscsi-pool'...
2. Does/will VxHS ever require authentication similar to iSCSI and RBD?
This is under active discussion with the qemu community.
Yes - I've seen. The secrets concept is available in libvirt and it works with the qemu secrets... The RBD code uses the model I think you'll need to use.
3. Will there be (or is there) support for LUKS volumes on the VxHS server?
We don't anticipate a need for this.
4. There's no VxHS Storage Pool support in this patch (OK actually an additional set of patches in order to support). That would be expected especially for a networked storage environment. You'll note there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage iSCSI, Gluster, and RBD protocol specific things. For starters, create, delete, and refresh - especially things that a stat() wouldn't necessarily return (capacity, allocation, type a/k/a target.format). Perhaps even the ability to upload/download and wipe volumes in the pool. Having a pool is a bit more work, but look at the genesis of existing storage_backend_*.c files to get a sense for what needs to change.
VxHS does not need the Storage Pool functionality. Do we still need to implement this?
It's something that's expected. See my reasoning above.
5. Along the same lines - virStorageFileBackendForTypeInternal handles getting details/metadata about the file format in order to determine block sizes for inactive domains. See the path qemuDomainGetBlockInfo calling into qemuStorageLimitsRefresh which calls virStorageFileInitAs. These would be for the backend of a 'virsh domblkinfo' type command when the domain isn't running. The gluster backend has a set of them (search on _virStorageFileBackend and virStorageFileBackendGluster).
Would supporting the above functionality be optional? As mentioned above, vxhs disk creation happens outside of qemu/libvirt by the vxhs server in response to external messages. The XML config for the guest VM is then populated with correct UUID + server IP/port info, which is picked up by libvirt.
How would the libvirt code find out anything it needs about the target volume, such as allocation, capacity, and physical usage? I think these are details we'll work through once the qemu connection is made. Of course, without a storage pool and the need for a volume def, this would be unnecessary.
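To make that concrete: the gluster support works because virStorageFileBackendForTypeInternal can find a virStorageFileBackend registered for the gluster protocol, and a vxhs equivalent would need the same kind of registration. The sketch below only shows the shape - the callback names are hypothetical and the real member list should be copied from virStorageFileBackendGluster:

    /* Illustrative only - function names are hypothetical; take the exact
     * member set from virStorageFileBackendGluster. */
    virStorageFileBackend virStorageFileBackendVxHS = {
        .type = VIR_STORAGE_TYPE_NETWORK,
        .protocol = VIR_STORAGE_NET_PROTOCOL_VXHS,

        .backendInit = virStorageFileBackendVxHSInit,
        .backendDeinit = virStorageFileBackendVxHSDeinit,
        .storageFileStat = virStorageFileBackendVxHSStat,
        .storageFileReadHeader = virStorageFileBackendVxHSReadHeader,
    };

Without something like that, 'virsh domblkinfo' has nothing to call for a vxhs disk when the domain is inactive.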
6. Use something like cscope to be sure to check all the places where VIR_STORAGE_NET_PROTOCOL_ is used in the code and made sure there isn't a special case for VXHS (RBD, GLUSTER, and ISCSI have a few). Similarly check usage of VIR_STORAGE_TYPE_NETWORK. Some checks are not within switch/case statements and some involve specific disk configuration options that are(n't) supported. What about any need for migration? (see qemuMigrationIsSafe)
Will check for other special cases. We do support live migration, and we add cache='none' to the config by default. Do I need to make additional changes for this?
At this point - I have no idea. You have networked storage and there are interactions with libvirt/qemu and the nbd server which I don't have fresh in mind right now.
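The first place I'd look, though, is the per-disk loop in qemuMigrationIsSafe; from memory it is roughly the shape below (field names and the exact set of protocols treated as shared are from memory, so verify against the source), and the question is whether vxhs belongs in the known-shared arm or stays in the cache='none' arm:

    /* Paraphrased from the per-disk loop in qemuMigrationIsSafe();
     * treat the details as assumptions, not actual code. */
    if (disk->src->type == VIR_STORAGE_TYPE_NETWORK &&
        disk->src->protocol == VIR_STORAGE_NET_PROTOCOL_RBD)
        continue;                    /* known-shared network storage: safe */

    if (disk->cachemode != VIR_DOMAIN_DISK_CACHE_DISABLE)
        return false;                /* anything else must use cache='none' */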
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 11b3330..ade35a3 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2276,9 +2276,9 @@ <dd> The <code>protocol</code> attribute specifies the protocol to access to the requested image. Possible values are "nbd", - "iscsi", "rbd", "sheepdog" or "gluster". If the - <code>protocol</code> attribute is "rbd", "sheepdog" or - "gluster", an additional attribute <code>name</code> is + "iscsi", "rbd", "sheepdog", "gluster" or "vxhs". If the + <code>protocol</code> attribute is "rbd", "sheepdog", "gluster" + or "vxhs", an additional attribute <code>name</code> is mandatory to specify which volume/image will be used. For "nbd", the <code>name</code> attribute is optional. For "iscsi" (<span class="since">since 1.0.4</span>), the <code>name</code>
Just before the closing </dd> add something to describe what the 'name' will be for vxhs like is done for iscsi, such as:
For "vxhs" (<span class="since">since 2.5.0</span>), the <code>name</code> is ... [it looks like a UUID to me - although there's nothing that validates that hypothesis].
Will do.
@@ -2388,6 +2388,12 @@ <td> one or more (<span class="since">Since 2.1.0</span>), just one prior to that </td> <td> 24007 </td> </tr> + <tr> + <td> vxhs </td> + <td> a server running Veritas HyperScale daemon </td> + <td> only one </td> + <td> 9999 </td> + </tr> </table> <p> gluster supports "tcp", "rdma", "unix" as valid values for the diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 19d45fd..e569e0a 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -1464,6 +1464,7 @@ <value>ftp</value> <value>ftps</value> <value>tftp</value> + <value>vxhs</value> </choice> </attribute> <optional> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a5fce3..c7b95aa 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -491,6 +491,9 @@ qemuNetworkDriveGetPort(int protocol, /* no default port specified */ return 0;
+ case VIR_STORAGE_NET_PROTOCOL_VXHS: + return 9999; + case VIR_STORAGE_NET_PROTOCOL_RBD: case VIR_STORAGE_NET_PROTOCOL_LAST: case VIR_STORAGE_NET_PROTOCOL_NONE: @@ -1034,6 +1037,7 @@ qemuBuildNetworkDriveStr(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: + case VIR_STORAGE_NET_PROTOCOL_VXHS: ret = qemuBuildNetworkDriveURI(src, secinfo); break;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a82e58b..4910004 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13724,6 +13724,7 @@ qemuDomainSnapshotPrepareDiskExternalBackingInactive(virDomainDiskDefPtr disk) case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external inactive snapshots are not supported on " @@ -13787,6 +13788,7 @@ qemuDomainSnapshotPrepareDiskExternalOverlayActive(virDomainSnapshotDiskDefPtr d case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("external active snapshots are not supported on " @@ -13932,6 +13934,7 @@ qemuDomainSnapshotPrepareDiskInternal(virConnectPtr conn, case VIR_STORAGE_NET_PROTOCOL_FTPS: case VIR_STORAGE_NET_PROTOCOL_TFTP: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: case VIR_STORAGE_NET_PROTOCOL_LAST: virReportError(VIR_ERR_INTERNAL_ERROR, _("internal inactive snapshots are not supported on " diff --git a/src/qemu/qemu_parse_command.c b/src/qemu/qemu_parse_command.c index c3b27aa..f2edc28 100644 --- a/src/qemu/qemu_parse_command.c +++ b/src/qemu/qemu_parse_command.c @@ -263,6 +263,16 @@ qemuParseNBDString(virDomainDiskDefPtr disk) return -1; }
+static int
+qemuParseVxHSString(virDomainDiskDefPtr def)
+{
+    virURIPtr uri = NULL;
+
+    if (!(uri = virURIParse(def->src->path)))
+        return -1;
+
Should there be validation of the name? Something like a call to virUUIDIsValid?
This is code for the path of parsing a qemu command line of a guest not started by libvirt in order to add it to libvirt (virsh qemu-attach), which is used "less often", so I'm less concerned about this, but mainly curious.
An incorrect UUID will only cause vxhs code in qemu to fail the disk open. So we don't think there is a need for additional checks. Please let us know if this sounds OK!
I'll wait for the qemu code to be done before I page these changes fully back in. John
John
+    return qemuParseDriveURIString(def, uri, "vxhs");
+}
/* * This method takes a string representing a QEMU command line ARGV set @@ -737,6 +747,12 @@ qemuParseCommandLineDisk(virDomainXMLOptionPtr xmlopt, if (VIR_STRDUP(def->src->path, vdi) < 0) goto error; } + } else if (STRPREFIX(def->src->path, "vxhs:")) { + def->src->type = VIR_STORAGE_TYPE_NETWORK; + def->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + + if (qemuParseVxHSString(def) < 0) + goto error; } else { def->src->type = VIR_STORAGE_TYPE_FILE; } @@ -1933,6 +1949,10 @@ qemuParseCommandLine(virCapsPtr caps, disk->src->type = VIR_STORAGE_TYPE_NETWORK; disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_SHEEPDOG; val += strlen("sheepdog:"); + } else if (STRPREFIX(val, "vxhs:")) { + disk->src->type = VIR_STORAGE_TYPE_NETWORK; + disk->src->protocol = VIR_STORAGE_NET_PROTOCOL_VXHS; + val += strlen("vxhs:"); } else { disk->src->type = VIR_STORAGE_TYPE_FILE; } @@ -2009,6 +2029,11 @@ qemuParseCommandLine(virCapsPtr caps, goto error;
break; + case VIR_STORAGE_NET_PROTOCOL_VXHS: + if (qemuParseVxHSString(disk) < 0) + goto error; + + break; case VIR_STORAGE_NET_PROTOCOL_HTTP: case VIR_STORAGE_NET_PROTOCOL_HTTPS: case VIR_STORAGE_NET_PROTOCOL_FTP: diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c index 272db67..a730a01 100644 --- a/src/util/virstoragefile.c +++ b/src/util/virstoragefile.c @@ -86,7 +86,8 @@ VIR_ENUM_IMPL(virStorageNetProtocol, VIR_STORAGE_NET_PROTOCOL_LAST, "ftp", "ftps", "tftp", - "ssh") + "ssh", + "vxhs")
VIR_ENUM_IMPL(virStorageNetHostTransport, VIR_STORAGE_NET_HOST_TRANS_LAST, "tcp", @@ -2634,6 +2635,7 @@ virStorageSourceParseBackingColon(virStorageSourcePtr src, case VIR_STORAGE_NET_PROTOCOL_ISCSI: case VIR_STORAGE_NET_PROTOCOL_GLUSTER: case VIR_STORAGE_NET_PROTOCOL_SSH: + case VIR_STORAGE_NET_PROTOCOL_VXHS: virReportError(VIR_ERR_INTERNAL_ERROR, _("malformed backing store path for protocol %s"), protocol); diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h index 3d09468..88dff36 100644 --- a/src/util/virstoragefile.h +++ b/src/util/virstoragefile.h @@ -132,6 +132,7 @@ typedef enum { VIR_STORAGE_NET_PROTOCOL_FTPS, VIR_STORAGE_NET_PROTOCOL_TFTP, VIR_STORAGE_NET_PROTOCOL_SSH, + VIR_STORAGE_NET_PROTOCOL_VXHS,
VIR_STORAGE_NET_PROTOCOL_LAST } virStorageNetProtocol; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args new file mode 100644 index 0000000..3d37b00 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args @@ -0,0 +1,23 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/libexec/qemu-kvm \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-cpu qemu32 \ +-m 214 \ +-smp 1,sockets=1,cores=1,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot c \ +-usb \ +-drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\ +format=raw,if=none,id=drive-virtio-disk0,cache=none \ +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml new file mode 100644 index 0000000..45c807f --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml @@ -0,0 +1,34 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <os> + <type arch='i686' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/libexec/qemu-kvm</emulator> + <disk type='network' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> + <host name='192.168.0.1' port='9999'/> + </source> + <backingStore/> + <target dev='vda' bus='virtio'/> + <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <controller type='usb' index='0'/> + <controller type='pci' index='0' model='pci-root'/> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index 1ee8402..d94e3f2 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -869,6 +869,7 @@ mymain(void) # endif DO_TEST("disk-drive-network-rbd-ipv6", NONE); DO_TEST_FAILURE("disk-drive-network-rbd-no-colon", NONE); + DO_TEST("disk-drive-network-vxhs", NONE); DO_TEST("disk-drive-no-boot", QEMU_CAPS_BOOTINDEX); DO_TEST_PARSE_ERROR("disk-device-lun-type-invalid",
Thanks, Ashish

On Mon, Dec 19, 2016 at 4:13 PM, John Ferlan <jferlan@redhat.com> wrote:
On 12/16/2016 10:26 PM, ashish mittal wrote:
Hi John,
Thanks for the review and your comments. Sorry about the delay in reply. Our qemu patch has been undergoing a lot of changes, and we were hoping to finalize some things before the next libvirt patch.
I've been following the qemu changes from a distance - they're on the radar, but I don't follow the details! I'm deep in the middle of other work. I think some of the qemu changes will alter how some of the details work here.
On Mon, Nov 14, 2016 at 12:44 PM, John Ferlan <jferlan@redhat.com> wrote:
$SUBJ
s/Change to support/Add support for
s/ with qemu-kvm//
On 11/12/2016 01:19 AM, Ashish Mittal wrote:
Sample XML for a vxhs vdisk is as follows:
<disk type='network' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> <host name='192.168.0.1' port='9999'/> </source> <backingStore/> <target dev='vda' bus='virtio'/> <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com> --- docs/formatdomain.html.in | 12 ++++++-- docs/schemas/domaincommon.rng | 1 + src/qemu/qemu_command.c | 4 +++ src/qemu/qemu_driver.c | 3 ++ src/qemu/qemu_parse_command.c | 25 ++++++++++++++++ src/util/virstoragefile.c | 4 ++- src/util/virstoragefile.h | 1 + .../qemuxml2argv-disk-drive-network-vxhs.args | 23 +++++++++++++++ .../qemuxml2argv-disk-drive-network-vxhs.xml | 34 ++++++++++++++++++++++ tests/qemuxml2argvtest.c | 1 + 10 files changed, 104 insertions(+), 4 deletions(-) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
Missing changes for src/libxl/libxl_conf.c (libxlMakeNetworkDiskSrcStr) and src/xenconfig/xen_xl.c (xenFormatXLDiskSrcNet) - need to add a "case VIR_STORAGE_NET_PROTOCOL_VXHS:" for the "switch (virStorageNetProtocol) src->protocol".
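For illustration of the kind of change being requested there, handling the new enum value in those switches is only a few lines; a rough sketch, assuming the libxl/xen drivers would simply reject vxhs as unsupported (the error code, message and cleanup label below are assumptions, not code from the patch):

    case VIR_STORAGE_NET_PROTOCOL_VXHS:
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                       _("Unsupported network block protocol '%s'"),
                       virStorageNetProtocolTypeToString(src->protocol));
        goto cleanup;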
Also running 'make syntax-check' would show that tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args has a too long line at the end:
GEN test-wrap-argv
--- tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args 2016-11-14 06:16:40.739509847 -0500
+++ - 2016-11-14 06:23:59.661334157 -0500
@@ -20,4 +20,5 @@
 -usb \
 -drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\
 format=raw,if=none,id=drive-virtio-disk0,cache=none \
--device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
+-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\
+id=virtio-disk0
Incorrect line wrapping in tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
Use test-wrap-argv.pl to wrap test data files
cfg.mk:1075: recipe for target 'test-wrap-argv' failed
make: *** [test-wrap-argv] Error 1
All those are easily fixable; however, you will also need to modify the virStorageSourceJSONDriverParser in src/util/virstoragefile.c in order to do the JSON parsing properly as well as tests/virstoragetest.c to add test case(s). If you use 'gitk' or some other git log -p browser, starting with commit id 'e91f767c743' and going forward through Peter's series you can see how this was done for other various protocols.
From what I see from the QEMU list that'll work for the json you provided in your example:
Sample command line using the JSON syntax: ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on 'json:{"driver":"vxhs","vdisk-id":"c3e9095a-a5ee-4dce-afeb-2a59fb387410", "server":{"host":"172.172.17.4","port":"9999"}}'
Will fix above issues.
Other questions/concerns... These may have been answered previously, but I haven't followed closely along.
1. It appears to me that the 'name' field is a UUID for the VxHS volume, but it's not clear that's all it can be. You'll note in virDomainDiskSourceParse a specific gluster check. So that we don't get surprised at some point in the future - are there any specific things in the name that need to be addressed?
That's correct. Name is in fact a UUID and nothing else. The "name" tag in XML name='eb90327c-8302-4725-9e1b-4e85ed4dc251' is misleading. I will change it to "vdisk-id" so there's no confusion.
We don't think there is a need to validate the name/vdisk-id format. In case of incorrect vdisk ID, qemu vxhs code will fail the vdisk open.
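For illustration, if such a check were ever added it would likely be a few lines in the parsing path, along these lines; this is a sketch only, not part of the posted patch, and it assumes the vdisk name has already been stored in src->path as the vxhs parsing code does:

    unsigned char uuid[VIR_UUID_BUFLEN];

    if (virUUIDParse(disk->src->path, uuid) < 0) {
        virReportError(VIR_ERR_XML_ERROR,
                       _("vxhs vdisk name '%s' is not a valid UUID"),
                       disk->src->path);
        return -1;
    }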
Beyond that, how does one know what to provide for that 'name=' field? For VxHS, it seems to me that the path must be a UUID of some sort. So would anyone know which UUID to magically pick in order to add the disk to a guest? How does one get a list?
The UUID allocation and vdisk creation is done out-of-band by the VxHS server. The UUIDs for disks are populated in the XML file at the time of VM creation. User does not (need to) know what UUID to open or the list of all UUIDs.
Now I'm confused - maybe I asked the wrong question.
You show the domain XML:
<source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> <host name='192.168.0.1' port='9999'/> </source>
How does anyone know what to put in 'name'? That's a UUID. That's what I'm looking to determine.
How do I know which volume I'd be choosing? Maybe there's something I'm missing about the implementation. There has to be something to allow someone to make an intelligent choice about which vxhs volume/disk/storage is being used by the domain.
If I compare that to iSCSI:
<source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'> <host name='example.com' port='3260'/> </source>
There are some iSCSI tools and built up knowledge that show you how to generate the value for the "name" attribute.
But it's a lot easier to do this if one creates an iSCSI pool and then adds the disk/volume/lun to the domain via pool source XML:
<source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
or
<source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/>
where 'volume' is filled in from the output of a 'virsh vol-list iscsi-pool'...
2. Does/will VxHS ever require authentication similar to iSCSI and RBD?
This is under active discussion with the qemu community.
Yes - I've seen.. The secrets concept is available in libvirt and it works with the qemu secrets... The RBD code uses the model I think you'll need to use.
3. Will there be (or is there) support for LUKS volumes on the VxHS server?
We don't anticipate a need for this.
4. There's no VxHS Storage Pool support in this patch (OK actually an additional set of patches in order to support). That would be expected especially for a networked storage environment. You'll note there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage iSCSI, Gluster, and RBD protocol specific things. For starters, create, delete, and refresh - especially things that a stat() wouldn't necessarily return (capacity, allocation, type a/k/a target.format). Perhaps even the ability to upload/download and wipe volumes in the pool. Having a pool is a bit more work, but look at the genesis of existing storage_backend_*.c files to get a sense for what needs to change.
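For a concrete picture of that pattern, a minimal backend would look roughly like the sketch below, modelled on the existing storage_backend_*.c files; every identifier containing "VxHS" here, including the pool type constant, is hypothetical and does not exist in libvirt:

    static int
    virStorageBackendVxHSRefreshPool(virConnectPtr conn ATTRIBUTE_UNUSED,
                                     virStoragePoolObjPtr pool ATTRIBUTE_UNUSED)
    {
        /* a real implementation would ask the HyperScale daemon which vdisk
         * UUIDs exist and fill in one volume per vdisk plus the pool's
         * capacity/allocation figures */
        return 0;
    }

    virStorageBackend virStorageBackendVxHS = {
        .type = VIR_STORAGE_POOL_VXHS,        /* hypothetical new pool type */
        .refreshPool = virStorageBackendVxHSRefreshPool,
        /* .deleteVol, .uploadVol, .wipeVol, ... would follow the same pattern */
    };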
VxHS does not need the Storage Pool functionality. Do we still need to implement this?
It's something that's expected. See my reasoning above.
Some explanation is in order -

HyperScale is not designed to be used as a stand-alone independent storage. It is designed only to be used in the OpenStack environment with all the related Cinder/Nova changes in place. Therefore, we do not have a need for most of the above-mentioned functions from libvirt/qemu.

Even in the OpenStack environment, we do not support creating storage volumes independent of a guest VM. A VxHS storage volume can only be created for a particular guest VM. With this scheme, a user does not have to manage storage pools separately. VxHS automatically configures and consumes the direct attached SSD/HDD on the OpenStack compute when enabled. After this, all the requests to add storage to guest VMs are forwarded by OpenStack directly to the hyperscale daemon on the compute, and it takes care of creating the underlying storage etc.

The role of libvirt is limited to opening the volume specified in the guest XML file. Volume creation, deletion etc is done by the VxHS daemon in response to messages from the OpenStack controller. Our OpenStack orchestration code (Nova) is responsible for updating the guest XML with correct volume information for libvirt to use. A regular user (libvirt) is not expected to know about what volume IDs exist on any given host. A regular user also does not have a volume device node on the local compute to query. The only way to get to a volume is via the network using the server IP-port address.
5. Along the same lines - virStorageFileBackendForTypeInternal handles getting details/metadata about the file format in order to determine block sizes for inactive domains. See the path qemuDomainGetBlockInfo calling into qemuStorageLimitsRefresh which calls virStorageFileInitAs. These would be for the backend of a 'virsh domblkinfo' type command when the domain isn't running. The gluster backend has a set of them (search on _virStorageFileBackend and virStorageFileBackendGluster).
Would supporting the above functionality be optional? As mentioned above, vxhs disk creation happens outside of qemu/libvirt by the vxhs server in response to external messages. The XML config for the guest VM is then populated with correct UUID + server IP/port info, which is picked up by libvirt.
How would the libvirt code find out anything it needs about the target volume such as allocation, capacity, and physical usage. I think these are details we'll work through once the qemu connection is made.
Of course without a storage pool and need for volume def, then this would be unnecessary.
6. Use something like cscope to be sure to check all the places where VIR_STORAGE_NET_PROTOCOL_ is used in the code and made sure there isn't a special case for VXHS (RBD, GLUSTER, and ISCSI have a few). Similarly check usage of VIR_STORAGE_TYPE_NETWORK. Some checks are not within switch/case statements and some involve specific disk configuration options that are(n't) supported. What about any need for migration? (see qemuMigrationIsSafe)
Will check for other special cases. We do support live migration and add cache='none' to the config by default. Do I need to make additional changes for this?
At this point - I have no idea. You have networked storage and there are interactions with libvirt/qemu and the nbd server which I don't have fresh in mind right now.
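For reference, the qemuMigrationIsSafe check mentioned above amounts to a per-disk test roughly like the sketch below; this is a from-memory simplification with approximate member names, not the actual libvirt source, shown only to illustrate why cache='none' (or genuinely shared storage) matters here:

    for (i = 0; i < def->ndisks; i++) {
        virDomainDiskDefPtr disk = def->disks[i];

        /* read-only or explicitly shared disks have no host cache to lose */
        if (disk->src->readonly || disk->src->shared)
            continue;

        /* cluster-coherent network storage such as rbd is treated as safe */
        if (virStorageSourceGetActualType(disk->src) == VIR_STORAGE_TYPE_NETWORK &&
            disk->src->protocol == VIR_STORAGE_NET_PROTOCOL_RBD)
            continue;

        /* otherwise only cache=none avoids a stale host page cache on the
         * destination, so anything else makes the migration unsafe */
        if (disk->cachemode != VIR_DOMAIN_DISK_CACHE_DISABLE)
            return false;
    }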
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 11b3330..ade35a3 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -2276,9 +2276,9 @@ <dd> The <code>protocol</code> attribute specifies the protocol to access to the requested image. Possible values are "nbd", - "iscsi", "rbd", "sheepdog" or "gluster". If the - <code>protocol</code> attribute is "rbd", "sheepdog" or - "gluster", an additional attribute <code>name</code> is + "iscsi", "rbd", "sheepdog", "gluster" or "vxhs". If the + <code>protocol</code> attribute is "rbd", "sheepdog", "gluster" + or "vxhs", an additional attribute <code>name</code> is mandatory to specify which volume/image will be used. For "nbd", the <code>name</code> attribute is optional. For "iscsi" (<span class="since">since 1.0.4</span>), the <code>name</code>
Just before the closing </dd> add something to describe what the 'name' will be for vxhs like is done for iscsi, such as:
For "vxhs" (<span class="since">since 2.5.0</span>), the <code>name</code> is ... [it looks like a UUID to me - although there's nothing that validates that hypothesis].
Will do.
@@ -2388,6 +2388,12 @@ <td> one or more (<span class="since">Since 2.1.0</span>), just one prior to that </td> <td> 24007 </td> </tr> + <tr> + <td> vxhs </td> + <td> a server running Veritas HyperScale daemon </td> + <td> only one </td> + <td> 9999 </td> + </tr> </table> <p> gluster supports "tcp", "rdma", "unix" as valid values for the diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 19d45fd..e569e0a 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -1464,6 +1464,7 @@ <value>ftp</value> <value>ftps</value> <value>tftp</value> + <value>vxhs</value> </choice> </attribute> <optional> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a5fce3..c7b95aa 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -491,6 +491,9 @@ qemuNetworkDriveGetPort(int protocol, /* no default port specified */ return 0;
[...]
Thanks, Ashish

On Tue, Jan 03, 2017 at 05:37:22PM -0800, ashish mittal wrote:
[...]
Some explanation is in order -
HyperScale is not designed to be used as a stand-alone independent storage. It is designed only to be used in the OpenStack environment with all the related Cinder/Nova changes in place. Therefore, we do not have a need for most of the above-mentioned functions from libvirt/qemu.
Nova is actually actively working towards using storage pools from libvirt, initially for managing transient images, but I can see it being used to deal with Cinder volumes too.

Regards, Daniel

On Wed, Jan 4, 2017 at 7:00 AM, John Ferlan <jferlan@redhat.com> wrote:
[...]
We don't anticipate a need for this.
4. There's no VxHS Storage Pool support in this patch (OK actually an additional set of patches in order to support). That would be expected especially for a networked storage environment. You'll note there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage iSCSI, Gluster, and RBD protocol specific things. For starters, create, delete, and refresh - especially things that a stat() wouldn't necessarily return (capacity, allocation, type a/k/a target.format). Perhaps even the ability to upload/download and wipe volumes in the pool. Having a pool is a bit more work, but look at the genesis of existing storage_backend_*.c files to get a sense for what needs to change.
VxHS does not need the Storage Pool functionality. Do we still need to implement this?
It's something that's expected. See my reasoning above.
Some explanation is in order -
HyperScale is not designed to be used as a stand-alone independent storage. It is designed only to be used in the OpenStack environment with all the related Cinder/Nova changes in place. Therefore, we do not have a need for most of the above-mentioned functions from libvirt/qemu.
Even in the OpenStack environment, we do not support creating storage volumes independent of a guest VM. A VxHS storage volume can only be created for a particular guest VM. With this scheme, a user does not have to manage storage pools separately. VxHS automatically configures and consumes the direct attached SSD/HDD on the OpenStack compute when enabled. After this, all the requests to add storage to guest VMs are forwarded by OpenStack directly to the hyperscale daemon on the compute, and it takes care of creating the underlying storage etc.
And how does that volume creation occur? It would seem that there's some command that does that. That's "independent" of libvirt, so in order for a guest to use that storage - it'd need to be created first anyway. I then assume as part of guest vm destruction you must somehow destroy the volume too. Synchronizing with creation and ensuring deletion would seem to be fairly important tasks and would seemingly be something you'd like to have more integrated with the environment.
We have hooks in nova that trap volume creation/deletion requests. Then we send messages to our service running on every compute to carry out the necessary steps to create/delete the volume.
So what about hotplug? Can someone add in VxHS storage to a guest after it's started?
Yes, we do support hot-plugging. We use OpenStack nova framework to generate a config for the new volume and attach_device it to a running guest.
And migration? Can the guest be migrated? I haven't crawled through that code recently, but I know there'd be changes to either allow or not based on the storage type.
We do support storage migration. We simulate shared storage on top of direct-attached storage, therefore libvirt/qemu can assume shared storage for the purpose of migration.
The role of libvirt is limited to opening the volume specified in the guest XML file. Volume creation, deletion etc is done by the VxHS daemon in response to messages from the OpenStack controller. Our OpenStack orchestration code (Nova) is responsible for updating the guest XML with correct volume information for libvirt to use. A regular user (libvirt) is not expected to know about what volume IDs exist on any given host. A regular user also does not have a volume device node on the local compute to query. The only way to get to a volume is via the network using the server IP-port address.
For me having the storage pool is a more complete solution. You don't have to support all the storage backend volume functions (build, create, upload, download, wipe, resize, etc.), but knowing what volumes exist and can be used for a domain is nice. It's also nice to know how much storage exists (allocation, capacity, physical). It also allows a single point of authentication - the pool authenticates at startup (see the iSCSI and RBD code) and then the domains can use storage from the pool.
From a guest view point, rather than having to provide:
<source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'> <host name='192.168.0.1' port='9999'/> </source> <auth username='user'> <secret type='vxhs' usage='somestring'/> </auth>
you'd have:
<source pool='pool-name' volume='vol-name'/>
The "host" and "auth" would be part of the <pool> definition. Having to discover the available 'names' ends up being the difference it seems for vxhs since your model would seem to be create storage, create/start guest, destroy guest, destroy storage. I think there's value in being able to use the libvirt storage API's especially w/r/t integrating the volume and guest management.
Would it be OK to consider adding storage pool functionality to vxhs after Nova starts using it?
John

On Wed, Jan 04, 2017 at 04:26:58PM -0800, ashish mittal wrote:
[...]
Would it be OK to consider adding storage pool functionality to vxhs after Nova starts using it?
From the libvirt POV there's no requirement to have storage pool support merged immediately. It is fine to just have the QEMU integration done in a first patch and storage pool stuff as a follow-on patch.
Regards, Daniel
participants (4)
- Ashish Mittal
- ashish mittal
- Daniel P. Berrange
- John Ferlan