On Tue, Jan 03, 2017 at 05:37:22PM -0800, ashish mittal wrote:
On Mon, Dec 19, 2016 at 4:13 PM, John Ferlan <jferlan(a)redhat.com> wrote:
>
>
> On 12/16/2016 10:26 PM, ashish mittal wrote:
>> Hi John,
>>
>> Thanks for the review and your comments. Sorry about the delay in
>> reply. Our qemu patch has been undergoing a lot of changes, and we
>> were hoping to finalize some things before the next libvirt patch.
>>
>
> I've been following the qemu changes from a distance - they're on the
> radar, but I don't follow the details! I'm deep in the middle of other
> work. I think some of the qemu changes will alter how some of the
> details work here.
>
>>
>>
>> On Mon, Nov 14, 2016 at 12:44 PM, John Ferlan <jferlan(a)redhat.com> wrote:
>>>
>>> $SUBJ
>>>
>>> s/Change to support/Add support for
>>>
>>> s/ with qemu-kvm//
>>>
>>> On 11/12/2016 01:19 AM, Ashish Mittal wrote:
>>>> Sample XML for a vxhs vdisk is as follows:
>>>>
>>>> <disk type='network' device='disk'>
>>>> <driver name='qemu' type='raw' cache='none'/>
>>>> <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
>>>> <host name='192.168.0.1' port='9999'/>
>>>> </source>
>>>> <backingStore/>
>>>> <target dev='vda' bus='virtio'/>
>>>> <serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
>>>> <alias name='virtio-disk0'/>
>>>> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>>>> </disk>
>>>>
>>>> Signed-off-by: Ashish Mittal <Ashish.Mittal(a)veritas.com>
>>>> ---
>>>>  docs/formatdomain.html.in                          | 12 ++++++--
>>>>  docs/schemas/domaincommon.rng                      |  1 +
>>>>  src/qemu/qemu_command.c                            |  4 +++
>>>>  src/qemu/qemu_driver.c                             |  3 ++
>>>>  src/qemu/qemu_parse_command.c                      | 25 ++++++++++++++++
>>>>  src/util/virstoragefile.c                          |  4 ++-
>>>>  src/util/virstoragefile.h                          |  1 +
>>>>  .../qemuxml2argv-disk-drive-network-vxhs.args      | 23 +++++++++++++++
>>>>  .../qemuxml2argv-disk-drive-network-vxhs.xml       | 34 ++++++++++++++++++++++
>>>>  tests/qemuxml2argvtest.c                           |  1 +
>>>>  10 files changed, 104 insertions(+), 4 deletions(-)
>>>>  create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
>>>>  create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
>>>>
>>>
>>> Missing changes for src/libxl/libxl_conf.c (libxlMakeNetworkDiskSrcStr)
>>> and src/xenconfig/xen_xl.c (xenFormatXLDiskSrcNet) - need to add a
>>> "case VIR_STORAGE_NET_PROTOCOL_VXHS:" for the "switch (virStorageNetProtocol)
>>> src->protocol".
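>>>
>>> (Roughly - an untested sketch, assuming the current shape of those switch
>>> statements, where every protocol the driver can't handle reports a common
>>> "unsupported protocol" error - the new label would simply join that group:
>>>
>>>     case VIR_STORAGE_NET_PROTOCOL_VXHS:
>>>         /* mirror whatever the existing unsupported cases do here */
>>>         virReportError(VIR_ERR_NO_SUPPORT,
>>>                        _("Unsupported network block protocol '%s'"),
>>>                        virStorageNetProtocolTypeToString(src->protocol));
>>>         break;
>>>
>>> That keeps the switch exhaustive, so the compiler will flag any future
>>> protocol additions too.)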
>>>
>>> Also running 'make syntax-check' would show that
>>> tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args has a
>>> too long line at the end:
>>>
>>> GEN test-wrap-argv
>>> --- tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args    2016-11-14 06:16:40.739509847 -0500
>>> +++ - 2016-11-14 06:23:59.661334157 -0500
>>> @@ -20,4 +20,5 @@
>>> -usb \
>>> -drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\
>>> format=raw,if=none,id=drive-virtio-disk0,cache=none \
>>> --device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>>> +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\
>>> +id=virtio-disk0
>>> Incorrect line wrapping in
>>> tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
>>> Use test-wrap-argv.pl to wrap test data files
>>> cfg.mk:1075: recipe for target 'test-wrap-argv' failed
>>> make: *** [test-wrap-argv] Error 1
>>>
>>>
>>> All those are easily fixable; however, you will also need to modify the
>>> virStorageSourceJSONDriverParser in src/util/virstoragefile.c in order
>>> to do the JSON parsing properly as well as tests/virstoragetest.c to add
>>> test case(s). If you use 'gitk' or some other git log -p browser,
>>> starting with commit id 'e91f767c743' and going forward through Peter's
>>> series you can see how this was done for other various protocols.
>>>
>>> From what I see from the QEMU list that'll work for the json you
>>> provided in your example:
>>>
>>> Sample command line using the JSON syntax:
>>> ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us
>>> -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>> -msg timestamp=on
>>> 'json:{"driver":"vxhs","vdisk-id":"c3e9095a-a5ee-4dce-afeb-2a59fb387410",
>>> "server":{"host":"172.172.17.4","port":"9999"}}'
>>>
>>>
>>
>> Will fix above issues.
>>
>>> Other questions/concerns... These may have been answered previously, but
>>> I haven't followed closely along.
>>>
>>> 1. It appears to me that the 'name' field is a UUID for the VxHS
>>>     volume, but it's not clear that's all it can be. You'll note in
>>> virDomainDiskSourceParse a specific gluster check. So that we don't get
>>> surprised at some point in the future - are there any specific things in
>>> the name that need to be addressed?
>>>
>>
>> That's correct. Name is in fact a UUID and nothing else. The "name"
>> tag in XML name='eb90327c-8302-4725-9e1b-4e85ed4dc251' is misleading.
>> I will change it to "vdisk-id" so there's no confusion.
>>
>> We don't think there is a need to validate the name/vdisk-id format.
>> If the vdisk ID is incorrect, the qemu vxhs code will fail to open the
>> vdisk.
>>
>>
>>>      Beyond that, how does one know what to provide for that 'name=' field?
>>> For VxHS, it seems to me that the path must be a UUID of some sort. So
>>> would anyone know which UUID to magically pick in order to add the disk
>>> to a guest? How does one get a list?
>>>
>>
>> UUID allocation and vdisk creation are done out-of-band by the VxHS
>> server. The UUIDs for the disks are populated in the XML file at the
>> time of VM creation. The user does not (need to) know which UUID to
>> open or the list of all UUIDs.
>>
>
> Now I'm confused - maybe I asked the wrong question.
>
> You show the domain XML:
>
> <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
> <host name='192.168.0.1' port='9999'/>
> </source>
>
> How does anyone know what to put in 'name'? That's a UUID. That's what
> I'm looking to determine.
>
> How do I know which volume I'd be choosing? Maybe there's something I'm
> missing about the implementation. There has to be something to allow
> someone to make an intelligent choice about which vxhs
> volume/disk/storage is being used by the domain.
>
> If I compare that to iSCSI:
>
> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
> <host name='example.com' port='3260'/>
> </source>
>
> There are some iSCSI tools and built up knowledge that show you how to
> generate the value for the "name" attribute.
>
> But it's a lot easier to do this if one creates an iSCSI pool and then
> adds the disk/volume/lun to the domain via pool source XML:
>
> <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
>
> or
>
> <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/>
>
> where 'volume' is filled in from the output of a 'virsh vol-list
> iscsi-pool'...
>
>
>>> 2. Does/will VxHS ever require authentication similar to iSCSI and RBD?
>>>
>>
>> This is under active discussion with the qemu community.
>>
>
> Yes - I've seen... The secrets concept is available in libvirt and it
> works with the qemu secrets... The RBD code uses the model I think
> you'll need to use.
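>
> (For reference, the RBD flavour of that model in the domain XML looks
> like the snippet below - the username/usage values here are made up -
> so if VxHS does grow authentication, I'd expect something analogous
> with its own secret type:
>
>   <auth username='admin'>
>     <secret type='ceph' usage='mycluster'/>
>   </auth>
> )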
>
>>> 3. Will there be (or is there) support for LUKS volumes on the VxHS server?
>>>
>>
>> We don't anticipate a need for this.
>>
>>> 4. There's no VxHS Storage Pool support in this patch (OK, actually that
>>> would take an additional set of patches to support). That would be expected
>>> especially for a networked storage environment. You'll note there are
>>> src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage
>>> iSCSI, Gluster, and RBD protocol specific things. For starters, create,
>>> delete, and refresh - especially things that a stat() wouldn't
>>> necessarily return (capacity, allocation, type a/k/a target.format).
>>> Perhaps even the ability to upload/download and wipe volumes in the
>>> pool. Having a pool is a bit more work, but look at the genesis of
>>> existing storage_backend_*.c files to get a sense for what needs to change.
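>>>
>>> (As a purely hypothetical outline - VIR_STORAGE_POOL_VXHS doesn't exist
>>> today and the callback names below are invented - a storage_backend_vxhs.c
>>> would essentially boil down to registering something like:
>>>
>>>     virStorageBackend virStorageBackendVxHS = {
>>>         .type = VIR_STORAGE_POOL_VXHS,
>>>
>>>         .refreshPool = virStorageBackendVxHSRefreshPool,
>>>         .createVol = virStorageBackendVxHSCreateVol,
>>>         .deleteVol = virStorageBackendVxHSDeleteVol,
>>>         .wipeVol = virStorageBackendVxHSWipeVol,
>>>     };
>>>
>>> with refreshPool filling in capacity/allocation/format for each volume,
>>> so that 'virsh vol-list' could answer the "which UUID do I pick"
>>> question above.)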
>>>
>>
>> VxHS does not need the Storage Pool functionality. Do we still need to
>> implement this?
>>
>
> It's something that's expected. See my reasoning above.
>
Some explanation is in order -
HyperScale is not designed to be used as stand-alone, independent
storage. It is designed to be used only in an OpenStack environment
with all the related Cinder/Nova changes in place. Therefore, we do
not need most of the above-mentioned functionality from libvirt/qemu.

Nova is actually actively working towards using storage pools from
libvirt, initially for managing transient images, but I can see it
being used to deal with Cinder volumes too.
Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-       http://search.cpan.org/~danberr/ :|