Re: [libvirt] [PATCH v3] Add support for Veritas HyperScale (VxHS) block device protocol

Hi,

I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -

Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode):

    ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'

I was hoping to find some NBD code related to this, but was not able to locate it. Any pointers will be appreciated.

Thanks,
Ashish

On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan@redhat.com> wrote:
[...] Pressed send too soon, sigh.
#1. Based on Peter's v2 comments, we don't want to support the older/legacy syntax for VxHS, so it's something that should be removed - although we should check for it being present and fail if found.
I am testing with changed code that returns an error if the legacy syntax is found for VxHS. I have also added a test case to check for failure on the legacy syntax, and it seems to pass (test #41 below).
Then I added a pass test case to check conversion from new native syntax to XML (test #40 below). That test fails with error 'qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b...'
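For illustration, the legacy-syntax rejection described above might be a check along these lines. This is only a sketch - the function it lands in and the variable names (e.g. 'val' holding the drive's file string) are assumptions, though the error string matches the test #41 output below:

    /* hypothetical sketch: reject the legacy vxhs://host:port/vdisk-id URI form */
    if (STRPREFIX(val, "vxhs://")) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("VxHS protocol does not support URI syntax '%s'"),
                       val);
        goto error;
    }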
The qemu_parse_command.c changes, while nice to have, weren't even updated when multiple gluster servers were added (e.g. commit id '' or '7b7da9e28'). Check the changes to add the new s
IOW: This code knows how to parse something like:
-drive 'file=gluster+unix:///Volume2/Image?socket=/path/to/sock,format=raw,if=none,id=drive-virtio-disk1'
but it's clueless for:
-drive file.driver=gluster,file.volume=Volume3,file.path=/Image.qcow2,\
file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
if=none,id=drive-virtio-disk2 \
-device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,\
id=virtio-disk2
See
Looks like none of the existing tests in qemuargv2xmltest test for the parsing of new syntax, and qemuParseCommandLineDisk() expects to find 'file=' for a drive or it errors out. If this is true, will it be able to parse the new syntax? Some help here please!
So I wouldn't expect the VxHS code to be able to do that unless you wanted to be adventurous. The good news is that this code is primarily for developers that need to take a qemu command line to generate the libvirt syntax. It has not really been kept up to date with all the most recent command line changes. I started to try over a year ago, but got very side tracked.
Output from the newly added test cases (40 should pass and 41 checks for an error):
40) QEMU ARGV-2-XML disk-drive-network-vxhs
... Got unexpected warning from qemuParseCommandLineString:
2017-01-28 00:57:30.814+0000: 10391: info : libvirt version: 3.0.0
2017-01-28 00:57:30.814+0000: 10391: info : hostname: localhost.localdomain
2017-01-28 00:57:30.814+0000: 10391: error : qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
libvirt: QEMU Driver error : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
FAILED
41) QEMU ARGV-2-XML disk-drive-network-vxhs-fail
... Got expected error from qemuParseCommandLineString:
libvirt: QEMU Driver error : internal error: VxHS protocol does not support URI syntax 'vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251'
OK
42) QEMU ARGV-2-XML disk-usb ... OK
#2. Is the desire to ever support more than 1 host? If not, then is the "server" syntax you've borrowed from the Gluster code necessary? Could you just go with the single "host" like NBD and SSH? As it relates to the qemu command line, I'm not quite as clear. From the example I see in commit id '7b7da9e28', the gluster syntax would have:
Present understanding is to have only one host. You are right, the "server" part is not necessary. Will have to check with the qemu community on this change.
+file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
+file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
+file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
whereas the VxHS syntax is:
+file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
FWIW: I also note there is no ".type=tcp" in your output - so perhaps the "default" is tcp unless otherwise specified, but I'm not sure of the qemu syntax requirements in this area. I assume that since there's only 1 server, the ".0, .1, .2" become unnecessary (something added by commit id 'f1bbc7df4' for multiple gluster hosts).
That's correct. TCP is the default.
I haven't closely followed the qemu syntax discussion, but it would be possible to use:
+file.host=192.168.0.1,file.port=9999
That is correct. Above syntax would also work for us. I will pose this suggestion to the qemu community and update with their response.
It's not that important... I was looking for a simplification and generation of only what's required. You can continue using the server syntax - perhaps just leave a note/comment in the code indicating the decision point and move on.
[...]
John

On 04/10/2017 07:32 PM, ashish mittal wrote:
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -
Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode):

    ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but not able to locate it. Any pointers will be appreciated.
Well you have a couple of things to deal with... There's the creation of the TLS object and there's altering the parameters used for the qemu command based on your needs/model.

First off you'll need to figure out where/how you're going to define where the TLS creds exist. For that, I suspect you'll have code similar to how chardevTLS support was added. Essentially some way to either use an existing TLS environment or a way to allow someone to define a vxhs specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf - I've made changes recently there too).

For the TLS object creation on the command line, see qemuBuildTLSx509CommandLine to see how the code builds the "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client" portion of your command line.

I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects, qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.

The rest of the command line is going to be a bit tricky since using the "newer" driver syntax for libvirt is "sparse". Traditionally libvirt has used "-drive file=[$uri:]$path,format=$driver,..." (use grep "\-drive file" tests/*/*.args from a libvirt git directory - you can grep that output for gluster or rbd to see the uri format).

IIUC the qemu changes correctly though, you cannot use that "file=" syntax; instead you'll need to format the command line similar to how things were done for gluster to add multiple host support, where the syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep "\-drive file.driver" tests/*/*.args to see how this is done for gluster).

That code/support was added in a series starting at commit id '22ad4a7c' and working your way forward through about 18 patches. Using a visual tool like gitk helps a lot...

I think what will be easiest is to start at that commit and look "up" for gluster specific changes. Be careful not to fully cut-n-paste because there have been patches since that time to fix some issues with the initial implementation. I point it out only as a way for you to see which modules and where "similar" code exists.

You'll also note there is an nbd patch in that series of patches - not sure how much that helps, but it perhaps gives you some amount of guidelines. Although I don't believe nbd was added to the command line - it was just a way of syntax generation/testing.

John
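[For illustration only: putting the pieces together for VxHS would presumably yield a command line roughly like the one below, combining the -object example above with the file.driver=vxhs syntax from the test output. The property used to reference the TLS object on the drive (file.tls-creds here) is an assumption - the exact qemu syntax was still being settled at this point.]

    -object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client \
    -drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
    file.server.host=192.168.0.1,file.server.port=9999,file.tls-creds=tls0,\
    format=raw,if=none,id=drive-virtio-disk0,cache=none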
Thanks, Ashish
On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan@redhat.com> wrote:
[...]

Thanks for the information. I'll work with this and get back if I get stuck somewhere.

My immediate objective is to figure out how to pass the TLS x509 certificate information to the vxhs block device on the qemu command line. I guess I expected some other block device (i.e. NBD) to call the qemuBuildTLSx509CommandLine(), but got confused when I did not find that...

On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan@redhat.com> wrote:
[...]

Looking at the code, I suspect significant changes will be needed to support passing TLS arguments for disk devices. Here's what I have right now -

int qemuProcessPrepareDomain(virConnectPtr conn,
                             virQEMUDriverPtr driver,
                             virDomainObjPtr vm,
                             unsigned int flags)
{
    ...
    VIR_DEBUG("Prepare chardev source backends for TLS");
    qemuDomainPrepareChardevSource(vm->def, driver);
    <== This is where char devices prepares their TLS...
    <== This is where we would expect a generic "Prepare disk source backends for TLS" for all the disk devices
    <== -- OR -- I can do something specific for VxHS here

    VIR_DEBUG("Add secrets to disks, hostdevs, and chardevs");
    if (qemuDomainSecretPrepare(conn, driver, vm) < 0)
    <== perhaps changes will be needed here also, but we plan to pass the cert directory, endpoint and ID as plain-text.. still would need a place to save these new values..
        goto cleanup;
    ...
}

I'm thinking I would also need to extend one of the following structures to save the TLS related info.

struct _virDomainDiskDef {
}
--- OR ---
struct _qemuDomainDiskPrivate {
}

Given that adding TLS support for VxHS (and disk devices in general) will not be trivial, I want to check if this can be taken up at a later time?

Thanks,
Ashish

On Tue, Apr 11, 2017 at 4:26 PM, ashish mittal <ashmit602@gmail.com> wrote:
[...]

On 04/14/2017 07:26 PM, ashish mittal wrote:
Looking at the code, I suspect significant changes will be needed to support passing TLS arguments for disk devices. Here's what I have right now -
Sure you're going to need to add some code, but there should be enough infrastructure there now that is reusable. I made a number of changes during the 7.4 release to generate "generic enough" API's that both chardev TLS and migrate TLS can use. Adding a disk TLS option should be easy. It was something I had in the back of my mind while altering the APIs.

I think if you use cscope and then search on 'haveTLS' you'll get the necessary pieces. The 'migTLSAlias' may also give you some ideas.
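[As a convenience, those searches can be run with something like the following, assuming a cscope database built at the top of the libvirt tree:]

    cscope -R -L -0 haveTLS
    git grep -n migTLSAlias src/qemu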
int qemuProcessPrepareDomain(virConnectPtr conn,
                             virQEMUDriverPtr driver,
                             virDomainObjPtr vm,
                             unsigned int flags)
{
    ...
    VIR_DEBUG("Prepare chardev source backends for TLS");
    qemuDomainPrepareChardevSource(vm->def, driver);
    <== This is where char devices prepares their TLS...
    <== This is where we would expect a generic "Prepare disk source backends for TLS" for all the disk devices
    <== -- OR -- I can do something specific for VxHS here
I'm not yet sure you need this step for disk TLS. For chardev, it's "enforcing" that if the qemu.conf chardevTLS is set, then as long as the domain chardev source object using TCP doesn't have a "tls='no'" or "tls='yes'" attribute, TLS gets set for the object. I recall not agreeing with exactly how this was implemented for chardevTLS, but I do understand why.

You need to decide "how" you see this being utilized. For example, if an XML object had "tls='yes'" but a global qemu.conf variable was set to 0, then would you expect TLS to work? Conversely, if the global qemu.conf was set, then would you expect any disk that could use TLS to be forced to try unless the XML configuration explicitly set disable? In the long run, adding TLS options to a disk startup would still be on a protocol by protocol basis.

If you did add this, then it would be a "generic" qemuDomainPrepareDiskSource() which would traverse the vm->def->ndisks (like qemuDomainPrepareChardevSourceTLS does for each chardev source), looking for disks with a <source> defined and checking the value of haveTLS in order to decide whether to set/clear the flag based on the global setting. You wouldn't need the tlsFromConfig - that's there because of a cross version migration configuration issue that shouldn't apply for your purposes.
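[Purely as a sketch of that idea, not the actual patch - the helper name, the haveTLS field on the disk source, and the cfg->vxhsTLS setting are all assumptions drawn from the discussion above:]

    static void
    qemuDomainPrepareDiskSourceTLS(virDomainDefPtr def,
                                   virQEMUDriverConfigPtr cfg)
    {
        size_t i;

        for (i = 0; i < def->ndisks; i++) {
            virStorageSourcePtr src = def->disks[i]->src;

            /* only network-backed disks using the VxHS protocol apply here */
            if (virStorageSourceGetActualType(src) != VIR_STORAGE_TYPE_NETWORK ||
                src->protocol != VIR_STORAGE_NET_PROTOCOL_VXHS)
                continue;

            /* if the XML did not carry an explicit tls='yes'/'no', fall back
             * to the (hypothetical) qemu.conf default */
            if (src->haveTLS == VIR_TRISTATE_BOOL_ABSENT)
                src->haveTLS = cfg->vxhsTLS ? VIR_TRISTATE_BOOL_YES
                                            : VIR_TRISTATE_BOOL_NO;
        }
    }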
VIR_DEBUG("Add secrets to disks, hostdevs, and chardevs"); if (qemuDomainSecretPrepare(conn, driver, vm) < 0) <== perhaps changes will be needed here also, but we plan to pass the cert directory, endpoint and ID as plain-text..still would need a place to save these new values..
Yes, but more specifically in qemuDomainSecretDiskPrepare in order to look up the secret if it exists (logic should be similar to qemuDomainSecretChardevPrepare, but specific to disk needs).
goto cleanup; ... }
I'm thinking, I would also need to extend one of the following structures to save the TLS related info.
struct _virDomainDiskDef {
}
--- OR ---
struct _qemuDomainDiskPrivate {
This one^^
}
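[For reference, the suggestion in the numbered steps below amounts to adding a single member to that private struct, roughly as follows; placement is illustrative only:]

    struct _qemuDomainDiskPrivate {
        ...
        /* secret/TLS credential info used to build the tls-creds-x509
         * object for this disk */
        qemuDomainSecretInfoPtr tlsinfo;
        ...
    };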
Given that adding TLS support for VxHS (and disk devices in general) will not be trivial, I want to check if this can be taken up at a later time?
No, it'd be preferable to have a complete solution. That solution should be a separately compilable/checkable multi-patch series where each patch makes forward progress towards your goal.

Consider first getting the non-TLS environment working. You have some of it working with the existing patches - it's updating the generation of the command line that would seem to need the adjustment. I think I've already pointed out a series or two that should help you in that endeavor.

Then for TLS, again it's a multi-step process:

1. Modify the qemu_conf.c and qemu.conf files to search for new disk_vxhs_tls_x509* definitions, since I assume that either you'll use the existing qemu definitions or you (more likely) want to have some directory that would have a VxHS specific environment. I suggest a "disk_vxhs_" prefix - just to make it more specific. If some day RBD, iSCSI, etc. support TLS, then they'd conceivably have their own "disk_rbd" or "disk_iscsi" options, since each would have their own server to authenticate using TLS creds.

2. Add a "tls='yes'" attribute (similar to how chardev did things) for the disk <source> entry. This allows the mechanism to allow/deny, on a disk by disk basis, whether TLS credentials would be needed. (A sketch of what 1 and 2 might look like follows below.)

3. Set up the qemu infrastructure to accept/recognize some sort of tls option which would allow the tls-creds-x509 to be added to the command line. Start with adding a new entry to _qemuDomainDiskPrivate such as "qemuDomainSecretInfoPtr tlsinfo;". Then qemuDomainSecretDiskPrepare is where you'll need to adjust to add a possible secret that's used to access the TLS credentials (see qemuDomainSecretChardevPrepare for the example). Then when the disk is added on the command line you'll have the necessary pieces for the two calls, qemuBuildObjectSecretCommandLine and qemuBuildTLSx509CommandLine, used to generate the optional secret and the required tls-creds-x509 object. The alias you use for that object is what then gets added to the command line generation for the VxHS disk.

4. You'll need to follow a similar process for hotplug support, except there are slightly different calls in order to hot plug the secret and tls objects.

Although everything needs to go in for the same libvirt release, it's OK to get things working and reviewed as they're ready. Be cognizant that libvirt releases are monthly - so dropping a large series near the end of the month probably would result in review/push falling into the subsequent month's release. Also, attempting to do everything in a smaller subset of patches will result in a request to split things up into manageable chunks.

John
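[To illustrate steps 1 and 2, here is a rough sketch of what the new qemu.conf knobs and the disk XML could look like. All of the names here (disk_vxhs_tls, disk_vxhs_tls_x509_cert_dir, and the placement of the tls attribute on <source>) are assumptions drawn from the suggestions above, not settled syntax.]

    # hypothetical additions to /etc/libvirt/qemu.conf
    disk_vxhs_tls = 1
    disk_vxhs_tls_x509_cert_dir = "/etc/pki/qemu/vxhs"

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251' tls='yes'>
        <host name='192.168.0.1' port='9999'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>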
[...]

Thanks!

On Mon, Apr 17, 2017 at 9:19 AM, John Ferlan <jferlan@redhat.com> wrote:
[...]

On Mon, Apr 17, 2017 at 9:19 AM, John Ferlan <jferlan@redhat.com> wrote:
On 04/14/2017 07:26 PM, ashish mittal wrote:
Looking at the code, I suspect significant changes will be needed to support passing TLS arguments for disk devices. Here's what I have right now -
Sure you're going to need to add some code, but there should be enough infrastructure there now that is reusable. I made a number of changes during the 7.4 release to generate "generic enough" API's that both chardev TLS and migrate TLS can use them. Adding a disk TLS option should be easy. It was something I had in the back of my mind while altering the APIs.
I think if you use cscope and then search on 'haveTLS' you'll get the necessary pieces. The 'migTLSAlias' may also give you some ideas.
int qemuProcessPrepareDomain(virConnectPtr conn,
                             virQEMUDriverPtr driver,
                             virDomainObjPtr vm,
                             unsigned int flags)
{
    ...
    VIR_DEBUG("Prepare chardev source backends for TLS");
    qemuDomainPrepareChardevSource(vm->def, driver);
        <== This is where char devices prepare their TLS...

        <== This is where we would expect a generic "Prepare disk source
            backends for TLS" for all the disk devices
        <== -- OR -- I can do something specific for VxHS here
I'm not yet sure you need this step for disk TLS. For chardev, it's "enforcing" that if the qemu.conf chardevTLS is set, then as long as the domain chardev source object using TCP doesn't have a "tls='no'" or "tls='yes'" attribute, that TLS gets set for the object. I recall not agreeing with exactly how this was implemented for chardevTLS, but I do understand why.
You need to decide "how" you see this being utilized. For example, if an XML object had "tls='yes'" but the global qemu.conf variable was set to 0, would you expect TLS to work? Conversely, if the global qemu.conf setting were enabled, would you expect any disk that could use TLS to be forced to try it unless the XML configuration explicitly disabled it? In the long run, adding TLS options to a disk startup would still be done on a protocol-by-protocol basis.
If you did add this, then it would be a "generic"
qemuDomainPrepareDiskSource()
where qemuDomainPrepareDiskSource would traverse vm->def->ndisks (like qemuDomainPrepareChardevSourceTLS does for each chardev source), looking for disks with a <source> defined and checking the value of haveTLS in order to decide whether to set/clear the flag based on the global setting. You wouldn't need the tlsFromConfig - that's there because of a cross-version migration configuration issue that shouldn't apply for your purposes.
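A minimal sketch of what such a traversal could look like is below. Note that the src->haveTLS placement, the cfg->vxhsTLS setting, and the VIR_STORAGE_NET_PROTOCOL_VXHS value (which only exists in the pending VxHS patch) are assumptions for illustration, not settled libvirt names:

static void
qemuDomainPrepareDiskSourceTLS(virDomainDefPtr def,
                               virQEMUDriverConfigPtr cfg)
{
    size_t i;

    for (i = 0; i < def->ndisks; i++) {
        virStorageSourcePtr src = def->disks[i]->src;

        /* only network disks using the VxHS protocol are candidates */
        if (src->type != VIR_STORAGE_TYPE_NETWORK ||
            src->protocol != VIR_STORAGE_NET_PROTOCOL_VXHS)
            continue;

        /* if the XML didn't request tls='yes'/'no' explicitly, fall back
         * to the global qemu.conf default */
        if (src->haveTLS == VIR_TRISTATE_BOOL_ABSENT)
            src->haveTLS = cfg->vxhsTLS ? VIR_TRISTATE_BOOL_YES
                                        : VIR_TRISTATE_BOOL_NO;
    }
}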
VIR_DEBUG("Add secrets to disks, hostdevs, and chardevs"); if (qemuDomainSecretPrepare(conn, driver, vm) < 0) <== perhaps changes will be needed here also, but we plan to pass the cert directory, endpoint and ID as plain-text..still would need a place to save these new values..
Yes, but more specifically in qemuDomainSecretDiskPrepare in order to look up the secret if it exists (the logic should be similar to qemuDomainSecretChardevPrepare, but specific to disk needs)
goto cleanup; ... }
I'm thinking, I would also need to extend one of the following structures to save the TLS related info.
struct _virDomainDiskDef { } --- OR --- struct _qemuDomainDiskPrivate {
This one^^
}
Given that adding TLS support for VxHS (and disk devices in general) will not be trivial, I want to check if this can be taken up at a later time?
No, it'd be preferable to have a complete solution. That solution should be a separately compilable/checkable multi-patch series where each patch makes forward progress towards your goal.
Consider first getting the non-TLS environment working. You have some of it working with the existing patches - it's updating the generation of the command line that would seem to need the adjustment. I think I've already pointed out a series or two that should help you in that endeavor.
I think this part is taken care of. Please see my earlier emails and do let me know if something is missing on the base functionality.
Then for TLS, again it's a multi-step process
1. Modify the qemu_conf.c and qemu.conf files to search for new disk_vxhs_tls_x509* definitions, since I assume you'll either use the existing qemu definitions or (more likely) want some directory that holds a VxHS-specific environment. I suggest a "disk_vxhs_" prefix just to make it more specific. If some day RBD, iSCSI, etc. support TLS, then they'd conceivably have their own "disk_rbd" or "disk_iscsi" options, since each would have its own server to authenticate against using TLS creds.
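As a sketch only - the option names below simply follow the "disk_vxhs_" prefix suggested here and are not final - the qemu.conf additions could look roughly like:

# Hypothetical qemu.conf additions for VxHS TLS (names illustrative):
#
# Require TLS for VxHS disks by default, and point at a VxHS-specific
# certificate directory.
#disk_vxhs_tls = 1
#disk_vxhs_tls_x509_cert_dir = "/etc/pki/qemu/vxhs"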
Thanks for laying out the steps so nicely. I plan to refer to the chardev TLS commits to keep my check-ins in manageable chunks. For example, the first TLS-related commit on top of the base VxHS patch, which corresponds to chardev commit ID 3f60a9c3, would look like: https://github.com/MittalAshish/libvirt/compare/basechanges_withoutTLS...Mit...

If this looks fine, I will send out the next patch series as the base VxHS patch + the TLS part 1 patch.

Two questions please - (1) Can the base VxHS patch be merged to master before TLS is fully implemented? (2) Should the TLS support be added as multiple patches, or one patch with multiple incremental commits?

Thanks!
2. Add a "tls='yes'" (similar to how chardev did things) for the disk <source> entry. This allows the mechanism to allow/deny on a disk by disk basis whether TLS credentials would be needed.
3. Set up the qemu infrastructure to accept/recognize some sort of tls option which would allow the tls-creds-x509 to be added to the command line. Start with adding a new entry to _qemuDomainDiskPrivate such as "qemuDomainSecretInfoPtr tlsinfo;". Then qemuDomainSecretDiskPrepare is where you'll need to adjust to add a possible secret that's used to access the TLS credentials (see qemuDomainSecretChardevPrepare for the example). Then when the disk is added on the command line you'll have the necessary pieces for the two calls qemuBuildObjectSecretCommandLine and qemuBuildTLSx509CommandLine used to generate the optional secret and required tls-creds-x509 object. The alias you use for that object is what then gets added to the command line generation for the VxHS disk.
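In other words, something along the lines of the following addition (the field placement and comment are guesses; only the extra tlsinfo member is the point):

struct _qemuDomainDiskPrivate {
    /* ... existing fields ... */

    /* alias/secret info used to build the optional secret object and the
     * tls-creds-x509 object for this disk's -drive */
    qemuDomainSecretInfoPtr tlsinfo;
};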
4. You'll need to follow a similar process for hotplug support, except there's slightly different calls in order to hot plug the secret and tls objects.
Although everything needs to go in for the same libvirt release, it's OK to get things working and reviewed as they're ready. Be cognizant that libvirt releases are monthly - so dropping a large series near the end of the month probably would result in review/push falling into the subsequent month's release. Also attempting to do everything in a smaller subset of patches will result in a request to split things up into manageable chunks.
John
Thanks, Ashish
On Tue, Apr 11, 2017 at 4:26 PM, ashish mittal <ashmit602@gmail.com> wrote:
Thanks for the information. I'll work with this and get back if I get stuck somewhere.
My immediate objective is to figure out how to pass the TLS x509 certificate information to the vxhs block device on the qemu command line. I guess I expected some other block device (i.e. NBD) to call the qemuBuildTLSx509CommandLine(), but got confused when I did not find that...
On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan@redhat.com> wrote:
On 04/10/2017 07:32 PM, ashish mittal wrote:
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -
Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode): ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but not able to locate it. Any pointers will be appreciated.
Well you have a couple of things to deal with... There's the creation of the TLS object and there's altering the parameters used for the qemu command based on your needs/model.
First off you'll need to figure out where/how you're going to define where the TLS creds exist. For that, I suspect you'll have code similar to how chardevTLS support was added. Essentially some way to either use an existing TLS environment or a way to allow someone to define a vxhs specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf - I've made changes recently there too).
For the TLS object creation on the command line, see qemuBuildTLSx509CommandLine to see how the code builds the "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client" portion of your command line.
I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects, qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.
The rest of the command line is going to be a bit tricky since using the "newer" driver syntax for libvirt is "sparse". Traditionally libvirt has used "-drive file=[$uri:]$path,format=$driver,..." (use grep "\-drive file" tests/*/*.args from a libvirt git directory - you can grep that output for gluster or rbd to see the uri format).
IIUC the qemu changes correctly though, you cannot use that "file=" syntax, instead you'll need to format the command line similar to how things were done for gluster to add multiple host support where the syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep "\-drive file.driver" tests/*/*.args to see how this is done for gluster).
That code/support was added in a series starting at commit id '22ad4a7c' and working your way forward through about 18 patches. Using a visual tool like gitk helps a lot...
I think what will be easiest is to start at that commit and look "up" for gluster specific changes. Be careful not to fully cut-n-paste because there have been patches since that time to fix some issues with the initial implementation. I point it out only as a way for you to see which modules and where "similar" code exists.
You'll also note there is an nbd patch in that series of patches - not sure how much that helps, but it perhaps gives you some amount of guidelines. Although I don't believe nbd was added to the command line - it was just a way of syntax generation/testing.
John

On Tue, Apr 25, 2017 at 8:22 PM, ashish mittal <ashmit602@gmail.com> wrote:
On Mon, Apr 17, 2017 at 9:19 AM, John Ferlan <jferlan@redhat.com> wrote:
On 04/14/2017 07:26 PM, ashish mittal wrote:
Looking at the code, I suspect significant changes will be needed to support passing TLS arguments for disk devices. Here's what I have right now -
Sure you're going to need to add some code, but there should be enough infrastructure there now that is reusable. I made a number of changes during the 7.4 release to generate "generic enough" API's that both chardev TLS and migrate TLS can use them. Adding a disk TLS option should be easy. It was something I had in the back of my mind while altering the APIs.
I think if you use cscope and then search on 'haveTLS' you'll get the necessary pieces. The 'migTLSAlias' may also give you some ideas.
int qemuProcessPrepareDomain(virConnectPtr conn,
                             virQEMUDriverPtr driver,
                             virDomainObjPtr vm,
                             unsigned int flags)
{
    ...
    VIR_DEBUG("Prepare chardev source backends for TLS");
    qemuDomainPrepareChardevSource(vm->def, driver);
        <== This is where char devices prepare their TLS...

        <== This is where we would expect a generic "Prepare disk source
            backends for TLS" for all the disk devices
        <== -- OR -- I can do something specific for VxHS here
I'm not yet sure you need this step for disk TLS. For chardev, it's "enforcing" that if the qemu.conf chardevTLS is set, then as long as the domain chardev source object using TCP doesn't have a "tls='no'" or "tls='yes'" attribute, that TLS gets set for the object. I recall not agreeing with exactly how this was implemented for chardevTLS, but I do understand why.
You need to decide "how" you see this being utilized. For example, if an XML object had "tls='yes'" but the global qemu.conf variable was set to 0, would you expect TLS to work? Conversely, if the global qemu.conf setting were enabled, would you expect any disk that could use TLS to be forced to try it unless the XML configuration explicitly disabled it? In the long run, adding TLS options to a disk startup would still be done on a protocol-by-protocol basis.
If you did add this, then it would be a "generic"
qemuDomainPrepareDiskSource()
where qemuDomainPrepareDiskSource would traverse the vm->def->ndisks (like qemuDomainPrepareChardevSourceTLS does for each chardev source) looking for disks with a <source> defined and checking the value of the haveTLS in order to decide whether to set/clear the flag based on the global setting. You wouldn't need the tlsFromConfig - that's there because of a cross version migration configuration issue that shouldn't apply for your purposes.
VIR_DEBUG("Add secrets to disks, hostdevs, and chardevs"); if (qemuDomainSecretPrepare(conn, driver, vm) < 0) <== perhaps changes will be needed here also, but we plan to pass the cert directory, endpoint and ID as plain-text..still would need a place to save these new values..
Yes, but more specifically in qemuDomainSecretDiskPrepare in order to look up the secret if it exists (the logic should be similar to qemuDomainSecretChardevPrepare, but specific to disk needs)
goto cleanup; ... }
I'm thinking, I would also need to extend one of the following structures to save the TLS related info.
struct _virDomainDiskDef { } --- OR --- struct _qemuDomainDiskPrivate {
This one^^
}
Given that adding TLS support for VxHS (and disk devices in general) will not be trivial, I want to check if this can be taken up at a later time?
No, it'd be preferable to have a complete solution. That solution should be a separately compilable/checkable multi-patch series where each patch makes forward progress towards your goal.
Consider first getting the non-TLS environment working. You have some of it working with the existing patches - it's updating the generation of the command line that would seem to need the adjustment. I think I've already pointed out a series or two that should help you in that endeavor.
I think this part is taken care of. Please see my earlier emails and do let me know if something is missing on the base functionality.
Then for TLS, again it's a multi-step process
1. Modify the qemu_conf.c and qemu.conf files to search for new disk_vxhs_tls_x509* definitions, since I assume you'll either use the existing qemu definitions or (more likely) want some directory that holds a VxHS-specific environment. I suggest a "disk_vxhs_" prefix just to make it more specific. If some day RBD, iSCSI, etc. support TLS, then they'd conceivably have their own "disk_rbd" or "disk_iscsi" options, since each would have its own server to authenticate against using TLS creds.
Thanks for laying out the steps so nicely. I plan to refer to chardev TLS commits to keep my checkins to manageable chunks. For example, the first TLS related commit on top of the base VxHS patch, which corresponds to chardev commit ID 3f60a9c3, would look like: https://github.com/MittalAshish/libvirt/compare/basechanges_withoutTLS...Mit...
If this looks fine, I will send out the next patch series as the base VxHS patch + TLS part 1 patch.
Two questions please - (1) Can the base VxHS patch be merged to master before the TLS is fully implemented? (2) Should the TLS support be added as multiple patches, or one patch with multiple incremental commits?
Every commit becomes a patch, so I guess the question is incorrect. What I'm trying to understand is - do the patches get merged as they get reviewed/approved, or do they get merged after the last bit is complete? If it's the latter, then I guess it should be OK to make changes to an older patch in the series in a newer version?
Thanks!
2. Add a "tls='yes'" (similar to how chardev did things) for the disk <source> entry. This allows the mechanism to allow/deny on a disk by disk basis whether TLS credentials would be needed.
3. Set up the qemu infrastructure to accept/recognize some sort of tls option which would allow the tls-creds-x509 to be added to the command line. Start with adding a new entry to _qemuDomainDiskPrivate such as "qemuDomainSecretInfoPtr tlsinfo;". Then qemuDomainSecretDiskPrepare is where you'll need to adjust to add a possible secret that's used to access the TLS credentials (see qemuDomainSecretChardevPrepare for the example). Then when the disk is added on the command line you'll have the necessary pieces for the two calls qemuBuildObjectSecretCommandLine and qemuBuildTLSx509CommandLine used to generate the optional secret and required tls-creds-x509 object. The alias you use for that object is what then gets added to the command line generation for the VxHS disk.
4. You'll need to follow a similar process for hotplug support, except there's slightly different calls in order to hot plug the secret and tls objects.
Although everything needs to go in for the same libvirt release, it's OK to get things working and reviewed as they're ready. Be cognizant that libvirt releases are monthly - so dropping a large series near the end of the month probably would result in review/push falling into the subsequent month's release. Also attempting to do everything in a smaller subset of patches will result in a request to split things up into manageable chunks.
John
Thanks, Ashish
On Tue, Apr 11, 2017 at 4:26 PM, ashish mittal <ashmit602@gmail.com> wrote:
Thanks for the information. I'll work with this and get back if I get stuck somewhere.
My immediate objective is to figure out how to pass the TLS x509 certificate information to the vxhs block device on the qemu command line. I guess I expected some other block device (i.e. NBD) to call the qemuBuildTLSx509CommandLine(), but got confused when I did not find that...
On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan@redhat.com> wrote:
On 04/10/2017 07:32 PM, ashish mittal wrote:
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -
Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode): ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but not able to locate it. Any pointers will be appreciated.
Well you have a couple of things to deal with... There's the creation of the TLS object and there's altering the parameters used for the qemu command based on your needs/model.
First off you'll need to figure out where/how you're going to define where the TLS creds exist. For that, I suspect you'll have code similar to how chardevTLS support was added. Essentially some way to either use an existing TLS environment or a way to allow someone to define a vxhs specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf - I've made changes recently there too).
For the TLS object creation on the command line, see qemuBuildTLSx509CommandLine to see how the code builds the "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client" portion of your command line.
I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects, qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.
The rest of the command line is going to be a bit tricky since using the "newer" driver syntax for libvirt is "sparse". Traditionally libvirt has used "-drive file=[$uri:]$path,format=$driver,..." (use grep "\-drive file" tests/*/*.args from a libvirt git directory - you can grep that output for gluster or rbd to see the uri format).
IIUC the qemu changes correctly though, you cannot use that "file=" syntax, instead you'll need to format the command line similar to how things were done for gluster to add multiple host support where the syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep "\-drive file.driver" tests/*/*.args to see how this is done for gluster).
That code/support was added in a series starting at commit id '22ad4a7c' and working your way forward through about 18 patches. Using a visual tool like gitk helps a lot...
I think what will be easiest is to start at that commit and look "up" for gluster specific changes. Be careful not to fully cut-n-paste because there have been patches since that time to fix some issues with the initial implementation. I point it out only as a way for you to see which modules and where "similar" code exists.
You'll also note there is an nbd patch in that series of patches - not sure how much that helps, but it perhaps gives you some amount of guidelines. Although I don't believe nbd was added to the command line - it was just a way of syntax generation/testing.
John
Thanks, Ashish

[...]
Given that adding TLS support for VxHS (and disk devices in general) will not be trivial, I want to check if this can be taken up at a later time?
No, it'd be preferable to have a complete solution. That solution should be a separately compilable/checkable multi-patch series where each patch makes forward progress towards your goal.
Consider first getting the non-TLS environment working. You have some of it working with the existing patches - it's updating the generation of the command line that would seem to need the adjustment. I think I've already pointed out a series or two that should help you in that endeavor.
I think this part is taken care of. Please see my earlier emails and do let me know if something is missing on the base functionality.
Well, considering the "original" [PATCH v3] was posted 1/19/2017: https://www.redhat.com/archives/libvir-list/2017-January/msg00892.html it has long since been paged out of short-term memory by today! I also know that there have been a lot of adjustments along the way with the QEMU patches, and I haven't followed each of those that closely. I hope that when I see a new series I can start with a "fresh perspective".
Then for TLS, again it's a multi-step process
1. Modify the qemu_conf.c and qemu.conf files to search for new disk_vxhs_tls_x509* definitions, since I assume you'll either use the existing qemu definitions or (more likely) want some directory that holds a VxHS-specific environment. I suggest a "disk_vxhs_" prefix just to make it more specific. If some day RBD, iSCSI, etc. support TLS, then they'd conceivably have their own "disk_rbd" or "disk_iscsi" options, since each would have its own server to authenticate against using TLS creds.
Thanks for laying out the steps so nicely. I plan to refer to chardev TLS commits to keep my checkins to manageable chunks. For example, the first TLS related commit on top of the base VxHS patch, which corresponds to chardev commit ID 3f60a9c3, would look like: https://github.com/MittalAshish/libvirt/compare/basechanges_withoutTLS...Mit...
If this looks fine, I will send out the next patch series as the base VxHS patch + TLS part 1 patch.
Two questions please - (1) Can the base VxHS patch be merged to master before the TLS is fully implemented? (2) Should the TLS support be added as multiple patches, or one patch with multiple incremental commits?
Every commit becomes a patch, so I guess the question is incorrect. What I'm trying to understand is - do the patches get merged as they get reviewed/approved, or do they get merged after the last bit is complete? If it's the latter, then I guess it should be OK to make changes to an older patch in the series in a newer version?
Start here: http://libvirt.org/docs.html

See the contributor guidelines: http://libvirt.org/hacking.html

Look at the "history" of patches a bit and you'll get a feel for what the general flow should be. Dumping everything into one patch is a no-go; creating a 100-patch series won't get your patches looked at quickly. Find a happy medium. Since having "basic" support and all the TLS bells and whistles cannot be done in one commit (far too big), use the opportunity to logically build up support. Patches should logically add the capabilities you'll need, but each patch must also pass "make check syntax-check".

In the long run, what is "fully functionally complete" is a decision left to the contributor. In this case, you're trying to get to having TLS in order to have enough functionality in libvirt to match what's been added to QEMU. If it's possible in QEMU to start up and use VxHS without TLS, then you can use that inflection point in libvirt to get something pushed; however, since a fully functional and complete solution includes TLS, that capability needs to be added in libvirt as well. The question you have to ask yourself though is - if the partial solution makes it into 3.4 and the TLS support cannot make it until 3.5, would any extra changes be required to handle the version differences? Migration makes this tricky. That's (more or less) what happened with chardev: some TLS bits made it in one release, but full support crossed releases and things got complex. I really think that if you target being "fully functionally complete", that will be the best solution.

Upstream libvirt releases are on a month-by-month cadence, so don't expect to post something in the last days before a release and have it reviewed and merged, unless it is perhaps a cleanup of something that was already mostly agreed upon earlier in the month. Still, it cannot be pushed during freeze, which is typically the last 4-7 days before release.

John

BTW: Not sure how this series of responses has broken from the original patch, but it's becoming "too deep"... I think it's 10 responses deep in a few places.

[...]

On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan@redhat.com> wrote:
On 04/10/2017 07:32 PM, ashish mittal wrote:
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -
Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode): ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but not able to locate it. Any pointers will be appreciated.
Well you have a couple of things to deal with... There's the creation of the TLS object and there's altering the parameters used for the qemu command based on your needs/model.
First off you'll need to figure out where/how you're going to define where the TLS creds exist. For that, I suspect you'll have code similar to how chardevTLS support was added. Essentially some way to either use an existing TLS environment or a way to allow someone to define a vxhs specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf - I've made changes recently there too).
For the TLS object creation on the command line, see qemuBuildTLSx509CommandLine to see how the code builds the "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client" portion of your command line.
I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects, qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.
The rest of the command line is going to be a bit tricky since using the "newer" driver syntax for libvirt is "sparse". Traditionally libvirt has used "-drive file=[$uri:]$path,format=$driver,..." (use grep "\-drive file" tests/*/*.args from a libvirt git directory - you can grep that output for gluster or rbd to see the uri format).
IIUC the qemu changes correctly though, you cannot use that "file=" syntax, instead you'll need to format the command line similar to how things were done for gluster to add multiple host support where the syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep "\-drive file.driver" tests/*/*.args to see how this is done for gluster).
Hi! Thanks for explaining how TLS could work for block devices in general, and specifically for VxHS.

I agree that the plan should be to first have the base VxHS patch ready (sans TLS support), and then build the TLS support in a series of patches on top of the base patch. To that end, I have made changes to v3 to fix the problems pointed out in this series and plan to submit the new patch soon.

I do have some confusion regarding your comment above - I thought I had already implemented the new syntax, similar to gluster's "-drive 'file.driver=gluster,file.volume=...", in this v3 patch series.
v3 changelog: (1) Implemented the modern syntax for VxHS disk specification. (2) Changed qemuxml2argvdata VxHS test case to verify the new syntax.
Please see the newly added xml2argv test
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
+-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
+file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\

Let me know if I'm still missing something on the new syntax. Thanks!
That code/support was added in a series starting at commit id '22ad4a7c' and working your way forward through about 18 patches. Using a visual tool like gitk helps a lot...
I think what will be easiest is to start at that commit and look "up" for gluster specific changes. Be careful not to fully cut-n-paste because there have been patches since that time to fix some issues with the initial implementation. I point it out only as a way for you to see which modules and where "similar" code exists.
You'll also note there is an nbd patch in that series of patches - not sure how much that helps, but it perhaps gives you some amount of guidelines. Although I don't believe nbd was added to the command line - it was just a way of syntax generation/testing.
John
Thanks, Ashish
On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan@redhat.com> wrote:
[...] Pressed send too soon, sigh.
#1. Based on Peter's v2 comments, we don't want to support the older/legacy syntax for VxHS, so it's something that should be removed - although we should check for it being present and fail if found.
I am testing with changed code to return error if legacy syntax is found for VxHS. Also added a test case to check for failure on legacy syntax and it seems to pass (test #41 below).
Then I added a pass test case to check conversion from new native syntax to XML (test #40 below). That test fails with error 'qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b...'
The qemu_parse_command.c changes, while nice to have, weren't even updated when multiple gluster servers were added (e.g. commit id '' or '7b7da9e28'). Check the changes to add the new s
IOW: This code knows how to parse something like:
-drive 'file=gluster+unix:///Volume2/Image?socket=/path/to/sock,format=raw,if=none,id=drive-virtio-disk1'
but it's clueless for:
-drive file.driver=gluster,file.volume=Volume3,file.path=/Image.qcow2,\ file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\ file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\ file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\ if=none,id=drive-virtio-disk2 \ -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,\ id=virtio-disk2
See
Looks like none of the existing tests in qemuargv2xmltest test for the parsing of new syntax, and qemuParseCommandLineDisk() expects to find 'file=' for a drive or it errors out. If this is true, will it be able to parse the new syntax? Some help here please!
So I wouldn't expect the VxHS code to be able to do that unless you wanted to be adventurous. The good news is that this code is primarily for developers that need to take a qemu command line to generate the libvirt syntax. It has not really been kept up to date with all the most recent command line changes. I started to try over a year ago, but got very side tracked.
Output from the newly added test cases (40 should pass and 41 checks for error) :
40) QEMU ARGV-2-XML disk-drive-network-vxhs ... Got unexpected warning from qemuParseCommandLineString: 2017-01-28 00:57:30.814+0000: 10391: info : libvirt version: 3.0.0 2017-01-28 00:57:30.814+0000: 10391: info : hostname: localhost.localdomain 2017-01-28 00:57:30.814+0000: 10391: error : qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none' libvirt: QEMU Driver error : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none' FAILED
41) QEMU ARGV-2-XML disk-drive-network-vxhs-fail ... Got expected error from qemuParseCommandLineString: libvirt: QEMU Driver error : internal error: VxHS protocol does not support URI syntax 'vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251' OK 42) QEMU ARGV-2-XML disk-usb ... OK
#2. Is the desire to ever support more than 1 host? If not, then is the "server" syntax you've borrowed from the Gluster code necessary? Could you just go with the single "host" like NBD and SSH. As it relates to the qemu command line - I'm not quite as clear. From the example I see in commit id '7b7da9e28', the gluster syntax would have:
Present understanding is to have only one host. You are right, the "server" part is not necessary. Will have to check with the qemu community on this change.
+file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\ +file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\ +file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
whereas, the VxHS syntax is: +file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
FWIW: I also note there is no ".type=tcp" in your output - so perhaps the "default" is tcp unless otherwise specified, but I'm not sure of the qemu syntax requirements in this area. I assume that since there's only 1 server, the ".0, .1, .2" become unnecessary (something added by commit id 'f1bbc7df4' for multiple gluster hosts).
That's correct. TCP is the default.
I haven't closely followed the qemu syntax discussion, but would it be possible to use:
+file.host=192.168.0.1,file.port=9999
That is correct. Above syntax would also work for us. I will pose this suggestion to the qemu community and update with their response.
It's not that important... I was looking for a simplification and generation of only what's required. You can continue using the server syntax - perhaps just leave a note/comment in the code indicating the decision point and move on.
[...]
John

On Mon, Apr 24, 2017 at 2:10 PM, ashish mittal <ashmit602@gmail.com> wrote:
On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan@redhat.com> wrote:
On 04/10/2017 07:32 PM, ashish mittal wrote:
Hi,
I'm trying to figure out what changes are needed in the libvirt vxhs patch to support passing TLS X509 arguments to qemu, similar to the following -
Sample QEMU command line passing TLS credentials to the VxHS block device (run in secure mode): ./qemu-io --object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999", "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
I was hoping to find some NBD code related to this, but not able to locate it. Any pointers will be appreciated.
Well you have a couple of things to deal with... There's the creation of the TLS object and there's altering the parameters used for the qemu command based on your needs/model.
First off you'll need to figure out where/how you're going to define where the TLS creds exist. For that, I suspect you'll have code similar to how chardevTLS support was added. Essentially some way to either use an existing TLS environment or a way to allow someone to define a vxhs specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf - I've made changes recently there too).
For the TLS object creation on the command line, see qemuBuildTLSx509CommandLine to see how the code builds the "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client" portion of your command line.
I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects, qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.
The rest of the command line is going to be a bit tricky since using the "newer" driver syntax for libvirt is "sparse". Traditionally libvirt has used "-drive file=[$uri:]$path,format=$driver,..." (use grep "\-drive file" tests/*/*.args from a libvirt git directory - you can grep that output for gluster or rbd to see the uri format).
IIUC the qemu changes correctly though, you cannot use that "file=" syntax, instead you'll need to format the command line similar to how things were done for gluster to add multiple host support where the syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep "\-drive file.driver" tests/*/*.args to see how this is done for gluster).
Hi! Thanks for explaining how TLS could work for block devices in general, and specifically for VxHS.

I agree that the plan should be to first have the base VxHS patch ready (sans TLS support), and then build the TLS support in a series of patches on top of the base patch. To that end, I have made changes to v3 to fix the problems pointed out in this series and plan to submit the new patch soon.

I do have some confusion regarding your comment above - I thought I had already implemented the new syntax, similar to gluster's "-drive 'file.driver=gluster,file.volume=...", in this v3 patch series.
v3 changelog: (1) Implemented the modern syntax for VxHS disk specification. (2) Changed qemuxml2argvdata VxHS test case to verify the new syntax.
Please see the newly added xml2argv test +++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
+-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\ +file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
Let me know if I'm still missing something on the new syntax. Thanks!
I've made some changes over v3 - (1) Review comments incorporated (2) Changed code to error out with proper error message on URI syntax. Added new test cases to check this.

Changes can be reviewed here before I email the new patch - https://github.com/libvirt/libvirt/compare/master...MittalAshish:basechanges...

I will send out the new patch if nothing else is missing in the base functionality. Thanks!
That code/support was added in a series starting at commit id '22ad4a7c' and working your way forward through about 18 patches. Using a visual tool like gitk helps a lot...
I think what will be easiest is to start at that commit and look "up" for gluster specific changes. Be careful not to fully cut-n-paste because there have been patches since that time to fix some issues with the initial implementation. I point it out only as a way for you to see which modules and where "similar" code exists.
You'll also note there is an nbd patch in that series of patches - not sure how much that helps, but it perhaps gives you some amount of guidelines. Although I don't believe nbd was added to the command line - it was just a way of syntax generation/testing.
John
Thanks, Ashish
On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan@redhat.com> wrote:
[...] Pressed send too soon, sigh.
> > #1. Based on Peter's v2 comments, we don't want to support the > older/legacy syntax for VxHS, so it's something that should be removed - > although we should check for it being present and fail if found. >
I am testing with changed code to return error if legacy syntax is found for VxHS. Also added a test case to check for failure on legacy syntax and it seems to pass (test #41 below).
Then I added a pass test case to check conversion from new native syntax to XML (test #40 below). That test fails with error 'qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b...'
The qemu_parse_command.c changes, while nice to have, weren't even updated when multiple gluster servers were added (e.g. commit id '' or '7b7da9e28'). Check the changes to add the new s
IOW: This code knows how to parse something like:
-drive 'file=gluster+unix:///Volume2/Image?socket=/path/to/sock,format=raw,if=none,id=drive-virtio-disk1'
but it's clueless for:
-drive file.driver=gluster,file.volume=Volume3,file.path=/Image.qcow2,\ file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\ file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\ file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\ if=none,id=drive-virtio-disk2 \ -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,\ id=virtio-disk2
See
Looks like none of the existing tests in qemuargv2xmltest test for the parsing of new syntax, and qemuParseCommandLineDisk() expects to find 'file=' for a drive or it errors out. If this is true, will it be able to parse the new syntax? Some help here please!
So I wouldn't expect the VxHS code to be able to do that unless you wanted to be adventurous. The good news is that this code is primarily for developers that need to take a qemu command line to generate the libvirt syntax. It has not really been kept up to date with all the most recent command line changes. I started to try over a year ago, but got very side tracked.
Output from the newly added test cases (40 should pass and 41 checks for an error):

40) QEMU ARGV-2-XML disk-drive-network-vxhs ...
Got unexpected warning from qemuParseCommandLineString:
2017-01-28 00:57:30.814+0000: 10391: info : libvirt version: 3.0.0
2017-01-28 00:57:30.814+0000: 10391: info : hostname: localhost.localdomain
2017-01-28 00:57:30.814+0000: 10391: error : qemuParseCommandLineDisk:901 : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
libvirt: QEMU Driver error : internal error: missing file parameter in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
FAILED

41) QEMU ARGV-2-XML disk-drive-network-vxhs-fail ...
Got expected error from qemuParseCommandLineString:
libvirt: QEMU Driver error : internal error: VxHS protocol does not support URI syntax 'vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251'
OK

42) QEMU ARGV-2-XML disk-usb ... OK
> #2. Is the desire to ever support more than 1 host? If not, then is the
> "server" syntax you've borrowed from the Gluster code necessary? Could
> you just go with the single "host" like NBD and SSH? As it relates to
> the qemu command line - I'm not quite as clear. From the example I see
> in commit id '7b7da9e28', the gluster syntax would have:
The present understanding is to have only one host. You are right, the "server" part is not necessary; I will have to check with the qemu community on this change.
> +file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
> +file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
> +file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
>
> whereas the VxHS syntax is:
>
> +file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
>
> FWIW: I also note there is no ".type=tcp" in your output - so perhaps the
> "default" is tcp unless otherwise specified, but I'm not sure of the qemu
> syntax requirements in this area. I assume that since there's only 1
> server, the ".0, .1, .2" become unnecessary (something added by commit id
> 'f1bbc7df4' for multiple gluster hosts).
That's correct. TCP is the default.
> I haven't closely followed the qemu syntax discussion, but would it be
> possible to use:
>
> +file.host=192.168.0.1,file.port=9999
That is correct - the above syntax would also work for us. I will pose this suggestion to the qemu community and follow up with their response.
It's not that important... I was just looking to simplify things and generate only what's required. You can continue using the server syntax - perhaps just leave a note/comment in the code indicating the decision point and move on.
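For reference, the two forms under discussion would look roughly like this. The first line is the syntax the v3 patch currently generates (taken from the test output above); the flattened file.host/file.port variant is only a hypothetical sketch, since it has not been confirmed here that qemu's vxhs driver accepts those keys:

Current v3 syntax:
-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
id=drive-virtio-disk0,cache=none

Hypothetical flattened form (assumes qemu would accept file.host/file.port for vxhs):
-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.host=192.168.0.1,file.port=9999,format=raw,if=none,\
id=drive-virtio-disk0,cache=none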
[...]
John

[...]
Hi! Thanks for explaining how TLS could work for block devices in general, and specifically for VxHS. I agree that the plan should be to first get the base VxHS patch ready (sans TLS support), and then build the TLS support in a series of patches on top of it. To that end, I have made changes to v3 to fix the problems pointed out in this series and plan to submit the new patch soon. I do have some confusion regarding your comment above, though - I thought I had already implemented the new syntax, similar to gluster's "-drive file.driver=gluster,file.volume=...", in this v3 patch series.
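To make the eventual TLS wiring a bit more concrete, here is a rough sketch of how a secure VxHS drive might be specified on the qemu command line once the TLS follow-up patches exist. The -object tls-creds-x509 options (id, dir, endpoint) are standard qemu; the file.tls-creds key and the certificate directory are assumptions for illustration only, not a confirmed part of the vxhs driver interface:

# assumption: vxhs would accept a tls-creds property in the new -drive syntax
-object tls-creds-x509,id=tls0,dir=/etc/pki/qemu,endpoint=client \
-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.server.host=192.168.0.1,file.server.port=9999,file.tls-creds=tls0,\
format=raw,if=none,id=drive-virtio-disk0,cache=none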
v3 changelog:
(1) Implemented the modern syntax for VxHS disk specification.
(2) Changed the qemuxml2argvdata VxHS test case to verify the new syntax.
Please see the newly added xml2argv test:

+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args

+-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
+file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
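For context, the domain XML that drives this .args output would look roughly like the sketch below. The values are taken from the test case above, but the element and attribute layout is an approximation based on the usual network-disk format; the authoritative version is the qemuxml2argv-disk-drive-network-vxhs.xml file in the patch:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <!-- sketch: protocol/name/host layout assumed to follow the other network disk types -->
      <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
        <host name='192.168.0.1' port='9999'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>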
Let me know if I'm still missing something on the new syntax. Thanks!
I've made some more changes on top of v3:
(1) Incorporated the review comments.
(2) Changed the code to error out with a proper error message when the legacy URI syntax is used, and added new test cases to check this.
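For clarity, the legacy URI-style drive specification that now triggers that error is along these lines; the URI matches the one in test 41 above, while the trailing options are illustrative only:

-drive file=vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251,\
format=raw,if=none,id=drive-virtio-disk0,cache=none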
Changes can be reviewed here before I email the new patch - https://github.com/libvirt/libvirt/compare/master...MittalAshish:basechanges...
I will send out the new patch if nothing else is missing in the base functionality. Thanks!
Sorry - I've been focused on my own work lately, but I will try to spend a few cycles on this today. I don't have the context fully "paged in" from the previous series and the recent responses - heck, I've almost forgotten the recent ones. I can guarantee, though, that you won't make 3.3 with any of these changes. Also, I'm not a fan of GitHub diff sets.

John

[...]