On 04/14/2017 07:26 PM, ashish mittal wrote:
Looking at the code, I suspect significant changes will be needed to
support passing TLS arguments for disk devices. Here's what I have
right now -
Sure, you're going to need to add some code, but there should be enough
reusable infrastructure in place now. I made a number of changes during
the 7.4 release to create "generic enough" APIs that both chardev TLS
and migration TLS can use. Adding a disk TLS option should be easy - it
was something I had in the back of my mind while altering those APIs.
I think if you use cscope and then search on 'haveTLS' you'll get the
necessary pieces. The 'migTLSAlias' may also give you some ideas.
int
qemuProcessPrepareDomain(virConnectPtr conn,
virQEMUDriverPtr driver,
virDomainObjPtr vm,
unsigned int flags)
{
...
VIR_DEBUG("Prepare chardev source backends for TLS");
qemuDomainPrepareChardevSource(vm->def, driver); <== This is
where the char devices prepare their TLS...
<=== This is where we would expect a generic "Prepare disk source
backends for TLS" for all the disk devices
<=== -- OR -- I can do something specific for VxHS here
I'm not yet sure you need this step for disk TLS. For chardev, it
"enforces" that if the qemu.conf chardev_tls setting is on, then TLS
gets enabled for any TCP chardev source object that doesn't already
carry an explicit tls='yes' or tls='no' attribute. I recall not agreeing
with exactly how this was implemented for chardev TLS, but I do
understand why it was done that way.
You need to decide how you see this being utilized. For example, if an
XML object had tls='yes' but the global qemu.conf variable was set to 0,
would you expect TLS to work? Conversely, if the global qemu.conf
setting was on, would you expect every disk that could use TLS to be
forced to try it unless the XML configuration explicitly disabled it?
In the long run, adding TLS options to disk startup would still be done
on a protocol-by-protocol basis.
If you did add this, it would be a "generic"
qemuDomainPrepareDiskSource() that traverses vm->def->ndisks (like
qemuDomainPrepareChardevSourceTLS does for each chardev source), looking
for disks with a <source> defined and checking the value of haveTLS in
order to decide whether to set or clear the flag based on the global
setting - see the rough sketch after the snippet below. You wouldn't
need the tlsFromConfig bit - that exists because of a cross-version
migration configuration issue that shouldn't apply for your purposes.
VIR_DEBUG("Add secrets to disks, hostdevs, and chardevs");
if (qemuDomainSecretPrepare(conn, driver, vm) < 0) <== perhaps
changes will be needed here also, but we plan to pass the cert
directory, endpoint and ID as plain text... still would need a place to
save these new values...
Yes, but more specifically in qemuDomainSecretDiskPrepare, in order to
look up the secret if it exists (the logic should be similar to
qemuDomainSecretChardevPrepare, but specific to disk needs).
goto cleanup;
...
}
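To make that a bit more concrete, here's a very rough, untested sketch
of the kind of loop I have in mind. The function name, the cfg->vxhsTLS
field, and the haveTLS member on the disk source are all hypothetical -
the chardev code keeps haveTLS on its own source struct, so the disk
source would need an equivalent:

/* Hypothetical sketch only - qemuDomainPrepareDiskSourceTLS,
 * cfg->vxhsTLS, and src->haveTLS don't exist today; they mirror the
 * chardev TLS handling. */
static void
qemuDomainPrepareDiskSourceTLS(virDomainDefPtr def,
                               virQEMUDriverConfigPtr cfg)
{
    size_t i;

    for (i = 0; i < def->ndisks; i++) {
        virStorageSourcePtr src = def->disks[i]->src;

        /* Only network disks speaking the VxHS protocol matter here */
        if (virStorageSourceGetActualType(src) != VIR_STORAGE_TYPE_NETWORK ||
            src->protocol != VIR_STORAGE_NET_PROTOCOL_VXHS)
            continue;

        /* XML tls='yes'/'no' wins; otherwise fall back to the
         * (hypothetical) qemu.conf default - the precedence question
         * from above. */
        if (src->haveTLS == VIR_TRISTATE_BOOL_ABSENT)
            src->haveTLS = cfg->vxhsTLS ? VIR_TRISTATE_BOOL_YES
                                        : VIR_TRISTATE_BOOL_NO;
    }
}

Whether you set the flag only for VxHS or for any network disk protocol
that grows TLS support later is exactly the decision point I mentioned.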
I'm thinking, I would also need to extend one of the following
structures to save the TLS related info.
struct _virDomainDiskDef {
}
--- OR ---
struct _qemuDomainDiskPrivate {
This one^^
}
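Roughly, the addition I have in mind (see step 3 below) is just one new
member; everything else in the struct stays as is:

struct _qemuDomainDiskPrivate {
    ...                          /* existing members unchanged */

    /* sketch: credentials info used when building the tls-creds-x509
     * object for this disk */
    qemuDomainSecretInfoPtr tlsinfo;
};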
Given that adding TLS support for VxHS (and disk devices in general)
will not be trivial, I want to check if this can be taken up at a
later time?
No, it'd be preferable to have a complete solution. That solution should
be a multi-patch series in which each patch compiles and passes checks
on its own and makes forward progress towards your goal.
Consider first getting the non-TLS environment working. You have some of
it working with the existing patches - it's the command-line generation
that still seems to need adjustment. I think I've already pointed out a
series or two that should help you in that endeavor.
Then for TLS, again, it's a multi-step process (some hypothetical
examples for steps 1-3 follow the list):
1. Modify qemu_conf.c and qemu.conf to look for new disk_vxhs_tls_x509*
settings, since I assume you'll either reuse the existing qemu TLS
definitions or (more likely) want a directory holding a VxHS-specific
environment. I suggest a "disk_vxhs_" prefix just to make it more
specific. If some day RBD, iSCSI, etc. support TLS, then they'd
conceivably have their own "disk_rbd_" or "disk_iscsi_" options, since
each would have its own server to authenticate against using TLS creds.
2. Add a "tls='yes'" attribute to the disk <source> entry (similar to
how chardev did things). This provides the mechanism to allow or deny
TLS credentials on a disk-by-disk basis.
3. Set up the qemu driver infrastructure to accept/recognize some sort
of tls option which would allow the tls-creds-x509 object to be added to
the command line. Start by adding a new entry to _qemuDomainDiskPrivate
such as "qemuDomainSecretInfoPtr tlsinfo;". Then qemuDomainSecretDiskPrepare
is where you'll need to adjust things to add a possible secret that's
used to access the TLS credentials (see qemuDomainSecretChardevPrepare
for an example). Then, when the disk is added to the command line,
you'll have the necessary pieces for the two calls,
qemuBuildObjectSecretCommandLine and qemuBuildTLSx509CommandLine, used
to generate the optional secret object and the required tls-creds-x509
object. The alias you use for that object is what then gets added to the
command-line generation for the VxHS disk.
4. You'll need to follow a similar process for hotplug support, except
there are slightly different calls in order to hot plug the secret and
TLS objects.
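To illustrate steps 1-3 - all the option names, paths, and aliases below
are hypothetical placeholders, just to show the shape I mean, not
anything that exists today.

For step 1, the qemu.conf additions might look much like the existing
chardev_tls knobs:

  # Default for VxHS disks whose XML doesn't say tls='yes' or tls='no'
  #disk_vxhs_tls = 1
  #
  # Directory containing the x509 CA/cert/key used for VxHS connections
  #disk_vxhs_tls_x509_cert_dir = "/etc/pki/libvirt-vxhs"

For step 2, the per-disk attribute would sit on the <source> element of
the VxHS disk, roughly:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251' tls='yes'>
      <host name='192.168.0.1' port='9999'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

For step 3, the generated command line would then end up looking much
like the qemu-io example you quoted earlier, roughly:

  -object tls-creds-x509,id=objvirtio-disk0_tls0,dir=/etc/pki/libvirt-vxhs,endpoint=client \
  -drive file.driver=vxhs,file.tls-creds=objvirtio-disk0_tls0,\
  file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
  file.server.host=192.168.0.1,file.server.port=9999,\
  format=raw,if=none,id=drive-virtio-disk0,cache=none

where the tls-creds value matches whatever alias/id you end up using for
the -object tls-creds-x509 entry.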
Although everything needs to go in for the same libvirt release, it's OK
to get things working and reviewed as they're ready. Be cognizant that
libvirt releases are monthly - dropping a large series near the end of
the month would probably result in the review/push slipping into the
subsequent month's release. Conversely, attempting to cram everything
into too few patches will just result in a request to split things up
into manageable chunks.
John
Thanks,
Ashish
On Tue, Apr 11, 2017 at 4:26 PM, ashish mittal <ashmit602(a)gmail.com> wrote:
> Thanks for the information. I'll work with this and get back if I get
> stuck somewhere.
>
> My immediate objective is to figure out how to pass the TLS x509
> certificate information to the vxhs block device on the qemu command
> line. I guess I expected some other block device (i.e. NBD) to call
> the qemuBuildTLSx509CommandLine(), but got confused when I did not
> find that...
>
> On Tue, Apr 11, 2017 at 3:47 PM, John Ferlan <jferlan(a)redhat.com> wrote:
>>
>>
>> On 04/10/2017 07:32 PM, ashish mittal wrote:
>>> Hi,
>>>
>>> I'm trying to figure out what changes are needed in the libvirt vxhs
>>> patch to support passing TLS X509 arguments to qemu, similar to the
>>> following -
>>>
>>> Sample QEMU command line passing TLS credentials to the VxHS block
>>> device (run in secure mode):
>>> ./qemu-io --object
>>> tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read
>>> -v 66000 2.5k' 'json:{"server.host": "127.0.0.1", "server.port": "9999",
>>> "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
>>>
>>> I was hoping to find some NBD code related to this, but not able to
>>> locate it. Any pointers will be appreciated.
>>
>> Well you have a couple of things to deal with... There's the creation of
>> the TLS object and there's altering the parameters used for the qemu
>> command based on your needs/model.
>>
>> First off you'll need to figure out where/how you're going to define
>> where the TLS creds exist. For that, I suspect you'll have code similar
>> to how chardevTLS support was added. Essentially some way to either use
>> an existing TLS environment or a way to allow someone to define a vxhs
>> specific environment (hint, see src/qemu/qemu_conf.c, src/qemu/qemu.conf
>> - I've made changes recently there too).
>>
>> For the TLS object creation on the command line, see
>> qemuBuildTLSx509CommandLine to see how the code builds the
>> "tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client"
portion
>> of your command line.
>>
>> I forget if hot plug was in your plan, but see qemuDomainGetTLSObjects,
>> qemuDomainAddTLSObjects, and qemuDomainDelTLSObjects for that.
>>
>> The rest of the command line is going to be a bit tricky since using the
>> "newer" driver syntax for libvirt is "sparse". Traditionally
libvirt has
>> used "-drive file=[$uri:]$path,format=$driver,..." (use grep
"\-drive
>> file" tests/*/*.args from a libvirt git directory - you can grep that
>> output for gluster or rbd to see the uri format).
>>
>> IIUC the qemu changes correctly though, you cannot use that "file="
>> syntax, instead you'll need to format the command line similar to how
>> things were done for gluster to add multiple host support where the
>> syntax is "-drive 'file.driver=gluster,file.volume=..." (use grep
>> "\-drive file.driver" tests/*/*.args to see how this is done for
gluster).
>>
>> That code/support was added in a series starting at commit id '22ad4a7c'
>> and working your way forward through about 18 patches. Using a visual
>> tool like gitk helps a lot...
>>
>> I think what will be easiest is to start at that commit and look "up"
>> for gluster specific changes. Be careful not to fully cut-n-paste
>> because there have been patches since that time to fix some issues with
>> the initial implementation. I point it out only as a way for you to see
>> which modules and where "similar" code exists.
>>
>> You'll also note there is an nbd patch in that series of patches - not
>> sure how much that helps, but it perhaps gives you some amount of
>> guidelines. Although I don't believe nbd was added to the command line -
>> it was just a way of syntax generation/testing.
>>
>>
>> John
>>
>>>
>>> Thanks,
>>> Ashish
>>>
>>> On Wed, Feb 1, 2017 at 8:36 AM, John Ferlan <jferlan(a)redhat.com> wrote:
>>>> [...]
>>>> Pressed send too soon, sigh.
>>>>
>>>>
>>>>>>>
>>>>>>> #1. Based on Peter's v2 comments, we don't want to support the
>>>>>>> older/legacy syntax for VxHS, so it's something that should be removed -
>>>>>>> although we should check for it being present and fail if found.
>>>>>>>
>>>>>>
>>>>>> I am testing with changed code to return error if legacy syntax is
>>>>>> found for VxHS. Also added a test case to check for failure on legacy
>>>>>> syntax and it seems to pass (test #41 below).
>>>>>>
>>>>>> Then I added a pass test case to check conversion from new native
>>>>>> syntax to XML (test #40 below). That test fails with error
>>>>>> 'qemuParseCommandLineDisk:901 : internal error: missing file parameter
>>>>>> in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b...'
>>>>>
>>>>> The qemu_parse_command.c changes while nice to have weren't even updated
>>>>> when multiple gluster servers were added (e.g. commit id '' or '7b7da9e28')
>>>>> Check the changes to add the new s
>>>>>
>>>>> IOW: This code knows how to parse something like:
>>>>>
>>>>> -drive
>>>>> 'file=gluster+unix:///Volume2/Image?socket=/path/to/sock,format=raw,if=none,id=drive-virtio-disk1'
>>>>>
>>>>> but it's clueless for:
>>>>>
>>>>> -drive file.driver=gluster,file.volume=Volume3,file.path=/Image.qcow2,\
>>>>> file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
>>>>> file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
>>>>> file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
>>>>> if=none,id=drive-virtio-disk2 \
>>>>> -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,\
>>>>> id=virtio-disk2
>>>>>
>>>>> See
>>>>>>
>>>>>> Looks like none of the existing tests in qemuargv2xmltest test for the
>>>>>> parsing of new syntax, and qemuParseCommandLineDisk() expects to find
>>>>>> 'file=' for a drive or it errors out. If this is true, will it be able
>>>>>> to parse the new syntax? Some help here please!
>>>>
>>>> So I wouldn't expect the VxHS code to be able to do that unless you
>>>> wanted to be adventurous. The good news is that this code is primarily
>>>> for developers that need to take a qemu command line to generate the
>>>> libvirt syntax. It has not really been kept up to date with all the most
>>>> recent command line changes. I started to try over a year ago, but got
>>>> very side tracked.
>>>>
>>>>>>
>>>>>> Output from the newly added test cases (40 should pass and 41 checks
>>>>>> for error):
>>>>>>
>>>>>> 40) QEMU ARGV-2-XML disk-drive-network-vxhs
>>>>>> ... Got unexpected warning from qemuParseCommandLineString:
>>>>>> 2017-01-28 00:57:30.814+0000: 10391: info : libvirt version: 3.0.0
>>>>>> 2017-01-28 00:57:30.814+0000: 10391: info : hostname: localhost.localdomain
>>>>>> 2017-01-28 00:57:30.814+0000: 10391: error :
>>>>>> qemuParseCommandLineDisk:901 : internal error: missing file parameter
>>>>>> in drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
>>>>>> libvirt: QEMU Driver error : internal error: missing file parameter in
>>>>>> drive 'file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,id=drive-virtio-disk0,cache=none'
>>>>>> FAILED
>>>>>>
>>>>>> 41) QEMU ARGV-2-XML disk-drive-network-vxhs-fail
>>>>>> ... Got expected error from qemuParseCommandLineString:
>>>>>> libvirt: QEMU Driver error : internal error: VxHS protocol does not
>>>>>> support URI syntax
>>>>>> 'vxhs://192.168.0.1:9999/eb90327c-8302-4725-9e1b-4e85ed4dc251'
>>>>>> OK
>>>>>> 42) QEMU ARGV-2-XML disk-usb                                 ... OK
>>>>>>
>>>>>>
>>>>>>
>>>>>>> #2. Is the desire to ever support more than 1 host? If not, then is the
>>>>>>> "server" syntax you've borrowed from the Gluster code necessary? Could
>>>>>>> you just go with the single "host" like NBD and SSH. As it relates to
>>>>>>> the qemu command line - I'm not quite as clear. From the example I see
>>>>>>> in commit id '7b7da9e28', the gluster syntax would have:
>>>>>>>
>>>>>>
>>>>>> Present understanding is to have only one host. You are right, the
>>>>>> "server" part is not necessary. Will have to check with the qemu
>>>>>> community on this change.
>>>>>>
>>>>>>> +file.server.0.type=tcp,file.server.0.host=example.org,file.server.0.port=6000,\
>>>>>>> +file.server.1.type=tcp,file.server.1.host=example.org,file.server.1.port=24007,\
>>>>>>> +file.server.2.type=unix,file.server.2.socket=/path/to/sock,format=qcow2,\
>>>>>>>
>>>>>>> whereas, the VxHS syntax is:
>>>>>>>
>>>>>>> +file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
>>>>>>>
>>>>>>> FWIW: I also note there is no ".type=tcp" in your output - so perhaps
>>>>>>> the "default" is tcp unless otherwise specified, but I'm not sure of the
>>>>>>> qemu syntax requirements in this area. I assume that since there's only
>>>>>>> 1 server, the ".0, .1, .2" become unnecessary (something added by commit
>>>>>>> id 'f1bbc7df4' for multiple gluster hosts).
>>>>>>>
>>>>>>
>>>>>> That's correct. TCP is the default.
>>>>>>
>>>>>>> I haven't closely followed the qemu syntax discussion, but would it
>>>>>>> be possible to use:
>>>>>>>
>>>>>>> +file.host=192.168.0.1,file.port=9999
>>>>>>>
>>>>>>
>>>>>> That is correct. Above syntax would also work for us. I will pose this
>>>>>> suggestion to the qemu community and update with their response.
>>>>>>
>>>>
>>>> It's not that important... I was looking for a simplification and
>>>> generation of only what's required. You can continue using the server
>>>> syntax - perhaps just leave a note/comment in the code indicating the
>>>> decision point and move on.
>>>>
>>>> [...]
>>>>
>>>> John