Issue 90 Further Clarifications

Before we start making changes and solidifying our XML parameter choices, we have a few clarifying questions about the issue we'd like to get out of the way.

1. In src/qemu/qemu_capabilities.h, we found the string "/* -drive file.driver=vxhs via query-qmp-schema */" after the QEMU_CAPS_VXHS declaration. What is the purpose of these strings, and how do we modify them to make sense for nfs? Would we simply mirror what is done for VXHS, adding nfs as the protocol instead?

2. Where is domain XML parsed and formatted? Is that what is referred to by the schema formats in domaincommon.rng?

3. In src/qemu/qemu_block.c, the JSON object arguments currently present in qemuBlockStorageSourceGetVxHSProps(...) are not the same ones listed in the example commit. What is the reason for this change, and how should we take it into account when implementing a new protocol type?

Additionally, we were hoping we could get some clarification on the proper way to handle submitting patches with multiple commits. If we receive feedback for only part of a patch, for example, how would we be meant to update it? Would we simply amend the particular commit that needed updates, and if so, how do we amend a commit other than the most recent commit? Should we send in our patches for review one at a time, or try to get all of them made and then send them in all at once?

On Thu, Nov 19, 2020 at 03:48:48PM -0600, Dustan B Helm wrote:
Before we start making changes and solidifying our XML parameter choices, we have a few clarifying questions about the issue we'd like to get out of the way.
1.
In src/qemu/qemu_capabilities.h, we found the string "/* -drive file.driver=vxhs via query-qmp-schema */" after the QEMU_CAPS_VXHS declaration. What is the purpose of these strings, and how do we modify them to make sense for nfs? Would we simply mirror what is done for VXHS, adding nfs as the protocol instead?
These comments are just a hint/reminder to readers about what QEMU command line option(s) the capability check is tracking the availability of. That particular example might be a bit misleading, since in the qemu_capabilities.c file we're actually looking for the blockdev-add/arg-type/+vxhs feature, not -drive. QEMU has 2 ways of configuring disks: -drive is the historical main way, and -blockdev is the modern way that libvirt introduced support for relatively recently. We actually end up having to support both approaches in libvirt currently, as we try to make libvirt work with both old and new QEMU versions. Peter can probably offer better suggestions than me about what specific thing to probe for 'nfs'.
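To make the two styles concrete, here is a rough sketch of the same local qcow2 disk configured both ways (file and node names are invented for the example, and exact option spellings vary across QEMU versions):

```
# historical: one -drive option describes backend and frontend together
qemu-system-x86_64 -drive file=guest.qcow2,format=qcow2,if=virtio

# modern: -blockdev builds the backend node graph, -device attaches it
qemu-system-x86_64 \
    -blockdev driver=file,node-name=storage0,filename=guest.qcow2 \
    -blockdev driver=qcow2,node-name=fmt0,file=storage0 \
    -device virtio-blk-pci,drive=fmt0
```

The -blockdev form is what the QEMU_CAPS_* probing against query-qmp-schema is really tracking.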
2.
Where is domain XML parsed and formatted? Is that what is referred to by the schema formats in domaincommon.rng?
The domaincommon.rng file provides the RelaxNG schema, which is used for (optionally) validating XML files before parsing. The actual parser lives in src/conf/domain_conf.{c,h} files. There are also docs for users about the schema in docs/formatdomain.rst
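For orientation, this is the sort of <disk> element that domaincommon.rng validates and src/conf/domain_conf.c parses and formats — a VxHS network disk like the one this issue mirrors (the volume name and host are illustrative):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='example.com' port='9999'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```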
3.
In src/qemu/qemu_block.c, the json object arguments currently present in qemuBlockStorageSourceGetVxHSProps(...) are not the same ones listed in the example commit. What is the reason for this change, and how should we take it into account when implementing a new protocol type?
I'll leave this question to Peter too.
Additionally, we were hoping we could get some clarification on the proper way to handle submitting patches with multiple commits. If we receive feedback for only part of a patch, for example, how would we be meant to update it? Would we simply amend the particular commit that needed updates, and if so, how do we amend a commit other than the most recent commit? Should we send in our patches for review one at a time, or try to get all of them made and then send them in all at once?
If you get comments on a particular patch in a series, then you need to use "git rebase -i master" to go back and edit the original patch that had the comment on it. We don't want "fixup" commits at the end of a series.

When posting patches, you should always post an entire self-contained series. e.g. if you posted a series of 5 patches originally, and got comments on 2 patches, update the two patches in question, and then re-post the entire series of 5 patches. If you've only received feedback on a couple of patches, it is hard to know if that means the others are fine, or if no one has had time to review them yet. You don't have to wait for explicit comments on every patch before re-posting a version 2 of a series. A rule of thumb might be to wait a day to see if other patches get more comments, and then repost a v2 with updates.

The most important thing with a patch series is that libvirt be able to successfully compile and run tests at each individual patch in the series. A tip to validate this locally before sending is to do something like

  git rebase -i master -x "ninja -C build test"

This will rebase your local branch against master, and stop at each patch in the branch and run tests using ninja (assumes you used "meson build" initially).

In terms of how long to wait before sending patches, there's no perfect answer. Generally you want a patch series to accomplish something "useful", where "useful" involves a bit of personal judgement. For example, don't send a patch that adds a function but then doesn't use it anywhere. Instead wait until you've got the corresponding usage of that function in another patch. If you know you're likely to end up with a set of 10 patches to complete a particular feature, generally you would wait to get the complete set ready, rather than drip-feeding the patches one at a time.

A counter example though: if, when working on your feature, you see a bunch of existing code that would benefit from some refactoring or cleanup, then often people might send a series of patches just to do the cleanup/refactoring, then send the impl of their desired feature as a separate series later.

This might sound complex, but don't worry about it too much. This kind of thing is a learning experience for everyone, so we're not expecting people to get it perfect each time. We'll try to give positive suggestions if there are improvements that could be made.

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org          -o-           https://fstop138.berrange.com :|
|: https://entangle-photo.org   -o-   https://www.instagram.com/dberrange :|
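The rebase-and-test workflow described above can be tried out on a throwaway repository; everything below — repo name, file, commit messages, and the stand-in check — is invented for the demo, with "test -s file.txt" standing in for "ninja -C build test":

```shell
# Scratch-repo demo of "run the tests at every patch in the series".
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name "Demo User"
echo 0 > file.txt
git add file.txt && git commit -qm 'initial commit'
git branch -M master                  # normalise the default branch name
git checkout -qb feature
echo 1 >> file.txt && git commit -qam 'patch 1'
echo 2 >> file.txt && git commit -qam 'patch 2'
# Replay each patch of the branch, running the check after every commit;
# if the check fails on some patch, the rebase stops there for fixing.
git rebase master --exec 'test -s file.txt'
echo "patches on branch: $(git rev-list --count master..feature)"
```

Running a failing command in --exec leaves you stopped at the offending patch, where you can amend it and then "git rebase --continue".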

On Fri, Nov 20, 2020 at 09:41:44 +0000, Daniel Berrange wrote:
On Thu, Nov 19, 2020 at 03:48:48PM -0600, Dustan B Helm wrote:
1.
In src/qemu/qemu_capabilities.h, we found the string "/* -drive file.driver=vxhs via query-qmp-schema */" after the QEMU_CAPS_VXHS declaration. What is the purpose of these strings, and how do we modify them to make sense for nfs? Would we simply mirror what is done for VXHS, adding nfs as the protocol instead?
For new features though I strongly prefer if we no longer update the old code. Implement the new protocol only for -blockdev.
The simplest way to probe for nfs protocol support is to use the following query string in virQEMUCapsQMPSchemaQueries[]:

  "blockdev-add/arg-type/+nfs"

Looking at the @BlockdevOptionsNfs struct in qemu.git/qapi/block-core.json it seems that all properties were introduced at the same time, so the check doesn't need to be more specific. I can provide more insight on how virQEMUCapsQMPSchemaQueries[] works if you are interested, but the above will work.
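For context, that query string matches the 'nfs' branch of the blockdev-add argument type in QEMU's QMP schema. A hedged sketch of the kind of QMP command it corresponds to — field spellings follow @BlockdevOptionsNfs as best as can be read from block-core.json, and the host, path, and ids are invented:

```json
{
  "execute": "blockdev-add",
  "arguments": {
    "driver": "nfs",
    "node-name": "mynfsdisk",
    "server": { "type": "inet", "host": "example.com" },
    "path": "/export/guest.img",
    "user": 1000,
    "group": 1000
  }
}
```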
2.
Where is domain XML parsed and formatted? Is that what is referred to by the schema formats in domaincommon.rng?
Adding the NFS protocol itself is rather trivial because we use enum-to-string converters which cause a compilation failure if you don't populate the strings, so adding the protocol type will be enough to figure out where. If you want to implement other properties of the nfs protocol driver such as @user or @group, I suggest you first send an RFC mail with your proposed XML addition for review before diving into the rng schema and XML parser/formatter. Looking at the options in @BlockdevOptionsNfs, @user and @group seem a bit interesting. I'd not worry about the rest, probably.
3.
In src/qemu/qemu_block.c, the json object arguments currently present in qemuBlockStorageSourceGetVxHSProps(...) are not the same ones listed in the example commit. What is the reason for this change, and how should we take it into account when implementing a new protocol type?
I'm not going to outline why we've changed the old commit. You are implementing new code. You'll need to add a handler to qemuBlockStorageSourceGetBackendProps which converts the appropriate fields of virStorageSource to a virJSONValue object which maps to the qemu properties according to the @BlockdevOptionsNfs qemu struct.

To verify that it's correct you can add a TEST_DISK_TO_JSON case to tests/qemublocktest.c (input files are in libvirt/tests/qemublocktestdata/xml2json/) where you provide a disk XML snippet and the output is what we'd use with qemu. Note that the 'onlytarget' boolean formats a string which is written into the qcow2 header file if you create an overlay/external snapshot, so it must not include any transient or authentication data. All of the above is QMP-schema validated, so you'll be notified if it doesn't conform to qemu's monitor schema.

You'll also need to implement a backing store string parser (that is the string which qemu writes to the overlay as noted above). The parser entry point is virStorageSourceParseBackingJSON and the backends for it are registered in the virStorageSourceJSONDriverParser struct. The tests for the above are in tests/virstoragetest.c as TEST_BACKING_PARSE.

Then, depending on whether you actually want to add support for image creation (e.g. to support creating snapshots backed by the NFS storage directly), you need to implement handling in qemuBlockStorageSourceCreateGetStorageProps.

You'll definitely see all the places that might need implementing once you add the new protocol entry to enum virStorageNetProtocol, as in many cases we use switch statements with a proper type where the compiler reminds you that you need to add handling for the new value in the given patch. Since implementation of the qemu bits should be in a separate commit from the one adding the parser bits and thus the new enum, it's okay to just add the enum value to the switch cases and implement it later.
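A rough, unverified sketch of what such a handler might look like, modeled on the shape of the VxHS one — the function name, the virStorageSource field usage, and the virJSONValueObjectCreate modifier letters here are guesses for illustration, not libvirt's actual code:

```c
/* Hypothetical sketch only: turn a virStorageSource into JSON properties
 * matching qemu's @BlockdevOptionsNfs, as GetBackendProps would call it. */
static virJSONValuePtr
qemuBlockStorageSourceGetNFSProps(virStorageSourcePtr src)
{
    virJSONValuePtr server = NULL;
    virJSONValuePtr props = NULL;

    /* BlockdevOptionsNfs expects a server object ... */
    if (virJSONValueObjectCreate(&server,
                                 "s:type", "inet",
                                 "s:host", src->hosts[0].name,
                                 NULL) < 0)
        return NULL;

    /* ... plus the export path (and optionally user/group). */
    if (virJSONValueObjectCreate(&props,
                                 "s:driver", "nfs",
                                 "a:server", &server,
                                 "s:path", src->path,
                                 NULL) < 0) {
        virJSONValueFree(server);
        return NULL;
    }

    return props;
}
```

Whatever the real shape ends up being, the qemublocktest TEST_DISK_TO_JSON case mentioned above is what confirms the generated properties against qemu's schema.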

On Fri, Nov 20, 2020 at 5:33 AM Peter Krempa <pkrempa@redhat.com> wrote:
> If you want to implement other properties of the nfs protocol driver such as @user or @group, I suggest you first send an RFC mail with your proposed XML addition for review before diving into the rng schema and XML parser/formatter.

We weren't exactly sure what you meant by submitting our proposed XML additions if we are to avoid diving into the schemas. Our idea is to have the NFS generate XML based on issue 90, where you have a network disk, a source protocol, a host, and a new NFS tag which has a user attribute and a group attribute (both required). In terms of the rng schema, we would make it look similar to the VxHS schema (diskSourceNetworkProtocolVxHS) except that below the diskSourceNetworkHost we would also interweave a diskSourceNFS reference, which would require both user and group.

On 11/20/20 1:07 PM, Dustan B Helm wrote:
We weren't exactly sure what you meant by submitting our proposed XML additions if we are to avoid diving into the schemas. Our idea is to have the NFS generate XML based on issue 90, where you have a network disk, a source protocol, a host, and a new NFS tag which has a user attribute and a group attribute (both required). In terms of the rng schema, we would make it look similar to the VxHS schema (diskSourceNetworkProtocolVxHS) except that below the diskSourceNetworkHost we would also interweave a diskSourceNFS reference, which would require both user and group.
(since it's already quite late in the day on a Friday where Peter is located, I'll make an attempt to answer for him :-)

I believe what he's asking for is just an email that says something like (names and organizations completely fabricated on the spot for sake of example):

"Our idea is to implement this new feature by adding a new value "blorg" to the <bipple> element "blox", and an optional <bumble> subelement of <bipple> that contains blahblahbobloblaw details, like this:

  <device something='xyzzy'>
    <bipple blox='blorg'>
      <blorg blahblah='bobloblaw'>lawblog</blorg>
    </bipple>
  ...

So, what do you think?"

or whatever. i.e., not a vague description or a formal RNG representation of the changes you want to make, but a short and specific description along with an example of what those changes will look like in an actual XML config document - something like the descriptions and example XML bits in https://www.libvirt.org/formatdomain.html. This is *much* quicker to parse and discuss than an RNG grammar :-)

On Fri, Nov 20, 2020 at 3:22 PM Laine Stump <laine@redhat.com> wrote:
On 11/20/20 1:07 PM, Dustan B Helm wrote:
On Fri, Nov 20, 2020 at 5:33 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Fri, Nov 20, 2020 at 09:41:44 +0000, Daniel Berrange wrote:
On Thu, Nov 19, 2020 at 03:48:48PM -0600, Dustan B Helm wrote:
[image: Dustan Helm] <https://gitlab.com/dustan.helm> Dustan Helm <https://gitlab.com/dustan.helm> @dustan.helm <https://gitlab.com/dustan.helm> · just now <https://gitlab.com/libvirt/libvirt/-/issues/90#note_451306432> <https://gitlab.com/libvirt/libvirt/-/issues/90#>
Before we start making changes and solidifying our XML parameter choices, we have a few clarifying questions about the issue we'd like to get out of the way.
1.
In src/qemu/qemu_capabilities.h, we found the string "/* -drive file.driver=vxhs via query-qmp-schema */" after the QEMU_CAPS_VXHS declaration. What is the purpose of these strings, and how do we modify them to make sense for nfs? Would we simply mirror what is done for VXHS, adding nfs as the protocol instead?
These comments are just a hint/reminder to readers about what QEMU command line option(s), the capability check is tracking the availability of. That particular example might be a bit misleading, since in the qemu_capabilities.c file, we're actually looking for the blockdev-arg/arg-type/+vxhs feature, not -drive. QEMU has 2 ways of configuring disks, -drive is the historical main way, and -blockdev is the modern way that libvirt introduced support for relatively recently. We actually end up having to support both approachs in libvirt currrently, as we try to make libvirt work with both old and new QEMU versions.
For new features though I strongly prefer if we no longer update the old code. Implement the new protocol only for -blockdev.
Peter can probably offer better suggestions than me about what specific thing to probe for 'nfs'.
The simplest way to probe for nfs protocol support is to use the following query string in virQEMUCapsQMPSchemaQueries[]:
"blockdev-add/arg-type/+nfs"
Looking at the @BlockdevOptionsNfs struct in qemu.git/qapi/block-core.json it seems that all properties were introduced at same time, so the check doesn't need to be more specific.
I can provide more insight on how virQEMUCapsQMPSchemaQueries[] works if you are interested, but the above will work.
2.
Where is domain XML parsed and formatted? Is that what is referred to by the schema formats in domaincommon.rng?
The domaincommon.rng file provides the RelaxNG schema, which is used for (optionally) validating XML files before parsing.
The actual parser lives in src/conf/domain_conf.{c,h} files.
There are also docs for users about the schema in docs/formatdomain.rst
Adding the NFS protocol itself is rather trivial because we use enum to string convertors which cause a compilation failure if you don't populate the strings, so adding the protocol type will be enough to figure out where.
If you want to implement other properties of the nfs protocol driver such as @user or @group. I suggest you first send a RFC mail with your proposed XML addition for review before diving into the rng schema and XML/formatter parser.
Looking at the options in @BlockdevOptionsNfs @user and @group seem a bit interesting. I'd not worry with the rest probably.
3.
In src/qemu/qemu_block.c, the json object arguments currently present in qemuBlockStorageSourceGetVxHSProps(...) are not the same ones listed in the example commit. What is the reason for this change, and how should we take it into account when implementing a new protocol type?
I'll leave thi question to Peter too.
I'm not going to outline why we've changed the old commit. You are implementing new code. You'll need to add a handler to qemuBlockStorageSourceGetBackendProps which converts the appropriate fields of virStorageSource to a virJSONValue object which maps to the qemu properties according to the @BlockdevOptionsNfs qemu struct.
To verify that it's correct you can add a TEST_DISK_TO_JSON case to tests/qemublocktest.c (input files are in libvirt/tests/qemublocktestdata/xml2json/ ) where you provide a disk XML snippet and the output is what we'd use with qemu.
Note that the 'onlytarget' boolean formats a string which is written into the qcow2 header file if you create an overlay/external snapshot, so it must not include any transient or authentication data.
All of the above is QMP-schema validated so you'll be notified if it doesn't conform to qemu's monitor schema.
You'll also need to implement a backing store string parser (that is the string which qemu writes to the overlay as noted above). The parser entry point is virStorageSourceParseBackingJSON and the backends for it are registered in virStorageSourceJSONDriverParser struct.
The tests for the above are in tests/virstoragetest.c as TEST_BACKING_PARSE.
Then depending on whether you actually want to add support for image creation (e.g. to support creating snapshots backed by the NFS storage directly) you then need to implement hanling in qemuBlockStorageSourceCreateGetStorageProps.
You'll definitely see all the places that might need implementing once you add the new protocol entry to enum virStorageNetProtocol as we in many cases use switch statements with proper type where the compiler reminds you that you need to add the handling for the new value in the given patch.
Since implementation of the qemu bits should be in a separate commit from the one adding the parser bits and thus the new enum, it's okay to just add the enum value to the swithc case and implement it later.
We weren't exactly sure what you meant by submitting our proposed XML additions if we are to avoid diving into the schemas. Our idea is to have the NFS generate XML based on issue 90, where you have a network disk, a source protocol, a host, and a new NFS tag which has a user attribute and a group attribute (both required). In terms of the rng schema, we would make it look similar to the VxHS schema (diskSourceNetworkProtocolVxHS) except that below the diskSourceNetworkHost we would also interweave a diskSourceNFS reference, which would require both user and group.
(since it's already quite late in the day on a Friday where Peter is located, I'll make an attempt to answer for him :-)
I believe what he's asking for is just an email that says something like (names and organizations completely fabricated on the spot for sake of example):
"Our idea is to implement this new feature by adding a new value "blorg" to the <bipple> element's "blox" attribute, and an optional <blorg> subelement of <bipple> that contains blahblahbobloblaw details, like this:
<device something='xyzzy'>
<bipple blox='blorg'>
<blorg blahblah='bobloblaw'>lawblog</blorg>
</bipple>
...
So, what do you think?"
or whatever. i.e., not a vague description or a formal RNG representation of the changes you want to make, but a short and specific description along with an example of what those changes will look like in an actual XML config document, something like the descriptions and example XML bits in https://www.libvirt.org/formatdomain.html. This is *much* quicker to parse and discuss than an RNG grammar :-)
We plan to support NFS protocol according to the example XML from Issue 90 <http://gitlab.com/libvirt/libvirt/-/issues/90>. Since there is already support for network disks of different protocol types and host information, we think that the only new XML information we will add is an <nfs> element which will be a subelement of <source>, with attributes “user” and “group” (both strings). This element will only be generated if the source protocol is “nfs” and we assume that both “user” and “group” will be required.
Here is the XML example given in the issue for reference:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='nfs' name='PATH'>
<host name='example.com' port='2049'/>
<nfs user='USER' group='GROUP'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
What do you think of these proposed changes? Should either of the <nfs> tag's string attributes be optional?

On Sat, Nov 21, 2020 at 11:20:57 -0600, Dustan B Helm wrote:
We plan to support NFS protocol according to the example XML from Issue 90 <http://gitlab.com/libvirt/libvirt/-/issues/90>. Since there is already support for network disks of different protocol types and host information, we think that the only new XML information we will add is an <nfs> element which will be a subelement of <source>, with attributes “user” and “group” (both strings). This element will only be generated if the source protocol is “nfs” and we assume that both “user” and “group” will be required.
Here is the XML example given in the issue for reference:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='nfs' name='PATH'>
<host name='example.com' port='2049'/>
<nfs user='USER' group='GROUP'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Sounds reasonable to me. We tend to name elements equivalent to the <nfs> you propose by their purpose (such as <auth>, <initiator>, <cookies> for other protocols), but in this case I don't have a better suggestion, so going with <nfs> is reasonable. Since you are proposing 'user' and 'group' to be strings while qemu accepts only numeric UID/GID, please use the same conversion code we have for the <inituser> and <initgroup> values in regards to forcing a numeric value to skip being interpreted: https://www.libvirt.org/formatdomain.html#container-boot
What do you think of these proposed changes? Should either of the <nfs> tag's string attributes be optional?
In this case qemu doesn't mandate the use of the user/group fields, so you can make the <nfs> element and both user and group optional, especially since it's only a workaround for the broken-by-design NFS "security". You can claim that a hypervisor-default uid/gid is used when the fields are not present. You also probably want to mention in the documentation that in most cases qemu is running as non-root and thus doesn't have access to privileged ports; thus the export has to use the 'insecure' option to allow non-privileged ports. One further thing possibly worth mentioning is that the name='' attribute starts with the NFS export name.

On Sat, Nov 21, 2020 at 11:20:57 -0600, Dustan B Helm wrote:
We plan to support NFS protocol according to the example XML from Issue 90 <http://gitlab.com/libvirt/libvirt/-/issues/90>. Since there is already support for network disks of different protocol types and host information, we think that the only new XML information we will add is an <nfs> element which will be a subelement of <source>, with attributes “user” and “group” (both strings). This element will only be generated if the source protocol is “nfs” and we assume that both “user” and “group” will be required.
Here is the XML example given in the issue for reference:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='nfs' name='PATH'>
<host name='example.com' port='2049'/>
<nfs user='USER' group='GROUP'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Sounds reasonable to me. We tend to name elements equivalent to <nfs> you propose by their purpose (such as <auth> <initiator> <cookies> for other protocols) but in this case I don't have a better suggestion so going with <nfs> is reasonable.
Since you are proposing 'user' and 'group' to be strings while qemu accepts only numeric UID/GID, please use the same conversion code we have for the <inituser> and <initgroup> values in regards to forcing a numeric value to skip being interpreted:
https://www.libvirt.org/formatdomain.html#container-boot
On Mon, Nov 23, 2020 at 2:33 AM Peter Krempa <pkrempa@redhat.com> wrote:
The linked documentation <https://www.libvirt.org/formatdomain.html#container-boot> suggests that providing a “+” prefix will force attribute data to be interpreted numerically. Since QEMU requires that the group and user attributes be numeric id values, would we want to simply insert a “+” prefix to these attributes when they are provided explicitly (instead of using the hypervisor default values, which we assume will just be numeric data)?
What do you think of these proposed changes? Should either of the <nfs>
tag's string attributes be optional?
In this case qemu doesn't mandate the use of the user/group fields, so you can make the nfs element and both user and group optional, especially since it's only a workaround for the broken-by-design NFS "security".
You can claim that a hypervisor-default uid/gid is used when the fields are not present.
We have assumed that because “group” and “user” are optional, they are not mutually required; that is, one or the other may be provided explicitly, in which case the other is assumed to be the hypervisor default. Is this correct?
Additionally, when both default values for the hypervisor are to be used (that is, there is no explicitly given nfs-user or nfs-group attribute), would we simply remove the entire <nfs> tag as it would be empty? Or is it still necessary in order to bypass the “broken by design” NFS security?
You also probably want to mention in the documentation that in most cases qemu is running as non-root and thus doesn't have access to privileged ports. Thus the export has to use the 'insecure' option to allow non-privileged ports.
One further thing possibly worth mentioning is that the name='' attribute starts with the NFS export name.
Does the NFS export requiring the “insecure” option in most cases have implications for the code we add to support the protocol, beyond documentation of the usage semantics? Also, what specifically is referred to by the “NFS export”? Is this simply the directory containing all the files/folders to be shared over NFS (i.e. an entry in /etc/exports), or some representation thereof in memory?

On Mon, Nov 23, 2020 at 17:17:15 -0600, Dustan B Helm wrote:
On Mon, Nov 23, 2020 at 2:33 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Sat, Nov 21, 2020 at 11:20:57 -0600, Dustan B Helm wrote:
We plan to support NFS protocol according to the example XML from Issue 90 <http://gitlab.com/libvirt/libvirt/-/issues/90>. Since there is already support for network disks of different protocol types and host information, we think that the only new XML information we will add is an <nfs> element which will be a subelement of <source>, with attributes “user” and “group” (both strings). This element will only be generated if the source protocol is “nfs” and we assume that both “user” and “group” will be required.
Here is the XML example given in the issue for reference:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='nfs' name='PATH'>
<host name='example.com' port='2049'/>
<nfs user='USER' group='GROUP'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Sounds reasonable to me. We tend to name elements equivalent to <nfs> you propose by their purpose (such as <auth> <initiator> <cookies> for other protocols) but in this case I don't have a better suggestion so going with <nfs> is reasonable.
Since you are proposing 'user' and 'group' to be strings while qemu accepts only numeric UID/GID, please use the same conversion code we have for the <inituser> and <initgroup> values in regards to forcing a numeric value to skip being interpreted:
https://www.libvirt.org/formatdomain.html#container-boot
The linked documentation <https://www.libvirt.org/formatdomain.html#container-boot> suggests that providing a “+” prefix will force attribute data to be interpreted numerically. Since QEMU requires that the group and user attributes be numeric id values, would we want to simply insert a “+” prefix to these attributes when they are provided explicitly (instead of using the hypervisor default values, which we assume will just be numeric data)?
No, not at all. QEMU requires you to provide an integer according to the schema. That means that for any '+'-prefixed value, you strip the + and convert it to int. For a username you look it up in the system to get the uid and use that. We have helpers for this, but I can't recall the name. You'll have to do some digging.
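A minimal sketch of that resolution step, in Python rather than libvirt's C (the helper name is made up, and the exact name-vs-number fallback order in libvirt's real helpers may differ from this sketch):

```python
import pwd

def resolve_user(value):
    """Resolve a user string to a numeric uid.

    A '+' prefix forces numeric interpretation; otherwise the value
    is looked up as a name in the system user database.
    """
    if value.startswith("+"):
        return int(value[1:])        # '+123' -> 123, never a name lookup
    if value.isdigit():
        return int(value)            # plain digits treated as a uid here
    return pwd.getpwnam(value).pw_uid  # name -> uid via the passwd db

print(resolve_user("+123"))  # -> 123
```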
What do you think of these proposed changes? Should either of the <nfs>
tag's string attributes be optional?
In this case qemu doesn't mandate the use of the user/group field so you can make the nfs element and both user and group optional especially since it's only a workaround for the broken-by design NFS "security".
You can claim that a hypervisor-default uid/gid is used when the fields are not present.
We have assumed that because “group” and “user” are optional, they are not mutually required- that is, one or the other may be provided explicitly, in which case the other is assumed to be the hypervisor-default. Is this correct?
Yes.
Additionally, when both default values for the hypervisor are to be used (that is, there is no explicitly given nfs-user or nfs-group attribute), would we simply remove the entire <nfs> tag as it would be empty? Or is it still necessary in order to bypass the “broken by design” NFS security?
Yes, you want to format the element only when either of the two attributes is present. This is simple to achieve: our XML formatting helper (virXMLFormatElement) does the correct thing if no attributes and no subelements are passed for formatting.
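The behaviour described for virXMLFormatElement can be sketched in Python with ElementTree (illustrative only; libvirt's helper is C code in its XML utilities):

```python
import xml.etree.ElementTree as ET

def format_source(protocol, name, host, port, user=None, group=None):
    # Build the <source> element; emit <nfs> only when at least one of
    # its attributes is actually set. An empty element is omitted,
    # mirroring virXMLFormatElement's handling of no attrs/children.
    source = ET.Element("source", protocol=protocol, name=name)
    ET.SubElement(source, "host", name=host, port=port)
    nfs_attrs = {k: v for k, v in (("user", user), ("group", group)) if v}
    if nfs_attrs:
        ET.SubElement(source, "nfs", nfs_attrs)
    return ET.tostring(source, encoding="unicode")

print(format_source("nfs", "/foo/img.qcow2", "example.com", "2049"))
print(format_source("nfs", "/foo/img.qcow2", "example.com", "2049", user="+123"))
```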
You also probably want to mention in the documentation that in most cases qemu is running as non-root and thus doesn't have access to privileged ports. Thus the export has to use the 'insecure' option to allow non-privileged ports.
One further thing possibly worth mentioning is that the name='' attribute starts with the NFS export name.
Does the NFS export requiring the “insecure” option in most cases have implications on the code we add to support the protocol beyond documentation of the usage semantics? Also, what specifically is referred
No. I just wanted you to point out in the docs that the NFS client used in qemu has no access to privileged ports in the general use case, which may have implications for what can be used. We tend to use vague terminology such as: "Note that hypervisors may not be able to use privileged ports to access the NFS export and thus may require the ``insecure`` option to be used on the export to allow access."
to by the “NFS export”? Is this simply the directory containing all the files/folders to be shared over NFS (i.e. an entry in /etc/exports), or some representation thereof in memory?
NFS export is one exported directory, thus maps to one line in /etc/exports. Since there are multiple exports possible on a host, you have to use it when addressing a specific file.
Example: assume we have the following exports (leftmost column of /etc/exports):
/some/export *:...
/foo/bar/baz *:...
And 'ls /foo/bar/baz' on the NFS server gives:
image.qcow2
image2.qcow2
So in your proposal, if you want to point to 'image2.qcow2' you'd use:
<disk type='network' device='disk'>
<driver name='qemu' type='qcow2'/>
<source protocol='nfs' name='/foo/bar/baz/image2.qcow2'>
<host name='example.com' port='2049'/>
<nfs user='+123' group='wheel'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Here the export name is part of the "path" in the `name` attribute. An alternative is to expose 'export' as a separate attribute:
<disk type='network' device='disk'>
<driver name='qemu' type='qcow2'/>
<source protocol='nfs' name='image2.qcow2'>
<host name='example.com' port='2049'/>
<nfs export='/foo/bar/baz' user='+123' group='wheel'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Here the export is stored explicitly. Note that qemu takes the export concatenated with the filename as its argument, so you'd have to concatenate them when formatting the JSON struct representing the backend of the block device.
I don't have a preference for either of them, but I'd like you to document that bit. Specifically, we have very similar documentation for other protocols such as iSCSI, etc., which also store specifics in the name field.
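The concatenation qemu expects under the separate 'export' attribute variant could be sketched as follows (illustrative only; the function name is made up):

```python
import posixpath

def nfs_qemu_path(export, name):
    # qemu's NFS driver takes a single path: the export joined with the
    # file name relative to that export. Note posixpath.join would
    # discard the export if name were itself absolute.
    return posixpath.join(export, name)

print(nfs_qemu_path("/foo/bar/baz", "image2.qcow2"))  # -> /foo/bar/baz/image2.qcow2
```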

On Mon, Nov 23, 2020 at 17:17:15 -0600, Dustan B Helm wrote:
On Mon, Nov 23, 2020 at 2:33 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Sat, Nov 21, 2020 at 11:20:57 -0600, Dustan B Helm wrote:
We plan to support NFS protocol according to the example XML from Issue 90 <http://gitlab.com/libvirt/libvirt/-/issues/90>. Since there is already support for network disks of different protocol types and host information, we think that the only new XML information we will add is an <nfs> element which will be a subelement of <source>, with attributes “user” and “group” (both strings). This element will only be generated if the source protocol is “nfs” and we assume that both “user” and “group” will be required.
Here is the XML example given in the issue for reference:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='nfs' name='PATH'>
<host name='example.com' port='2049'/>
<nfs user='USER' group='GROUP'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Sounds reasonable to me. We tend to name elements equivalent to <nfs> you propose by their purpose (such as <auth> <initiator> <cookies> for other protocols) but in this case I don't have a better suggestion so going with <nfs> is reasonable.
Since you are proposing 'user' and 'group' to be strings while qemu accepts only numeric UID/GID, please use the same conversion code we have for the <inituser> and <initgroup> values in regards to forcing a numeric value to skip being interpreted:
https://www.libvirt.org/formatdomain.html#container-boot
The linked documentation <https://www.libvirt.org/formatdomain.html#container-boot> suggests that providing a “+” prefix will force attribute data to be interpreted numerically. Since QEMU requires that the group and user attributes be numeric id values, would we want to simply insert a “+” prefix to these attributes when they are provided explicitly (instead of using the hypervisor default values, which we assume will just be numeric data)?
No, not at all. QEMU requires you to provide an integer according to the schema. That means that for any '+'-prefixed value, you strip the + and convert it to int. For a username you look it up in the system to get the uid and use that. We have helpers for this, but I can't recall the name. You'll have to do some digging.
On Tue, Nov 24, 2020 at 12:59 AM Peter Krempa <pkrempa@redhat.com> wrote:
We found methods to convert user and group strings into integer IDs in src/util/virutil.c (the getUserID and getGroupID methods). We plan on checking if the parameter provided has a + at the front of the string, in which case we will call these methods to convert the user or group name into an ID. However, we aren’t sure where the _virStorageSource data is actually being initialized. We found the qemuBlockStorageSourceGetBackendProps method in src/qemu/qemu_block.c, for which we would need to provide an NFS props method. At this point we need the _virStorageSource to be initialized and contain the nfs_user and nfs_group parameters, but we can’t find where to actually set these. You had mentioned some kind of parser in domain_conf.c, but since domain_conf is an extensive file, we had some trouble narrowing down our concerns to a specific parsing/formatting mechanism. What method(s) would we change here? Would that be satisfactory to set our _virStorageSource for use in the JSON initialization?

On Tue, Nov 24, 2020 at 15:25:01 -0600, Dustan B Helm wrote:
On Tue, Nov 24, 2020 at 12:59 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Mon, Nov 23, 2020 at 17:17:15 -0600, Dustan B Helm wrote:
[...]
We found methods to convert user and group strings into integer IDs in src/util/virutil.c (the getUserID and getGroupID methods). We plan on checking if the parameter provided has a + at the front of the string, in which case we will call these methods to convert the user or group name into an ID.
virGetUserID/virGetGroupID have internal handling of the '+' prefix. You can use them without adding any logic on top.
However, we aren’t sure where the _virStorageSource data is actually being initialized. We found the qemuBlockStorageSourceGetBackendProps method in src/qemu/qemu_block.c, for which we would need to provide an NFS props method.
qemuBlockStorageSourceGetBackendProps is the place that converts a virStorageSource to a JSON object describing the source. Please note that this function is not the place for any logic such as the translation from username to the uid number. The function shall take a virStorageSource and output a JSON struct, that's all.
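As a Python sketch of such a pure conversion (the qemu NFS blockdev field names 'driver', 'server', 'path', 'user', 'group' reflect my reading of qemu's schema and should be verified against query-qmp-schema):

```python
import json

def nfs_backend_props(src):
    # Pure conversion: a virStorageSource-like dict in, JSON props out.
    # No name->uid lookups happen here; those belong to the earlier
    # prepare step, so 'uid'/'gid' are already numeric (or absent).
    props = {
        "driver": "nfs",
        "server": {"type": "inet", "host": src["host"]},
        "path": src["path"],
    }
    if src.get("uid") is not None:
        props["user"] = src["uid"]
    if src.get("gid") is not None:
        props["group"] = src["gid"]
    return props

print(json.dumps(nfs_backend_props(
    {"host": "example.com", "path": "/foo/bar/baz/image2.qcow2",
     "uid": 123, "gid": None})))
```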
At this point we need the _virStorageSource to be initialized and contain the nfs_user and nfs_group parameters, but we can’t find where to actually set these. You had mentioned some kind of parser in domain_conf.c, but since domain_conf is an extensive file, we had some trouble narrowing down our concerns to a specific parsing/formatting mechanism. What method(s) would we change here? Would that be satisfactory to set our _virStorageSource for use in the JSON initialization?
There are two steps here:
1) You must parse/format the XML bits into a virStorageSource. This is done in src/conf/domain_conf.c. Specifically, you are dealing with a storage source, so the relevant functions are virDomainDiskSourceFormat and virDomainStorageSourceParse.
2) The actual lookup from username/groupname to uid/gid: qemuDomainPrepareStorageSourceBlockdev.
The two steps above mean that you will actually have to add 4 fields to virStorageSource: 2 strings for the values from the XML parser, and then 2 integers for internal use in the virStorageSource -> JSON conversion.
Some notes:
- Make sure that patches are split into logical parts and the code compiles successfully after every patch, as per the contributor guidelines: https://www.libvirt.org/coding-style.html https://www.libvirt.org/hacking.html
- When you add VIR_STORAGE_NET_PROTOCOL_NFS to enum virStorageNetProtocol there will be a lot of places the compiler will notify you need to change. Don't try to implement all of them at once; add the entry to the switch statement and come back later, to comply with splitting the patch into logical pieces.
- Any XML addition requires tests to be added. In this case you'll also need to add a qemu commandline test later, so I suggest you add your XML test as a new file to tests/qemuxml2argvdata and enable the test in tests/qemuxml2xmltest.c (the output file is stored in tests/qemuxml2xmloutdata; you can also run the test with the VIR_TEST_REGENERATE_OUTPUT=1 env variable set, which generates the proper output file). Always use one of the _CAPS_LATEST macros for invoking the test regardless of prior art. You can add multiple disks to the tested XML to test any combination of the <nfs> parameters.
Please make sure to list at least one example of the NFS disk being a <backingStore> of a <source> (see tests/qemuxml2argvdata/disk-backing-chains.xml for inspiration). Don't forget to also document the new elements in docs/formatdomain.rst.
- You'll also need to add a proper backing store string parser (virStorageSourceJSONDriverParser). Tests for the parser are in tests/virstoragetest.c, invoked via TEST_BACKING_PARSE.
- There's no need to add a specific capability for the NFS protocol as it predates libvirt's use of -blockdev (QEMU_CAPS_BLOCKDEV). You have to add a check for it to qemuDomainValidateStorageSource based on the above capability. Don't bother with a negative test case for this.
- Enabling support for external snapshots stored on NFS requires that you also implement support for blockdev-create (qemuBlockStorageSourceCreateGetStorageProps; tests are in tests/qemublocktest.c, invoked via TEST_IMAGE_CREATE).
- Once you add qemuBlockStorageSourceGetBackendProps, enable the test case added for qemuxml2xmltest.c above also for qemuxml2argvtest.c. You can use the same trick with the VIR_TEST_REGENERATE_OUTPUT env variable to obtain the output file. As noted, use _CAPS_LATEST macros to invoke the test. qemuBlockStorageSourceGetBackendProps was historically tested by TEST_DISK_TO_JSON cases in tests/qemublocktest.c for compliance with the qemu monitor protocol schema, but that's no longer required; qemuxml2argvtest does the same check for every <disk>.
- Add an entry to NEWS.rst outlining the addition (separate commit).
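The "two strings for the parser, two integers for internal use" split might be sketched like this (a hypothetical Python analogue of the C struct and prepare step, not libvirt code):

```python
import grp
import pwd
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageSource:
    # Filled by the XML parser (analogue of virDomainStorageSourceParse):
    nfs_user: Optional[str] = None
    nfs_group: Optional[str] = None
    # Filled later by the prepare step (analogue of
    # qemuDomainPrepareStorageSourceBlockdev):
    nfs_uid: Optional[int] = None
    nfs_gid: Optional[int] = None

def prepare(src):
    # Resolve names to numeric ids; a '+' prefix forces numeric
    # interpretation, otherwise we look the name up in the system db.
    if src.nfs_user is not None:
        src.nfs_uid = (int(src.nfs_user[1:]) if src.nfs_user.startswith("+")
                       else pwd.getpwnam(src.nfs_user).pw_uid)
    if src.nfs_group is not None:
        src.nfs_gid = (int(src.nfs_group[1:]) if src.nfs_group.startswith("+")
                       else grp.getgrnam(src.nfs_group).gr_gid)

s = StorageSource(nfs_user="+123")
prepare(s)
print(s.nfs_uid)  # -> 123
```

The JSON-formatting code then reads only the integer fields, keeping it free of lookup logic.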

On Wed, Nov 25, 2020 at 12:44 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Tue, Nov 24, 2020 at 15:25:01 -0600, Dustan B Helm wrote:
On Tue, Nov 24, 2020 at 12:59 AM Peter Krempa <pkrempa@redhat.com> wrote:
On Mon, Nov 23, 2020 at 17:17:15 -0600, Dustan B Helm wrote:
[...]
- any XML addition requires tests to be added
In this case you'll also need to add a qemu commandline test later, so I suggest you add your XML test as a new file to tests/qemuxml2argvdata and enable the test in tests/qemuxml2xmltest.c (the output file is stored in tests/qemuxml2xmloutdata; you can also run the test with the VIR_TEST_REGENERATE_OUTPUT=1 env variable set, which generates the proper output file).
Always use one of the _CAPS_LATEST macros for invoking the test regardless of prior art.
You can add multiple disks to the tested XML to test any combination of the <nfs> parameters.
Please make sure to list at least one example of the NFS disk being a <backingStore> of a <source> (see tests/qemuxml2argvdata/disk-backing-chains.xml for inspiration)
Don't forget to also document the new elements in docs/formatdomain.rst
We don’t understand exactly how the testing frameworks function. With regards to our XML testing, the tests/qemuxml2argvdata/ directory contains, for each test, both an .xml file and an .args file. Is this .args file the expected qemu command we want to generate from the input .xml file? If so, how do we invoke the VIR_TEST_REGENERATE_OUTPUT flag to create this .args file, and how concerned should we be about non-disk information provided in the .xml file as input? Additionally, since our source name (a file path) needs to include the NFS export at the beginning, how would we set up the XML input? Can the file path and NFS export be chosen arbitrarily?

On Thu, Dec 03, 2020 at 16:03:55 -0600, Dustan B Helm wrote: [...]
We don’t understand exactly how the testing frameworks are functioning. With regards to our XML testing, the tests/qemuxml2argvdata/ directory contains for each test both an xml file and an args file- is this .args file the expected output qemu command we want to generate from the input .xml file?
Yes, qemuxml2argvtest exercises the XML parser and command line formatter. In case of disk specification there are also provisions for validating whether the generated -blockdev commandline actually conforms to the schema of QEMU.
If so, how do we invoke the VIR_TEST_REGENERATE_OUTPUT flag to create this .args file,
It's an environment variable that the test program looks for, thus:
VIR_TEST_REGENERATE_OUTPUT=1 ./tests/qemuxml2argvtest
in your build directory. (Obviously you need to add the invocation of the test first and compile the program.)
and how concerned should we be about non-disk information provided in the .xml file as input?
Don't worry about them. At the same time there's no use in cramming too much irrelevant non-disk config into the test. Focus rather on adding multiple disks covering various combinations of the configuration to avoid adding multiple test files.
Additionally, since our source name (a file path) needs to include the NFS export at the beginning, how would we set up the XML input? Can the file path and NFS export be chosen arbitrarily?
Those are purely unit tests, so they don't actually try accessing the storage in the first place. That means it doesn't really matter what path you choose.
participants (4)
- Daniel P. Berrangé
- Dustan B Helm
- Laine Stump
- Peter Krempa