On 1/16/19 7:00 AM, Daniel P. Berrangé wrote:
> On Tue, Jan 15, 2019 at 08:09:12PM -0500, John Ferlan wrote:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1584663
>
> Add the ability to parse/manage some defined NFS Storage Pool mount
> options. Keep the set of defined options limited to noexec, nosuid,
> nodev, and ro. Subsequent patches will add the ability to provide the
> NFS nfsvers value and eventually any option via storage pool namespaces.
>
> Signed-off-by: John Ferlan <jferlan(a)redhat.com>
> ---
> docs/formatstorage.html.in | 21 +++++++++
> docs/schemas/storagepool.rng | 20 ++++++++
> src/conf/storage_conf.c | 47 +++++++++++++++++++
> src/conf/storage_conf.h | 13 +++++
> .../pool-netfs-mountopts.xml | 24 ++++++++++
> .../pool-netfs-mountopts.xml | 24 ++++++++++
> tests/storagepoolxml2xmltest.c | 1 +
> 7 files changed, 150 insertions(+)
> create mode 100644 tests/storagepoolxml2xmlin/pool-netfs-mountopts.xml
> create mode 100644 tests/storagepoolxml2xmlout/pool-netfs-mountopts.xml
>
> diff --git a/docs/formatstorage.html.in b/docs/formatstorage.html.in
> index b6bf3edbd2..97c90f0797 100644
> --- a/docs/formatstorage.html.in
> +++ b/docs/formatstorage.html.in
> @@ -121,6 +121,19 @@
> </source>
> ...</pre>
>
> + <pre>
> +...
> + <source>
> + <host name='localhost'/>
> + <dir path='/var/lib/libvirt/images'/>
> + <format type='nfs'/>
> + <mount_opts>
> + <option name='nodev'/>
> + <option name='nosuid'/>
> + </mount_opts>
> + </source>
> +...</pre>
> +
> <dl>
> <dt><code>device</code></dt>
> <dd>Provides the source for pools backed by physical devices
> @@ -386,6 +399,14 @@
> LVM metadata type. All drivers are required to have a default
> value for this, so it is optional. <span class="since">Since
> 0.4.1</span></dd>
>
> + <dt><code>mount_opts</code></dt>
> + <dd>Provide an optional set of mount options for the <code>netfs</code>
> + pool startup or creation to be provided via the "-o" option.
> + The desired options are specified by using the subelement
> + <code>option</code> with the attribute <code>name</code> to
> + provide the option. Supported option names are "noexec", "nosuid",
> + "nodev", and "ro".
> + <span class="since">Since 5.1.0</span></dd>
> Hmm, this has gone back to the "Mount options" concept from v1
> which I don't think is appropriate. In v2 feedback I had suggested
> that we should have some more explicit XML description for the subset
> of options we wish to support, with each option potentially using a
> different syntax, no generic mount options syntax.
In v1, as I understood it, the objection was to arbitrarily named
mount options, so a specifically named mount_opts set felt like a fair
trade-off.

In v2 you noted the features element would be similar to the domain
features, but then pointed out that you felt we could just default to
using them. A followup raised a concern over OS syntax differences,
which is essentially why I took this approach - force someone to
provide the option. If that option uses different syntax on some other
OS, then someone who cares about that OS could send a patch to change
the name generated (and we avoid making any assumptions).
> For 'ro' we should use our normal <readonly/> empty element syntax.
True, we could, but believe it or not readonly is not even defined in
storagepool.rng - it lives in domaincommon.rng for use by hypervisor
options. Adding ro to the mount_opts list was essentially a shortcut to
include ro along with the others rather than needing separate patches.
I saw nfsvers a bit differently since it would provide more than a
boolean option.

Still, beyond NFS I'm trying to envision how we'd supply readonly such
that it would matter for the pool. Wouldn't networked pools require the
server side to have been set up readonly? For local pools, how much of
the exposure of the volume would be controllable such that the pool
could manage the readonly-ness? A virDomainDiskTranslateSourcePool
"option", I suppose, that could be associated with the domain readonly
attribute.

Perhaps ro or readonly should just be handled separately then, since it
involves a lot more interaction. Does the overhead of the code addition
outweigh the usefulness of a generic pool readonly option? No one has
asked for such a feature, so sneaking in ro as an NFS mount option felt
reasonable.
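
For reference, if we did go that route I'd picture something along
these lines (purely illustrative - <readonly/> isn't part of the pool
schema today and its exact placement would be up for debate):

  <source>
    <host name='localhost'/>
    <dir path='/var/lib/libvirt/images'/>
    <format type='nfs'/>
    <!-- hypothetical: not defined in storagepool.rng -->
    <readonly/>
  </source>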
> For nosetuid, nodev, noexec, I'm still inclined to say we should
> enable them by default. Despite it being a semantic change, I think
> it is the right thing for a volume that is being used for storing
> VM disk images.
Again, the fear of doing that is that some OS has different syntax
(from your v2 response it's possible that FreeBSD mount command options
may differ from Linux mount command options).

So using named options seemed a better trade-off than just altering the
command generation to add those options for NFS mounts.

I'm not dead set against altering the defaults we set. It's certainly
the easiest approach: drop patches 1 and 3, change patch 2 into a very
static operation that alters the existing .args output for NFS pools,
and leave patches 4-11 mostly unchanged other than "merging" in the
changes from patch 2.
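
Just to make that concrete, the generated mount command for a netfs
pool would then end up roughly like this (the binary path, target path,
and option ordering are illustrative, not copied from the current .args
test output):

  # illustrative only - actual paths depend on the build and pool XML
  mount -t nfs -o nodev,nosuid,noexec \
      localhost:/var/lib/libvirt/images /var/lib/libvirt/images/nfspool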
> If we did want it in the XML though, I think it should be via a
> <features> element, not <mount_opts>, as the inverse - iow opt-in
> to the insecure behaviour
>
>   <features>
>     <allow-suid/>
>     <allow-devnode/>
>     <allow-exec/>
>   </features>
I really don't see the point of differently named XML. As much as you
don't seem to like mount_opts, I'm not a fan of features. At least with
mount_opts it's obvious what the use is. I just don't see mount options
as features in the same light as hypervisor features - it's a command
option, not a feature of the pool. Other pools use specifically named
elements to support their specific features/needs: <name>, <host>,
<adapter>, <auth>, <initiator>, etc. Yet something that is very
specific to how the NFS is being used would be required to use a new,
generic "features" element.
If by defaulting to nosuid, nodev, and noexec we run into trouble with
some other OS, then every pool on that OS has to be modified to add
these because we changed the default. That seems counter to other
examples where we force an opt-in. Although, since these are security
related, I can see an argument for defaulting to them - other than, of
course, that NFS doesn't default that way. Rock and a hard place.
FWIW: AIUI, the default for the listed options is the inverse. That is,
"suid" is the default for NFS... likewise "dev", "exec", and "rw". So
supplying "suid" instead of "nosuid" doesn't mean anything, since
"suid" is already the default. It would, though, mean code to "remove"
default options, which in a way is no different from code to supply an
option that isn't the default. Restated: knowing that "suid" is the
default, we don't supply it. Of course we could, but that hasn't been
the norm thus far; we just take the NFS defaults.
Still, if this were to follow the domain model, then why wouldn't it be:

  <features>
    <netfs>
      <option name='suid' state='{on|off}'/>
      <option name='exec' state='{on|off}'/>
      ...

(or <start_option .../> since this isn't create, define, build,
destroy, or undefine)
I would think that would be far more confusing, but it seems to me to
follow how features are assigned/used for specific hypervisors. It
still requires some sort of mapping of names, and we're not allowing
arbitrary ones. Do nosuid, noexec, or nodev have meaning beyond the
meaning used for NFS? If the answer is no, then it's a very specific
feature for a very specific pool.
> This provides a generic framework that will work for other storage
> pool backend features where there's no mount concept involved eg
> LVM, iSCSI, etc
And what "features" would we consider allowing without having to craft a
different backend?
Allowing some provided option for LVM commands could result in needing
to write code able to handle multiple different formats. In particular
I'm thinking about how thin provisioning is supported - a very
different way to configure, and a completely different means of
scanning the generated output looking for usable volumes. Previous
attempts to co-mingle that into the current logical backend were
NACK'd, and no one has taken up creating a new backend that supports
thin pools vs. thin snapshots (I think the latter is the terminology
used; it's been a few years).
For iSCSI we now have the iscsi-direct backend, which perhaps could
make use of something, but it'd be something fairly specific to some
iSCSI feature, which is no different (to me at least) from providing
some specific option for netfs.
John
> Regards,
> Daniel