On 07-Oct-16 20:09, Olga Krishtal wrote:
On 24/09/16 00:12, John Ferlan wrote:
> On 09/23/2016 11:56 AM, Olga Krishtal wrote:
>> On 21/09/16 19:17, Maxim Nestratov wrote:
>>>> On 20 Sept 2016, at 23:52, John Ferlan <jferlan(a)redhat.com> wrote:
>>>>
>>>>
>>>>
>>>>> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
>>>>> Hi everyone, we would like to propose the first implementation of
>>>>> fspool with a directory backend.
>>>>>
>>>>> Filesystem pools are a facility to manage filesystem resources,
>>>>> similar to how storage pools manage volume resources. Furthermore,
>>>>> the new API follows the storage API closely where it makes sense.
>>>>> Uploading/downloading operations are not defined yet, as it is not
>>>>> obvious how to do them properly. I guess we can use some kind of tar
>>>>> to make a stream from a filesystem. Please share your thoughts on
>>>>> this particular issue.
>>>> So how do you differentiate between this and the existing <pool type="fs">?
>>> Pool type=fs still provides volumes, i.e. block devices rather than
>>> filesystems, though this storage pool can mount file systems residing
>>> on a source block device.
>>>
>>>>
>>>> http://libvirt.org/storage.html#StorageBackendFS
>>>>
>>>> Sure, the existing fs pool requires/uses a source block device as the
>>>> source path and this new variant doesn't require that source, but it
>>>> seems to use some item in order to dictate how to "define" the source
>>>> on the fly. Currently only a "DIR" is created - so how does that
>>>> differ from a "dir" pool?
>>>>
>>> Same here: the storage "dir" pool provides files, which are in fact
>>> block devices for guests, while the filesystem pool "dir" provides
>>> guests with file systems.
>>>
>>>
>>>> I think it'll be confusing to have and differentiate fspool and pool
>>>> commands.
>> Some time ago, we wrote the proposal description and asked for
>> everyone's advice and opinion.
>> The aim of fspool is to provide filesystems, not volumes. The simplest
>> type of fspool is the directory pool, and it does have a lot in common
>> with storage_backend_fs. However, in the proposal description we said
>> that the plan is to use other backends too: e.g. storage volumes from a
>> storage pool as the source of the fs, zfs, etc.
>> The final API for fspool will be significantly different, because of
>> the needs of the other backends.
> Can you please try to create an extra blank line between the paragraph
> you're responding to and the start of your own paragraph, and then one
> after it.
Thanks for noticing. It looks better.
> Anyway, as I pointed out - that description wasn't in my (short term)
> memory. Keeping a trail of pointers to previous stuff helps those that
> want to refresh their memory on the history.
I will keep these links in the next versions:
https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html
> If you're going to "reuse" things, then using the 'src/util/*' is the
> way to go rather than trying to drag in storage_{driver|backend*} APIs.
> Crossing driver boundaries is something IIRC we try to avoid.
As I have written before, at the moment we have only one backend for the
fspool - directory. It is the simplest backend and only the starting point.
I think it is too early to decide which parts should be moved to src/util/*.
Moreover, as fspool items and storage pool volumes are pretty different, it
is quite possible that they have very little in common. That said, I would
leave things as they are, but if you insist I can try.
If you meant that the resulting code will have very little in common, then
I would agree. Implementing more backends will show us where the common
parts are, and then we will have a better basis for splitting them out.
>>>> I didn't dig through all the patches, but from the few I did look at
>>>> it seems as though all that's done is to rip out the guts of stuff not
>>>> desired from the storage pool driver and replace it with this new code,
>>>> attributing all the work to the new author/copyright. IOW: Lots of
>>>> places where StoragePool appears to be exactly the same as the FSPool.
I have written these lines as part of the GPLv2+ boilerplate:
https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html,
which I took from other libvirt parts. And I guess it was natural to change
the name and company, don't you think?
And again, if you insist, I can leave out the author/copyright, as it
wasn't the aim of this series.
Indeed, a storage pool is very similar to an FS pool, but their items are
not - volumes (block devices) versus filesystems (directory trees). The
intention here was to introduce a *new API*, which is also very different
from the storage pool one, effectively introducing a new driver. As
crossing driver boundaries isn't favored, the code was simply borrowed,
following the earlier practice used by libvirt to get new drivers
implemented.
John, keeping all of the above in mind, do you think it's worth trying to
reuse common code while introducing a new API? It won't allow us to leave
the existing code untouched and it will increase the series even more.
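
To make the intended usage a bit more concrete, here is a rough sketch of
how a directory-backed fspool could be driven from the public API. The XML
layout and the virFSPool/virFSItem entry points below are my assumptions,
modelled on the storage pool API (and on the naming visible in the series,
e.g. virFSPoolDefFormatBuf), not necessarily what the series finally
exposes:

/* Sketch only: the fspool XML layout and the virFSPool/virFSItem calls
 * are assumptions mirroring the storage pool API; the types and functions
 * would come from the proposed fspool patches, not from current libvirt. */
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* A directory-backed filesystem pool: the target path holds one
     * subdirectory (item) per guest filesystem. */
    const char *pool_xml =
        "<fspool type='dir'>"
        "  <name>fspool_example</name>"
        "  <target>"
        "    <path>/var/lib/libvirt/fspools/example</path>"
        "  </target>"
        "</fspool>";

    /* An item is a directory tree (a filesystem), not a block device. */
    const char *item_xml =
        "<item>"
        "  <name>container_root</name>"
        "</item>";

    virFSPoolPtr fspool = virFSPoolDefineXML(conn, pool_xml, 0); /* persistent */
    if (fspool) {
        virFSPoolBuild(fspool, 0);   /* dir backend: create the target directory */
        virFSPoolCreate(fspool, 0);  /* activate the pool */

        virFSItemPtr item = virFSItemCreateXML(fspool, item_xml, 0);
        if (item)
            virFSItemFree(item);
        virFSPoolFree(fspool);
    }

    virConnectClose(conn);
    return 0;
}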
>>>> I think you need to find a different means to do what you want. It's
>>>> not 100% clear what the end goal is. I did download/git am the patches
>>>> and scan a few patches...
>>>> * In patch 2 you've totally missed how to modify libvirt_public.syms
>>>> * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
>>>> }" aren't done properly in virFSPoolDefFormatBuf.
>>>> * In patch 5 the remote_protocol_structs fails check/syntax-check... I
>>>> stopped there in my build-each-patch test.
>> According to the guide I have to run
>> |make check| and |make syntax-check| for every patch.
> Always a good plan!
>
>> And it was done.
> And yet as we find out *all the time* some compilers complain more than
> others. Watch the list - we have a CI environment in which we find all
> sorts of oddities. In any case, the code in question is:
>
> + if (def->target.perms.mode != (mode_t) -1 ||
> + def->target.perms.uid != (uid_t) -1 ||
> + def->target.perms.gid != (gid_t) -1 ||
> + def->target.perms.label) {
> + virBufferAddLit(buf, "<permissions>\n");
> + virBufferAdjustIndent(buf, 2);
> + if (def->target.perms.mode != (mode_t) -1)
> + virBufferAsprintf(buf, "<mode>0%o</mode>\n",
> + def->target.perms.mode);
> + if (def->target.perms.uid != (uid_t) -1)
> + virBufferAsprintf(buf, "<owner>%d</owner>\n",
> + (int) def->target.perms.uid);
> + if (def->target.perms.gid != (gid_t) -1)
> + virBufferAsprintf(buf, "<group>%d</group>\n",
> + (int) def->target.perms.gid);
> + virBufferEscapeString(buf, "<label>%s</label>\n",
> + def->target.perms.label);
> +
> + virBufferAdjustIndent(buf, -2);
> + virBufferAddLit(buf, "</permissions>\n");
> + }
> +
> + virBufferAdjustIndent(buf, -2);
> + virBufferAddLit(buf, "</target>\n");
>
> So do you "see" the problem? The first if has an open {, but the second
> one doesn't, although the code is indented and would seemingly want to
> have one. The second one has a close }, which gives the impression
> something is missing.
Thanks for pointing this out. I will be more attentive.
>> I will look at libvirt_public.syms one more time.
> You added all those APIs into LIBVIRT_2.0.0 { above "LIBVIRT_1.3.3",
> but you'll notice there's a LIBVIRT_2.2.0 { afterwards, which is where
> all those APIs should have gone, "at least for now".
>
> IOW: When adding new API's you have to add to the version specific
> stanza. As of now you'd be adding API's to 2.3.0, but I doubt we'll make
> that cut-off.
Thanks. Will do.
> Just because you added them in 2.0.0 internally, once you go to upstream
> them - they need to be in the latest.
>
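For reference, the stanza in libvirt_public.syms would then look roughly
like the one below; the symbol names are the hypothetical fspool entry
points used earlier in this thread, and the exact version tag depends on
which release the series actually lands in:

LIBVIRT_2.2.0 {
    global:
        virFSPoolDefineXML;
        virFSPoolBuild;
        virFSPoolCreate;
        virFSItemCreateXML;
} LIBVIRT_2.0.0;
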
> So this brings me back to my other recent response. I think if we can
> figure out what the driver will need, then we can work through the
> external API portion. There's just so much going on - trying to get a
> sense of it all at once is, well, overwhelming.
Actually, I thought we had already figured it out and it was decided to
have a completely separate API to manage filesystems, similar to the
storage pool API, since only this approach lets containers get the
greatest benefit from it.
Let me share my thoughts here.
The minimum that this new API needs is the ability to:
- define (create a persistent FS pool),
- create (create a transient FS pool),
- build (for directory/remotefs pools this is simply directory creation;
  for a block device backend, for instance, it will be a mkfs call),
- start/stop (activate/deactivate),
- create/delete items (subdirectories),
- undefine,
- maybe ability to use storage pool volumes as sources for FS pools.
Create and build flags will control whether we should overwrite existing
directory content or leave it untouched. But currently they are not used
in the code and it is really difficult to guess what their purpose is.
This certainly should be fixed in the next revision of the series.
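
To make that concrete, a minimal set of entry points could look roughly
like the declarations below; the names, signatures and flag values are my
assumptions, modelled on the existing storage pool API, not the final
interface:

/* Sketch only: names, signatures and flags mirror the storage pool API
 * and are assumptions, not the final fspool interface. */
#include <libvirt/libvirt.h>           /* virConnectPtr */

typedef struct _virFSPool *virFSPoolPtr;   /* hypothetical opaque handles */
typedef struct _virFSItem *virFSItemPtr;

/* Flags for build/create, controlling whether existing directory content
 * is overwritten or left untouched. */
typedef enum {
    VIR_FSPOOL_BUILD_NEW          = 0,      /* regular build */
    VIR_FSPOOL_BUILD_NO_OVERWRITE = 1 << 0, /* fail if the target already has content */
    VIR_FSPOOL_BUILD_OVERWRITE    = 1 << 1, /* clear existing content first */
} virFSPoolBuildFlags;

/* Persistent vs. transient pools. */
virFSPoolPtr virFSPoolDefineXML(virConnectPtr conn, const char *xml,
                                unsigned int flags);
virFSPoolPtr virFSPoolCreateXML(virConnectPtr conn, const char *xml,
                                unsigned int flags);

/* Build: directory creation for dir/remotefs backends, mkfs for a block
 * device backend. */
int virFSPoolBuild(virFSPoolPtr fspool, unsigned int flags);

/* Start/stop (activate/deactivate) and undefine. */
int virFSPoolCreate(virFSPoolPtr fspool, unsigned int flags);
int virFSPoolDestroy(virFSPoolPtr fspool);
int virFSPoolUndefine(virFSPoolPtr fspool);

/* Items: directory trees (filesystems) inside the pool. */
virFSItemPtr virFSItemCreateXML(virFSPoolPtr fspool, const char *xml,
                                unsigned int flags);
int virFSItemDelete(virFSItemPtr item, unsigned int flags);

Using a storage pool volume as the source of an FS pool could then be
expressed in the pool XML rather than needing extra entry points.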
Maxim
> John
>
> [...]
--
Best regards,
Olga