On 19.05.2021 at 14:19, Vladimir Sementsov-Ogievskiy wrote:
> 19.05.2021 14:44, Kevin Wolf wrote:
> > On 17.05.2021 at 14:44, Vladimir Sementsov-Ogievskiy wrote:
> > > Hi all!
> > >
> > > I'd like to be sure that we know where we are going.
> > >
> > > In the blockdev era, where the qemu user is aware of block nodes and
> > > all nodes have good names and are controlled by the user, we can use
> > > block filters efficiently.
> > >
> > > We already have some useful filters: copy-on-read, throttling,
> > > compress. In my parallel series I make the backup-top filter public
> > > and useful without backup block jobs. But currently filters can only
> > > be inserted together with opening their child. We can specify
> > > filters on the qemu command line, or a filter can take its place in
> > > the block node chain created by blockdev-add.
> > >
> > > Still, it would be good to insert/remove filters on demand.
> > >
> > > Currently we are going to use x-blockdev-reopen for this. Still, it
> > > can't be used to insert a filter above a root node (as
> > > x-blockdev-reopen can change only block node options and their
> > > children). In my series "[PATCH 00/21] block: publish backup-top
> > > filter" I propose (as Kevin suggested) to modify qom-set so that it
> > > can set the drive option of a running device. That's not difficult,
> > > but it means that we have different scenarios for inserting/removing
> > > filters:
> > >
> > > 1. filter above root node X:
> > >
> > > inserting:
> > >
> > > - do blockdev-add to add a filter (and specify X as its child)
> > > - do qom-set to set the new filter as the root node instead of X
> > >
> > > removing:
> > >
> > > - do qom-set to make X a root node again
> > > - do blockdev-del to drop the filter
> > >
> > > 2. filter between two block nodes P and X (for example, X is a
> > >    backing child of P):
> > >
> > > inserting:
> > >
> > > - do blockdev-add to add a filter (and specify X as its child)
> > > - do blockdev-reopen to set P.backing = filter
> > >
> > > removing:
> > >
> > > - do blockdev-reopen to set P.backing = X
> > > - do blockdev-del to drop the filter
> > >
> > >
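For concreteness, a rough QMP sketch of the two flows (all names here are
made up for illustration: filter node "cor0", nodes "X" and "P", qdev id
"virtio0"; the qom-set step relies on the proposed ability to change a
device's drive property at runtime, which does not exist yet):

Scenario 1, inserting a filter above root node X:

  { "execute": "blockdev-add",
    "arguments": { "driver": "copy-on-read", "node-name": "cor0",
                   "file": "X" } }
  { "execute": "qom-set",
    "arguments": { "path": "/machine/peripheral/virtio0",
                   "property": "drive", "value": "cor0" } }

Scenario 2, inserting a filter between P and its backing node X:

  { "execute": "blockdev-add",
    "arguments": { "driver": "copy-on-read", "node-name": "cor0",
                   "file": "X" } }
  { "execute": "x-blockdev-reopen",
    "arguments": { "driver": "qcow2", "node-name": "P",
                   "file": "P-file", "backing": "cor0" } }

Removal runs the corresponding steps in reverse, ending with blockdev-del
of "cor0".
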
> > > And we'll probably want transaction support for all these things.
> > >
> > >
> > > Is this OK? Or do we need some kind of additional blockdev-replace
> > > command that can replace one node by another, so that in both cases
> > > we would do
> > >
> > > inserting:
> > > - blockdev-add filter
> > > - blockdev-replace (make all parents of X point to the new filter
> > >   instead (except for the filter itself, of course))
> > >
> > > removing:
> > > - blockdev-replace (make all parents of the filter become parents of
> > >   X instead)
> > > - blockdev-del filter
> > >
> > > It's simple to implement, and it seems to me that it is also simpler
> > > to use. Any thoughts?
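
A rough QMP sketch of what that could look like, assuming a hypothetical
blockdev-replace command (its name and arguments are not defined anywhere
yet) and made-up node names ("cor0" for the filter, "X" for the filtered
node):

  { "execute": "blockdev-add",
    "arguments": { "driver": "copy-on-read", "node-name": "cor0",
                   "file": "X" } }
  { "execute": "blockdev-replace",
    "arguments": { "node-name": "X", "new-node-name": "cor0" } }

Removal would be the same blockdev-replace in the other direction,
followed by blockdev-del of "cor0".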
> >
> > One reason I remember why we didn't decide to go this way in the many
> > "dynamic graph reconfiguration" discussions we had, is that it's not
> > generic enough to cover all cases. But I'm not sure if we ever
> > considered root nodes as a separate case. I acknowledge that having two
> > different interfaces is inconvenient, and integrating qom-set in a
> > transaction is rather unlikely to happen.
> >
> > The reason why it's not generic is that it restricts you to doing the
> > same thing for all parents. Imagine this:
> >
> >                 +- virtio-blk
> >                 |
> > file <- qcow2 <-+
> >                 |
> >                 +- NBD export
> >
> > Now you want to throttle the NBD export so that it doesn't interfere
> > with your VM too much. With your simple blockdev-replace this isn't
> > possible. You would have to add the filter to both users or to none.
> >
> > In theory, blockdev-replace could take a list of the edges that should
> > be changed to the new node. The problem is that edges don't have names,
> > and even the parents don't necessarily have one (and if they do, they
> > are in separate namespaces, so a BlockBackend, a job and an export could
> > all have the same name), so finding a good way to refer to them in QMP
> > doesn't sound trivial.
> >
> Hm. I like the idea. And it seems feasible to me: both exports and
> block jobs work through a BlockBackend.
>
> So, for block jobs, we can add optional parameters like source-blk-name
> and target-blk-name. If these parameters are specified, the blk's will
> be named, and the user will be able to do blockdev-replace.
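
As a sketch of what that could mean on the wire (source-blk-name and
target-blk-name are only the proposal above and do not exist; job and
node names are made up):

  { "execute": "blockdev-backup",
    "arguments": { "job-id": "backup0", "device": "src0",
                   "target": "tgt0", "sync": "full",
                   "source-blk-name": "backup0-src",
                   "target-blk-name": "backup0-tgt" } }

The named BlockBackends "backup0-src" and "backup0-tgt" could then be
referenced as parents in blockdev-replace.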
I'm not sure if giving them a name is a good idea. Wouldn't it make the
BlockBackend accessible to the user, who could then make a device use
it?
> For exports it's a bit trickier: it would be strange to add a separate
> argument for the export blk, as the export already has an id. So, I'd
> do the following:
>
> 1. make the blk named (with the same name as the export itself) iff the
>    name does not conflict with other blks
> 2. deprecate having an export name duplicate an existing blk name.
Yes, if we decide that giving them a name is a good idea, it's possible,
but still a change that requires deprecation, as you say.
The third one is devices (which is what I actually meant when I said
BlockBackend), which also have anonymous BlockBackends in the -blockdev
world. The same approach could work, but it would essentially mean
unifying the QOM and the block namespace, which sounds more likely to
produce conflicts than exports.
> Then, blockdev-replace takes a parents list, where a parent is either a
> node-name or a blk name.
Note that you need both a node-name and a child name to unambiguously
identify an edge.
I guess you could do something like the following; it's just a bit
verbose:
{ 'enum': 'BlockEdgeParentType',
  'data': ['node', 'device', 'export', 'job'] }

{ 'struct': 'BlockEdgeNode',
  'data': { 'node-name': 'str', 'child-name': 'str' } }

{ 'struct': 'BlockEdgeDevice',
  'data': { 'qdev': 'str' } }

{ 'struct': 'BlockEdgeExport',
  'data': { 'id': 'str' } }

{ 'struct': 'BlockEdgeJob',
  'data': { 'id': 'str', 'role': '...some enum...?' } }

{ 'union': 'BlockEdge',
  'base': { 'parent-type': 'BlockEdgeParentType' },
  'discriminator': 'parent-type',
  'data': {
      'node': 'BlockEdgeNode',
      'device': 'BlockEdgeDevice',
      'export': 'BlockEdgeExport',
      'job': 'BlockEdgeJob'
  } }
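
As a hypothetical usage example for the NBD throttling case above
(blockdev-replace and its arguments don't exist yet; the node names,
export id and throttle group are made up, and the throttle group "tg0"
is assumed to have been created with object-add):

  { "execute": "blockdev-add",
    "arguments": { "driver": "throttle", "node-name": "thr0",
                   "throttle-group": "tg0", "file": "qcow2-node" } }
  { "execute": "blockdev-replace",
    "arguments": { "new-node": "thr0",
                   "edges": [ { "parent-type": "export",
                                "id": "nbd-export0" } ] } }

Only the export's edge would move to the throttle filter; virtio-blk
would keep using the qcow2 node directly.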
Maybe BlockEdgeJob (where the correct definition isn't obvious) is
actually unnecessary if we can make sure that jobs always go through
their filter instead of owning a BlockBackend. That's what they really
should be doing anyway.
Kevin