On Wed, Nov 15, 2017 at 06:21:11PM -0500, John Ferlan wrote:
> On 11/07/2017 08:38 AM, John Ferlan wrote:
>
>
> On 11/07/2017 04:36 AM, Ján Tomko wrote:
>> On Mon, Nov 06, 2017 at 03:53:08PM -0500, John Ferlan wrote:
>>> Resolve a storage driver crash as a result of a long running
>>> storageVolCreateXML when the virStorageVolPoolRefreshThread is
>>> run as a result of a storageVolUpload complete and ran the
>>> virStoragePoolObjClearVols without checking if the creation
>>> code was currently processing a buildVol after incrementing
>>> the driver->asyncjob count.
>>
>> I thought the whole point of refreshing the pool in a thread
>> was to have the storage driver usable for other things.
>> This effectively disables pool refresh during long-running
>> jobs.
>>
> Jan -
>
> So I'm curious to know whether your comments amount to a NAK of the
> design of the patches or if the explanation provided here was sufficient?
NACK to the use of usleep for this.
I loathe usleep. It is reasonable for waiting on something external or racy
things like session daemon startup, but I do not see why it is necessary
here.
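To illustrate the alternative, here is a minimal sketch using plain pthreads (not libvirt's virCond wrappers, and the Pool struct and field names are invented for this example): a waiter blocks on a condition variable until the async job count drops to zero, instead of polling with usleep.

```c
#include <pthread.h>

/* Hypothetical illustration only: wait for an async job count to reach
 * zero with a condition variable rather than a usleep() polling loop. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int asyncjobs;
} Pool;

static void
pool_job_finish(Pool *p)
{
    pthread_mutex_lock(&p->lock);
    p->asyncjobs--;
    if (p->asyncjobs == 0)
        pthread_cond_broadcast(&p->cond);  /* wake any waiters */
    pthread_mutex_unlock(&p->lock);
}

static void
pool_wait_idle(Pool *p)
{
    pthread_mutex_lock(&p->lock);
    while (p->asyncjobs > 0)
        pthread_cond_wait(&p->cond, &p->lock);  /* sleeps; no busy-wait */
    pthread_mutex_unlock(&p->lock);
}
```

The waiter consumes no CPU while blocked and wakes as soon as the last job signals, which is the usual argument against sleep-and-retry loops.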
> Tks -
> John
[...]
>
> An "alternative" is to refresh the pool after a create vol rather than
> just altering the pool available/allocation values from the created
> volume target allocation. That way the VolPoolRefreshThread could just
> "exit" knowing that volCreate will do the update. However, this refresh
> could only happen when there are no asyncjobs outstanding. In the mean
> time the pool sizing values probably would still need to be updated if
> asyncjobs > 0 to avoid multiple create jobs from "exhausting" the pool
> because the sizing values weren't properly updated.
>
I did not realize VolPoolRefreshThread was only called after a volume
upload. I think quietly doing nothing in this case would be reasonable,
considering that an explicit storagePoolRefresh errors out while async
jobs are running.
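To make the "quietly do nothing" idea concrete, here is a hedged toy model (the struct and names are invented, not libvirt's actual virStoragePoolObj API): the refresh path simply bails out when a build job is in flight, on the assumption that the builder updates the pool sizing values itself when it finishes.

```c
#include <stdbool.h>

/* Toy model only: 'asyncjobs' stands in for driver->asyncjob, and the
 * real code would run virStoragePoolObjClearVols() plus a backend
 * refresh where the comment is. */
typedef struct {
    int asyncjobs;   /* in-flight buildVol jobs */
    int refreshes;   /* refreshes that actually ran */
} PoolModel;

/* Returns true if the refresh ran, false if it quietly did nothing. */
static bool
pool_refresh_thread(PoolModel *pool)
{
    if (pool->asyncjobs > 0)
        return false;    /* builder will update sizing values itself */
    /* ...clear the volume list and re-scan the pool here... */
    pool->refreshes++;
    return true;
}
```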
As far as noble goals go, this is volUpload being more aggressive
in keeping the pool allocation/capacity values up to date.
Since volCreate only changes the values based on the request by
the user, it does not take into account how much space is required
on-disk (e.g. metadata for LUKS and qcow2 volumes).
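As a toy illustration of the undercount (the overhead constant below is a made-up placeholder, not a real qcow2 or LUKS figure; actual overhead depends on cluster size, L1/L2 tables, key material, and so on):

```c
#include <stdint.h>

/* Placeholder metadata overhead for a hypothetical volume format. */
#define FAKE_FORMAT_OVERHEAD ((uint64_t)1 << 20)  /* pretend 1 MiB */

/* The pool actually loses the requested capacity plus format metadata,
 * so updating the pool by requested_capacity alone undercounts. */
static uint64_t
on_disk_estimate(uint64_t requested_capacity)
{
    return requested_capacity + FAKE_FORMAT_OVERHEAD;
}
```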
I am not sure how to reasonably solve this without refreshing the
whole pool every time (which gives us the advantage of catching
changes out of our scope - e.g. a fs pool that does not reside
on a separate partition).
Jan