On 09/25/2014 06:57 PM, Kevin Wolf wrote:
> Am 25.09.2014 um 10:41 hat Alexey Kardashevskiy geschrieben:
>> On 09/24/2014 07:48 PM, Kevin Wolf wrote:
>>> Am 23.09.2014 um 10:47 hat Alexey Kardashevskiy geschrieben:
>>>> On 09/19/2014 06:47 PM, Kevin Wolf wrote:
>>>>> Am 16.09.2014 um 14:59 hat Paolo Bonzini geschrieben:
>>>>>> Il 16/09/2014 14:52, Kevin Wolf ha scritto:
>>>>>>> Yes, that's true. We can't fix this problem in qcow2, though, because
>>>>>>> it's a more general one. I think we must make sure that
>>>>>>> bdrv_invalidate_cache() doesn't yield.
>>>>>>>
>>>>>>> Either by forbidding to run bdrv_invalidate_cache() in a coroutine and
>>>>>>> moving the problem to the caller (where and why is it even called from a
>>>>>>> coroutine?), or possibly by creating a new coroutine for the driver
>>>>>>> callback and running that in a nested event loop that only handles
>>>>>>> bdrv_invalidate_cache() callbacks, so that the NBD server doesn't get a
>>>>>>> chance to process new requests in this thread.
>>>>>>
>>>>>> Incoming migration runs in a coroutine (the coroutine entry point is
>>>>>> process_incoming_migration_co). But everything after qemu_fclose() can
>>>>>> probably be moved into a separate bottom half, so that it gets out of
>>>>>> coroutine context.
>>>>>
>>>>> Alexey, you should probably rather try this (and add a bdrv_drain_all()
>>>>> in bdrv_invalidate_cache) than messing around with qcow2 locks. This
>>>>> isn't a problem that can be completely fixed in qcow2.
>>>>
>>>>
>>>> Ok. Tried :) Not very successful though. The patch is below.
>>>>
>>>> Is that the correct bottom half? When I did it, I started getting crashes
>>>> in various spots on accesses to s->l1_cache which is NULL after
>>>> qcow2_close. Normally the code would check s->l1_size and then use it,
>>>> but they are out of sync.
>>>
>>> No, that's not the place we were talking about.
>>>
>>> What Paolo meant is that in process_incoming_migration_co(), you can
>>> split out the final part that calls bdrv_invalidate_cache_all() into a
>>> BH (you need to move everything until the end of the function into the
>>> BH then). His suggestion was to move everything below the qemu_fclose().
>>
>> Ufff. I took it very literally. Ok. Let it be
>> process_incoming_migration_co(). But there is something I am missing about
>> BHs. Here is a patch:
>
> You need to call qemu_bh_schedule().

Right. Cool. So is below what was suggested? I am double-checking as it does
not solve the original issue - the bottom half is called first and then
nbd_trip() crashes in qcow2_co_flush_to_os().

diff --git a/block.c b/block.c
index d06dd51..1e6dfd1 100644
--- a/block.c
+++ b/block.c
@@ -5037,20 +5037,22 @@ void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
     if (local_err) {
         error_propagate(errp, local_err);
         return;
     }
 
     ret = refresh_total_sectors(bs, bs->total_sectors);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not refresh total sector count");
         return;
     }
+
+    bdrv_drain_all();
 }
 
 void bdrv_invalidate_cache_all(Error **errp)
 {
     BlockDriverState *bs;
     Error *local_err = NULL;
 
     QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
diff --git a/migration.c b/migration.c
index 6db04a6..dc026a9 100644
--- a/migration.c
+++ b/migration.c
@@ -81,49 +81,64 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
     else if (strstart(uri, "unix:", &p))
         unix_start_incoming_migration(p, errp);
     else if (strstart(uri, "fd:", &p))
         fd_start_incoming_migration(p, errp);
 #endif
     else {
         error_setg(errp, "unknown migration protocol: %s", uri);
     }
 }
 
+static QEMUBH *migration_complete_bh;
+static void process_incoming_migration_complete(void *opaque);
+
 static void process_incoming_migration_co(void *opaque)
 {
     QEMUFile *f = opaque;
-    Error *local_err = NULL;
     int ret;
 
     ret = qemu_loadvm_state(f);
     qemu_fclose(f);
     free_xbzrle_decoded_buf();
     if (ret < 0) {
         error_report("load of migration failed: %s", strerror(-ret));
         exit(EXIT_FAILURE);
     }
     qemu_announce_self();
 
     bdrv_clear_incoming_migration_all();
+
+    migration_complete_bh = aio_bh_new(qemu_get_aio_context(),
+                                       process_incoming_migration_complete,
+                                       NULL);
+    qemu_bh_schedule(migration_complete_bh);
+}
+
+static void process_incoming_migration_complete(void *opaque)
+{
+    Error *local_err = NULL;
+
     /* Make sure all file formats flush their mutable metadata */
     bdrv_invalidate_cache_all(&local_err);
     if (local_err) {
         qerror_report_err(local_err);
         error_free(local_err);
         exit(EXIT_FAILURE);
     }
 
     if (autostart) {
         vm_start();
     } else {
         runstate_set(RUN_STATE_PAUSED);
     }
+    qemu_bh_delete(migration_complete_bh);
+    migration_complete_bh = NULL;
 }
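
For reference, Kevin's other idea above (creating a new coroutine for the
driver callback and waiting for it in a nested event loop instead of yielding
from the migration coroutine) could look roughly like the untested sketch
below. The helper name, the InvalidateCacheCo struct and the header paths are
made up for illustration and only assume the existing qemu_coroutine_create(),
qemu_coroutine_enter(), aio_poll() and bdrv_get_aio_context() APIs; it also
does not yet limit the nested loop to invalidate-cache completions, so the NBD
server could still get a chance to run.

/*
 * Illustrative only: run the driver's bdrv_invalidate_cache callback in a
 * separate coroutine and busy-wait on the AioContext until it has finished,
 * so that the caller itself never yields.  Header paths are approximate.
 */
#include "block/block_int.h"
#include "block/coroutine.h"

typedef struct InvalidateCacheCo {
    BlockDriverState *bs;
    Error **errp;
    bool done;
} InvalidateCacheCo;

static void coroutine_fn bdrv_invalidate_cache_co_entry(void *opaque)
{
    InvalidateCacheCo *ico = opaque;

    if (ico->bs->drv && ico->bs->drv->bdrv_invalidate_cache) {
        ico->bs->drv->bdrv_invalidate_cache(ico->bs, ico->errp);
    }
    ico->done = true;
}

static void bdrv_invalidate_cache_nested(BlockDriverState *bs, Error **errp)
{
    InvalidateCacheCo ico = {
        .bs   = bs,
        .errp = errp,
        .done = false,
    };
    Coroutine *co = qemu_coroutine_create(bdrv_invalidate_cache_co_entry);

    qemu_coroutine_enter(co, &ico);
    /*
     * Caveat: this still runs every handler on the context, including the
     * NBD server; a loop limited to invalidate-cache completions would need
     * extra plumbing, as Kevin notes.
     */
    while (!ico.done) {
        aio_poll(bdrv_get_aio_context(bs), true);
    }
}

This is only meant to show the shape of that alternative; the bottom-half
approach in the patch above is what was actually tried here.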
--
Alexey