On 25.05.2018 at 00:53, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
> blk_aio_flush_entry()
>  blk_co_flush(blk)
>   bdrv_co_flush(blk->root->bs)
>    ...
>     qemu_coroutine_yield()
>
> and let the HMP command run, free blk->root and give control
> back to the AIO flush:
>
> hmp_drive_del()
>  blk_remove_bs()
>   bdrv_root_unref_child(blk->root)
>    child_bs = blk->root->bs
>    bdrv_detach_child(blk->root)
>     bdrv_replace_child(blk->root, NULL)
>      blk->root->bs = NULL
>     g_free(blk->root) <============== blk->root becomes stale
>    bdrv_unref(child_bs)
>     bdrv_delete(child_bs)
>      bdrv_close()
>       bdrv_drained_begin()
>        bdrv_do_drained_begin()
>         bdrv_drain_recurse()
>          aio_poll()
>           ...
>            qemu_coroutine_switch()
>
> and the AIO flush completion ends up dereferencing blk->root:
>
> blk_aio_complete()
>  scsi_aio_complete()
>   blk_get_aio_context(blk)
>    bs = blk_bs(blk)
>    ie, bs = blk->root ? blk->root->bs : NULL
>             ^^^^^^^^^
>             stale
>
> The problem is that we should avoid making block driver graph
> changes while we have in-flight requests. This patch hence adds
> a drained section to bdrv_detach_child(), so that we're sure
> all requests have been drained before blk->root becomes stale.
>
> Signed-off-by: Greg Kurz <gr...@kaod.org>
> ---
> v3: - start drained section before modifying the graph (Stefan)
>
> v2: - drain I/O requests when detaching the BDS (Stefan, Paolo)
> ---
>  block.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/block.c b/block.c
> index 501b64c8193f..715c1b56c1e2 100644
> --- a/block.c
> +++ b/block.c
> @@ -2127,12 +2127,16 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
>
>  static void bdrv_detach_child(BdrvChild *child)
>  {
> +    BlockDriverState *child_bs = child->bs;
> +
> +    bdrv_drained_begin(child_bs);
>      if (child->next.le_prev) {
>          QLIST_REMOVE(child, next);
>          child->next.le_prev = NULL;
>      }
>
>      bdrv_replace_child(child, NULL);
> +    bdrv_drained_end(child_bs);
>
>      g_free(child->name);
>      g_free(child);
I wonder if the better fix would be calling blk_drain() in blk_remove_bs() (which would also better be blk_drained_begin/end...).

Doing the proposed change in bdrv_detach_child() should fix the problem that you're seeing, but at first sight it promises that callers don't have to care about shutting down their activity on the child node first. This isn't necessarily correct if the parent may still issue a new request (e.g. in response to the completion of an old one). What really needs to be drained is the parent's use of the child, not the activity of the child.

Another minor problem with your approach: if a child node is used by more than one parent, this patch would unnecessarily quiesce those other parents and wait for the completion of their requests.

Kevin
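The alternative being suggested, drained at the BlockBackend (parent) level rather than at the child node, might look roughly like the following. This is a sketch only, not a tested patch: blk_remove_bs(), blk_drain() and bdrv_root_unref_child() are real QEMU functions of that era, but the surrounding body of blk_remove_bs() is elided and the exact placement is an assumption.

```c
/* Sketch (untested): quiesce only this parent's requests before
 * detaching its root child, instead of draining the child node
 * for all of its parents. */
void blk_remove_bs(BlockBackend *blk)
{
    /* Wait for this BlockBackend's in-flight requests; other
     * parents of the same child node keep running.  As noted,
     * a blk_drained_begin/end section would be better still,
     * since a one-shot drain does not stop the parent from
     * issuing new requests afterwards. */
    blk_drain(blk);

    /* ... rest of blk_remove_bs() unchanged ... */

    bdrv_root_unref_child(blk->root);   /* frees blk->root */
    blk->root = NULL;
}
```

This addresses both objections: it drains the parent's use of the child rather than the child's overall activity, and it leaves other parents of a shared child node unquiesced.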