On 17/03/2016 13:39, Christian Borntraeger wrote:
> As an interesting side note, I updated my system from F20 to F23 some days
> ago (after the initial report), while Tu Bo is still on an F20 system. I was
> not able to reproduce the original crash on F23, but going back to F20 made
> this problem re-appear.
>
>     Stack trace of thread 26429:
>     #0  0x00000000802008aa tracked_request_begin (qemu-system-s390x)
>     #1  0x0000000080203f3c bdrv_co_do_preadv (qemu-system-s390x)
>     #2  0x000000008020567c bdrv_co_do_readv (qemu-system-s390x)
>     #3  0x000000008025d0f4 coroutine_trampoline (qemu-system-s390x)
>     #4  0x000003ff943d150a __makecontext_ret (libc.so.6)
>
> This is with patches 2-4 plus the removal of virtio_queue_host_notifier_read.
>
> Without removing virtio_queue_host_notifier_read, I get the same mutex
> lockup (as expected).
>
> Maybe we have two independent issues here, and this is some old bug in glibc
> or whatever?
I'm happy to try to reproduce this on x86 if you give me some instructions
(RHEL7 should be close enough to Fedora 20).

Can you add an assert in virtio_blk_handle_output to catch reentrancy, like
this?

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index a7ec572..96ea896 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -591,6 +591,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
         return;
     }
 
+    int x = atomic_fetch_inc(&s->test);
+    assert(x == 0);
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -602,6 +604,7 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     }
 
     blk_io_unplug(s->blk);
+    atomic_dec(&s->test);
 }
 
 static void virtio_blk_dma_restart_bh(void *opaque)
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index ae84d92..6472503 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -48,6 +48,7 @@ typedef struct VirtIOBlock {
     BlockBackend *blk;
     VirtQueue *vq;
     void *rq;
+    int test;
     QEMUBH *bh;
     VirtIOBlkConf conf;
     unsigned short sector_mask;

Paolo