On 03/02/2017 04:32 PM, Paolo Bonzini wrote:
> 
> 
> On 02/03/2017 15:49, Cornelia Huck wrote:
>> On Thu, 2 Mar 2017 14:04:22 +0100
>> Halil Pasic <pa...@linux.vnet.ibm.com> wrote:
>> 
>>> diff --git a/hw/block/dataplane/virtio-blk.c
>>> b/hw/block/dataplane/virtio-blk.c
>>> index 5556f0e..13dd14d 100644
>>> --- a/hw/block/dataplane/virtio-blk.c
>>> +++ b/hw/block/dataplane/virtio-blk.c
>>> @@ -258,9 +258,16 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
>>>          virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, NULL);
>>>      }
>>> 
>>> -    /* Drain and switch bs back to the QEMU main loop */
>>> +    /* Drain and switch bs back to the QEMU main loop. After drain, the
>>> +     * device will not submit (nor comple) any requests until dataplane
>> 
>> s/comple/complete/

Will fix.

>>> +     * starts again.
>>> +     */
>>>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());

I'm wondering whether synchronization is needed for batch_notify_vqs. I
think the set_bit can happen in the iothread, but the notify_guest_bh
below runs in the main event loop. Is it OK like this (could we miss
bits set in the other thread)? Does blk_set_aio_context include a
barrier?

>>> 
>>> +    /* Notify guest before the guest notifiers get cleaned up */
>>> +    qemu_bh_cancel(s->bh);
>>> +    notify_guest_bh(s);
>>> +
>> 
>> Hm... does virtio-scsi dataplane need a similar treatment? Or am I
>> missing something?
> 
> No, the BH optimization is specific to virtio-blk. Thanks for the patch
> Halil!
> 

You are welcome ;)!

Regards,
Halil