On Thu, Dec 06, 2012 at 08:35:55AM +0100, Paolo Bonzini wrote:
> On 05/12/2012 21:47, Stefan Hajnoczi wrote:
> > +
> > +/* Block until pending requests have completed
> > + *
> > + * The vring continues to be serviced so ensure no new requests will be added
> > + * to avoid races.
> > + */
> > +void virtio_blk_data_plane_drain(VirtIOBlockDataPlane *s)
> > +{
> > +    qemu_mutex_lock(&s->num_reqs_lock);
> > +    while (s->num_reqs > 0) {
> > +        qemu_cond_wait(&s->no_reqs_cond, &s->num_reqs_lock);
> > +    }
> > +    qemu_mutex_unlock(&s->num_reqs_lock);
> > +}
>
> Hi Stefan,
>
> so this was not changed from v4?
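
For context, the idea is that the request completion path pairs with this
drain function by decrementing num_reqs and broadcasting no_reqs_cond when
it reaches zero.  Roughly like this (a sketch only, not a hunk from the
series; complete_request() is just a placeholder name, the fields are the
ones from the hunk above):

  static void complete_request(VirtIOBlockDataPlane *s)
  {
      qemu_mutex_lock(&s->num_reqs_lock);
      if (--s->num_reqs == 0) {
          /* wake up anyone blocked in virtio_blk_data_plane_drain() */
          qemu_cond_broadcast(&s->no_reqs_cond);
      }
      qemu_mutex_unlock(&s->num_reqs_lock);
  }
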
BTW I should go into slightly more detail about why I stopped short of
implementing the notify+join approach.

notify+join means stopping the event loop and data plane thread so that the
caller is sure that virtio-blk-data-plane is quiesced.

Unfortunately this doesn't map nicely to bdrv_drain_all(), where the caller
holds the global mutex, quiesces I/O, and then performs a critical
operation.  I/O resumes after the caller returns or releases the global
mutex:

  bdrv_drain_all();
  critical_operation();
  return; /* now it's okay to process I/O again */

We cannot use notify+join here because bdrv_drain_all() would stop the data
plane thread but nothing restarts it!  Perhaps we'd need a "resume" call
after the critical operation so that the data plane thread is restarted
(see the sketch below) - but this sounds invasive and is a departure from
how existing I/O and emulated devices work.

Stefan
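
P.S. To spell out what I mean by a "resume" call: notify+join would impose
a calling pattern roughly like this on every bdrv_drain_all() user (sketch
only; bdrv_resume_all() is a made-up name and does not exist):

  bdrv_drain_all();      /* would stop and join the data plane thread */
  critical_operation();
  bdrv_resume_all();     /* hypothetical: someone must restart the thread */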