On Wed, 07/29 13:02, Paolo Bonzini wrote:
> 
> 
> On 29/07/2015 12:57, Fam Zheng wrote:
> > On Wed, 07/29 09:37, Paolo Bonzini wrote:
> >>
> >>
> >> On 29/07/2015 06:42, Fam Zheng wrote:
> >>> @@ -2613,6 +2613,8 @@ bool bdrv_aio_poll(AioContext *ctx, bool blocking)
> >>>  {
> >>>      bool ret;
> >>>  
> >>> +    aio_disable_clients(ctx, AIO_CLIENT_DATAPLANE | AIO_CLIENT_NBD_SERVER);
> >>>      ret = aio_poll(ctx, blocking);
> >>> +    aio_enable_clients(ctx, AIO_CLIENT_DATAPLANE | AIO_CLIENT_NBD_SERVER);
> >>>      return ret;
> >>
> >> This is not enough, because another thread's aio_poll can sneak in
> >> between calls to bdrv_aio_poll if the AioContext lock is released, and
> >> they will use the full set of clients.
> >>
> >> Similar to your v1, it works with the large critical sections we
> >> currently have, but it has the same problem in the longer term.
> > 
> > Can we add more disable/enable pairs around bdrv_drain on top?
> 
> Yes, though I think you'd end up reverting patches 10 and 11 in the end.

We will add outer disable/enable pairs to prevent another thread's aio_poll
from sneaking in between bdrv_aio_poll calls, but we needn't obsolete
bdrv_aio_poll() because of that - it is useful by itself. For example,
bdrv_aio_cancel shouldn't look at ioeventfds, otherwise it could spin for too
long under high load. Does that make sense?
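
For illustration, a rough sketch of what an outer disable/enable pair around a
drain loop could look like. This assumes the aio_disable_clients() /
aio_enable_clients() API and the AIO_CLIENT_* flags from the quoted patch; the
bdrv_requests_pending() loop is only a stand-in for the real bdrv_drain body:

    void bdrv_drain(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        /* Keep dataplane and NBD server fds masked for the whole drain, so
         * another thread's aio_poll cannot dispatch them in between our
         * aio_poll iterations. */
        aio_disable_clients(ctx, AIO_CLIENT_DATAPLANE | AIO_CLIENT_NBD_SERVER);
        while (bdrv_requests_pending(bs)) {
            aio_poll(ctx, true);
        }
        aio_enable_clients(ctx, AIO_CLIENT_DATAPLANE | AIO_CLIENT_NBD_SERVER);
    }

With that in place, bdrv_aio_poll() callers such as bdrv_aio_cancel would
still benefit from the narrower client set even when the AioContext lock is
dropped between calls.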

Fam

> 
> All 11 patches are okay, though it would be great if you could post the
> full series before this is applied.
> 
> Paolo
