On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> On 9/8/23 8:34 AM, Ming Lei wrote:
> > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> >> On 9/8/23 3:30 AM, Ming Lei wrote:
> >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> >>> index ad636954abae..95a3d31a1ef1 100644
> >>> --- a/io_uring/io_uring.c
> >>> +++ b/io_uring/io_uring.c
> >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> >>>           }
> >>>   }
> >>>  
> >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> >>> +         issue_flags |= IO_URING_F_NONBLOCK;
> >>> +
> >>
> >> I think this comment deserves to be more descriptive. Normally we
> >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > 
> > Yeah, we didn't do that until commit 2bc057692599 ("block: don't make
> > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the
> > responsibility/risk up to io_uring.
> > 
> >> is the issuer and not necessarily the poller of it. That generally falls
> >> upon the original issuer to poll these requests.
> >>
> >> I think this should be a separate commit, coming before the main fix
> >> which is below.
> > 
> > Looks fine. Actually the IO_URING_F_NONBLOCK change isn't a must, and
> > the approach in V2 doesn't need it.
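
For reference, a more descriptive version of that comment could read roughly
like this (sketch only, the wording is mine rather than actual patch text):

	/*
	 * Polled IO must normally never block: whoever issues it is also
	 * expected to poll for its completion.  Here io-wq issues on
	 * behalf of the original submitter, so it may not be the one
	 * polling the request; force non-blocking issue so an io-wq
	 * worker cannot get stuck waiting on a completion that only the
	 * original submitter will reap.
	 */
	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
		issue_flags |= IO_URING_F_NONBLOCK;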
> > 
> >>
> >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> >>>           finish_wait(&tctx->wait, &wait);
> >>>   } while (1);
> >>>  
> >>> + /*
> >>> +  * Reap events from each ctx, otherwise these requests may take
> >>> +  * resources and prevent other contexts from being moved on.
> >>> +  */
> >>> + xa_for_each(&tctx->xa, index, node)
> >>> +         io_iopoll_try_reap_events(node->ctx);
> >>
> >> The main issue here is that if someone isn't polling for them, then we
> > 
> > That is actually what this patch is addressing, :-)
> 
> Right, that part is obvious :)
> 
> >> get to wait for a timeout before they complete. This can delay exit, for
> >> example, as we're now just waiting 30 seconds (or whatever the timeout
> >> is on the underlying device) for them to get timed out before exit can
> >> finish.
> > 
> > For the issue on null_blk, the device timeout handler provides forward
> > progress: timed-out requests are released, so new IO can be handled.
> > 
> > However, not all devices support timeouts; virtio devices, for example, do not.
> 
> That's a bug in the driver, you cannot sanely support polled IO and not
> be able to deal with timeouts. Someone HAS to reap the requests and
> there are only two things that can do that - the application doing the
> polled IO, or if that doesn't happen, a timeout.

OK, then the device driver's timeout handler has the new responsibility of
covering userspace accidents, :-)

We may need to document this requirement for drivers.

So far the only such driver should be virtio-blk, and neither of the two
virtio storage drivers implements a timeout handler.
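
If we do document it, the expected hook is blk_mq_ops->timeout; a minimal
sketch is below (my_timeout_rq and my_mq_ops are made-up names, and a real
driver such as virtio-blk would first try to cancel the request in the
device before failing it):

	#include <linux/blk-mq.h>

	/*
	 * Sketch only: fail a stuck (possibly polled) request so its tag
	 * and resources are released even if userspace never polls it.
	 */
	static enum blk_eh_timer_return my_timeout_rq(struct request *rq)
	{
		blk_mq_end_request(rq, BLK_STS_TIMEOUT);
		return BLK_EH_DONE;
	}

	static const struct blk_mq_ops my_mq_ops = {
		/* .queue_rq, .poll, ... elided */
		.timeout	= my_timeout_rq,
	};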

> 
> > Here we just call io_iopoll_try_reap_events() to poll the submitted IOs
> > and release their resources, so there is no need to rely on the device
> > timeout handler any more, and the extra exit delay can be avoided.
> > 
> > But io_iopoll_try_reap_events() may not be enough: the io_wq associated
> > with the current context can grab the released resources immediately and
> > submit new IOs successfully, yet nobody is left to poll these newly
> > submitted IOs, so all device resources can end up held by this (freed)
> > io_wq for nothing.
> > 
> > I guess we may have to take the approach in patch V2, canceling only
> > polled IO, to avoid the thread_exit regression, or do you have other ideas?
> 
> Ideally the behavior seems like it should be that if a task goes away,
> any pending polled IO it has should be reaped. With the above notion
> that a driver supporting poll absolutely must be able to deal with
> timeouts, it's not a strict requirement as we know that requests will be
> reaped.

Then it looks like the io_uring fix is less important, and I will see if an
easy fix can be figured out; one way is to reap events when exiting both the
current task and the associated io_wq.
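
Roughly, the idea would be something like the sketch below (untested, the
helper name is made up), called from io_uring_cancel_generic() and again
when the task's io-wq workers are torn down:

	/*
	 * Sketch: walk every ctx this task has touched and reap any
	 * outstanding polled completions so their resources are freed.
	 */
	static void my_reap_all_iopoll(struct io_uring_task *tctx)
	{
		struct io_tctx_node *node;
		unsigned long index;

		xa_for_each(&tctx->xa, index, node)
			io_iopoll_try_reap_events(node->ctx);
	}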

Thanks,
Ming
