Hi Keith
On 07/19/2018 01:45 AM, Keith Busch wrote:
>>> + list_for_each_entry(q, &set->tag_list, tag_set_list) {
>>> /*
>>> * Request timeouts are handled as a forward rolling timer. If
>>> * we end up here it means that no requests are pending and
>>> @@ -881,7
On Wed, 2018-07-18 at 11:45 -0600, Keith Busch wrote:
> On Wed, Jul 18, 2018 at 05:18:45PM +, Bart Van Assche wrote:
> > On Wed, 2018-07-18 at 11:00 -0600, Keith Busch wrote:
> > > - cancel_work_sync(&q->timeout_work);
> > > -
> > > if (q->mq_ops) {
> > > struct blk_mq_hw_ctx *hctx;
>
On Wed, Jul 18, 2018 at 05:18:45PM +, Bart Van Assche wrote:
> On Wed, 2018-07-18 at 11:00 -0600, Keith Busch wrote:
> > - cancel_work_sync(&q->timeout_work);
> > -
> > if (q->mq_ops) {
> > struct blk_mq_hw_ctx *hctx;
> > int i;
> > @@ -415,6 +412,8 @@ void
On Wed, 2018-07-18 at 11:00 -0600, Keith Busch wrote:
> void blk_sync_queue(struct request_queue *q)
> {
> - del_timer_sync(&q->timeout);
> - cancel_work_sync(&q->timeout_work);
> -
> if (q->mq_ops) {
> struct blk_mq_hw_ctx *hctx;
> int i;
> @@ -415,6 +412,8
This patch removes the per-request_queue timeout handling and uses the
tagset instead. This simplifies the timeout handling, since a shared tag
set can handle all timed out requests in a single work item rather than
iterating the same set in separate work items for each user of that set.
The long