On Tue, Aug 22, 2017 at 08:57:03PM +0000, Bart Van Assche wrote:
> On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> > +static inline void blk_mq_do_dispatch_ctx(struct request_queue *q,
> > +                                     struct blk_mq_hw_ctx *hctx)
> > +{
> > +   LIST_HEAD(rq_list);
> > +   struct blk_mq_ctx *ctx = NULL;
> > +
> > +   do {
> > +           struct request *rq;
> > +
> > +           rq = blk_mq_dispatch_rq_from_ctx(hctx, ctx);
> > +           if (!rq)
> > +                   break;
> > +           list_add(&rq->queuelist, &rq_list);
> > +
> > +           /* round robin for fair dispatch */
> > +           ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
> > +   } while (blk_mq_dispatch_rq_list(q, &rq_list));
> > +}
> 
> An additional question about this patch: shouldn't request dequeuing start
> at the software queue next to the last one from which a request got dequeued
> instead of always starting at the first software queue (struct blk_mq_ctx
> *ctx = NULL) to be truly round robin?

Looks like a good idea. I will introduce a ctx hint in hctx that records
where dispatch should start next, updated when blk_mq_dispatch_rq_list()
returns zero. No lock is needed since it is just a hint.

-- 
Ming
