On Mon, Aug 17, 2020 at 12:09:18PM +0800, Baolin Wang wrote:
>               unsigned int nr_segs)
>  {
> @@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
>                       !list_empty_careful(&ctx->rq_lists[type])) {
>               /* default per sw-queue merge */
>               spin_lock(&ctx->lock);
> -             ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
> +             /*
> +              * Reverse check our software queue for entries that we could
> +              * potentially merge with. Currently includes a hand-wavy stop
> +              * count of 8, to not spend too much time checking for merges.
> +              */
> +             if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
> +                     ctx->rq_merged++;
> +                     ret = true;
> +             }
> +
>               spin_unlock(&ctx->lock);

This adds an overly long line.  That being said, the whole thing could
be nicely simplified to:

        ...

        if (e && e->type->ops.bio_merge)
                return e->type->ops.bio_merge(hctx, bio, nr_segs);

        if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
            list_empty_careful(&ctx->rq_lists[hctx->type]))
                return false;

        /*
         * Reverse check our software queue for entries that we could
         * potentially merge with. Currently includes a hand-wavy stop count of
         * 8, to not spend too much time checking for merges.
         */
        spin_lock(&ctx->lock);
        ret = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs);
        if (ret)
                ctx->rq_merged++;
        spin_unlock(&ctx->lock);

        return ret;

Also I think it would make sense to move the locking into
blk_mq_bio_list_merge.
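
Something along these lines (just a sketch; __blk_mq_bio_list_merge() is a
made-up name here for the current unlocked list walk), so the caller no
longer has to take ctx->lock or bump the merge stats itself:

        /*
         * Sketch only: take ctx->lock inside the helper.
         * __blk_mq_bio_list_merge() stands for today's unlocked walk of up
         * to 8 requests (name invented for illustration).
         */
        bool blk_mq_bio_list_merge(struct request_queue *q, struct blk_mq_ctx *ctx,
                        struct list_head *list, struct bio *bio,
                        unsigned int nr_segs)
        {
                bool merged;

                spin_lock(&ctx->lock);
                merged = __blk_mq_bio_list_merge(q, list, bio, nr_segs);
                if (merged)
                        ctx->rq_merged++;
                spin_unlock(&ctx->lock);

                return merged;
        }

Callers that serialize the list with their own lock (kyber, for instance)
could keep using the unlocked helper directly.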
