Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-19 Thread Ming Lei
On Tue, Sep 19, 2017 at 12:25:15PM -0700, Omar Sandoval wrote:
> On Sat, Sep 02, 2017 at 11:17:15PM +0800, Ming Lei wrote:
> > [... cover letter snipped; quoted in full at the bottom of the thread ...]
> 
> Sorry it took so long; I've reviewed or commented on patches 1-6. When
> you send v5, could you send just patches 1-6 and split the rest into
> their own series?

Sure, no problem.

Thanks for your review!

-- 
Ming


Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-19 Thread Omar Sandoval
On Sat, Sep 02, 2017 at 11:17:15PM +0800, Ming Lei wrote:
> Hi,
> 
> In Red Hat's internal storage tests of the blk-mq scheduler, we
> found that I/O performance is much worse with mq-deadline, especially
> for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> SRP...).
> 
> It turns out that one big issue causes the performance regression:
> requests are still dequeued from the sw queue/scheduler queue even
> when the LLD's (low-level driver's) queue is busy, so I/O merging
> becomes quite difficult, and sequential I/O degrades a lot.
> 
> The first five patches improve this situation and recover some of
> the lost performance.

Sorry it took so long; I've reviewed or commented on patches 1-6. When
you send v5, could you send just patches 1-6 and split the rest into
their own series?


Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-06 Thread Tom Nguyen
Likewise, no problems on my work laptop after 4 days of uptime.

Tested-by: Tom Nguyen 


On 09/07/2017 04:09 AM, Oleksandr Natalenko wrote:
> Feel free to add:
>
> Tested-by: Oleksandr Natalenko 
>
> since I'm running this on 4 machines without issues.
>
>> Hi Jens,
>>
>> Ping...




Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-06 Thread Oleksandr Natalenko
Feel free to add:

Tested-by: Oleksandr Natalenko 

since I'm running this on 4 machines without issues.

> Hi Jens,
>
> Ping...


Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-06 Thread Ming Lei
On Tue, Sep 05, 2017 at 09:39:51AM +0800, Ming Lei wrote:
> On Mon, Sep 04, 2017 at 11:12:49AM +0200, Paolo Valente wrote:
> > 
> > > On 2 Sep 2017, at 17:17, Ming Lei wrote:
> > > [... full V4 cover letter snipped; quoted in full at the bottom of the thread ...]
> > 
> > Tested-by: Paolo Valente 
> 
> Hi Jens,
> 
> Is there any chance of getting this patchset merged for v4.14?

Hi Jens,

Ping...

Thanks,
Ming


Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-04 Thread Ming Lei
On Mon, Sep 04, 2017 at 11:12:49AM +0200, Paolo Valente wrote:
> 
> > On 2 Sep 2017, at 17:17, Ming Lei wrote:
> > [... full V4 cover letter snipped; quoted in full at the bottom of the thread ...]
> 
> Tested-by: Paolo Valente 

Hi Jens,

Is there any chance of getting this patchset merged for v4.14?


Thanks,
Ming


Re: [PATCH V4 00/14] blk-mq-sched: improve SCSI-MQ performance

2017-09-04 Thread Paolo Valente

> On 2 Sep 2017, at 17:17, Ming Lei wrote:
> 
> Hi,
> 
> In Red Hat's internal storage tests of the blk-mq scheduler, we
> found that I/O performance is much worse with mq-deadline, especially
> for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
> SRP...).
> 
> It turns out that one big issue causes the performance regression:
> requests are still dequeued from the sw queue/scheduler queue even
> when the LLD's (low-level driver's) queue is busy, so I/O merging
> becomes quite difficult, and sequential I/O degrades a lot.
> 
> The first five patches improve this situation and recover some of
> the lost performance.
> 
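The idea behind those first five patches, roughly: once the driver reports
busy, stop pulling requests out of the sw/scheduler queue, so that later bios
can still merge with what remains queued. A minimal sketch of that idea; the
helper names (hctx_driver_busy, pick_one_request, queue_rq_to_driver,
mark_dispatch_busy) are hypothetical, not the actual blk-mq API:

	static void dispatch_until_busy(struct blk_mq_hw_ctx *hctx)
	{
		while (!hctx_driver_busy(hctx)) {
			struct request *rq = pick_one_request(hctx);

			if (!rq)
				break;
			if (queue_rq_to_driver(rq) == BLK_STS_RESOURCE) {
				/*
				 * The LLD is busy: mark the hctx and leave
				 * the remaining requests queued, where they
				 * stay visible as merge candidates for
				 * incoming bios.
				 */
				mark_dispatch_busy(hctx);
				break;
			}
		}
	}
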
> Patches 6 and 7 use q->queue_depth as a hint for setting up the
> scheduler queue depth.
> 
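A rough sketch of that hint (assumed shape, not the exact patch): prefer the
device's q->queue_depth, when one is set, over the hardware tag-set depth
when sizing the scheduler's request pool:

	static unsigned int blk_mq_sched_queue_depth(struct request_queue *q)
	{
		/*
		 * q->queue_depth is set by drivers (e.g. SCSI) that have a
		 * real per-device queue depth, so when available it is a
		 * better hint than the tag-set depth.
		 */
		unsigned int depth = q->queue_depth ?: q->tag_set->queue_depth;

		/* keep the usual "2x depth, capped" sizing for nr_requests */
		return 2 * min_t(unsigned int, depth, BLKDEV_MAX_RQ);
	}
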
> Patches 8 ~ 14 improve bio merging via a hash table in the sw queue,
> which makes bio merging more efficient than the current approach, in
> which only the last 8 requests are checked. Since patches 6 ~ 14
> convert SCSI devices to the scheduler's way of dequeuing one request
> from the sw queue at a time, ctx->lock is acquired more often;
> merging bios via the hash table reduces the hold time of ctx->lock
> and should eliminate the effect of patch 14.
> 
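Conceptually, each queued request is hashed by the sector at which it ends,
so finding a back-merge candidate for a new bio is a single bucket walk
rather than a scan over the last 8 requests. An illustrative sketch using
the kernel's hashtable helpers; the rqhash_* names approximate the helpers
the series introduces rather than copying them:

	#include <linux/hashtable.h>

	#define RQ_HASH_BITS	6

	/* key: first sector after the request, where a back merge attaches */
	#define rq_hash_key(rq)	(blk_rq_pos(rq) + blk_rq_sectors(rq))

	static void rqhash_add(struct hlist_head *hash, struct request *rq)
	{
		hlist_add_head(&rq->hash,
			       &hash[hash_min(rq_hash_key(rq), RQ_HASH_BITS)]);
	}

	static struct request *rqhash_find(struct hlist_head *hash,
					   sector_t offset)
	{
		struct request *rq;

		hlist_for_each_entry(rq,
				     &hash[hash_min(offset, RQ_HASH_BITS)],
				     hash)
			if (rq_hash_key(rq) == offset)
				return rq;
		return NULL;
	}
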
> With these changes, SCSI-MQ sequential I/O performance improves a
> lot. Paolo reported that mq-deadline performance improved much [2]
> in his dbench tests with V2, and a performance improvement on
> lpfc/qla2xxx was observed with V1 [1].
> 
> Also, Bart worried that this patchset might affect SRP, so test data
> on SCSI SRP is provided this time:
> 
> - fio (libaio, bs: 4k, direct I/O, queue_depth: 64, 64 jobs)
> - system (16 cores, dual socket, 96 GB RAM)
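
For reference, a fio invocation matching those parameters might look like
this (the device path is a placeholder):

	fio --name=seqread --rw=read --ioengine=libaio --direct=1 --bs=4k \
	    --iodepth=64 --numjobs=64 --filename=/dev/sdX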
> 
>           | v4.13-rc6+* | v4.13-rc6+  | patched v4.13-rc6+
> ----------+-------------+-------------+-------------------
> IOPS(K)   |  DEADLINE   |    NONE     |        NONE
> ----------+-------------+-------------+-------------------
> read      |   587.81    |   511.96    |       518.51
> randread  |   116.44    |   142.99    |       142.46
> write     |   580.87    |   536.4     |       582.15
> randwrite |   104.95    |   124.89    |       123.99
> 
> 
>           | v4.13-rc6+  | v4.13-rc6+  | patched v4.13-rc6+
> ----------+-------------+-------------+-------------------
> IOPS(K)   |  DEADLINE   | MQ-DEADLINE |    MQ-DEADLINE
> ----------+-------------+-------------+-------------------
> read      |   587.81    |   158.7     |       450.41
> randread  |   116.44    |   142.04    |       142.72
> write     |   580.87    |   136.61    |       569.37
> randwrite |   104.95    |   123.14    |       124.36
> 
> *: v4.13-rc6+ means v4.13-rc6 with block for-next
> 
> 
> Please consider merging this for v4.14.
> 
> [1] http://marc.info/?l=linux-block&m=150151989915776&w=2
> [2] https://marc.info/?l=linux-block&m=150217980602843&w=2
> 
> V4:
>   - add Reviewed-by tags
>   - some trivial changes: typo fixes in commit logs or comments and
>   variable renames, no actual functional change
> 
> V3:
>   - use full round-robin for picking requests from ctxs, as
>   suggested by Bart
>   - remove one local variable in __sbitmap_for_each_set()
>   - drop the single-dispatch-list patches, which can improve
>   performance on mq-deadline but cause a slight degradation on
>   'none' because all hctxs need to be checked after ->dispatch
>   is flushed; will post them again once they are mature
>   - rebase on v4.13-rc6 with block for-next
> 
> V2:
>   - dequeue requests from sw queues in round-robin style, as
>   suggested by Bart, and introduce one sbitmap helper for this
>   purpose (sketched after this changelog)
>   - improve bio merging via a hash table in the sw queue
>   - add comments about using the DISPATCH_BUSY state in a lockless
>   way, simplifying the handling of the busy state
>   - hold ctx->lock when clearing the ctx busy bit, as suggested
>   by Bart
> 
> 
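The round-robin dequeue mentioned in the V2/V3 notes, roughly: the new
__sbitmap_for_each_set() helper scans the ctx map starting just after the
last-served ctx and wraps around, while a callback takes at most one request
per scan. A sketch modeled on that idea; field and variable names are
approximations of the 4.13-era blk-mq internals, not the patch itself:

	struct dispatch_data {
		struct blk_mq_hw_ctx	*hctx;
		struct request		*rq;	/* NULL until one is taken */
	};

	/* return false to stop the scan once one request has been taken */
	static bool dispatch_rq_from_ctx(struct sbitmap *sb,
					 unsigned int bitnr, void *data)
	{
		struct dispatch_data *d = data;
		struct blk_mq_ctx *ctx = d->hctx->ctxs[bitnr];

		spin_lock(&ctx->lock);
		if (!list_empty(&ctx->rq_list)) {
			d->rq = list_first_entry(&ctx->rq_list,
						 struct request, queuelist);
			list_del_init(&d->rq->queuelist);
		}
		spin_unlock(&ctx->lock);

		return !d->rq;
	}

	/* caller: start just after the last-served ctx -> round robin */
	__sbitmap_for_each_set(&hctx->ctx_map, start,
			       dispatch_rq_from_ctx, &data);
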

Tested-by: Paolo Valente 

> Ming Lei (14):
>  blk-mq-sched: fix scheduler bad performance
>  sbitmap: introduce __sbitmap_for_each_set()
>  blk-mq: introduce blk_mq_dispatch_rq_from_ctx()
>  blk-mq-sched: move actual dispatching into one helper
>  blk-mq-sched: improve dispatching from sw queue
>  blk-mq-sched: don't dequeue request until all in ->dispatch are
>flushed
>  blk-mq-sched: introduce blk_mq_sched_queue_depth()
>  blk-mq-sched: use q->queue_depth as hint for q->nr_requests
>  block: introduce rqhash helpers
>  block: move actual bio merge code into __elv_merge
>  block: add check on elevator for supporting bio merge via hashtable
>from blk-mq sw queue
>  block: introduce .last_merge and .hash to blk_mq_ctx