On Sat, Oct 14, 2017 at 09:39:21AM -0600, Jens Axboe wrote:
> On 10/14/2017 03:22 AM, Ming Lei wrote:
> > Hi Jens,
> >
> > In a Red Hat internal storage test of the blk-mq scheduler, we found that
> > I/O performance is quite bad with mq-deadline, especially for sequential
> > I/O on some multi-queue SCSI devices (lpfc, qla2xxx, SRP...).
On Sat, Oct 14, 2017 at 07:38:29PM +0200, Oleksandr Natalenko wrote:
> Hi.
>
> By any chance, could this be backported to 4.14? I'm confused by "SCSI:
> allow to pass null rq to scsi_prep_state_check()", since it uses the
> refactored flags:
>
> ===
> if (req && !(req->rq_flags & RQF_PREEMPT))
> ===
>
> Is it safe to revert to REQ_PREEMPT here, or should rq_flags also be
> replaced with [...]
On 10/14/2017 03:22 AM, Ming Lei wrote:
> Hi Jens,
>
> In a Red Hat internal storage test of the blk-mq scheduler, we found that
> I/O performance is quite bad with mq-deadline, especially for sequential
> I/O on some multi-queue SCSI devices (lpfc, qla2xxx, SRP...).
>
> Turns out one big issue causes the performance regression: requests are
> still dequeued [...]