On 7/15/19 12:18 PM, Paolo Valente wrote:
Didn't I simply move it forward in that commit?
On 15 Jul 2019, at 12:16, Holger Hoffstätte wrote:
Paolo,
The function idling_needed_for_service_guarantees() was just removed in
5.3-commit
3726112ec731 ("block, bfq: re-schedule empty queues if they deserve I/O
plugging").
See [1].
cheers
Holger
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/block/bfq-iosch
On 5/11/19 1:17 AM, Eric Wheeler wrote:
On Fri, 10 May 2019, Sasha Levin wrote:
On Fri, May 10, 2019 at 10:56:32AM -0700, Eric Wheeler wrote:
From: Jens Axboe
commit 77f1e0a52d26242b6c2dba019f6ebebfb9ff701e upstream
A previous commit moved the shallow depth and BFQ depth map calculations
to
On 3/7/19 5:25 PM, Paolo Valente wrote:
Hi,
since I didn't make it to submit these ones for 5.1, let me be
early for 5.2 :)
These patches fix some bugs affecting performance, reduce execution
time a little, and boost throughput and responsiveness.
They are meant to be applied on top of the l
On 11/01/18 18:43, Konstantin Khlebnikov wrote:
With the default 8ms idle slice, BFQ is up to 10 times slower than CFQ
for massive random read workloads on a common SATA SSD.
For now, a zero idle slice gives a better out-of-the-box experience.
CFQ employs this since commit 41c0126b3f22 ("block: Make CFQ defaul
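For reference, the idle slice under discussion is a per-device BFQ tunable exposed through sysfs; a minimal sketch of inspecting and zeroing it (sysfs paths from mainline, device name purely hypothetical):

```shell
# /dev/sda is a hypothetical device with bfq active.
cat /sys/block/sda/queue/iosched/slice_idle        # default: 8 (ms)
echo 0 > /sys/block/sda/queue/iosched/slice_idle   # disable idling entirely
```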
On 08/22/18 21:46, Jens Axboe wrote:
On 8/22/18 1:37 PM, Holger Hoffstätte wrote:
On 08/22/18 21:17, Jens Axboe wrote:
So the obvious suspect is the new return of UINT_MAX from get_limit() to
__wbt_wait(). I first suspected that I mispatched something, but it's all
like in mainline or
On 08/22/18 21:17, Jens Axboe wrote:
So the obvious suspect is the new return of UINT_MAX from get_limit() to
__wbt_wait(). I first suspected that I mispatched something, but it's all
like in mainline or your tree. Even the recently moved-around atomic loop
inside rq_wait_inc_below() is 1:1 the s
On 08/22/18 19:28, Jens Axboe wrote:
On 8/22/18 8:27 AM, Jens Axboe wrote:
On 8/22/18 6:54 AM, Holger Hoffstätte wrote:
On 08/22/18 06:10, Jens Axboe wrote:
[...]
If you have time, please look at the 3 patches I posted earlier today.
Those are for mainline, so should be OK :-)
I'm
On 05/22/18 19:46, Jens Axboe wrote:
> On 5/22/18 10:20 AM, Jens Axboe wrote:
>> On 5/22/18 10:17 AM, Holger Hoffstätte wrote:
>>> On 05/22/18 16:48, Jianchao Wang wrote:
>>>> Currently, kyber is very unfriendly to merging. kyber depends
>>>> on ctx rq
On 05/22/18 16:48, Jianchao Wang wrote:
> Currently, kyber is very unfriendly to merging. kyber depends
> on ctx rq_list to do merging; however, most of the time it will not
> leave any requests in ctx rq_list. This is because even if the tokens
> of one domain are used up, kyber will try to dispatch req
On 04/24/18 19:34, Christoph Hellwig wrote:
On Sat, Apr 21, 2018 at 02:54:05PM +0200, Jan Kara wrote:
- if (iocb->ki_flags & IOCB_DSYNC)
+ if (iocb->ki_flags & IOCB_DSYNC) {
dio->flags |= IOMAP_DIO_NEED_SYNC;
+ /*
+
nce 4.9,
with some minor changes to accommodate initialization order in 4.14.
Please consider for 4.17.
Signed-off-by: Holger Hoffstätte
cheers,
Holger
diff -rup linux-4.16-rc4/drivers/block/loop.c
linux-4.16-rc4-loop/drivers/block/loop.c
--- linux-4.16-rc4/drivers/block/loop.c 2018-03-04 23:54:11.
On 02/06/18 15:55, Paolo Valente wrote:
>
>
>> On 6 Feb 2018, at 14:40, Holger Hoffstätte wrote:
>>
>>
>> The plot thickens!
>>
>
> Yep, the culprit seems clearer, though ...
>
>> Just as I was about to post that
The plot thickens!
Just as I was about to post that I didn't have any problems - because
I didn't have any - I decided to do a second test, activated bfq on my
workstation, on a hunch typed "sync" and .. the machine locked up, hard.
Rebooted, activated bfq, typed sync..sync hangs. Luckily this t
On 02/06/18 13:26, Paolo Valente wrote:
(..)
> As Oleksadr asked too, is it deadline or mq-deadline?
You can use "deadline" as an alias as long as blk-mq is active.
This doesn't work when mq-deadline is built as a module, but that
doesn't seem to be the problem here.
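A quick way to see which scheduler names a device actually accepts (sysfs path from mainline, device name hypothetical):

```shell
# The bracketed entry is the active scheduler.
cat /sys/block/sda/queue/scheduler        # e.g.: [mq-deadline] kyber bfq none
echo mq-deadline > /sys/block/sda/queue/scheduler
```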
>> [ 484.179292] BUG: unable to
On 01/12/18 06:58, Paolo Valente wrote:
>
>
>> On 28 Dec 2017, at 15:00, Holger Hoffstätte wrote:
>>
>>
>> On 12/28/17 12:19, Paolo Valente wrote:
>> (snip half a tech report ;)
>>
>> So either this or the previous pa
block/bfq-iosched.c | 3 +++
> 2 files changed, 8 insertions(+), 2 deletions(-)
Gave this a try and can't reproduce the leak anymore, so for both patches:
Tested-by: Holger Hoffstätte
cheers!
Holger
On 01/09/18 00:27, Holger Hoffstätte wrote:
> On 01/08/18 23:55, Jens Axboe wrote:
>> the good old
>>
>> int srcu_idx = srcu_idx;
>>
>> should get the job done.
>
> (Narrator: It didn't.)
Narrator: we retract our previous statement and apologi
On 01/08/18 23:55, Jens Axboe wrote:
> On 1/8/18 1:15 PM, Jens Axboe wrote:
>> On 1/8/18 12:57 PM, Holger Hoffstätte wrote:
>>> On 01/08/18 20:15, Tejun Heo wrote:
>>>> Currently, blk-mq protects only the issue path with RCU. This patch
>>>> puts the com
On 01/08/18 20:15, Tejun Heo wrote:
> Currently, blk-mq protects only the issue path with RCU. This patch
> puts the completion path under the same RCU protection. This will be
> used to synchronize issue/completion against timeout by later patches,
> which will also add the comments.
>
> Signed
On 12/28/17 12:19, Paolo Valente wrote:
(snip half a tech report ;)
So either this or the previous patch ("limit tags for writes and async I/O")
can lead to a hard, unrecoverable hang with heavy writes. Since I couldn't
log into the affected system anymore I couldn't get any stack traces, blk-mq
d
So plugging in a device on USB with BFQ as scheduler now works without
hiccup (probably thanks to Ming Lei's last patch), but of course I found
another problem. Unmounting the device after use, changing the scheduler
back to deadline or kyber and rmmod'ing the BFQ module reproducibly gives me:
ke
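The reproduction sequence described above, sketched as shell commands (mount point and device name purely illustrative; bfq assumed built as a module):

```shell
umount /mnt/usb                               # done using the device
echo kyber > /sys/block/sdb/queue/scheduler   # switch away from bfq
rmmod bfq                                     # the oops reproduces here
```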
*hctx)
> out_put_device:
> put_device(&sdev->sdev_gendev);
> out:
> + if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev))
> + blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
> return false;
> }
So just to follow up on this: with this patch I haven't encountered any
new hangs with blk-mq, regardless of medium (SSD/rotating disk) or scheduler.
I cannot speak for other hangs that may be reproducible by other means,
but for now here's my:
Tested-by: Holger Hoffstätte
cheers,
Holger
On 12/05/17 06:16, Ming Lei wrote:
> On Mon, Dec 04, 2017 at 11:48:07PM +0000, Holger Hoffstätte wrote:
>> On Tue, 05 Dec 2017 06:45:08 +0800, Ming Lei wrote:
>>
>>> On Mon, Dec 04, 2017 at 03:09:20PM +, Bart Van Assche wrote:
>>>> On Sun, 2017-12-03 at 00:3
On Tue, 05 Dec 2017 06:45:08 +0800, Ming Lei wrote:
> On Mon, Dec 04, 2017 at 03:09:20PM +, Bart Van Assche wrote:
>> On Sun, 2017-12-03 at 00:31 +0800, Ming Lei wrote:
>> > Fixes: 0df21c86bdbf ("scsi: implement .get_budget and .put_budget for
>> > blk-mq")
>>
>> It might be safer to revert
On Mon, 20 Nov 2017 18:19:37 +, Holger Hoffstätte wrote:
> Sorry if this is a dumb question, but I've started playing with bcc
> (release 0.4.0) and can trace everything (cpu, xfs, net..) so it works
> fine - except apparently block traffic. I've tried all the bio* tools
Sorry if this is a dumb question, but I've started playing with bcc
(release 0.4.0) and can trace everything (cpu, xfs, net..) so it works
fine - except apparently block traffic. I've tried all the bio* tools
and none of them seem to trace/collect anything, despite the fact
that they contain blk-m
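One way to check whether the block-layer tracepoints fire at all, independently of the bcc bio* tools, is a bpftrace one-liner (bpftrace assumed installed; needs root):

```shell
# Count block request issues per process; Ctrl-C prints the map.
bpftrace -e 'tracepoint:block:block_rq_issue { @issues[comm] = count(); }'
```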
On Tue, 11 Apr 2017 11:38:27 +0200, Jan Kara wrote:
> when testing my fix for 0-day reports with writeback throttling I came
> across somewhat unexpected behavior with user interface of writeback
> throttling. So currently if CFQ is used as an IO scheduler, we disable
> writeback throttling becaus
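The user-visible knob in question can be inspected per device (sysfs path from current mainline, device name hypothetical):

```shell
# 0 means writeback throttling is disabled; a positive value is the
# target read latency in microseconds, e.g. 75000 for 75ms.
cat /sys/block/sda/queue/wbt_lat_usec
```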