Jens Axboe <ax...@kernel.dk> writes:

> On Sat, Aug 29, 2020 at 7:08 PM OGAWA Hirofumi <hirof...@mail.parknet.co.jp> 
> wrote:
>>
>> On one system, bdi->io_pages was 0. That seems to be a bug in some
>> driver and should be fixed there, but it is still better to avoid the
>> divide-by-zero Oops here.
>>
>> So add a check for it.
>>
>> Signed-off-by: OGAWA Hirofumi <hirof...@mail.parknet.co.jp>
>> Cc: <sta...@vger.kernel.org>
>> ---
>>  fs/fat/fatent.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
>> index f7e3304..98a1c4f 100644
>> --- a/fs/fat/fatent.c   2020-08-30 06:52:47.251564566 +0900
>> +++ b/fs/fat/fatent.c   2020-08-30 06:54:05.838319213 +0900
>> @@ -660,7 +660,7 @@ static void fat_ra_init(struct super_blo
>>         if (fatent->entry >= ent_limit)
>>                 return;
>>
>> -       if (ra_pages > sb->s_bdi->io_pages)
>> +       if (sb->s_bdi->io_pages && ra_pages > sb->s_bdi->io_pages)
>>                 ra_pages = rounddown(ra_pages, sb->s_bdi->io_pages);
>>         reada_blocks = ra_pages << (PAGE_SHIFT - sb->s_blocksize_bits + 1);
>
> I don't think we should work-around this here. What device is this on?
> Something like the below may help.

The reported bug is from the nvme stack, and the patch below (I submitted
the same patch to you) fixed the reported case. But I didn't verify all
possible paths, so I'd like to stay on the safer side.
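
For reference, here is roughly what goes wrong when io_pages == 0 (a
minimal userspace sketch of my own, using a simplified rounddown()
rather than the exact kernel macro):

/* Minimal userspace sketch (not kernel code) of the fat_ra_init() math. */
#include <stdio.h>

/* Simplified stand-in for the kernel's rounddown(): subtract the remainder. */
#define rounddown(x, y) ((x) - ((x) % (y)))

int main(void)
{
	unsigned long ra_pages = 32;	/* stands in for sb->s_bdi->ra_pages */
	unsigned long io_pages = 0;	/* the bad bdi value from the report */

	/*
	 * Unguarded form from fat_ra_init(): with io_pages == 0, the
	 * "ra_pages % io_pages" inside rounddown() divides by zero.
	 */
	/* ra_pages = rounddown(ra_pages, io_pages); */

	/* Guarded form from the patch above: skip rounding when io_pages == 0. */
	if (io_pages && ra_pages > io_pages)
		ra_pages = rounddown(ra_pages, io_pages);

	printf("ra_pages = %lu\n", ra_pages);
	return 0;
}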

If the block layer can instead guarantee io_pages != 0, and that fix can
be applied to the stable branches (5.8+), that would work too.
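
For context, as far as I can tell io_pages is normally only updated from
the max_sectors limit, e.g. blk_queue_max_hw_sectors() ends with roughly
(paraphrased from block/blk-settings.c, from memory):

	limits->max_sectors = max_sectors;
	q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);

so a queue whose driver never reaches that path would keep io_pages == 0
without the init-time default in your patch below.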

Thanks.

> diff --git a/block/blk-core.c b/block/blk-core.c
> index d9d632639bd1..10c08ac50697 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -539,6 +539,7 @@ struct request_queue *blk_alloc_queue(int node_id)
>               goto fail_stats;
>  
>       q->backing_dev_info->ra_pages = VM_READAHEAD_PAGES;
> +     q->backing_dev_info->io_pages = VM_READAHEAD_PAGES;
>       q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK;
>       q->node = node_id;


-- 
OGAWA Hirofumi <hirof...@mail.parknet.co.jp>
