On 6/25/25 17:50, Huang Jianan wrote:
> On 2025/6/25 17:48, Jianan Huang wrote:
>> On 2025/6/25 16:45, Chao Yu wrote:
>>>
>>> On 6/25/25 14:49, Jianan Huang wrote:
>>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>>> bios and then merged at the block level. In this case, nr_cpages should
>>>> be used to pre-allocate bvecs.
>>>> To handle this case, align max_nr_pages to cluster_size, which should be
>>>> enough for all compressed pages.
>>>>
>>>> Signed-off-by: Jianan Huang <huangjia...@xiaomi.com>
>>>> Signed-off-by: Sheng Yong <shengyo...@xiaomi.com>
>>>> ---
>>>> Changes since v2:
>>>> - Initialize index only for compressed files.
>>>> Changes since v1:
>>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>>
>>>>   fs/f2fs/data.c | 12 ++++++++++--
>>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>> index 31e892842625..d071d9f6a811 100644
>>>> --- a/fs/f2fs/data.c
>>>> +++ b/fs/f2fs/data.c
>>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>>>                }
>>>>
>>>>                if (!bio) {
>>>> -                     bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>>> +                     bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>>>
>>> Jianan,
>>>
>>> Another case:
>>>
>>> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>>>
>>> nr_pages=4
>>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>>
>>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1; at
>>> that time, max_nr_pages equals 1 as well.
>>>
>>> f2fs_grab_read_bio(..., 1 - 0, ...) allocates a bio w/ 1 vec capacity;
>>> however, we need at least 3 vecs to merge all cpages, right?
>>>
>>
>> Hi, Chao,
>>
>> If we don't align nr_pages, then when entering f2fs_read_multi_pages(),
>> the remaining nr_pages pages belong to other clusters.
>> If this is the last page, we can simply pass nr_pages = 0.
>>
>> When allocating the bio, we need:
>> 1. The cpages remaining in the current cluster, which should be
>> (nr_cpages - i).
>> 2. The maximum cpages remaining in other clusters, which should be
>> max(nr_pages, cc->nr_cpages).
>>
> 
> align(nr_pages, cc->nr_cpages), sorry for this.
> 
>> So (nr_cpages - i) + max(nr_pages, nr_cpages) should be enough for all
>> vecs?

Jianan,

What about getting rid of the change below and just passing max_nr_pages to
f2fs_read_multi_pages()? It may waste a little bio vector space, but it is
safer to reserve enough margin.

+       for (; nr_pages; nr_pages--, max_nr_pages--) {

Thanks,

>>
>> Thanks,
>>
>>
>>> Thanks,
>>>
>>>>                                        f2fs_ra_op_flags(rac),
>>>>                                        folio->index, for_write);
>>>>                        if (IS_ERR(bio)) {
>>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>>        unsigned max_nr_pages = nr_pages;
>>>>        int ret = 0;
>>>>
>>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>>> +     if (f2fs_compressed_file(inode)) {
>>>> +             index = rac ? readahead_index(rac) : folio->index;
>>>> +             max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
>>>> +                             round_down(index, cc.cluster_size);
>>>> +     }
>>>> +#endif
>>>> +
>>>>        map.m_pblk = 0;
>>>>        map.m_lblk = 0;
>>>>        map.m_len = 0;
>>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>>        map.m_seg_type = NO_CHECK_TYPE;
>>>>        map.m_may_create = false;
>>>>
>>>> -     for (; nr_pages; nr_pages--) {
>>>> +     for (; nr_pages; nr_pages--, max_nr_pages--) {
>>>>                if (rac) {
>>>>                        folio = readahead_folio(rac);
>>>>                        prefetchw(&folio->flags);
>>>
>>
> 



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
