On 2025/6/17 21:13, Sheng Yong wrote:
> 
> On 6/17/25 19:37, Chao Yu via Linux-f2fs-devel wrote:
>> On 6/17/25 13:55, Jianan Huang wrote:
>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>> bios and then merged at the block level. In this case, nr_cpages should
>>> be used to pre-allocate bvecs.
>>>
>>> Signed-off-by: Jianan Huang <huangjia...@xiaomi.com>
>>> Signed-off-by: Sheng Yong <shengyo...@xiaomi.com>
>>> ---
>>>   fs/f2fs/data.c | 10 +++++-----
>>>   1 file changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 31e892842625..c7773b09d83f 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>>              }
>>>
>>>              if (!bio) {
>>> -                    bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>> +                    bio = f2fs_grab_read_bio(inode, blkaddr,
>>> +                                    max(nr_pages, cc->nr_cpages) - i,
>>
>> Hi Jianan,
>>
>> e.g.
>>
>> The user wants to read pages [1, 5],
>> pages #1,2,3,4 are located in compressed blocks #1000,1001,1003,
>> page #5 is located in compressed blocks #1004,1005.
>>
>> It submits the first bio w/ blocks #1000,1001.
>> It allocates the second bio w/ size of max(nr_pages=1, nr_cpages=3) - 2 = 1 ?
>> However, blocks #1003 and #1004,1005 can be read in one bio, so we should
>> allocate a larger bio for the last continuous blocks which cross clusters.
> 
> Hi, Chao,
> 
> I think `max(nr_pages, cc->nr_cpages) - i` can reserve enough vectors in the
> bio for later reads. IIUC, the case above is:
> 
> read pages #1,2,3,4 at blkaddr #1000,1001,1003:
>    * nr_pages=5, cpages=3, for the first bio1, vec=max(5,3)-0=5 (2 vecs are used)
>                            for the second bio2, vec=max(5,3)-2=3 (1 vec is used)
> read page #5 at blkaddr #1004,1005, the previous bio2 is still available
>    * nr_pages=1, cpages=2, prev bio2, 2 vecs left
> 
> 
> For the case: pages #1,2,3,4 at compressed blkaddr #1000,1001,1003
>                pages #5,6,7,8 at compressed blkaddr #1004,1005,1006
> If we are reading pages [1,5], could we do the calculation as follows?
> 
>      max_nr_pages=align(nr_pages, cluster_size)
>      max(max_nr_pages, cc->nr_cpages) - i
> 

f2fs_read_multi_pages() only processes the current cluster, so there are
two cases:
1. When all the remaining pages belong to the current cluster,
   (nr_cpages - i) is enough for all vecs.
2. When there are pages from other clusters yet to be processed,
   (align(nr_pages, cluster_size) - i) is enough for all vecs.
For case 1, nr_cpages should be equal to align(nr_pages, cluster_size).
Therefore, max is not needed.
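
To make the arithmetic concrete, here is a minimal userspace sketch of this
sizing rule (the helper name nr_vecs_to_reserve() and the standalone framing
are only illustrative, not the actual kernel code):

  #include <stdio.h>

  /*
   * Illustrative only: reserve enough bvecs to cover the rest of the
   * current cluster plus any following clusters in the same read request.
   *
   * nr_pages     - pages still to be read in this request
   * cluster_size - pages per compression cluster
   * i            - index of the compressed block being added in the cluster
   */
  static unsigned int nr_vecs_to_reserve(unsigned int nr_pages,
                                         unsigned int cluster_size,
                                         unsigned int i)
  {
          unsigned int aligned = (nr_pages + cluster_size - 1) /
                                 cluster_size * cluster_size;

          return aligned - i;
  }

  int main(void)
  {
          /*
           * Example from this thread: reading pages [1,5] with a 4-page
           * cluster, grabbing the second bio at i = 2:
           * align(5, 4) - 2 = 8 - 2 = 6 vecs, which covers block #1003
           * plus the next cluster's blocks #1004,1005,1006.
           */
          printf("%u\n", nr_vecs_to_reserve(5, 4, 2));
          return 0;
  }

A single align()-based reservation therefore covers both cases above, which
is why the max() against nr_cpages can be dropped.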

Thanks,
Jianan

> 
> thanks,
> shengyong
>>
>>>                                      f2fs_ra_op_flags(rac),
>>>                                      folio->index, for_write);
>>>                      if (IS_ERR(bio)) {
>>> @@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>      pgoff_t index;
>>>   #endif
>>>      unsigned nr_pages = rac ? readahead_count(rac) : 1;
>>> -    unsigned max_nr_pages = nr_pages;
>>
>> Maybe we can align both the start and end of the read range w/ cluster_size,
>> and use them for the max_nr_pages calculation, then pass it to
>> f2fs_read_{multi,single}_pages(), something like this?
>>
>> max_nr_pages = round_up(end_idx, cluster_size) -
>>               round_down(start_idx, cluster_size);
>>
>> Its size should always cover the number of all cpages and/or rpages.
>>
>> Thanks,
>>
>>>      int ret = 0;
>>>
>>>      map.m_pblk = 0;
>>> @@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>              /* there are remained compressed pages, submit them */
>>>              if (!f2fs_cluster_can_merge_page(&cc, index)) {
>>>                      ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                                            max_nr_pages,
>>> +                                            nr_pages,
>>>                                              &last_block_in_bio,
>>>                                              rac, false);
>>>                      f2fs_destroy_compress_ctx(&cc, false);
>>> @@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>   read_single_page:
>>>   #endif
>>>
>>> -            ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
>>> +            ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
>>>                                      &bio, &last_block_in_bio, rac);
>>>              if (ret) {
>>>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>>> @@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>                      /* last page */
>>>                      if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
>>>                              ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                                                    max_nr_pages,
>>> +                                                    nr_pages,
>>>                                                      &last_block_in_bio,
>>>                                                      rac, false);
>>>                              f2fs_destroy_compress_ctx(&cc, false);
>>
>>
>>
> 

