On 2021/5/12 5:50, Jaegeuk Kim wrote:
On 05/11, changfeng...@vivo.com wrote:
Hi Jaegeuk:

If there are existing clusters beyond i_size, that may cause data corruption, but
can this happen in a normal case? Perhaps some error could cause it, and if i_size
is wrong, the data beyond it still can't be handled properly anyway. Is there a
normal case that can cause existing clusters beyond i_size?

We don't have a rule to sync between i_size and i_blocks.

I can't imagine a case where a compressed cluster crosses the file size; it looks
like a bug if that happens, but I'm not sure I have considered all cases. So I
prefer to add a check condition as below, and then test w/ xfstest/por_fsstress
for a while.

Subject: [PATCH] f2fs: compress: compressed cluster should not cross i_size

---
 fs/f2fs/data.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 06d1e58d3882..9acca358d578 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3325,6 +3325,8 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
                        err = ret;
                        goto fail;
                } else if (ret) {
+                       f2fs_bug_on(sbi, index >=
+                               DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
                        return 0;
                }
        }
--
2.29.2
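
For reference, the bound used in the f2fs_bug_on() above is the same page-rounded
i_size that the v4 patch below computes with an open-coded shift. A minimal
user-space sketch (not kernel code; PAGE_SHIFT is assumed to be 12, i.e. 4KiB
pages, just for illustration) showing the two expressions agree:

/*
 * Standalone check that DIV_ROUND_UP(i_size, PAGE_SIZE) and
 * (i_size + PAGE_SIZE - 1) >> PAGE_SHIFT name the same first page
 * index past EOF, given PAGE_SIZE == 1UL << PAGE_SHIFT.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12			/* assumed: 4KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long i_size;

	for (i_size = 0; i_size < 16 * PAGE_SIZE; i_size++) {
		unsigned long a = DIV_ROUND_UP(i_size, PAGE_SIZE);
		unsigned long b = (i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;

		/* "index >= a" and "index >= b" reject the same indexes */
		assert(a == b);
	}
	printf("page-rounded i_size matches for both forms\n");
	return 0;
}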

Thanks.

-----Original Message-----
From: Jaegeuk Kim <jaeg...@kernel.org>
Sent: May 10, 2021, 23:44
To: Fengnan Chang <changfeng...@vivo.com>
Cc: c...@kernel.org; linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [PATCH v4] f2fs: compress: avoid unnecessary check in f2fs_prepare_compress_overwrite

On 05/07, Fengnan Chang wrote:
When writing a compressed file with O_TRUNC, there are a lot of unnecessary
valid-block checks in f2fs_prepare_compress_overwrite, especially when
writing in page-size units; remove them.

This patch does not bring significant performance improvements. I tested it
on a mobile phone with androbench; the sequential write test case opens the
file with O_TRUNC. With the write size set to 4KB, performance improved by
about 2%-3%; with the write size set to 32MB, by about 0.5%.
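
For illustration only, the write pattern exercised is roughly the following
(a hedged user-space sketch, not the benchmark itself; the file path is made
up). Since i_size only advances at write_end time, every 4KiB write_begin in
this loop sees index >= end and takes the new early exit instead of calling
f2fs_prepare_compress_overwrite().

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	/* hypothetical path on an f2fs partition with a compressed file */
	int fd = open("/data/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	int i;

	if (fd < 0)
		return 1;
	memset(buf, 0xa5, sizeof(buf));
	for (i = 0; i < 1024; i++)	/* 4MiB of sequential 4KiB writes */
		if (write(fd, buf, sizeof(buf)) != sizeof(buf))
			break;
	close(fd);
	return 0;
}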

Signed-off-by: Fengnan Chang <changfeng...@vivo.com>
---
  fs/f2fs/data.c | 8 ++++++++
  1 file changed, 8 insertions(+)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index cf935474ffba..b9ec7b182f45 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3303,9 +3303,17 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
 #ifdef CONFIG_F2FS_FS_COMPRESSION
        if (f2fs_compressed_file(inode)) {
                int ret;
+               pgoff_t end = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;

                *fsdata = NULL;

+               /*
+                * When the write position is beyond the inode size,
+                * f2fs_prepare_compress_overwrite() always returns 0, so check pos first.
+                */
+               if (index >= end)
+                       goto repeat;

What if there are existing clusters beyond i_size? Given the performance impact,
do we really need this?

+
                ret = f2fs_prepare_compress_overwrite(inode, pagep,
                                                        index, fsdata);
                if (ret < 0) {
--
2.29.0



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


