On Tue, May 11, 2021 at 02:50:54PM -0700, Jaegeuk Kim wrote:
> On 05/11, changfeng...@vivo.com wrote:
> > Hi Jaegeuk:
> > 
> > If there are existing clusters beyond i_size, that may cause data
> > corruption, but can this happen in a normal case? Perhaps some error can
> > cause it, and if i_size is wrong, the data beyond it still can't be
> > handled properly. Is there a normal case that can cause existing
> > clusters beyond i_size?
> 
> We don't have a rule to sync between i_size and i_blocks.

Hmmm.. Again, it's still unclear what the on-disk format looks like for
post-EOF data. Also, corrupted/crafted/malicious on-disk data needs to be
handled, at least to make sure it cannot crash the kernel or corrupt the
fs itself even further, especially for an optimization patch like this,
which targets the specific logic and challenges its stability.

So without the details, this smelled somewhat dangerous to me from the
beginning anyway. But considering the performance impact, I'll just
leave a note here.

Thanks,
Gao Xiang

> 
> > 
> > Thanks.
> > 
> > -----Original Message-----
> > From: Jaegeuk Kim <jaeg...@kernel.org> 
> > Sent: May 10, 2021, 23:44
> > To: Fengnan Chang <changfeng...@vivo.com>
> > Cc: c...@kernel.org; linux-f2fs-devel@lists.sourceforge.net
> > Subject: Re: [PATCH v4] f2fs: compress: avoid unnecessary check in
> > f2fs_prepare_compress_overwrite
> > 
> > On 05/07, Fengnan Chang wrote:
> > > When writing a compressed file with O_TRUNC, there are a lot of
> > > unnecessary valid-block checks in f2fs_prepare_compress_overwrite,
> > > especially when writing in page-size units; remove them.
> > > 
> > > This patch does not bring a significant performance improvement. I
> > > tested it on a mobile phone with androbench; the sequential write
> > > test case opens the file with O_TRUNC. With the write size set to
> > > 4KB, performance improved by about 2%-3%; with the write size set to
> > > 32MB, it improved by about 0.5%.
> > > 
> > > Signed-off-by: Fengnan Chang <changfeng...@vivo.com>
> > > ---
> > >  fs/f2fs/data.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> > > index cf935474ffba..b9ec7b182f45 100644
> > > --- a/fs/f2fs/data.c
> > > +++ b/fs/f2fs/data.c
> > > @@ -3303,9 +3303,17 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
> > >  #ifdef CONFIG_F2FS_FS_COMPRESSION
> > >   if (f2fs_compressed_file(inode)) {
> > >           int ret;
> > > +         pgoff_t end = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
> > > 
> > >           *fsdata = NULL;
> > > 
> > > +         /*
> > > +          * When write pos is bigger than inode size,
> > > +          * f2fs_prepare_compress_overwrite always returns 0,
> > > +          * so check pos first to avoid this.
> > > +          */
> > > +         if (index >= end)
> > > +                 goto repeat;
> > 
> > What if there are existing clusters beyond i_size? Given the
> > performance impact, do we really need this?
> > 
> > > +
> > >           ret = f2fs_prepare_compress_overwrite(inode, pagep,
> > >                                                   index, fsdata);
> > >           if (ret < 0) {
> > > --
> > > 2.29.0
> > 
> 
> 
> _______________________________________________
> Linux-f2fs-devel mailing list
> Linux-f2fs-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

