Hi,

On 10/11/2016 11:49 PM, Chris Murphy wrote:
On Tue, Oct 11, 2016 at 12:47 AM, Wang Xiaoguang
<wangxg.f...@cn.fujitsu.com> wrote:
If we use the mount option "-o max_inline=sectorsize", say 4096, then even
on a fresh fs, say with a 16k nodesize, we cannot make the first 4k of data
completely inline. I found this condition causing the issue:
   !compressed_size && (actual_end & (root->sectorsize - 1)) == 0

If it returns true, we will not inline the data. With a 4k sectorsize, the
data range 0~4094 can be inlined, but 0~4095 cannot (see the sketch after
the patch). I don't think this limitation is useful, so remove it, which
allows inline data to be as large as sectorsize.

Signed-off-by: Wang Xiaoguang <wangxg.f...@cn.fujitsu.com>
---
  fs/btrfs/inode.c | 2 --
  1 file changed, 2 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index ea15520..c0db393 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -267,8 +267,6 @@ static noinline int cow_file_range_inline(struct btrfs_root *root,
         if (start > 0 ||
             actual_end > root->sectorsize ||
             data_len > BTRFS_MAX_INLINE_DATA_SIZE(root) ||
-           (!compressed_size &&
-           (actual_end & (root->sectorsize - 1)) == 0) ||
             end + 1 < isize ||
             data_len > root->fs_info->max_inline) {
                 return 1;
--
2.9.0
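
For illustration, here is a minimal userspace sketch of the check removed
above, assuming sectorsize = 4096. inline_rejected_by_old_check() is a
hypothetical stand-in, not a kernel function:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical userspace stand-in for the condition removed by the patch. */
static bool inline_rejected_by_old_check(unsigned long actual_end,
                                         unsigned long sectorsize,
                                         unsigned long compressed_size)
{
        /* The old check rejected inlining of uncompressed data whose end
         * offset is exactly sector-aligned. */
        return !compressed_size && (actual_end & (sectorsize - 1)) == 0;
}

int main(void)
{
        /* Data range 0~4094: actual_end = 4095, and 4095 & 4095 != 0,
         * so the old check still allowed inlining. */
        printf("0~4094 rejected: %d\n",
               (int)inline_rejected_by_old_check(4095, 4096, 0));

        /* Data range 0~4095: actual_end = 4096, and 4096 & 4095 == 0,
         * so a full 4k of uncompressed data was never inlined. */
        printf("0~4095 rejected: %d\n",
               (int)inline_rejected_by_old_check(4096, 4096, 0));
        return 0;
}

With the check removed, only the remaining conditions in the hunk above
(start > 0, actual_end > sectorsize, the BTRFS_MAX_INLINE_DATA_SIZE and
max_inline limits, and the isize check) decide whether the data is inlined.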

Before making any further changes to inline data, does it make sense
to find the source of corruption Zygo has been experiencing? That's in
the "btrfs rare silent data corruption with kernel data leak" thread.
Yes, I agree.
Also Zygo has sent a patch to fix that bug this morning :)

Regards,
Xiaoguang Wang





