After cloning the required extents, we truncate all the pages that map the file range being cloned. In the subpage-blocksize scenario (i.e. block size smaller than page size), we could have dirty blocks before and/or after the clone range in the leading/trailing pages. Truncating these pages would then lead to data loss. Hence this commit forces such dirty blocks to be flushed to disk before performing the clone operation.
Signed-off-by: Chandan Rajendra <chan...@linux.vnet.ibm.com>
---
 fs/btrfs/ioctl.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 4ff7cf8..7d39cba 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -3849,6 +3849,7 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 	int ret;
 	u64 len = olen;
 	u64 bs = root->fs_info->sb->s_blocksize;
+	u64 dest_end;
 	int same_inode = src == inode;
 
 	/*
@@ -3909,6 +3910,21 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 		goto out_unlock;
 	}
 
+	if ((round_down(destoff, PAGE_SIZE) < inode->i_size) &&
+	    !IS_ALIGNED(destoff, PAGE_SIZE)) {
+		ret = filemap_write_and_wait_range(inode->i_mapping,
+				round_down(destoff, PAGE_SIZE),
+				destoff - 1);
+	}
+
+	dest_end = destoff + len - 1;
+	if ((dest_end < inode->i_size) &&
+	    !IS_ALIGNED(dest_end + 1, PAGE_SIZE)) {
+		ret = filemap_write_and_wait_range(inode->i_mapping,
+				dest_end + 1,
+				round_up(dest_end, PAGE_SIZE));
+	}
+
 	if (destoff > inode->i_size) {
 		ret = btrfs_cont_expand(inode, inode->i_size, destoff);
 		if (ret)
-- 
2.1.0