On 21/03/19 09:52AM, Shiyang Ruan wrote:
> Punch hole on a reflinked file needs dax_copy_edge() too. Otherwise,
> data in not aligned area will be not correct. So, add the srcmap to
> dax_iomap_zero() and replace memset() as dax_copy_edge().
>
> Signed-off-by: Shiyang Ruan <ruansy.f...@fujitsu.com>
> ---
>  fs/dax.c               | 9 +++++++--
>  fs/iomap/buffered-io.c | 2 +-
>  include/linux/dax.h    | 3 ++-
>  3 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index cfe513eb111e..348297b38f76 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -1174,7 +1174,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
>  }
>  #endif /* CONFIG_FS_DAX_PMD */
>
> -s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap)
> +s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap,
> +		struct iomap *srcmap)
Do we know why dax_iomap_zero() operates on a PAGE_SIZE range? IIUC,
dax_zero_page_range() can take nr_pages as a parameter, but we still
always zero one page at a time. Why is that?

>  {
>  	sector_t sector = iomap_sector(iomap, pos & PAGE_MASK);
>  	pgoff_t pgoff;
> @@ -1204,7 +1205,11 @@ s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap)
>  	}
>
>  	if (!page_aligned) {
> -		memset(kaddr + offset, 0, size);
> +		if (iomap->addr != srcmap->addr)
> +			dax_iomap_cow_copy(offset, size, PAGE_SIZE, srcmap,
> +					kaddr, true);
> +		else
> +			memset(kaddr + offset, 0, size);
>  		dax_flush(iomap->dax_dev, kaddr + offset, size);
>  	}
>  	dax_read_unlock(id);

Maybe the above could be simplified to this?

	if (page_aligned) {
		rc = dax_zero_page_range(iomap->dax_dev, pgoff, 1);
	} else {
		rc = dax_direct_access(iomap->dax_dev, pgoff, 1, &kaddr, NULL);
		if (iomap->addr != srcmap->addr)
			dax_iomap_cow_copy(offset, size, PAGE_SIZE, srcmap,
					kaddr, true);
		else
			memset(kaddr + offset, 0, size);
		dax_flush(iomap->dax_dev, kaddr + offset, size);
	}
	dax_read_unlock(id);

	return rc < 0 ? rc : size;

Other than that looks good. Feel free to add:

Reviewed-by: Ritesh Harjani <ritesh.l...@gmail.com>