On Fri, Jan 30, 2015 at 03:47:54PM +0800, Wang, Yalin wrote:
> This patch changes the smaps/pagemap_read pagetable walk behavior to make
> sure VM_PFNMAP pagetables are not skipped, so that we can count the COW
> pages of VM_PFNMAP areas as normal pages.
>
> Signed-off-by: Yalin Wang <yalin.w...@sonymobile.com>
Hi Yalin,

The original motivation of the VM_PFNMAP code in pagewalk.c comes from the
following patch:

  commit a9ff785e4437c83d2179161e012f5bdfbd6381f0
  Author: Cliff Wickman <c...@sgi.com>
  Date:   Fri May 24 15:55:36 2013 -0700

      mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas

, where Cliff stated that some kind of vma(VM_PFNMAP) caused a kernel panic
when walk_page_range() was called over it. So I don't think that
unconditionally re-enabling the walk over every vma(VM_PFNMAP) is a good
idea.

If you really want to get some information from a vma(VM_PFNMAP) via these
interfaces, I recommend implementing a proper judging (->test_walk())
callback which returns 0 for your vma(VM_PFNMAP) and returns 1 for Cliff's
vma(VM_PFNMAP).

Thanks,
Naoya Horiguchi

> ---
>  fs/proc/task_mmu.c | 2 ++
>  include/linux/mm.h | 2 ++
>  mm/pagewalk.c      | 5 +++++
>  3 files changed, 9 insertions(+)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index c7267e9..e7d7c43 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -616,6 +616,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
>  	struct mem_size_stats mss;
>  	struct mm_walk smaps_walk = {
>  		.pmd_entry = smaps_pte_range,
> +		.test_walk = generic_walk_page_test_no_skip,
>  		.mm = vma->vm_mm,
>  		.private = &mss,
>  	};
> @@ -1264,6 +1265,7 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
>
>  	pagemap_walk.pmd_entry = pagemap_pte_range;
>  	pagemap_walk.pte_hole = pagemap_pte_hole;
> +	pagemap_walk.test_walk = generic_walk_page_test_no_skip;
>  #ifdef CONFIG_HUGETLB_PAGE
>  	pagemap_walk.hugetlb_entry = pagemap_hugetlb_range;
>  #endif
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b976d9f..07f71c5 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1191,6 +1191,8 @@ struct mm_walk {
>  	void *private;
>  };
>
> +int generic_walk_page_test_no_skip(unsigned long start, unsigned long end,
> +		struct mm_walk *walk);
>  int walk_page_range(unsigned long addr, unsigned long end,
>  		struct mm_walk *walk);
>  int walk_page_vma(struct vm_area_struct *vma, struct mm_walk *walk);
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 75c1f28..14f38d5 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -206,6 +206,11 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>  	return err;
>  }
>
> +int generic_walk_page_test_no_skip(unsigned long start, unsigned long end,
> +		struct mm_walk *walk)
> +{
> +	return 0;
> +}
>  /**
>   * walk_page_range - walk page table with caller specific callbacks
>   *
> --
> 2.2.2