On Mon, Jun 03, 2013 at 03:19:32PM +0200, Michal Hocko wrote:
> On Tue 28-05-13 15:52:50, Naoya Horiguchi wrote:
> > Currently all of the page table handling in hugetlbfs code is done under
> > mm->page_table_lock. This is not optimal because there can be lock
> > contention between unrelated components using this lock.
>
> While I agree with such a change in general, I am a bit afraid of all
> the subtle tweaks in the mm code that make hugetlb special. Maybe there
> are none for page_table_lock, but I am not 100% sure. So this might be
> really tricky, and it is not necessary for your further patches, is it?
No, this page_table_lock patch is separable from the migration stuff.
As you said in another email, changes going to stable should be minimal,
so it's better to make patch 2/2 not depend on this patch.

> How have you tested this?

Other than the libhugetlbfs test suite (which contains many workloads,
though I'm not sure it can detect a possible regression from this patch,)
I did a simple test where I:
- create a file on hugetlbfs,
- create 10 processes and make each of them iterate the following:
  * mmap() the hugetlbfs file,
  * memset() the mapped range (to cause hugetlb_fault), and
  * munmap() the mapped range.

I think this can create the racy situations which should be prevented
by the page table locks.

> > This patch makes hugepage support split page table lock so that
> > we use page->ptl of the leaf node of the page table tree, which is pte
> > for normal pages but can be pmd and/or pud for hugepages on some
> > architectures.
> >
> > Signed-off-by: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> > ---
> >  arch/x86/mm/hugetlbpage.c |  6 ++--
> >  include/linux/hugetlb.h   | 18 ++++++++++
> >  mm/hugetlb.c              | 84 ++++++++++++++++++++++++++++-------------------
>
> This doesn't seem to be the complete story. At least not from the
> trivial:
> $ find arch/ -name "*hugetlb*" | xargs git grep "page_table_lock" --
> arch/powerpc/mm/hugetlbpage.c: spin_lock(&mm->page_table_lock);
> arch/powerpc/mm/hugetlbpage.c: spin_unlock(&mm->page_table_lock);
> arch/tile/mm/hugetlbpage.c:    spin_lock(&mm->page_table_lock);
> arch/tile/mm/hugetlbpage.c:    spin_unlock(&mm->page_table_lock);
> arch/x86/mm/hugetlbpage.c: * called with vma->vm_mm->page_table_lock held.

These trivial leftovers should be fixed. Sorry.

Naoya