Re: [RFC patch] mm: hugetlb: fix __unmap_hugepage_range
On Fri, 31 Oct 2014, Hillf Danton wrote:

> First, after flushing TLB, we have no need to scan pte from start again.
> Second, before bail out loop, the address is forwarded one step.
>
> Signed-off-by: Hillf Danton

Acked-by: David Rientjes

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [RFC patch] mm: hugetlb: fix __unmap_hugepage_range
[CCing people involved in 24669e58477e2]

On Fri 31-10-14 12:22:12, Hillf Danton wrote:
> First, after flushing TLB, we have no need to scan pte from start again.
> Second, before bail out loop, the address is forwarded one step.

I can imagine a more comprehensive wording here. It is not immediately
clear whether this is just an optimization or a bug fix as well
(especially the second part). Anyway, the optimization looks good to me.

> Signed-off-by: Hillf Danton

Reviewed-by: Michal Hocko

> ---
>
> --- a/mm/hugetlb.c	Fri Oct 31 11:47:25 2014
> +++ b/mm/hugetlb.c	Fri Oct 31 11:52:42 2014
> @@ -2641,8 +2641,9 @@ void __unmap_hugepage_range(struct mmu_g
>
>  	tlb_start_vma(tlb, vma);
>  	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> +	address = start;
>  again:
> -	for (address = start; address < end; address += sz) {
> +	for (; address < end; address += sz) {
>  		ptep = huge_pte_offset(mm, address);
>  		if (!ptep)
>  			continue;
> @@ -2689,6 +2690,7 @@ again:
>  		page_remove_rmap(page);
>  		force_flush = !__tlb_remove_page(tlb, page);
>  		if (force_flush) {
> +			address += sz;
>  			spin_unlock(ptl);
>  			break;
>  		}
> --

--
Michal Hocko
SUSE Labs
[RFC patch] mm: hugetlb: fix __unmap_hugepage_range
First, after flushing TLB, we have no need to scan pte from start again.
Second, before bail out loop, the address is forwarded one step.

Signed-off-by: Hillf Danton
---

--- a/mm/hugetlb.c	Fri Oct 31 11:47:25 2014
+++ b/mm/hugetlb.c	Fri Oct 31 11:52:42 2014
@@ -2641,8 +2641,9 @@ void __unmap_hugepage_range(struct mmu_g
 
 	tlb_start_vma(tlb, vma);
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	address = start;
 again:
-	for (address = start; address < end; address += sz) {
+	for (; address < end; address += sz) {
 		ptep = huge_pte_offset(mm, address);
 		if (!ptep)
 			continue;
@@ -2689,6 +2690,7 @@ again:
 		page_remove_rmap(page);
 		force_flush = !__tlb_remove_page(tlb, page);
 		if (force_flush) {
+			address += sz;
 			spin_unlock(ptl);
 			break;
 		}
--