From: Hugh Dickins <[email protected]>

commit a66c0410b97c07a5708881198528ce724f7a3226 upstream.
The other pagetable walks in task_mmu.c have a cond_resched() after
walking their ptes: add a cond_resched() in gather_pte_stats() too, for
reading /proc/<pid>/numa_maps. Only pagemap_pmd_range() has a
cond_resched() in its (unusually expensive) pmd_trans_huge case: more
should probably be added, but leave them unchanged for now.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gerald Schaefer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Reported-by: Chen si <[email protected]>
Signed-off-by: Baoyou Xie <[email protected]>
Signed-off-by: Wen Yang <[email protected]>
Signed-off-by: Zijiang Huang <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 fs/proc/task_mmu.c | 1 +
 1 file changed, 1 insertion(+)

--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1609,6 +1609,7 @@ static int gather_pte_stats(pmd_t *pmd,
 
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	pte_unmap_unlock(orig_pte, ptl);
+	cond_resched();
 	return 0;
 }
 #ifdef CONFIG_HUGETLB_PAGE
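For context, here is a minimal sketch of gather_pte_stats() as it reads
after this patch. The do/while exit, the unlock, and the new
cond_resched() match the hunk above; the rest of the signature and the
loop body are abbreviated and should be treated as illustrative, not as
verbatim fs/proc/task_mmu.c source.

/*
 * Sketch of gather_pte_stats() with this patch applied. Only the
 * lines present in the hunk above are verbatim; the signature beyond
 * "pmd_t *pmd" and the elided loop body are assumptions.
 */
static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
			    unsigned long end, struct mm_walk *walk)
{
	spinlock_t *ptl;
	pte_t *orig_pte, *pte;

	orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	do {
		/* accumulate per-node stats for this pte (elided) */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(orig_pte, ptl);

	/*
	 * The page walker invokes this callback once per pmd, i.e. once
	 * per 512 ptes (2MB on x86-64), so a large mapping means many
	 * consecutive calls. Yielding here keeps a reader of
	 * /proc/<pid>/numa_maps from monopolizing the CPU on
	 * non-preemptible kernels, matching the other pte walks in
	 * task_mmu.c.
	 */
	cond_resched();
	return 0;
}

Note the placement: cond_resched() may sleep, so it has to come after
pte_unmap_unlock() has dropped the page-table spinlock; yielding inside
the locked do/while loop would be a bug.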

