queue_pages_pmd_range() checks pmd_huge() to find hugepages, but this check assumes the pmd is in the normal format, so it does not work on a migration entry, whose format is like that of a swap entry. We can distinguish the two with the present bit, so we need to check it before checking pmd_huge(). Otherwise, pmd_huge() can wrongly return false for a hugepage, and the behavior is unpredictable.
This patch is against mmotm-2013-08-27.

Signed-off-by: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
---
 mm/mempolicy.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 64d00c4..0472964 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -553,6 +553,8 @@ static inline int queue_pages_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		if (!pmd_present(*pmd))
+			continue;
 		if (pmd_huge(*pmd) && is_vm_hugetlb_page(vma)) {
 			queue_pages_hugetlb_pmd_range(vma, pmd,
 						nodes, flags, private);
--
1.8.3.1