On Fri, Jul 19, 2013 at 1:42 AM, Aneesh Kumar K.V
<aneesh.ku...@linux.vnet.ibm.com> wrote:
> Minchan Kim <minc...@kernel.org> writes:
>> IMHO, it's a false positive: the i_mmap_mutex acquisition lockdep saw
>> was in kswapd context, while a task in the middle of the fault path can
>> never be running in kswapd context.
>>
>> It seems lockdep's reclaim-over-fs checking isn't smart enough to
>> distinguish between background and direct reclaim.
>>
>> Let's wait for others' opinions.
>
> Is that reasoning correct? We may not deadlock because hugetlb pages
> cannot be reclaimed, so the hugetlb fault path won't end up reclaiming
> pages from the same inode. But the report itself is correct, right?
>
>
> Looking at the hugetlb code we have in huge_pmd_share
>
> out:
>         pte = (pte_t *)pmd_alloc(mm, pud, addr);
>         mutex_unlock(&mapping->i_mmap_mutex);
>         return pte;
>
> I guess we should move that pmd_alloc outside i_mmap_mutex. Otherwise
> the allocation in pmd_alloc can trigger direct reclaim, which can call
> shrink_page_list.
>
Hm, can huge pages be reclaimed, say by kswapd currently?
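For reference, the inversion the lockdep report seems to be about looks
roughly like this (a sketch reconstructed from the discussion above; the
exact intermediate frames are assumptions, not taken from the report):

```c
/*
 * Chain A -- fault path (this task):
 *   hugetlb_fault()
 *     huge_pmd_share()
 *       mutex_lock(&mapping->i_mmap_mutex)        // lock held
 *       pmd_alloc()                               // page-table allocation
 *         ... direct reclaim on allocation ...
 *
 * Chain B -- reclaim (kswapd or direct):
 *   shrink_page_list()
 *     try_to_unmap()
 *       mutex_lock(&mapping->i_mmap_mutex)        // same lock class
 *
 * If chain A's allocation can enter chain B, lockdep sees
 * i_mmap_mutex taken both around and inside reclaim and reports a
 * possible deadlock -- even if hugetlb pages are never actually
 * reclaimed, which is why it may be a false positive in practice.
 */
```

Moving the pmd_alloc() after mutex_unlock(), as in the diff below, breaks
chain A so the allocation no longer happens with i_mmap_mutex held.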

> Something like  ?
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 83aff0a..2cb1be3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3266,8 +3266,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned 
> long addr, pud_t *pud)
>                 put_page(virt_to_page(spte));
>         spin_unlock(&mm->page_table_lock);
>  out:
> -       pte = (pte_t *)pmd_alloc(mm, pud, addr);
>         mutex_unlock(&mapping->i_mmap_mutex);
> +       pte = (pte_t *)pmd_alloc(mm, pud, addr);
>         return pte;
>  }
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/