On 07/31, Song Liu wrote:
>
> +void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long haddr)
> +{
> +     struct vm_area_struct *vma = find_vma(mm, haddr);
> +     pmd_t *pmd = mm_find_pmd(mm, haddr);
> +     struct page *hpage = NULL;
> +     unsigned long addr;
> +     spinlock_t *ptl;
> +     int count = 0;
> +     pmd_t _pmd;
> +     int i;
> +
> +     VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
> +
> +     if (!vma || !pmd || pmd_trans_huge(*pmd))
                            ^^^^^^^^^^^^^^^^^^^^

mm_find_pmd() returns NULL if pmd_trans_huge() is true, so the
pmd_trans_huge(*pmd) check above looks redundant.
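
IIRC the tail of mm_find_pmd() in mm/rmap.c is roughly this (quoting from
memory, trimmed):

	/* test present and !THP together, see the comment in mm_find_pmd() */
	pmde = *pmd;
	barrier();
	if (!pmd_present(pmde) || pmd_trans_huge(pmde))
		pmd = NULL;
out:
	return pmd;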

> +     /* step 1: check all mapped PTEs are to the right huge page */
> +     for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
> +             pte_t *pte = pte_offset_map(pmd, addr);
> +             struct page *page;
> +
> +             if (pte_none(*pte))
> +                     continue;
> +
> +             page = vm_normal_page(vma, addr, *pte);
> +
> +             if (!PageCompound(page))
> +                     return;
> +
> +             if (!hpage) {
> +                     hpage = compound_head(page);
> +                     if (hpage->mapping != vma->vm_file->f_mapping)

Hmm. But how can we know this is still the same vma?

If nothing else, why can't vma->vm_file be NULL here?

Say, a process unmaps this memory after khugepaged_add_pte_mapped_thp()
was called, then it does mmap(haddr, MAP_PRIVATE|MAP_ANONYMOUS), then
do_huge_pmd_anonymous_page() installs a huge page at the same address,
then split_huge_pmd() is called for any reason.

No?
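
For concreteness, a hypothetical userspace sketch of what I mean (the 2M
size, MAP_FIXED, and the mprotect() trigger are just examples, not from
the patch; error checks omitted):

#include <sys/mman.h>

#define HPAGE_SIZE	(2UL << 20)

/* reuse the 2M-aligned haddr which khugepaged_add_pte_mapped_thp() recorded */
static void reuse_haddr(void *haddr)
{
	/* drop the file-backed mapping khugepaged knows about */
	munmap(haddr, HPAGE_SIZE);

	/* anonymous mapping at the same address, vma->vm_file == NULL now */
	mmap(haddr, HPAGE_SIZE, PROT_READ | PROT_WRITE,
	     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	madvise(haddr, HPAGE_SIZE, MADV_HUGEPAGE);

	/* the first write can install a huge anon page,
	   do_huge_pmd_anonymous_page() */
	*(volatile char *)haddr = 1;

	/* mprotect() on a sub-range forces split_huge_pmd() on this pmd */
	mprotect(haddr, 4096, PROT_READ);
}

After that collapse_pte_mapped_thp(mm, haddr) can see pte-mapped subpages
of a completely unrelated anon compound page, and the
hpage->mapping != vma->vm_file->f_mapping check dereferences a NULL
vm_file, afaics.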
