> On 9/16/19, 04:35, "Michal Hocko" <mho...@kernel.org> wrote:
>     > diff --git a/mm/memory.c b/mm/memory.c
>     > index e0c232fe81d9..55da24f33bc4 100644
>     > --- a/mm/memory.c
>     > +++ b/mm/memory.c
>     > @@ -3311,6 +3311,8 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>     >         } else {
>     >                 inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
>     >                 page_add_file_rmap(page, false);
>     > +               if (vma->vm_flags & VM_LOCKED && !PageTransCompound(page))
>     > +                       mlock_vma_page(page);
>     >         }
>     >         set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
    
>     I dunno. Handling it here in alloc_set_pte sounds a bit weird to me.
>     Although we already do mlock for CoW pages there, I thought this was more
>     of an exception.
>     Is there any real reason why this cannot be done in the standard #PF
>     path? finish_fault for example?

alloc_set_pte is called from finish_fault:
https://github.com/torvalds/linux/blob/v5.2/mm/memory.c#L3400

   vm_fault_t finish_fault(struct vm_fault *vmf)
     ...
        if (!ret)
                ret = alloc_set_pte(vmf, vmf->memcg, page);

and inside alloc_set_pte, one of the branches of the if-clause already handles
mlocked pages:
https://github.com/torvalds/linux/blob/v5.2/mm/memory.c#L3348-L3356
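
For reference, that branch looks roughly like this (paraphrased from v5.2
mm/memory.c, so treat it as a sketch rather than an exact quote): the CoW path
covers VM_LOCKED via lru_cache_add_active_or_unevictable(), while the
file-backed else branch does not:

        /* copy-on-write page */
        if (write && !(vma->vm_flags & VM_SHARED)) {
                inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
                page_add_new_anon_rmap(page, vma, vmf->address, false);
                mem_cgroup_commit_charge(page, memcg, false, false);
                /* an mlocked VMA is handled here: the new page goes
                 * straight to the unevictable LRU when VM_LOCKED is set */
                lru_cache_add_active_or_unevictable(page, vma);
        } else {
                inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
                page_add_file_rmap(page, false);
        }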

I added it to the else-branch as that seemed like the least intrusive change, 
but I will move this to finish_fault, probably like this (after I'm done 
testing):

   vm_fault_t finish_fault(struct vm_fault *vmf)
    ...
        if (!ret)
                ret = alloc_set_pte(vmf, vmf->memcg, page);
+       if (!ret && (vmf->vma->vm_flags & VM_LOCKED) && !PageTransCompound(page))
+               mlock_vma_page(page);
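
For context, the whole function would then look roughly like this (an untested
sketch against v5.2; the new lines are the VM_LOCKED check, and the exact
placement relative to pte_unmap_unlock may still change while I test):

   vm_fault_t finish_fault(struct vm_fault *vmf)
   {
           struct page *page;
           vm_fault_t ret = 0;

           /* Did we COW the page? */
           if ((vmf->flags & FAULT_FLAG_WRITE) &&
               !(vmf->vma->vm_flags & VM_SHARED))
                   page = vmf->cow_page;
           else
                   page = vmf->page;

           if (!(vmf->vma->vm_flags & VM_SHARED))
                   ret = check_stable_address_space(vmf->vma->vm_mm);
           if (!ret)
                   ret = alloc_set_pte(vmf, vmf->memcg, page);
           /* new: mlock file pages faulted into a VM_LOCKED VMA */
           if (!ret && (vmf->vma->vm_flags & VM_LOCKED) &&
               !PageTransCompound(page))
                   mlock_vma_page(page);
           if (vmf->pte)
                   pte_unmap_unlock(vmf->pte, vmf->ptl);
           return ret;
   }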

Thanks for the review and suggestions!

--
Lucian
