On Wed, Mar 20, 2019 at 10:06:21AM +0800, Peter Xu wrote:
> From: Andrea Arcangeli <aarca...@redhat.com>
> 
> There are several cases in which a write protection fault can happen:
> a write to the zero page, to a swapped-out page, or to a userfault
> write-protected page. When the fault happens, there is no way to know
> whether userfaultfd write-protected the page before. Here we just
> blindly issue a userfault notification for vmas with VM_UFFD_WP set,
> regardless of whether the application has write-protected the page
> yet. The application should be ready to handle such wp faults.
> 
> v1: From: Shaohua Li <s...@fb.com>
> 
> v2: Handle the userfault in the common do_wp_page. If we get there, a
> page table entry is present and read-only, so no further processing is
> needed until we resolve the userfault.
> 
> In the swapin case, always swap in as read-only. This will cause false
> positive userfaults. We need to decide later whether to eliminate them
> with a flag like soft-dirty in the swap entry (see _PAGE_SWP_SOFT_DIRTY).
> 
> hugetlbfs wouldn't need to worry about swapouts, and tmpfs would be
> handled by a swap entry bit like anonymous memory.
> 
> The main problem, with no easy solution for eliminating the false
> positives, will arise if/when userfaultfd is extended to real
> filesystem pagecache. When the pagecache is freed by reclaim, we can't
> leave the radix tree pinned if the inode, and in turn the radix tree,
> is reclaimed as well.
> 
> The estimation is that full accuracy and lack of false positives could
> easily be provided only for anonymous memory (as long as there's no
> fork, or as long as MADV_DONTFORK is used on the userfaultfd anonymous
> range), tmpfs and hugetlbfs; it's most certainly worth achieving, but
> in a later incremental patch.
> 
> v3: Add hooking point for THP wrprotect faults.
> 
> CC: Shaohua Li <s...@fb.com>
> Signed-off-by: Andrea Arcangeli <aarca...@redhat.com>
> [peterx: don't conditionally drop FAULT_FLAG_WRITE in do_swap_page]
> Reviewed-by: Mike Rapoport <r...@linux.vnet.ibm.com>
> Signed-off-by: Peter Xu <pet...@redhat.com>


Reviewed-by: Jérôme Glisse <jgli...@redhat.com>

> ---
>  mm/memory.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index e11ca9dd823f..567686ec086d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2483,6 +2483,11 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  {
>       struct vm_area_struct *vma = vmf->vma;
>  
> +     if (userfaultfd_wp(vma)) {
> +             pte_unmap_unlock(vmf->pte, vmf->ptl);
> +             return handle_userfault(vmf, VM_UFFD_WP);
> +     }
> +
>       vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
>       if (!vmf->page) {
>               /*
> @@ -3684,8 +3689,11 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  /* `inline' is required to avoid gcc 4.1.2 build error */
>  static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>  {
> -     if (vma_is_anonymous(vmf->vma))
> +     if (vma_is_anonymous(vmf->vma)) {
> +             if (userfaultfd_wp(vmf->vma))
> +                     return handle_userfault(vmf, VM_UFFD_WP);
>               return do_huge_pmd_wp_page(vmf, orig_pmd);
> +     }
>       if (vmf->vma->vm_ops->huge_fault)
>               return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>  
> -- 
> 2.17.1
> 
