On Mon, 1 Apr 2019, Dave Hansen wrote:
> diff -puN mm/mmap.c~mpx-rss-pass-no-vma mm/mmap.c
> --- a/mm/mmap.c~mpx-rss-pass-no-vma   2019-04-01 06:56:53.409411123 -0700
> +++ b/mm/mmap.c       2019-04-01 06:56:53.423411123 -0700
> @@ -2731,9 +2731,17 @@ int __do_munmap(struct mm_struct *mm, un
>               return -EINVAL;
>  
>       len = PAGE_ALIGN(len);
> +     end = start + len;
>       if (len == 0)
>               return -EINVAL;
>  
> +     /*
> +      * arch_unmap() might do unmaps itself.  It must be called
> +      * and finish any rbtree manipulation before this code
> +      * runs and also starts to manipulate the rbtree.
> +      */
> +     arch_unmap(mm, start, end);

...
  
> -static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
> -                           unsigned long start, unsigned long end)
> +static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
> +                           unsigned long end)

While you fixed up the asm-generic version, this breaks arch/um and
arch/unicore32. For those the fixup is trivial: just remove the vma
argument.
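
Something along these lines (untested, and assuming those hooks don't
actually use the vma argument) should be all that's needed:

  static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
                                unsigned long end)
  {
          /* existing body stays as-is; only the prototype changes */
  }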

But it also breaks powerpc, and there I'm not sure whether moving
arch_unmap() to the beginning of __do_munmap() is safe. Michael???

Aside from that, the powerpc variant looks suspicious:

static inline void arch_unmap(struct mm_struct *mm,
                              unsigned long start, unsigned long end)
{
        if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
                mm->context.vdso_base = 0;
}

Shouldn't that be: 

        if (start >= mm->context.vdso_base && mm->context.vdso_base < end)

Hmm?

Thanks,

        tglx
