On Wed, Mar 28, 2018 at 4:42 AM, Minchan Kim <minc...@kernel.org> wrote:
> binder_update_page_range needs down_write of mmap_sem because
> vm_insert_page needs to set VM_MIXEDMAP in vma->vm_flags if it is
> not already set. However, when I profile binder, it seems every
> binder buffer is mapped in advance by binder_mmap.

Yeah, this is correct - binder_mmap() must be called before doing any
binder transactions, and we do fail transactions if that hasn't been
done yet. LGTM once you've removed the WARN_ON.
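
For anyone following along: the write lock was only needed because
vm_insert_page() flips vma->vm_flags the first time it is called on a
vma that doesn't have VM_MIXEDMAP set. Roughly this, from mm/memory.c
(quoted from memory for illustration, so check the actual tree):

int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
			struct page *page)
{
	/* paraphrased from mm/memory.c around v4.16, for illustration only */
	if (addr < vma->vm_start || addr >= vma->vm_end)
		return -EFAULT;
	if (!page_count(page))
		return -EINVAL;
	/* only the first call on a vma modifies vm_flags... */
	if (!(vma->vm_flags & VM_MIXEDMAP)) {
		BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
		BUG_ON(vma->vm_flags & VM_PFNMAP);
		vma->vm_flags |= VM_MIXEDMAP;
	}
	/* ...after that, mmap_sem held for read is enough */
	return insert_page(vma, addr, page, vma->vm_page_prot);
}

With VM_MIXEDMAP pre-set in binder_mmap(), that branch is never taken
from binder_update_page_range(), so down_read() is sufficient there.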

Martijn

> It means we could set VM_MIXEDMAP at binder_mmap time, which is
> already called with mmap_sem held as down_write, so
> binder_update_page_range doesn't need to take mmap_sem as down_write.
>
> Android suffers from mmap_sem contention, so let's reduce the
> down_write usage of mmap_sem.
>
> Cc: Martijn Coenen <m...@android.com>
> Cc: Todd Kjos <tk...@google.com>
> Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
> Signed-off-by: Minchan Kim <minc...@kernel.org>
> ---
>  drivers/android/binder.c       | 2 +-
>  drivers/android/binder_alloc.c | 8 +++++---
>  2 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 764b63a5aade..9a14c6dd60c4 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -4722,7 +4722,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
>                 failure_string = "bad vm_flags";
>                 goto err_bad_arg;
>         }
> -       vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
> +       vma->vm_flags |= (VM_DONTCOPY | VM_MIXEDMAP) & ~VM_MAYWRITE;
>         vma->vm_ops = &binder_vm_ops;
>         vma->vm_private_data = proc;
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index 5a426c877dfb..a184bf12eb15 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -219,7 +219,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
>                 mm = alloc->vma_vm_mm;
>
>         if (mm) {
> -               down_write(&mm->mmap_sem);
> +               down_read(&mm->mmap_sem);
>                 vma = alloc->vma;
>         }
>
> @@ -229,6 +229,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
>                 goto err_no_vma;
>         }
>
> +       WARN_ON_ONCE(vma && !(vma->vm_flags & VM_MIXEDMAP));
> +
>         for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
>                 int ret;
>                 bool on_lru;
> @@ -288,7 +290,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
>                 /* vm_insert_page does not seem to increment the refcount */
>         }
>         if (mm) {
> -               up_write(&mm->mmap_sem);
> +               up_read(&mm->mmap_sem);
>                 mmput(mm);
>         }
>         return 0;
> @@ -321,7 +323,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
>         }
>  err_no_vma:
>         if (mm) {
> -               up_write(&mm->mmap_sem);
> +               up_read(&mm->mmap_sem);
>                 mmput(mm);
>         }
>         return vma ? -ENOMEM : -ESRCH;
> --
> 2.17.0.rc1.321.gba9d0f2565-goog
>
