On Wed, 12 Sep 2012 20:56:41 +0800
Xiao Guangrong <xiaoguangr...@linux.vnet.ibm.com> wrote:

> To make the code clearer, move releasing the lock out of
> khugepaged_alloc_page
> 
> ...
>
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1854,11 +1854,6 @@ static struct page
>       *hpage  = alloc_hugepage_vma(khugepaged_defrag(), vma, address,
>                                     node, __GFP_OTHER_NODE);
> 
> -     /*
> -      * After allocating the hugepage, release the mmap_sem read lock in
> -      * preparation for taking it in write mode.
> -      */
> -     up_read(&mm->mmap_sem);
>       if (unlikely(!*hpage)) {
>               count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
>               *hpage = ERR_PTR(-ENOMEM);
> @@ -1905,7 +1900,6 @@ static struct page
>                      struct vm_area_struct *vma, unsigned long address,
>                      int node)
>  {
> -     up_read(&mm->mmap_sem);
>       VM_BUG_ON(!*hpage);
>       return  *hpage;
>  }
> @@ -1931,8 +1925,14 @@ static void collapse_huge_page(struct mm_struct *mm,
> 
>       VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> 
> -     /* release the mmap_sem read lock. */
>       new_page = khugepaged_alloc_page(hpage, mm, vma, address, node);
> +
> +     /*
> +      * After allocating the hugepage, release the mmap_sem read lock in
> +      * preparation for taking it in write mode.
> +      */
> +     up_read(&mm->mmap_sem);
> +
>       if (!new_page)
>               return;

Well that's a pretty minor improvement: one still has to go off on a
big hunt to locate the matching down_read().

And the patch will increase mmap_sem hold times by a teeny amount.  Do
we really want to do this?
