On 6 Feb 2017, at 0:12, Naoya Horiguchi wrote:

> On Sun, Feb 05, 2017 at 11:12:39AM -0500, Zi Yan wrote:
>> From: Zi Yan <z...@nvidia.com>
>>
>> It allows splitting huge pmd while you are holding the pmd lock.
>> It is prepared for future zap_pmd_range() use.
>>
>> Signed-off-by: Zi Yan <zi....@cs.rutgers.edu>
>> ---
>>  include/linux/huge_mm.h |  2 ++
>>  mm/huge_memory.c        | 22 ++++++++++++----------
>>  2 files changed, 14 insertions(+), 10 deletions(-)
>>
> ...
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 03e4566fc226..cd66532ef667 100644
> ...
>> @@ -2036,10 +2039,9 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>                      clear_page_mlock(page);
>>      } else if (!pmd_devmap(*pmd))
>>              goto out;
>> -    __split_huge_pmd_locked(vma, pmd, haddr, freeze);
>> +    __split_huge_pmd_locked(vma, pmd, address, freeze);
>
> Could you explain what is intended on this change?
> If some caller (f.e. wp_huge_pmd?) could call __split_huge_pmd() with
> address not aligned with pmd border, __split_huge_pmd_locked() results in
> triggering VM_BUG_ON(haddr & ~HPAGE_PMD_MASK).

This change is intended for any caller that already holds the pmd lock. For now,
this call site is the only one.

In Patch 2, I moved "unsigned long haddr = address & HPAGE_PMD_MASK;"
from __split_huge_pmd() to __split_huge_pmd_locked(), so
VM_BUG_ON(haddr & ~HPAGE_PMD_MASK) will not be triggered.
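
For illustration, here is a tiny stand-alone sketch of that masking step
(the 2 MiB HPAGE_PMD_* constants and main() below are illustrative
placeholders, not the kernel definitions from huge_mm.h):

#include <stdio.h>

/* Illustrative 2 MiB huge-page constants; in the kernel these are
 * architecture-dependent and come from huge_mm.h. */
#define HPAGE_PMD_SHIFT 21
#define HPAGE_PMD_SIZE  (1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long address = 0x7f0000212345UL; /* arbitrary unaligned address */

	/* The statement Patch 2 moves into __split_huge_pmd_locked(): */
	unsigned long haddr = address & HPAGE_PMD_MASK;

	/* haddr is PMD-aligned by construction, so a check equivalent to
	 * VM_BUG_ON(haddr & ~HPAGE_PMD_MASK) can never fire. */
	printf("address = %#lx, haddr = %#lx, stray bits = %#lx\n",
	       address, haddr, haddr & ~HPAGE_PMD_MASK);
	return 0;
}

With the mask applied inside the locked helper, callers like the planned
zap_pmd_range() user can pass any address within the huge page.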

>
> Thanks,
> Naoya Horiguchi
>
>>  out:
>>      spin_unlock(ptl);
>> -    mmu_notifier_invalidate_range_end(mm, haddr, haddr + HPAGE_PMD_SIZE);
>>  }
>>
>>  void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>> -- 
>> 2.11.0
>>


--
Best Regards
Yan Zi
