On 08/22/2018 05:28 AM, Michal Hocko wrote:
> On Tue 21-08-18 18:10:42, Mike Kravetz wrote:
> [...]
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index eb477809a5c0..8cf853a4b093 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>      }
>>  
>>      /*
>> -     * We have to assume the worse case ie pmd for invalidation. Note that
>> -     * the page can not be free in this function as call of try_to_unmap()
>> -     * must hold a reference on the page.
>> +     * For THP, we have to assume the worse case ie pmd for invalidation.
>> +     * For hugetlb, it could be much worse if we need to do pud
>> +     * invalidation in the case of pmd sharing.
>> +     *
>> +     * Note that the page can not be free in this function as call of
>> +     * try_to_unmap() must hold a reference on the page.
>>       */
>>      end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
>> +    if (PageHuge(page)) {
>> +            /*
>> +             * If sharing is possible, start and end will be adjusted
>> +             * accordingly.
>> +             */
>> +            (void)huge_pmd_sharing_possible(vma, &start, &end);
>> +    }
>>      mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
> 
> I do not get this part. Why don't we simply unconditionally invalidate
> the whole huge page range?

In this routine, we are only unmapping a single page.  The existing code
limits the invalidate range to the size of that page: 4K or 2M.  With shared
PMDs, we have the possibility of unmapping a PUD_SIZE area: 1G.  I don't
think we want to unconditionally invalidate 1G.  Is that what you are asking?
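
To put numbers on that, here is a small userspace sketch (illustrative only;
the address is made up and the PUD constants are stand-ins for x86_64 with
1G PUDs) of how much the invalidate range grows once it has to cover the PUD
behind a shared PMD:

#include <stdio.h>

/* Illustrative stand-ins for the x86_64 kernel constants. */
#define PUD_SHIFT	30
#define PUD_SIZE	(1UL << PUD_SHIFT)	/* 1G */
#define PUD_MASK	(~(PUD_SIZE - 1))

int main(void)
{
	/* Unmapping a single 2M hugetlb page at an arbitrary address. */
	unsigned long start = 0x7f5a40200000UL;
	unsigned long end   = start + (2UL << 20);

	/*
	 * If the PMD is shared, unsharing clears the PUD entry, so the
	 * invalidate range must be widened to the whole PUD_SIZE region
	 * containing the page.
	 */
	unsigned long pud_start = start & PUD_MASK;
	unsigned long pud_end   = pud_start + PUD_SIZE;

	printf("page range: %#lx-%#lx (%lu MB)\n",
	       start, end, (end - start) >> 20);
	printf("pud  range: %#lx-%#lx (%lu MB)\n",
	       pud_start, pud_end, (pud_end - pud_start) >> 20);
	return 0;
}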

I do not know how often PMD sharing is exercised.  It certainly is used by
DBs for large shared areas.  I suspect it is used less frequently than hugetlb
pages in general, and certainly less frequently than THP or base pages.

>>  
>>      while (page_vma_mapped_walk(&pvmw)) {
>> @@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>              subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
>>              address = pvmw.address;
>>  
>> +            if (PageHuge(page)) {
>> +                    if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
> 
> huge_pmd_unshare is documented to require a pte lock. Where do we take
> it?

It is somewhat hidden, but we are in the loop:

        while (page_vma_mapped_walk(&pvmw)) {

The routine page_vma_mapped_walk() will acquire the lock, and it correctly
checks for huge pages and uses huge_pte_lockptr().

page_vma_mapped_walk_done() will release the lock.
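
Schematically (this is only a sketch of the shape of the code, not the patch
itself), the lock lifetime around huge_pmd_unshare() looks like:

while (page_vma_mapped_walk(&pvmw)) {
	/*
	 * pvmw.ptl is held here; for hugetlb pages
	 * page_vma_mapped_walk() took it via huge_pte_lockptr(),
	 * which is the lock huge_pmd_unshare() documents as required.
	 */
	if (PageHuge(page) && huge_pmd_unshare(mm, &address, pvmw.pte)) {
		/*
		 * The shared PMD was unhooked under pvmw.ptl;
		 * end the walk, which drops the lock.
		 */
		page_vma_mapped_walk_done(&pvmw);
		break;
	}

	/* otherwise the normal per-mapping unmap continues here */
}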
-- 
Mike Kravetz
