On Mon, Aug 27, 2018 at 09:46:45AM +0200, Michal Hocko wrote:
> On Fri 24-08-18 11:08:24, Mike Kravetz wrote:
> > On 08/24/2018 01:41 AM, Michal Hocko wrote:
> > > On Thu 23-08-18 13:59:16, Mike Kravetz wrote:
> > > 
> > > Acked-by: Michal Hocko <mho...@suse.com>
> > > 
> > > One nit below.
> > > 
> > > [...]
> > >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > >> index 3103099f64fd..a73c5728e961 100644
> > >> --- a/mm/hugetlb.c
> > >> +++ b/mm/hugetlb.c
> > >> @@ -4548,6 +4548,9 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
> > >>          return saddr;
> > >>  }
> > >>  
> > >> +#define _range_in_vma(vma, start, end) \
> > >> +        ((vma)->vm_start <= (start) && (end) <= (vma)->vm_end)
> > >> +
> > > 
> > > static inline please. Macros and potential side effects on the given
> > > arguments are just not worth the risk. I also think this is something
> > > for more general use; we have that pattern in many places. So I would
> > > put it in linux/mm.h.
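
For reference, the static inline form being asked for could look like
this (a minimal sketch assuming the macro's semantics; the name
range_in_vma() is illustrative of a generic helper for linux/mm.h):

	static inline bool range_in_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
	{
		/* True iff [start, end) lies entirely within the vma. */
		return (vma && vma->vm_start <= start && end <= vma->vm_end);
	}

Unlike the macro, each argument is evaluated exactly once, so callers
may safely pass expressions with side effects.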
> > 
> > Thanks Michal,
> > 
> > Here is an updated patch which does as you suggest above.
> [...]
> > @@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >             subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> >             address = pvmw.address;
> >  
> > +           if (PageHuge(page)) {
> > +                   if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
> > +                           /*
> > +                            * huge_pmd_unshare unmapped an entire PMD
> > +                            * page.  There is no way of knowing exactly
> > +                            * which PMDs may be cached for this mm, so
> > +                            * we must flush them all.  start/end were
> > +                            * already adjusted above to cover this range.
> > +                            */
> > +                           flush_cache_range(vma, start, end);
> > +                           flush_tlb_range(vma, start, end);
> > +                           mmu_notifier_invalidate_range(mm, start, end);
> > +
> > +                           /*
> > +                            * The ref count of the PMD page was dropped
> > +                            * which is part of the way map counting
> > +                            * is done for shared PMDs.  Return 'true'
> > +                            * here.  When there is no other sharing,
> > +                            * huge_pmd_unshare returns false and we will
> > +                            * unmap the actual page and drop map count
> > +                            * to zero.
> > +                            */
> > +                           page_vma_mapped_walk_done(&pvmw);
> > +                           break;
> > +                   }
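
For anyone following along, the code above leans on the
huge_pmd_unshare() contract; a hedged sketch of that contract as of
this era (the prototype matches mm/hugetlb.c, the comment is a
summary rather than the implementation):

	/*
	 * Returns 1 if ptep pointed into a PMD page shared with another
	 * mm: the shared PMD page's refcount is dropped and *addr is
	 * adjusted so the caller's loop steps past the shared range.
	 * Returns 0 if nothing was shared, leaving the caller to unmap
	 * the page itself.
	 */
	int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
			     pte_t *ptep);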
> 
> This still calls into the notifier while holding the ptl lock. Either I
> am missing something or the invalidation is broken in this loop (if not
> also for other invalidations).

mmu_notifier_invalidate_range() may be called with the pt lock held; only
the _start and _end versions need to happen outside the pt lock.
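
As a hedged sketch of that locking pattern with the notifier API of
this era (example_unmap_step() and its parameters are illustrative,
assuming <linux/mmu_notifier.h> and <linux/hugetlb.h>):

	static void example_unmap_step(struct mm_struct *mm, struct hstate *h,
				       pte_t *ptep, unsigned long start,
				       unsigned long end)
	{
		spinlock_t *ptl;

		/* May sleep: must be issued before taking the ptl. */
		mmu_notifier_invalidate_range_start(mm, start, end);

		ptl = huge_pte_lock(h, mm, ptep);
		/* ... clear page table entries for [start, end) ... */
		/* Non-blocking: safe to call with the ptl held. */
		mmu_notifier_invalidate_range(mm, start, end);
		spin_unlock(ptl);

		/* May sleep: must be issued after dropping the ptl. */
		mmu_notifier_invalidate_range_end(mm, start, end);
	}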

Cheers,
Jérôme
