On Thu 14-12-17 18:50:41, Anshuman Khandual wrote:
> On 12/14/2017 06:34 PM, Michal Hocko wrote:
> > On Thu 14-12-17 18:25:54, Anshuman Khandual wrote:
> >> On 12/14/2017 04:59 PM, Michal Hocko wrote:
> >>> On Thu 14-12-17 16:44:26, Anshuman Khandual wrote:
> >>>> diff --git a/mm/mprotect.c b/mm/mprotect.c
> >>>> index ec39f73..43c29fa 100644
> >>>> --- a/mm/mprotect.c
> >>>> +++ b/mm/mprotect.c
> >>>> @@ -196,6 +196,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> >>>>                  this_pages = change_pte_range(vma, pmd, addr, next, newprot,
> >>>>                                   dirty_accountable, prot_numa);
> >>>>                  pages += this_pages;
> >>>> +                cond_resched();
> >>>>          } while (pmd++, addr = next, addr != end);
> >>>>  
> >>>>          if (mni_start)
> >>> this is not exactly what I meant. See how change_huge_pmd does continue.
> >>> That's why I mentioned zap_pmd_range which does goto next...
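> >>>
> >>> For reference, the loop in zap_pmd_range() (mm/memory.c) has roughly
> >>> this shape; this is paraphrased and trimmed rather than quoted
> >>> verbatim. Every path through the body, including the pmd-hole skip,
> >>> funnels through the next: label, so cond_resched() runs on every
> >>> iteration:
> >>>
> >>> 	pmd = pmd_offset(pud, addr);
> >>> 	do {
> >>> 		next = pmd_addr_end(addr, end);
> >>> 		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
> >>> 			if (next - addr != HPAGE_PMD_SIZE)
> >>> 				__split_huge_pmd(vma, pmd, addr, false, NULL);
> >>> 			else if (zap_huge_pmd(tlb, vma, pmd, addr))
> >>> 				goto next;
> >>> 			/* fall through, the huge pmd just split */
> >>> 		}
> >>> 		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
> >>> 			goto next;
> >>> 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
> >>> next:
> >>> 		cond_resched();	/* reached on every iteration, holes included */
> >>> 	} while (pmd++, addr = next, addr != end);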
> >> I might still be missing something, but is this what you meant?
> > yes, except
> > 
> >> Here we give cond_resched() coverage to THP-backed pages as well.
> > but there is still 
> >             if (!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd) && !pmd_devmap(*pmd)
> >                             && pmd_none_or_clear_bad(pmd))
> >                     continue;
> > 
> > so we won't have a scheduling point on pmd holes. Maybe this doesn't
> > matter, I haven't checked, but why should we handle those differently?
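> > 
> > To make the hole concrete: with the cond_resched() placed at the
> > bottom of the loop, as in the diff above, the continue jumps straight
> > back to the loop condition and bypasses it. A trimmed sketch, with
> > most of the loop body elided:
> > 
> > 	do {
> > 		next = pmd_addr_end(addr, end);
> > 		if (!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd) && !pmd_devmap(*pmd)
> > 				&& pmd_none_or_clear_bad(pmd))
> > 			continue;	/* pmd hole: skips the cond_resched() below */
> > 
> > 		/* ... THP handling and change_pte_range() elided ... */
> > 
> > 		cond_resched();	/* never reached for pmd holes */
> > 	} while (pmd++, addr = next, addr != end);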
> 
> Maybe because it does not spend much time on those entries, so they
> cannot really trigger stalls and hence do not need scheduling points.
> In the case of zap_pmd_range(), it was spending time either in
> __split_huge_pmd() or zap_huge_pmd() and hence deserved a scheduling
> point.

As I've said, I haven't thought much about that, but the discrepancy just
caught my eye. So unless there is a really good reason not to, I would
rather use goto next consistently.
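
Something along these lines, say. This is an untested sketch, with the
mmu notifier setup and the THP page accounting elided, not a finished
patch:

	do {
		next = pmd_addr_end(addr, end);
		if (!is_swap_pmd(*pmd) && !pmd_trans_huge(*pmd) && !pmd_devmap(*pmd)
				&& pmd_none_or_clear_bad(pmd))
			goto next;
		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
			if (next - addr != HPAGE_PMD_SIZE) {
				__split_huge_pmd(vma, pmd, addr, false, NULL);
			} else {
				int nr_ptes = change_huge_pmd(vma, pmd, addr,
							      newprot, prot_numa);
				if (nr_ptes) {
					/* huge pmd was handled, accounting elided */
					goto next;
				}
			}
			/* fall through, the trans huge pmd just split */
		}
		pages += change_pte_range(vma, pmd, addr, next, newprot,
					  dirty_accountable, prot_numa);
next:
		cond_resched();	/* one scheduling point per pmd, holes included */
	} while (pmd++, addr = next, addr != end);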

-- 
Michal Hocko
SUSE Labs
