On Fri, Oct 09, 2020 at 08:29:25AM -0400, Liang, Kan wrote:
> 
> 
> On 10/9/2020 5:09 AM, Peter Zijlstra wrote:
> > (we might not need the #ifdef gunk, but I've not yet dug out my cross
> >   compilers this morning)
> > 
> > ---
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -7009,6 +7009,7 @@ static u64 perf_virt_to_phys(u64 virt)
> >    */
> >   static u64 __perf_get_page_size(struct mm_struct *mm, unsigned long addr)
> >   {
> > +   struct page *page;
> >     pgd_t *pgd;
> >     p4d_t *p4d;
> >     pud_t *pud;
> > @@ -7030,15 +7031,27 @@ static u64 __perf_get_page_size(struct m
> >     if (!pud_present(*pud))
> >             return 0;
> > -   if (pud_leaf(*pud))
> > +   if (pud_leaf(*pud)) {
> > +#ifdef pud_page
> > +           page = pud_page(*pud);
> > +           if (PageHuge(page))
> > +                   return page_size(compound_head(page));
> 
> I think page_size() returns the Kernel Page Size of a compound page.
> What we want is the MMU page size.
> 
> If it's in the generic code, I think it would be a problem for X86.

See the PageHuge() condition before it. It only makes sense to provide a
hugetlb page-size if the actual hardware supports it.

For x86, hugetlb only supports PMD- and PUD-sized pages, so the added code
is pointless and should result in identical behaviour.
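
Concretely (illustration only, x86 numbers): a 2M hugetlb page sits behind
a PMD leaf and page_size(compound_head(page)) == 2M == 1ULL << PMD_SHIFT,
while a 1G hugetlb page sits behind a PUD leaf and its compound size is
again exactly 1ULL << PUD_SHIFT, so both branches return the same value.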

For architectures whose hugetlb page sizes do not align with the
page-table levels (arm64, sparc64 and possibly power), this will
(hopefully) give the right number.
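
Something like the below (untested sketch, not the actual hunk; it assumes
the leaf's struct page and PageHuge() are safe to use at this point) is all
the new branch amounts to, at whatever level the walk stops:

#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Sketch only: given the struct page backing a leaf entry and the size
 * implied by the page-table level it was found at, prefer the hugetlb
 * compound size when the two disagree.
 *
 * arm64 contiguous hugepages are the obvious case: with a 4K granule a
 * 64K hugetlb page is mapped by 16 ordinary PTEs and a 32M one by 16
 * PMDs, so the bare walk reports 4K resp. 2M while
 * page_size(compound_head(page)) reports 64K resp. 32M.
 */
static u64 leaf_page_size(struct page *page, u64 level_size)
{
	if (PageHuge(page))
		return page_size(compound_head(page));

	/* THP and kernel mappings: the level size is the real size. */
	return level_size;
}

The pud_leaf() hunk above is (presumably) this with level_size ==
1ULL << PUD_SHIFT, and a pmd_leaf() case would follow the same pattern.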
