On Mon, Feb 18, 2019 at 12:29:22PM +0100, Peter Zijlstra wrote:
> On Fri, Feb 15, 2019 at 05:02:22PM +0000, Steven Price wrote:
> 
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index de70c1eabf33..09d308921625 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -428,6 +428,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
> >                              PMD_TYPE_TABLE)
> >  #define pmd_sect(pmd)              ((pmd_val(pmd) & PMD_TYPE_MASK) == \
> >                              PMD_TYPE_SECT)
> > +#define pmd_large(x)               pmd_sect(x)
> >  
> >  #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
> >  #define pud_sect(pud)              (0)
> > @@ -435,6 +436,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
> >  #else
> >  #define pud_sect(pud)              ((pud_val(pud) & PUD_TYPE_MASK) == \
> >                              PUD_TYPE_SECT)
> > +#define pud_large(x)               pud_sect(x)
> >  #define pud_table(pud)             ((pud_val(pud) & PUD_TYPE_MASK) == \
> >                              PUD_TYPE_TABLE)
> >  #endif
> 
> So on x86 p*d_large() also matches p*d_huge() and THP, but it is not
> clear to me that this p*d_sect() thing does so, given your definitions.
> 
> See here why I care:
> 
>   
> http://lkml.kernel.org/r/20190201124741.ge31...@hirez.programming.kicks-ass.net

I believe it does not.

IIUC our p?d_huge() helpers implicitly handle contiguous entries. That's
where you have $N entries in the current level of table that the TLB can
cache together as one.

Our p?d_sect() helpers only match section entries. That's where we map
an entire next-level-table's worth of VA space with a single entry at
the current level.

Thanks,
Mark.
