On Thu, 31 Jan 2019, Liang, Kan wrote:
> > > +u64 perf_get_page_size(u64 virt)
> > > +{
> > > + unsigned long flags;
> > > + unsigned int level;
> > > + pte_t *pte;
> > > +
> > > + if (!virt)
> > > +         return 0;
> > > +
> > > + /*
> > > +  * Interrupts are disabled, so it prevents any tear down
> > > +  * of the page tables.
> > > +  * See the comment near struct mmu_table_batch.
> > > +  */
> > > + local_irq_save(flags);
> > > + if (virt >= TASK_SIZE)
> > > +         pte = lookup_address(virt, &level);
> > > + else {
> > > +         if (current->mm)
> > > +                 pte = lookup_address_in_pgd(pgd_offset(current->mm, virt),
> > > +                                             virt, &level);
> > 
> > Aside from all the missing {}, I'm fairly sure this is broken since this
> > happens from NMI context. This can interrupt switch_mm() and things like
> > use_temporary_mm().
> > 
> > Also, why does this live in the x86 code and not in the generic code?
> > 
> 
> This is the x86 implementation.
> In the generic code there is a __weak function. I'll make it clear in the
> changelog in v4.

No, instead of hiding it in the changelog, split the patch into two:

 #1 Adding the core stuff including the weak function

 #2 Adding the x86 implementation.

Thanks,

        tglx
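
For context, a minimal sketch of the two-patch split being requested, assuming
the weak stub lives in the perf core and that returning 0 means "page size
unknown"; the file locations and the trimmed, kernel-address-only walk below
are illustrative, not the posted patch:

/*
 * Patch #1 -- generic perf core (assumed location: kernel/events/core.c).
 * Weak fallback: architectures without their own implementation report
 * "page size unknown".
 */
u64 __weak perf_get_page_size(u64 virt)
{
	return 0;
}

/*
 * Patch #2 -- x86 (assumed location: arch/x86/events/core.c).  A strong
 * definition overrides the weak stub at link time.  Only the
 * kernel-address walk is sketched here; the user-address path from the
 * posted patch is left out.
 */
u64 perf_get_page_size(u64 virt)
{
	unsigned long flags;
	unsigned int level;
	u64 size = 0;
	pte_t *pte;

	if (virt < TASK_SIZE)
		return 0;

	/* IRQs off so the page tables cannot be torn down under us */
	local_irq_save(flags);
	pte = lookup_address(virt, &level);
	if (pte && pte_present(*pte))
		size = page_level_size(level);
	local_irq_restore(flags);

	return size;
}

With the weak symbol, every other architecture keeps building and simply
reports 0, while x86 gets the precise answer once the second patch lands.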
