On Thu, 16 Jan 2020 12:19:59 +0530 "Aneesh Kumar K.V" 
<aneesh.ku...@linux.ibm.com> wrote:

> On 1/16/20 12:15 PM, Aneesh Kumar K.V wrote:
> > From: Peter Zijlstra <pet...@infradead.org>
> > 
> > Aneesh reported that:
> > 
> >     tlb_flush_mmu()
> >       tlb_flush_mmu_tlbonly()
> >         tlb_flush()                 <-- #1
> >       tlb_flush_mmu_free()
> >         tlb_table_flush()
> >           tlb_table_invalidate()
> >             tlb_flush_mmu_tlbonly()
> >               tlb_flush()           <-- #2
> > 
> > does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
> > clear tlb->end in that case.
> > 
> > Observe that any caller to __tlb_adjust_range() also sets at least one
> > of the tlb->freed_tables || tlb->cleared_p* bits, and those are
> > unconditionally cleared by __tlb_reset_range().
> > 
> > Change the condition for actually issuing TLBI to having one of those
> > bits set, as opposed to having tlb->end != 0.
> > 
> 
> 
> We should possibly get this into stable too along with the first two 
> patches. I am not quite sure if this will qualify for a stable backport, 
> hence I avoided adding Cc:sta...@kernel.org

I'm not seeing any description of the user-visible runtime effects. 
Always needed, especially for -stable, please.

It appears to be a small performance benefit?  If that benefit were
"large" and measurements were presented, then that would be something
we might wish to backport.
