On Thu, Sep 04, 2025 at 01:57:36PM +0100, Kevin Brodsky wrote:
> We now support nested lazy_mmu sections on all architectures
> implementing the API. Update the API comment accordingly.
> 
> Signed-off-by: Kevin Brodsky <kevin.brod...@arm.com>
Acked-by: Mike Rapoport (Microsoft) <r...@kernel.org>

> ---
>  include/linux/pgtable.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 6932c8e344ab..be0f059beb4d 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
>   * of the lazy mode. So the implementation must assume preemption may be enabled
>   * and cpu migration is possible; it must take steps to be robust against this.
>   * (In practice, for user PTE updates, the appropriate page table lock(s) are
> - * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
> - * and the mode cannot be used in interrupt context.
> + * held, but for kernel PTE updates, no lock is held). The mode cannot be used
> + * in interrupt context.
> + *
> + * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called
> + * while the lazy MMU mode has already been enabled. An implementation should
> + * handle this using the state returned by enter() and taken by the matching
> + * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate
> + * whether this enter/leave pair is nested inside another or not. (It is up to
> + * the implementation to track whether the lazy MMU mode is enabled at any point
> + * in time.) The expectation is that leave() will flush any batched state
> + * unconditionally, but only leave the lazy MMU mode if the passed state is not
> + * LAZY_MMU_NESTED.
>   */
>  #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
>  typedef int lazy_mmu_state_t;
> -- 
> 2.47.0
> 

-- 
Sincerely yours,
Mike.
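
As a rough illustration of the contract described in the updated comment, an
implementation might look something like the sketch below. Only
lazy_mmu_state_t and the LAZY_MMU_{DEFAULT,NESTED} values come from the API
being documented; the TIF_LAZY_MMU flag and the flush_lazy_mmu_batch() helper
are hypothetical stand-ins for whatever tracking flag and batching state a
real architecture maintains.

static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
{
	/* Already in lazy MMU mode: this is a nested enter/leave pair. */
	if (test_and_set_thread_flag(TIF_LAZY_MMU))
		return LAZY_MMU_NESTED;

	return LAZY_MMU_DEFAULT;
}

static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
{
	/* Flush any batched updates unconditionally... */
	flush_lazy_mmu_batch();

	/* ...but only the outermost leave() actually exits the mode. */
	if (state != LAZY_MMU_NESTED)
		clear_thread_flag(TIF_LAZY_MMU);
}

Per-task state (a thread flag) is used in this sketch rather than a per-CPU
flag because, as the comment notes, preemption may be enabled and the task
may migrate between CPUs while the mode is active.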