On Mon, Nov 24, 2025 at 09:20:25AM -0800, Paul E. McKenney wrote:
> On Mon, Nov 24, 2025 at 01:04:20PM +0000, Will Deacon wrote:
> > On Mon, Nov 10, 2025 at 09:29:43AM -0800, Paul E. McKenney wrote:
> > > On Mon, Nov 10, 2025 at 11:24:07AM +0000, Will Deacon wrote:
> > > > On Sat, Nov 08, 2025 at 10:38:32AM -0800, Paul E. McKenney wrote:
> > > > > On Sat, Nov 08, 2025 at 01:07:45PM +0000, Will Deacon wrote:
> > > > > > On Wed, Nov 05, 2025 at 12:32:15PM -0800, Paul E. McKenney wrote:
> > > > > > > Some arm64 platforms have slow per-CPU atomic operations, for
> > > > > > > example, the Neoverse V2.  This commit therefore moves SRCU-fast from
> > > > > > > per-CPU atomic operations to interrupt-disabled
> > > > > > > non-read-modify-write-atomic atomic_read()/atomic_set() operations.
> > > > > > > This works because the SRCU-fast-updown read-side primitives, unlike
> > > > > > > srcu_read_unlock_fast(), are not invoked from NMI handlers, which
> > > > > > > means that srcu_read_lock_fast_updown() and
> > > > > > > srcu_read_unlock_fast_updown() can exclude themselves and each other
> > > > > > > simply by disabling interrupts.
> > > > > > > 
> > > > > > > This reduces the overhead of calls to srcu_read_lock_fast_updown()
> > > > > > > and srcu_read_unlock_fast_updown() from about 100ns to about 12ns on
> > > > > > > an ARM Neoverse V2.  Although this is not excellent compared to about
> > > > > > > 2ns on x86, it sure beats 100ns.
> > > > > > > 
> > > > > > > This command was used to measure the overhead:
> > > > > > > 
> > > > > > > tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale
> > > > > > > --allcpus --duration 5 --configs NOPREEMPT
> > > > > > > --kconfig "CONFIG_NR_CPUS=64 CONFIG_TASKS_TRACE_RCU=y"
> > > > > > > --bootargs "refscale.loops=100000 refscale.guest_os_delay=5
> > > > > > > refscale.nreaders=64 refscale.holdoff=30
> > > > > > > torture.disable_onoff_at_boot refscale.scale_type=srcu-fast-updown
> > > > > > > refscale.verbose_batched=8 torture.verbose_sleep_frequency=8
> > > > > > > torture.verbose_sleep_duration=8 refscale.nruns=100" --trust-make
> > > > > > > 
> > > > > > > Signed-off-by: Paul E. McKenney <[email protected]>
> > > > > > > Cc: Catalin Marinas <[email protected]>
> > > > > > > Cc: Will Deacon <[email protected]>
> > > > > > > Cc: Mark Rutland <[email protected]>
> > > > > > > Cc: Mathieu Desnoyers <[email protected]>
> > > > > > > Cc: Steven Rostedt <[email protected]>
> > > > > > > Cc: Sebastian Andrzej Siewior <[email protected]>
> > > > > > > Cc: <[email protected]>
> > > > > > > Cc: <[email protected]>
> > > > > > > ---
> > > > > > >  include/linux/srcutree.h | 51 +++++++++++++++++++++++++++++++++++++---
> > > > > > >  1 file changed, 48 insertions(+), 3 deletions(-)
> > > > > > 
> > > > > > I've queued the per-cpu tweak from Catalin in the arm64 fixes tree [1]
> > > > > > for 6.18, so please can you drop this SRCU commit from your tree?
> > > > > 
> > > > > Very good!  Adding Frederic on CC since he is doing the pull request
> > > > > for the upcoming merge window.
> > > > > 
> > > > > But if this doesn't show up in -rc1, we reserve the right to put it
> > > > > back in.
> > > > > 
> > > > > Sorry, couldn't resist!   ;-)
> > > > 
> > > > I've merged it as a fix, so hopefully it will show up in v6.18-rc6.
> > > 
> > > Even better, thank you!!!
> > 
> > It landed in Linus' tree here:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/arm64?id=535fdfc5a228524552ee8810c9175e877e127c27
> 
> Again, thank you, and Breno has started backporting it for use in
> our fleet.
> 
> > Please can you drop the SRCU change from -next? It still shows up in
> > 20251121.
> 
> This one?
> 
> 11f748499236 ("srcu: Optimize SRCU-fast-updown for arm64")
> 
> if so, Frederic, could you please drop this commit?

Dropped, thanks!

(And I'm glad to do so given how error-prone it can be).

-- 
Frederic Weisbecker
SUSE Labs
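
The quoted commit message describes moving a per-CPU atomic read-modify-write
to an interrupt-disabled atomic_long_read()/atomic_long_set() pair.  A minimal
sketch of that pattern follows, for background only; it is not the actual
include/linux/srcutree.h change, and the names my_srcu_fast_ctr,
my_ctr_inc_rmw(), and my_ctr_inc_irq() are hypothetical.

/*
 * Illustrative sketch only, not the actual include/linux/srcutree.h change;
 * the counter and helper names below are hypothetical.
 */
#include <linux/atomic.h>
#include <linux/irqflags.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(atomic_long_t, my_srcu_fast_ctr);

/* RMW form: a per-CPU atomic increment, slow on some arm64 implementations. */
static inline void my_ctr_inc_rmw(void)
{
	atomic_long_inc(this_cpu_ptr(&my_srcu_fast_ctr));
}

/*
 * Non-RMW form: with interrupts disabled, a plain atomic_long_read()
 * followed by a plain atomic_long_set() suffices, because (per the quoted
 * commit message) these updown primitives are not invoked from NMI
 * handlers, so disabling interrupts excludes the only other updaters of
 * this CPU's counter.
 */
static inline void my_ctr_inc_irq(void)
{
	unsigned long flags;
	atomic_long_t *ctr;

	local_irq_save(flags);
	ctr = this_cpu_ptr(&my_srcu_fast_ctr);
	atomic_long_set(ctr, atomic_long_read(ctr) + 1);
	local_irq_restore(flags);
}

The second form trades an atomic RMW instruction for a
local_irq_save()/local_irq_restore() pair, which the measurements quoted above
put at roughly 12ns rather than roughly 100ns per lock/unlock pair on a
Neoverse V2.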
