On Tue, Jun 24, 2025 at 08:10:57PM -0700, Boqun Feng wrote:
> +static void synchronize_shazptr_normal(void *ptr)
> +{
> +     int cpu;
> +     unsigned long blocking_grp_mask = 0;
> +
> +     smp_mb(); /* Synchronize with the smp_mb() in shazptr_acquire(). */
> +
> +     for_each_possible_cpu(cpu) {
> +             void **slot = per_cpu_ptr(&shazptr_slots, cpu);
> +             void *val;
> +
> +             /* Pair with smp_store_release() in shazptr_clear(). */
> +             val = smp_load_acquire(slot);
> +
> +             if (val == ptr || val == SHAZPTR_WILDCARD)
> +                             blocking_grp_mask |= 1UL << (cpu / shazptr_scan.cpu_grp_size);
> +     }
> +
> +     /* Found blocking slots, prepare to wait. */
> +     if (blocking_grp_mask) {

synchronize_rcu() would be enough here, since all users have preemption disabled.
But I guess that defeats the performance purpose? (If so, this might deserve a
comment somewhere.)
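
For illustration, roughly what I have in mind (a sketch only, assuming the
read-side critical sections really are non-preemptible as said above; the
function name is made up):

static void synchronize_shazptr_via_rcu(void *ptr)
{
        /*
         * @ptr is unused: readers set and clear their slot with
         * preemption disabled, so waiting for a grace period is
         * enough for any reader that might still be protecting
         * @ptr to have finished its critical section.
         */
        synchronize_rcu();
}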

I guess blocking_grp_mask is there to avoid allocating a cpumask (again for
performance purposes? So I guess synchronize_shazptr_normal() has some
performance expectations?)

One possibility is to have the ptr contained in:

struct hazptr {
       void *ptr;
       struct cpumask scan_mask;
};

And then the caller could simply scan those remaining CPUs itself, without
relying on the kthread (see the sketch below).
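
Roughly something like this (only a sketch: it reuses shazptr_slots and
SHAZPTR_WILDCARD from your patch, while the function name and the naive
rescan loop are just placeholders to illustrate the idea):

static void shazptr_wait_scan(struct hazptr *hp)
{
        int cpu;

        while (!cpumask_empty(&hp->scan_mask)) {
                for_each_cpu(cpu, &hp->scan_mask) {
                        void *val = smp_load_acquire(per_cpu_ptr(&shazptr_slots, cpu));

                        /* Drop CPUs whose slot no longer blocks us. */
                        if (val != hp->ptr && val != SHAZPTR_WILDCARD)
                                cpumask_clear_cpu(cpu, &hp->scan_mask);
                }
                cond_resched();
        }
}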

But I'm sure there are good reasons for not doing that :-)

Thanks.

-- 
Frederic Weisbecker
SUSE Labs
