On Mon, Feb 02, 2026 at 03:45:55PM +0800, Lance Yang wrote:
> From: Lance Yang <[email protected]>
> 
> Currently, tlb_remove_table_sync_one() broadcasts IPIs to all CPUs to wait
> for any concurrent lockless page table walkers (e.g., GUP-fast). This is
> inefficient on systems with many CPUs, especially for RT workloads[1].
> 
> This patch introduces a per-CPU tracking mechanism to record which CPUs are
> actively performing lockless page table walks for a specific mm_struct.
> When freeing/unsharing page tables, we can now send IPIs only to the CPUs
> that are actually walking that mm, instead of broadcasting to all CPUs.
> 
> This is preparatory work; a follow-up patch will switch callers over
> to tlb_remove_table_sync_mm().
> 
> Note that the tracking adds ~3% latency to GUP-fast, as measured on a
> 64-core system.

What architecture, and that is acceptable?

> +/*
> + * Track CPUs doing lockless page table walks to avoid broadcast IPIs
> + * during TLB flushes.
> + */
> +DECLARE_PER_CPU(struct mm_struct *, active_lockless_pt_walk_mm);
> +
> +static inline void pt_walk_lockless_start(struct mm_struct *mm)
> +{
> +     lockdep_assert_irqs_disabled();
> +
> +     /*
> +      * Tell other CPUs we're doing lockless page table walk.
> +      *
> +      * Full barrier needed to prevent page table reads from being
> +      * reordered before this write.
> +      *
> +      * Pairs with smp_rmb() in tlb_remove_table_sync_mm().
> +      */
> +     this_cpu_write(active_lockless_pt_walk_mm, mm);
> +     smp_mb();

One thing to try is something like:

        xchg(this_cpu_ptr(&active_lockless_pt_walk_mm), mm);

That *might* be a little better on x86_64; on anything else you really
don't want to use this_cpu_*() ops when you *know* IRQs are already
disabled.
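
Concretely, the start helper might then look like this (a sketch; it
leans on xchg() being fully ordered, which is why the explicit
smp_mb() can go away):

	static inline void pt_walk_lockless_start(struct mm_struct *mm)
	{
		lockdep_assert_irqs_disabled();

		/*
		 * xchg() implies a full barrier, so no separate smp_mb().
		 * this_cpu_ptr() + xchg() also avoids the generic
		 * this_cpu_write() irq-save/restore dance on !x86.
		 */
		xchg(this_cpu_ptr(&active_lockless_pt_walk_mm), mm);
	}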

> +}
> +
> +static inline void pt_walk_lockless_end(void)
> +{
> +     lockdep_assert_irqs_disabled();
> +
> +     /*
> +      * Clear the pointer so other CPUs no longer see this CPU as walking
> +      * the mm. Use smp_store_release to ensure page table reads complete
> +      * before the clear is visible to other CPUs.
> +      */
> +     smp_store_release(this_cpu_ptr(&active_lockless_pt_walk_mm), NULL);
> +}
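
Presumably the GUP-fast usage then nests inside the existing IRQ-off
region, something like this (a sketch; gup_fast_pgd_range() is a
stand-in for whatever the actual walker loop is called):

	unsigned long flags;
	int nr_pinned = 0;

	local_irq_save(flags);
	pt_walk_lockless_start(mm);
	/* lockless walk, IRQs stay disabled throughout */
	gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
	pt_walk_lockless_end();
	local_irq_restore(flags);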
> +
>  int get_user_pages_fast(unsigned long start, int nr_pages,
>                       unsigned int gup_flags, struct page **pages);
>  int pin_user_pages_fast(unsigned long start, int nr_pages,

> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 2faa23d7f8d4..35c89e4b6230 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -285,6 +285,56 @@ void tlb_remove_table_sync_one(void)
>       smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
>  }
>  
> +DEFINE_PER_CPU(struct mm_struct *, active_lockless_pt_walk_mm);
> +EXPORT_PER_CPU_SYMBOL_GPL(active_lockless_pt_walk_mm);

Why the heck is this exported? Both users are firmly core code.

> +/**
> + * tlb_remove_table_sync_mm - send IPIs to CPUs doing lockless page table
> + * walk for @mm
> + *
> + * @mm: target mm; only CPUs walking this mm get an IPI.
> + *
> + * Like tlb_remove_table_sync_one() but only targets CPUs in
> + * active_lockless_pt_walk_mm.
> + */
> +void tlb_remove_table_sync_mm(struct mm_struct *mm)
> +{
> +     cpumask_var_t target_cpus;
> +     bool found_any = false;
> +     int cpu;
> +
> +     if (WARN_ONCE(!mm, "NULL mm in %s\n", __func__)) {
> +             tlb_remove_table_sync_one();
> +             return;
> +     }
> +
> +     /* If we can't, fall back to broadcast. */
> +     if (!alloc_cpumask_var(&target_cpus, GFP_ATOMIC)) {
> +             tlb_remove_table_sync_one();
> +             return;
> +     }
> +
> +     cpumask_clear(target_cpus);
> +
> +     /* Pairs with smp_mb() in pt_walk_lockless_start(). */

Pairs how? The start thing does something like:

        [W] active_lockless_pt_walk_mm = mm
        MB
        [L] page-tables

So this is:

        [L] page-tables
        RMB
        [L] active_lockless_pt_walk_mm

?
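
FWIW, if the freer's page-table unhook (a store) needs to be ordered
against its load of the per-CPU pointer, the combined shape is
store-buffering:

	CPU0 (walker)			CPU1 (freer)
	[W] walk_mm = mm		[W] unhook page-table
	smp_mb()			smp_mb() ?
	[L] page-tables			[L] walk_mm

and SB wants a full barrier on both sides; smp_rmb() only orders
loads against loads.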

> +     smp_rmb();
> +
> +     /* Find CPUs doing lockless page table walks for this mm */
> +     for_each_online_cpu(cpu) {
> +             if (per_cpu(active_lockless_pt_walk_mm, cpu) == mm) {
> +                     cpumask_set_cpu(cpu, target_cpus);

You really don't need this to be atomic.
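
I.e. the non-atomic variant:

	__cpumask_set_cpu(cpu, target_cpus);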

> +                     found_any = true;
> +             }
> +     }
> +
> +     /* Only send IPIs to CPUs actually doing lockless walks */
> +     if (found_any)
> +             smp_call_function_many(target_cpus, tlb_remove_table_smp_sync,
> +                                    NULL, 1);

Coding style wants { } here. Also, isn't this what we have
smp_call_function_many_cond() for?
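
For illustration, going through the exported on_each_cpu_cond()
wrapper would look something like this (a sketch; the cond callback
name is made up, and note on_each_cpu_cond() also covers the local
CPU):

	static bool pt_walk_mm_match(int cpu, void *info)
	{
		return per_cpu(active_lockless_pt_walk_mm, cpu) == info;
	}

	void tlb_remove_table_sync_mm(struct mm_struct *mm)
	{
		/* Full barrier per the pairing discussion above. */
		smp_mb();

		/* IPI only CPUs whose recorded walk mm matches @mm. */
		on_each_cpu_cond(pt_walk_mm_match, tlb_remove_table_smp_sync,
				 mm, true);
	}

That also gets rid of the cpumask allocation and the GFP_ATOMIC
fallback entirely.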

> +     free_cpumask_var(target_cpus);
> +}
> +
>  static void tlb_remove_table_rcu(struct rcu_head *head)
>  {
>  	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
> -- 
> 2.49.0
> 
