On Mon, 23 Oct 2023 19:24:52 +0800
"wuqiang.matt" <wuqiang.m...@bytedance.com> wrote:

> objpool_push() can only happen on the local CPU, so only the local CPU
> can touch slot->tail and slot->last. This ensures the correctness of
> using cmpxchg without the lock prefix, i.e. using try_cmpxchg_local
> instead of try_cmpxchg_acquire.
> 
> Testing with IACA shows that a pop/push pair costs 16.46 cycles with
> the lock version and 15.63 cycles with the local-push version. Kretprobe
> throughput improves to 1.019 times that of the lock version on x86_64
> systems.
> 
> OS: Debian 10 x86_64, Linux 6.6-rc6 with freelist
> HW: XEON 8336C x 2, 64 cores/128 threads, DDR4 3200MT/s
> 
>                  1T         2T         4T         8T        16T
>   lock:    29909085   59865637  119692073  239750369  478005250
>   local:   30297523   60532376  121147338  242598499  484620355
>                 32T        48T        64T        96T       128T
>   lock:   957553042 1435814086 1680872925 2043126796 2165424198
>   local:  968526317 1454991286 1861053557 2059530343 2171732306
> 

Yeah, slot->tail is only used on the local CPU. This looks good to me.
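
For illustration, here is a minimal, hypothetical sketch of the push-side
pattern (not the actual lib/objpool.c code), assuming the caller runs with
interrupts (or at least preemption) disabled so it cannot migrate CPUs
mid-push. Under that assumption only the owning CPU ever advances
slot->tail, so the cmpxchg only has to be atomic against interrupts on
that CPU rather than against other CPUs, which is what try_cmpxchg_local()
provides (a plain cmpxchg without the LOCK prefix on x86):

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* Hypothetical slot layout, for illustration only. */
struct demo_slot {
	uint32_t head;		/* pop side: may be updated by any CPU */
	uint32_t tail;		/* push side: advanced only by the owning CPU */
	uint32_t mask;
	void *entries[];
};

/*
 * Must be called with interrupts (or preemption) disabled so the task
 * cannot migrate between reading @tail and the cmpxchg below.
 */
static void demo_push_local(struct demo_slot *slot, void *obj)
{
	uint32_t tail = READ_ONCE(slot->tail);

	do {
		/*
		 * Only this CPU ever writes @tail; a plain (non-LOCK)
		 * cmpxchg is still atomic w.r.t. interrupts/NMIs on this
		 * CPU, which is all that is needed here.  On failure,
		 * try_cmpxchg_local() reloads the current value into @tail
		 * and we retry.
		 */
	} while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));

	/* the slot at @tail is now reserved for @obj */
	WRITE_ONCE(slot->entries[tail & slot->mask], obj);
}

The pop side (not shown) may run on any CPU, so it keeps a fully atomic
cmpxchg on slot->head; only the push side can drop the LOCK prefix.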

Acked-by: Masami Hiramatsu (Google) <mhira...@kernel.org>

Thanks!

> Signed-off-by: wuqiang.matt <wuqiang.m...@bytedance.com>
> ---
>  lib/objpool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/objpool.c b/lib/objpool.c
> index ce0087f64400..a032701beccb 100644
> --- a/lib/objpool.c
> +++ b/lib/objpool.c
> @@ -166,7 +166,7 @@ objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu)
>               head = READ_ONCE(slot->head);
>               /* fault caught: something must be wrong */
>               WARN_ON_ONCE(tail - head > pool->nr_objs);
> -     } while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
> +     } while (!try_cmpxchg_local(&slot->tail, &tail, tail + 1));
>  
>       /* now the tail position is reserved for the given obj */
>       WRITE_ONCE(slot->entries[tail & slot->mask], obj);
> -- 
> 2.40.1
> 


-- 
Masami Hiramatsu (Google) <mhira...@kernel.org>