On Thu, Mar 21, 2019 at 8:16 PM Eric Dumazet <[email protected]> wrote:
>
> On hosts with many cpus we can observe a very serious contention
> on spinlocks used in mm slab layer.
>
> The following can happen quite often :
>
> 1) TX path
>   sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
>   ACK is received on CPU B, and consumes the skb that was in the retransmit
>   queue.
>
> 2) RX path
>   network driver allocates skb on CPU C
>   recvmsg() happens on CPU D, freeing the skb after it has been delivered
>   to user space.
>
> In both cases, we are hitting the asymmetric alloc/free pattern
> for which slab has to drain alien caches. At 8 Mpps, this represents
> 16 M alloc/free operations per second and has a huge penalty.
>
> In an interesting experiment, I tried to use a single kmem_cache for all
> the skbs, i.e. in skb_init() :
>
>   skbuff_fclone_cache = skbuff_head_cache =
>           kmem_cache_create("skbuff_fclone_cache",
>                             sizeof(struct sk_buff_fclones), 0,
>                             SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
>
> and most of the contention disappeared, since cpus could better use
> their local slab per-cpu cache.
>
> But we can do actually better, in the following patches.
>
> TX : at ACK time, no longer free the skb but put it back in a tcp socket
>      cache, so that the next sendmsg() can reuse it immediately.
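>
> A minimal sketch of the TX idea (the field name sk_tx_skb_cache and the
> exact hook points are illustrative, see the actual patch for details) :
>
>   /* At ACK time, instead of unconditionally freeing the skb that sat
>    * in the retransmit queue, park it on the socket :
>    */
>   if (!sk->sk_tx_skb_cache) {
>           sk->sk_tx_skb_cache = skb;
>           return;
>   }
>   __kfree_skb(skb);
>
>   /* sk_stream_alloc_skb() then tries this cache before hitting slab.
>    * Per the v2 note below, it must first make sure the clone taken at
>    * transmit time has been freed before reusing the skb :
>    */
>   skb = sk->sk_tx_skb_cache;
>   if (skb) {
>           const struct sk_buff_fclones *fclones;
>
>           fclones = container_of(skb, struct sk_buff_fclones, skb1);
>           if (refcount_read(&fclones->fclone_ref) == 1) {
>                   /* the clone given to the stack is gone, safe to reuse */
>                   sk->sk_tx_skb_cache = NULL;
>                   pskb_trim(skb, 0);      /* reset payload for reuse */
>                   return skb;
>           }
>   }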
>
> RX : at recvmsg() time, do not free the skb but put it in a tcp socket
>      cache, so that it can be freed by the cpu feeding the incoming
>      packets in BH.
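>
> A minimal sketch of the RX idea (again, the field name sk_rx_skb_cache
> is illustrative) :
>
>   /* sk_eat_skb(), called from recvmsg(), parks the skb instead of
>    * freeing it. When RPS/RFS is active, packets can be steered to
>    * other cpus, so the cache is bypassed (see the v2 note below) :
>    */
>   __skb_unlink(skb, &sk->sk_receive_queue);
>   if (!static_branch_unlikely(&rps_needed) && !sk->sk_rx_skb_cache) {
>           sk->sk_rx_skb_cache = skb;      /* freed/reused from BH */
>           return;
>   }
>   __kfree_skb(skb);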
>
> This increased the performance of a small RPC benchmark by about 10% on
> a host with 112 hyperthreads.
>
> v2 : - Solved a race condition : sk_stream_alloc_skb() now makes sure
>        the prior clone has been freed before reusing the cached skb.
>      - Really test rps_needed in sk_eat_skb() as claimed.
>      - Fixed rps_needed use in drivers/net/tun.c
>
> Eric Dumazet (3):
>   net: convert rps_needed and rfs_needed to new static branch api
>   tcp: add one skb cache for tx
>   tcp: add one skb cache for rx
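>
> For reference, patch 1 follows the usual pattern for converting from the
> old static_key API to the newer static branch API, roughly :
>
>   /* before */
>   struct static_key rps_needed __read_mostly;
>   if (static_key_false(&rps_needed)) { ... }
>
>   /* after */
>   DEFINE_STATIC_KEY_FALSE(rps_needed);
>   if (static_branch_unlikely(&rps_needed)) { ... }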

Acked-by: Willem de Bruijn <[email protected]>

Thanks Eric!
