On Wed, 10 Feb 2021 12:25:04 +0000 Alexander Lobakin <aloba...@pm.me> wrote:
> From: Paolo Abeni <pab...@redhat.com>
> Date: Wed, 10 Feb 2021 11:21:06 +0100
>
> > I'm sorry for the late feedback, I could not step in before.
> >
> > Also adding Jesper for awareness, as he introduced the bulk free
> > infrastructure.

Thanks (and Alexander Duyck also did part of the work while at Red Hat).

In my initial versions of my patchsets I also had reuse of the SKBs
that were deferred-freed during NAPI context, but I dropped that part
because it was getting nitpicked and the merge window was getting
close.

> > On Tue, 2021-02-09 at 20:48 +0000, Alexander Lobakin wrote:
> > > @@ -231,7 +256,7 @@ struct sk_buff *__build_skb(void *data, unsigned int frag_size)
> > >   */
> > >  struct sk_buff *build_skb(void *data, unsigned int frag_size)
> > >  {
> > > -	struct sk_buff *skb = __build_skb(data, frag_size);
> > > +	struct sk_buff *skb = __build_skb(data, frag_size, true);
> >
> > I must admit I'm a bit scared of this. There are several high-speed
> > device drivers that will move to bulk allocation, and we don't have
> > any performance figures for them.
> >
> > In my experience with (low-end) MIPS boards, cache miss costs tend
> > to be much less visible there compared to reasonably recent server
> > H/W, because the CPU/memory access time difference is much lower.
> >
> > When moving to higher-end H/W, the performance gain you measured
> > could be completely countered by less optimal cache usage.
> >
> > I also fear latency spikes - I'm unsure whether a 32-skb allocation
> > vs a single skb would be visible e.g. in a round-robin test.
> > Generally speaking, bulk allocating 32 skbs looks a bit too much.
> > IIRC, when Edward added listification to GRO, he did several
> > measurements with different list sizes and found 8 to be the
> > optimal value (for the tested workload). Above that number the list
> > becomes too big and the pressure on the cache outweighs the bulking
> > benefits.
>
> I can change the logic so that it allocates batches of 8 instead.
> I think I've already seen this batch value somewhere in the XDP code,
> so it might be a balanced one.

(Speaking about SLUB code): the bulk ALLOC side disables interrupts,
and can call the slow path (___slab_alloc), which is bad for
latency-sensitive workloads. Thus, I don't recommend large bulk
ALLOCATIONS.

> Regarding bulk-freeing: does the batch size make sense when freeing,
> or is it okay to wipe 32 (currently 64 in baseline) in a row?

(Speaking about SLUB code): you can bulk FREE a large amount of
objects without hurting latency-sensitive workloads, because it
doesn't disable interrupts (I'm quite proud that this was possible).

> > Perhaps giving the device drivers the ability to opt-in on this
> > infra via a new helper - as done back then with napi_consume_skb()
> > - would make this change safer?
>
> That's actually a very nice idea. There's only a little to change in
> the code to make taking heads from the cache optional. This way
> developers could switch to it when needed.

Well, I actually disagree that this should be hidden behind a switch
for drivers to enable, as it will take forever to get properly
enabled.

> Thanks for the suggestions! I'll definitely absorb them into the code
> and give it a test.
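To illustrate the alloc-side asymmetry with the actual SLUB bulk API -
this is a rough sketch only, not your patch; napi_skb_cache_get() and
NAPI_SKB_CACHE_BULK are names I'm making up here - the cache refill
could keep the alloc batch small like this:

#define NAPI_SKB_CACHE_BULK	8

static struct sk_buff *napi_skb_cache_get(void)
{
	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
	struct sk_buff *skb;

	/* Refill with a small batch only: kmem_cache_alloc_bulk()
	 * runs with interrupts disabled and may fall back to
	 * ___slab_alloc(), so a large batch hurts latency here.
	 */
	if (unlikely(!nc->skb_count))
		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
						      GFP_ATOMIC,
						      NAPI_SKB_CACHE_BULK,
						      nc->skb_cache);
	if (unlikely(!nc->skb_count))
		return NULL;

	skb = nc->skb_cache[--nc->skb_count];
	/* Pairs with the kasan_poison_object_data() on the put side */
	kasan_unpoison_object_data(skbuff_head_cache, skb);

	return skb;
}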
> > > @@ -838,31 +863,31 @@ void __consume_stateless_skb(struct sk_buff *skb)
> > >  	kfree_skbmem(skb);
> > >  }
> > >
> > > -static inline void _kfree_skb_defer(struct sk_buff *skb)
> > > +static void napi_skb_cache_put(struct sk_buff *skb)
> > >  {
> > >  	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> > > +	u32 i;
> > >
> > >  	/* drop skb->head and call any destructors for packet */
> > >  	skb_release_all(skb);
> > >
> > > -	/* record skb to CPU local list */
> > > +	kasan_poison_object_data(skbuff_head_cache, skb);
> > >  	nc->skb_cache[nc->skb_count++] = skb;
> > >
> > > -#ifdef CONFIG_SLUB
> > > -	/* SLUB writes into objects when freeing */
> > > -	prefetchw(skb);
> > > -#endif
> >
> > It looks like this chunk has been lost. Is that intentional?
>
> Yep. This prefetchw() assumed that skbuff_heads would be wiped
> immediately or at the end of the network softirq. Reusing this cache
> means that heads can be reused later or kept in the cache for some
> time, so prefetching makes no sense anymore.

I agree with this statement; the prefetchw() is no longer needed.
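The free side, which the diff context elides, is where the large batch
belongs. I'd guess the new 'u32 i' above is for a KASAN unpoison loop
before the flush - again just a sketch of what I mean, with
NAPI_SKB_CACHE_HALF being a name I'm making up:

	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
		/* A large free batch is fine for latency:
		 * kmem_cache_free_bulk() does not disable interrupts.
		 */
		for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
			kasan_unpoison_object_data(skbuff_head_cache,
						   nc->skb_cache[i]);

		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_HALF,
				     nc->skb_cache + NAPI_SKB_CACHE_HALF);
		nc->skb_count = NAPI_SKB_CACHE_HALF;
	}

That asymmetry - small alloc batches, large free batches - is exactly
the property of the SLUB bulk APIs I described above.

-- 
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer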