From: Edward Cree
Generally the check should be very cheap, as the sk_buff_head is in cache.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit b9f463d6c9849230043123a6335d59ac7fea4d5a)
Signed-off-by: Andrey Ryabinin
---
From: Vlastimil Babka
The previous patch ("mm: prevent potential recursive reclaim due to
clearing PF_MEMALLOC") has shown that simply setting and clearing
PF_MEMALLOC in current->flags can result in wrongly clearing a
pre-existing PF_MEMALLOC flag and potentially lead to recursive reclaim.
Let's
From: Edward Cree
First example of a layer splitting the list (rather than merely taking
individual packets off it).
Involves new list.h function, list_cut_before(), like list_cut_position()
but cuts on the other side of the given entry.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
From: Edward Cree
Improves packet rate of 1-byte UDP receives by up to 10%.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit e090bfb9f19259b958387d2bd4938d66b324cd09)
Signed-off-by: Andrey Ryabinin
---
drivers/net/ethe
From: Edward Cree
__netif_receive_skb_core() does a depressingly large amount of per-packet
work that can't easily be listified, because the another_round loop
makes it nontrivial to slice up into smaller functions.
Fortunately, most of that work disappears in the fast path:
* Hardware devi
From: Edward Cree
Also involved adding a way to run a netfilter hook over a list of packets.
Rather than attempting to make netfilter know about lists (which would be
a major project in itself) we just let it call the regular okfn (in this
case ip_rcv_finish()) for any packets it steals, and h
From: Edward Cree
Just calls netif_receive_skb() in a loop.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit f6ad8c1bcdf014272d08c55b9469536952a0a771)
Signed-off-by: Andrey Ryabinin
---
include/linux/netdevice.h | 1 +
From: Edward Cree
ip_rcv_finish_core(), if it does not drop, sets skb->dst by either early
demux or route lookup. The last step, calling dst_input(skb), is left to
the caller; in the listified case, we split to form sublists with a common
dst, but then ip_sublist_rcv_finish() just calls dst_input(skb) in a loop.
From: Edward Cree
netif_receive_skb_list_internal() now processes a list and hands it
on to the next function.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 7da517a3bc529dc5399e742688b32cafa2ca5ca0)
[ ifdef out if (s
From: Edward Cree
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 920572b73280a29e3a9f58807a8b90051b19ee60)
Signed-off-by: Andrey Ryabinin
---
include/trace/events/net.h | 7 +++
net/core/dev.c | 4 +++-
->list added into struct sk_buff in upstream by the commit
d4546c2509b1 ("net: Convert GRO SKB handling to list_head.")
It seems it should be fine to backport only the addition of the ->list
field, without the rest of the patch, which we don't need.
The ->list field is needed for the backport of
2d1b138505dc ("Handle multiple received packets at each stage").
Backport of 2d1b138505dc29bbd7ac5f82f5a10635ff48bddb ("Handle multiple received
packets at each stage")
The main difference from upstream is that our NF_HOOK() doesn't have a
'struct net *net' argument, so I simply dropped it.
The other thing is that we don't have generic XDP, so the hunk related