sk_callback_lock() here (on every send) seems less than
ideal, and it may sleep in cases where we hit memory pressure.
Instead of dealing with these issues in some clever way, simply make
the reference counting a refcount_t type and do proper atomic ops.
Signed-off-by: John Fastabend
---
kernel/bpf
ULP is known and done on the
kernel side. In this case the named lookup is not needed.
Remove the pr_notice; the user gets an error code back and should
check that rather than rely on logs.
Signed-off-by: John Fastabend
---
include/net/tcp.h |5 +
net/ipv4/tcp_ulp.c | 51
e automated side. We can push this as an independent patch
set.
---
John Fastabend (7):
net: add a UID to use for ULP socket assignment
sock: make static tls function alloc_sg generic sock helper
sockmap: convert refcnt to an atomic refcnt
net: do_tcp_sendpages flag to avo
Report bytes/sec sent as well as total bytes. Useful to get a rough
idea of how different configurations and usage patterns perform with
sockmap.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 37 -
1 file changed, 32 insertions(+), 5
-by: John Fastabend
---
samples/sockmap/sockmap_user.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index c3295a7..818766b 100644
--- a/samples/sockmap/sockmap_user.c
+++ b/samples/sockmap/sockmap_user.c
Avoid extra step of setting limit from cmdline and do it directly in
the program.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 818766b..a6dab97
Add a base test that does not use BPF hooks to test baseline case.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
supported, but more can be added as
needed.
The new help argument gives the following,
Usage: ./sockmap --cgroup
options:
--help -h
--cgroup -c
--rate -r
--verbose -v
--iov_count -i
--length -l
--test -t
Signed-off-by: John Fastabend
get many GBps of data which helps exercise the
sockmap code.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 58 +---
1 file changed, 42 insertions(+), 16 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap
future.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 164
1 file changed, 113 insertions(+), 51 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 7cc9d22..17400d4 100644
--- a/samples/sockmap
sighandler update,
2/7 free iov in error cases
3/7 fix bogus makefile change, bail out early on errors
Thanks Daniel and Martin for the reviews!
---
John Fastabend (7):
bpf: refactor sockmap sample program update for arg parsing
bpf: add sendmsg option for testing BPF programs
On 01/11/2018 08:31 PM, John Fastabend wrote:
> On 01/10/2018 05:25 PM, Daniel Borkmann wrote:
>> On 01/10/2018 07:39 PM, John Fastabend wrote:
>>> sockmap sample program takes arguments from cmd line but it reads them
>>> in using offsets into the array. Because we
On 01/10/2018 05:31 PM, Daniel Borkmann wrote:
> On 01/10/2018 07:39 PM, John Fastabend wrote:
>> Currently for SENDMSG tests first send completes then recv runs. This
>> does not work well for large data sizes and/or many iterations. So
>> fork the recv and send handler so
On 01/10/2018 05:25 PM, Daniel Borkmann wrote:
> On 01/10/2018 07:39 PM, John Fastabend wrote:
>> sockmap sample program takes arguments from cmd line but it reads them
>> in using offsets into the array. Because we want to add more arguments
>> in the future lets do pro
On 01/11/2018 01:10 PM, Martin KaFai Lau wrote:
> On Wed, Jan 10, 2018 at 10:40:11AM -0800, John Fastabend wrote:
>> Add a base test that does not use BPF hooks to test baseline case.
>>
>> Signed-off-by: John Fastabend
>> ---
>> samp
On 01/11/2018 01:08 PM, Martin KaFai Lau wrote:
> On Wed, Jan 10, 2018 at 10:39:37AM -0800, John Fastabend wrote:
>> Currently for SENDMSG tests first send completes then recv runs. This
>> does not work well for large data sizes and/or many iterations. So
>> fork the recv and
On 01/11/2018 01:05 PM, Martin KaFai Lau wrote:
> On Wed, Jan 10, 2018 at 10:39:04AM -0800, John Fastabend wrote:
>> sockmap sample program takes arguments from cmd line but it reads them
>> in using offsets into the array. Because we want to add more arguments
>> in the
Avoid extra step of setting limit from cmdline and do it directly in
the program.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 9496b2c..16c19c5
-by: John Fastabend
---
samples/sockmap/sockmap_user.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index eb19d14..9496b2c 100644
--- a/samples/sockmap/sockmap_user.c
+++ b/samples/sockmap/sockmap_user.c
Add a base test that does not use BPF hooks to test baseline case.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
get many GBps of data which helps exercise the
sockmap code.
Signed-off-by: John Fastabend
---
samples/sockmap/Makefile |2 +
samples/sockmap/sockmap_user.c | 58 +---
2 files changed, 43 insertions(+), 17 deletions(-)
diff --git a/samples/sockmap
Report bytes/sec sent as well as total bytes. Useful to get a rough
idea of how different configurations and usage patterns perform with
sockmap.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 37 -
1 file changed, 32 insertions(+), 5
future.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 142 +---
1 file changed, 103 insertions(+), 39 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 7cc9d22..5cbe7a5 100644
--- a/samples/sockmap
supported, but more can be added as
needed.
The new help argument gives the following,
Usage: ./sockmap --cgroup
options:
--help -h
--cgroup -c
--rate -r
--verbose -v
--iov_count -i
--length -l
--test -t
Signed-off-by: John Fastabend
seful, the reporting is bare bones, etc. But,
IMO let's push this now rather than sit on it for weeks until I get
time to do the above improvements. Additional patches can address
the other limitations/issues.
v2: removed bogus file added by patch 3/7
---
John Fastabend (7):
bpf: refactor sock
llowing up.
Acked-by: John Fastabend
On 01/09/2018 05:27 AM, Jesper Dangaard Brouer wrote:
> On Mon, 08 Jan 2018 10:05:58 -0800
> John Fastabend wrote:
>
>> Report bytes/sec sent as well as total bytes. Useful to get rough
>> idea how different configurations and usage patterns perform with
>> sockmap
On 01/09/2018 05:30 AM, Jesper Dangaard Brouer wrote:
> On Mon, 08 Jan 2018 10:05:07 -0800
> John Fastabend wrote:
>
>> sockmap sample program takes arguments from cmd line but it reads them
>> in using offsets into the array. Because we want to add more arguments
>> i
Avoid extra step of setting limit from cmdline and do it directly in
the program.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 0d8950f..2afbefd
-by: John Fastabend
---
samples/sockmap/sockmap_user.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index bae85f8..0d8950f 100644
--- a/samples/sockmap/sockmap_user.c
+++ b/samples/sockmap/sockmap_user.c
Report bytes/sec sent as well as total bytes. Useful to get a rough
idea of how different configurations and usage patterns perform with
sockmap.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 37 -
1 file changed, 32 insertions(+), 5
get many GBps of data which helps exercise the
sockmap code.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user. |0
samples/sockmap/sockmap_user.c | 55
2 files changed, 39 insertions(+), 16 deletions(-)
create mode 100644 samples
Add a base test that does not use BPF hooks to test baseline case.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
supported, but more can be added as
needed.
The new help argument gives the following,
Usage: ./sockmap --cgroup
options:
--help -h
--cgroup -c
--rate -r
--verbose -v
--iov_count -i
--length -l
--test -t
Signed-off-by: John Fastabend
seful, the reporting could be better, etc. But,
IMO let's push this now rather than sit on it for weeks until I get
time to do the above improvements.
---
John Fastabend (7):
bpf: refactor sockmap sample program update for arg parsing
bpf: add sendmsg option for testing BPF programs
future.
Signed-off-by: John Fastabend
---
samples/sockmap/sockmap_user.c | 142 +---
1 file changed, 103 insertions(+), 39 deletions(-)
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
index 7cc9d22..5cbe7a5 100644
--- a/samples/sockmap
f-by: Alexei Starovoitov
> ---
LGTM, I'll drop it on my test systems and start running with it.
Although I don't have any Variant 1 code to test, but seems that
is being covered by others.
Thanks!
Acked-by: John Fastabend
On 01/03/2018 02:25 AM, Jesper Dangaard Brouer wrote:
> This patch only introduce the core data structures and API functions.
> All XDP enabled drivers must use the API before this info can used.
>
> There is a need for XDP to know more about the RX-queue a given XDP
> frames have arrived on. For
igned-off-by: Jesper Dangaard Brouer
> Acked-by: Alexei Starovoitov
> ---
LGTM
Acked-by: John Fastabend
> Cc: intel-wired-...@lists.osuosl.org
> Cc: Björn Töpel
> Cc: Jeff Kirsher
> Cc: Paul Menzel
> Signed-off-by: Jesper Dangaard Brouer
> Reviewed-by: Paul Menzel
> ---
Same here. LGTM.
Acked-by: John Fastabend
>
> Cc: intel-wired-...@lists.osuosl.org
> Cc: Jeff Kirsher
> Cc: Alexander Duyck
> Signed-off-by: Jesper Dangaard Brouer
> ---
Looked a bit for reset paths that might be missed but didn't
find any. LGTM.
Acked-by: John Fastabend
obvious in my opinion.
Signed-off-by: John Fastabend
---
kernel/bpf/sockmap.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 5ee2e41..1712d31 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -591,8
On 01/03/2018 03:41 PM, Cong Wang wrote:
> On Wed, Jan 3, 2018 at 10:09 AM, John Fastabend
> wrote:
>> On 01/02/2018 08:41 PM, Cong Wang wrote:
>>> Hi, John
>>>
>>> While reviewing your ptr_ring fix again today, it looks like your
>>> "lockl
This was added for some work that was eventually factored out but the
helper call was missed. Remove it now and add it back later if needed.
Signed-off-by: John Fastabend
---
kernel/bpf/sockmap.c |8
1 file changed, 8 deletions(-)
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf
The sockmap infrastructure is only aware of TCP sockets at the
moment. In the future we plan to add UDP. In both cases CONFIG_NET
should be built-in.
So lets only build sockmap if CONFIG_INET is enabled.
Signed-off-by: John Fastabend
---
include/linux/bpf.h |2 +-
include/linux
On 01/02/2018 08:41 PM, Cong Wang wrote:
> Hi, John
>
> While reviewing your ptr_ring fix again today, it looks like your
> "lockless" qdisc patchset breaks dev->tx_queue_len behavior.
>
> Before your patchset, dev->tx_queue_len is merely an integer to read,
> after your patchset, the skb array h
On 01/03/2018 07:50 AM, Michael S. Tsirkin wrote:
> On Tue, Jan 02, 2018 at 04:25:03PM -0800, John Fastabend wrote:
>>>
>>> More generally, what makes this usage safe?
>>> Is there a way to formalize it at the API level?
>>>
>>
>> Right I think
obvious in my opinion.
Signed-off-by: John Fastabend
---
kernel/bpf/sockmap.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 5ee2e41..dfbbde2 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -591,6 +591,13 @@ static
On 01/02/2018 03:12 PM, Michael S. Tsirkin wrote:
> On Tue, Jan 02, 2018 at 01:27:23PM -0800, John Fastabend wrote:
>> On 01/02/2018 09:17 AM, Michael S. Tsirkin wrote:
>>> On Tue, Jan 02, 2018 at 07:01:33PM +0200, Michael S. Tsirkin wrote:
>>>> On Tue, Jan 02, 2
On 01/02/2018 10:49 AM, David Miller wrote:
> From: Wei Yongjun
> Date: Wed, 27 Dec 2017 17:05:52 +0800
>
>> When dev_requeue_skb() is called with bluked skb list, only the
> ^^
>
> "bulked"
>
>> first skb of the list will be requeued to qdisc layer,
On 01/02/2018 09:17 AM, Michael S. Tsirkin wrote:
> On Tue, Jan 02, 2018 at 07:01:33PM +0200, Michael S. Tsirkin wrote:
>> On Tue, Jan 02, 2018 at 11:52:19AM -0500, David Miller wrote:
>>> From: John Fastabend
>>> Date: Wed, 27 Dec 2017 19:50:25 -0800
>>&g
e normal case checks would suffer some so best to just
allocate an extra pointer.
Reported-by: Jakub Kicinski
Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
Signed-off-by: John Fastabend
---
include/linux/ptr_ring.h |7 ++-
1 file changed, 6 insertions(+), 1 delet
On 12/27/2017 10:29 AM, Cong Wang wrote:
> On Sat, Dec 23, 2017 at 10:57 PM, John Fastabend
> wrote:
>> On 12/22/2017 12:31 PM, Cong Wang wrote:
>>> I understand why you had it, but it is just not safe. You don't want
>>> to achieve performance gain by crash
On 12/24/2017 07:49 PM, Wei Yongjun wrote:
> When dev_requeue_skb() is called with bluked skb list, only the
> first skb of the list will be requeued to qdisc layer, and leak
> the others without free them.
>
> TCP is broken due to skb leak since no free skb will be considered
> as still in the ho
On 12/22/2017 12:31 PM, Cong Wang wrote:
> On Thu, Dec 21, 2017 at 7:06 PM, John Fastabend
> wrote:
>> On 12/21/2017 04:03 PM, Cong Wang wrote:
>>> __skb_array_empty() is only safe if array is never resized.
>>> pfifo_fast_dequeue() is called in TX BH context and w
n't help. And it is only a
local_bh_disable.
> Fixes: 7bbde83b1860 ("net: sched: drop qdisc_reset from dev_graft_qdisc")
> Reported-by: Jakub Kicinski
> Cc: John Fastabend
> Signed-off-by: Cong Wang
> ---
> net/sched/sch_generic.c | 5 -
> 1 file chang
Fixes: c5ad119fb6c0 ("net: sched: pfifo_fast use skb_array")
> Reported-by: Jakub Kicinski
> Cc: John Fastabend
> Signed-off-by: Cong Wang
> ---
> net/sched/sch_generic.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/net/sched/sch_generic.c b/net/s
On 12/20/2017 11:27 PM, Cong Wang wrote:
> On Wed, Dec 20, 2017 at 4:50 PM, Jakub Kicinski wrote:
>> On Wed, 20 Dec 2017 16:41:14 -0800, Jakub Kicinski wrote:
>>> Just as I hit send... :) but this looks unrelated, "Comm: sshd" -
>>> so probably from the management interface.
>>>
>>> [ 154.604041
On 12/20/2017 01:59 PM, Jakub Kicinski wrote:
> On Wed, 20 Dec 2017 12:09:19 -0800, John Fastabend wrote:
>> RCU grace period is needed for lockless qdiscs added in the commit
>> c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array").
>>
>> It is needed now tha
On 12/20/2017 03:23 PM, Cong Wang wrote:
> On Wed, Dec 20, 2017 at 3:05 PM, John Fastabend
> wrote:
>> On 12/20/2017 02:41 PM, Cong Wang wrote:
>>> On Wed, Dec 20, 2017 at 12:09 PM, John Fastabend
>>> wrote:
>>>> RCU grace period is need
On 12/20/2017 02:41 PM, Cong Wang wrote:
> On Wed, Dec 20, 2017 at 12:09 PM, John Fastabend
> wrote:
>> RCU grace period is needed for lockless qdiscs added in the commit
>> c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array").
>>
>> It is needed now
On 12/20/2017 12:17 PM, Jakub Kicinski wrote:
> On Wed, 20 Dec 2017 10:04:17 -0800, John Fastabend wrote:
>> On 12/19/2017 10:34 PM, Jakub Kicinski wrote:
>>> On Tue, 19 Dec 2017 22:22:27 -0800, Jakub Kicinski wrote:
>>>>>> I get this:
>>>&
g with the qdisc itself in:
>> qdisc_destroy->qdisc_free
>>
>> Before miniq, tp was checked in the rcu reader path. In case it was not
>> null, q was processed. In slow path, tp is freed after rcu grace period:
>> tcf_proto_destroy->kfree_rcu
>>
>> I assumed that sin
On 12/20/2017 11:59 AM, Jiri Pirko wrote:
> Wed, Dec 20, 2017 at 07:17:50PM CET, xiyou.wangc...@gmail.com wrote:
>> On Tue, Dec 19, 2017 at 10:34 PM, Jakub Kicinski wrote:
>>> Ah, no object debug but KASAN on produces this:
>>>
>>
>>
>> I bet it is an ingress qdisc which is being freed?
>>
>>
>>
>
to RCU callback. Otherwise we risk the datapath
adding skbs during removal.
Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
Signed-off-by: John Fastabend
---
include/net/sch_generic.h |1 +
net/sched/sch_generic.c | 50 -
2 fil
On 12/19/2017 10:34 PM, Jakub Kicinski wrote:
> On Tue, 19 Dec 2017 22:22:27 -0800, Jakub Kicinski wrote:
I get this:
>>>
>>> Could you try to run it with kasan on?
>>
>> I didn't manage to reproduce it with KASAN on so far :( Even enabling
>> object debugging to get the second splat in
On 12/18/2017 08:31 PM, Cong Wang wrote:
> On Mon, Dec 18, 2017 at 7:58 PM, John Fastabend
> wrote:
>> On 12/18/2017 06:20 PM, Cong Wang wrote:
>>> On Mon, Dec 18, 2017 at 5:25 PM, John Fastabend
>>> wrote:
>>>> On 12/18/2017 02:34 PM, Cong Wang wrote
On 12/18/2017 06:20 PM, Cong Wang wrote:
> On Mon, Dec 18, 2017 at 5:25 PM, John Fastabend
> wrote:
>> On 12/18/2017 02:34 PM, Cong Wang wrote:
>>> First, the check of &q->ring.queue against NULL is wrong, it
>>> is always false. We should check the value rath
: pfifo_fast use skb_array")
> Reported-by: syzbot
> Cc: John Fastabend
> Signed-off-by: Cong Wang
> ---
> net/sched/sch_generic.c | 8 +++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
On 12/15/2017 07:53 AM, David Miller wrote:
> From: Eric Leblond
> Date: Fri, 15 Dec 2017 11:24:46 +0100
>
>> Hello,
>>
>> When using an ixgbe card with Suricata we are using the following
>> commands to get a symmetric hash on RSS load balancing:
>>
>> ./set_irq_affinity 0-15 eth3
>> ethtool -X
On 12/12/2017 09:57 AM, Paweł Staszewski wrote:
>
>
> W dniu 2017-12-11 o 23:27, Paweł Staszewski pisze:
>>
>>
>> W dniu 2017-12-11 o 23:15, John Fastabend pisze:
>>> On 12/11/2017 01:48 PM, Paweł Staszewski wrote:
>>>>
>>>>
On 12/11/2017 01:48 PM, Paweł Staszewski wrote:
>
>
> W dniu 2017-12-11 o 22:23, Paweł Staszewski pisze:
>> Hi
>>
>>
>> I just upgraded some testing host to 4.15.0-rc2+ kernel
>>
>> And after some time of traffic processing - when traffic on all ports
>> reach about 3Mpps - memleak started.
>>
graft operation occurs.
This also removes the logic used to pick the next band to dequeue from
and instead just checks a per-priority array for packets from top priority
to lowest. This might need to be a bit more clever but seems to work
for now.
Signed-off-by: John Fastabend
---
net/sched
andle this case add a check when calculating
stats and aggregate the per cpu stats if needed.
Signed-off-by: John Fastabend
---
net/sched/sch_mq.c | 35 +++-
net/sched/sch_mqprio.c | 69 +---
2 files changed, 69 insertions(+), 35
This adds a peek routine to skb_array.h for use with qdisc.
Signed-off-by: John Fastabend
---
include/linux/skb_array.h |5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index 8621ffd..c7addf3 100644
--- a/include/linux/skb_array.h
ce period and letting the
qdisc_destroy operation clean up the qdisc correctly.
Note: a refcnt greater than 1 would cause the destroy operation to
be aborted; however, if this ever happened, the reference to the qdisc
would be lost and we would have a memory leak.
Signed-off-by: John Fastabend
---
net/s
this case add a check when calculating
stats and aggregate the per cpu stats if needed.
Also exports __gnet_stats_copy_queue() to use as a helper function.
Signed-off-by: John Fastabend
---
include/net/gen_stats.h |3 +++
net/core/gen_stats.c|9 +
net/sched/sch_mq.c
-off-by: John Fastabend
---
include/net/sch_generic.h | 20
net/sched/sch_api.c |3 ++-
2 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 4717c4b..2fbae2c9 100644
--- a/include/net
I cannot think of any reason to pull the bad txq skb off the qdisc if
the txq we plan to send this on is still frozen. So check for frozen
queue first and abort before dequeuing either skb_bad_txq skb or
normal qdisc dequeue() skb.
Signed-off-by: John Fastabend
---
net/sched/sch_generic.c
Similar to how gso is handled, use an skb list for skb_bad_tx. This is
required with lockless qdiscs because we may have multiple cores
attempting to push skbs into skb_bad_tx concurrently.
Signed-off-by: John Fastabend
---
include/net/sch_generic.h |2 -
net/sched/sch_generic.c | 106
once it's possible to have multiple
sk_buffs here, so we turn gso_skb into a queue.
This should be the edge case, and if we see this frequently then
the netdev/qdisc layer needs to back off.
Signed-off-by: John Fastabend
---
include/net/sch_generic.h | 20 ++-
net/sched/sch_generic.c
ate the qdisc object so we don't have
dangling allocations after qdisc init.
Signed-off-by: John Fastabend
---
include/net/sch_generic.h |1 +
net/sched/sch_generic.c | 16
2 files changed, 17 insertions(+)
diff --git a/include/net/sch_generic.h b/include/net/sch_g
ow it returns true. However in this case all
call sites of sch_direct_xmit will implement a dequeue() and get
a null skb and abort. This trades tracking qlen in the hotpath for
an extra dequeue operation. Overall this seems to be good for
performance.
Signed-off-by: John Fastabend
---
includ
The per cpu qstats support was added with per cpu bstat support which
is currently used by the ingress qdisc. This patch adds a set of
helpers needed to make other qdiscs that use qstats per cpu as well.
Signed-off-by: John Fastabend
---
include/net/sch_generic.h | 35
doing the enqueue/dequeue operations when tested with
pktgen.
Signed-off-by: John Fastabend
---
include/net/sch_generic.h |1 +
net/core/dev.c| 26 ++
net/sched/sch_generic.c | 30 --
3 files changed, 43 insertions(+), 14
Currently __qdisc_run calls qdisc_run_end() but does not call
qdisc_run_begin(). This makes it hard to track pairs of
qdisc_run_{begin,end} across function calls.
To simplify reading these code paths this patch moves begin/end calls
into qdisc_run().
Signed-off-by: John Fastabend
---
include
m, I left out lockdep annotation for a follow-on series
to add lockdep more completely, rather than just in code I
touched.
Comments and feedback welcome.
Thanks,
John
---
John Fastabend (14):
net: sched: cleanup qdisc_run and __qdisc_run semantics
net: sched: allow qdiscs to handle locking
On 11/20/2017 05:09 AM, David Miller wrote:
> From: Steffen Klassert
> Date: Mon, 20 Nov 2017 08:37:47 +0100
>
>> This patchset implements asynchronous crypto handling
>> in the layer 2 TX path. With this we can allow IPsec
>> ESP GSO for software crypto. This also merges the IPsec
>> GSO and non
On 11/15/2017 09:51 AM, Willem de Bruijn wrote:
> On Wed, Nov 15, 2017 at 10:11 AM, John Fastabend
> wrote:
>> On 11/14/2017 04:41 PM, Willem de Bruijn wrote:
>>>> /* use instead of qdisc->dequeue() for all qdiscs queried with ->peek() */
>>>> static i
On 11/14/2017 05:16 PM, Willem de Bruijn wrote:
> On Mon, Nov 13, 2017 at 3:10 PM, John Fastabend
> wrote:
>> Add qdisc qlen helper routines for lockless qdiscs to use.
>>
>> The qdisc qlen is no longer used in the hotpath but it is reported
>> via stats query on the
On 11/14/2017 04:41 PM, Willem de Bruijn wrote:
>> /* use instead of qdisc->dequeue() for all qdiscs queried with ->peek() */
>> static inline struct sk_buff *qdisc_dequeue_peeked(struct Qdisc *sch)
>> {
>> - struct sk_buff *skb = sch->gso_skb;
>> + struct sk_buff *skb = skb_peek(&sc
On 11/14/2017 05:56 PM, Willem de Bruijn wrote:
> On Tue, Nov 14, 2017 at 7:11 PM, Willem de Bruijn
> wrote:
>> On Mon, Nov 13, 2017 at 3:08 PM, John Fastabend
>> wrote:
>>> sch_direct_xmit() uses qdisc_qlen as a return value but all call sites
>>> of the rou
[...]
>> static int pfifo_fast_dump(struct Qdisc *qdisc, struct sk_buff *skb)
>> @@ -685,17 +688,40 @@ static int pfifo_fast_dump(struct Qdisc *qdisc, struct
>> sk_buff *skb)
>>
>> static int pfifo_fast_init(struct Qdisc *qdisc, struct nlattr *opt)
>> {
>> - int prio;
>> + unsigned
On 11/13/2017 09:33 PM, Björn Töpel wrote:
> 2017-11-14 0:50 GMT+01:00 Alexei Starovoitov :
>> On 11/13/17 9:07 PM, Björn Töpel wrote:
>>>
>>> 2017-10-31 13:41 GMT+01:00 Björn Töpel :
From: Björn Töpel
>>> [...]
We'll do a presentation on AF_PACKET V4 in NetDev 2.2 [1
On 11/13/2017 06:18 PM, Michael Ma wrote:
> 2017-11-13 16:13 GMT-08:00 Alexander Duyck :
>> On Mon, Nov 13, 2017 at 3:08 PM, Eric Dumazet wrote:
>>> On Mon, 2017-11-13 at 14:47 -0800, Alexander Duyck wrote:
On Mon, Nov 13, 2017 at 10:17 AM, Michael Ma wrote:
> 2017-11-12 16:14 GMT-08:00
Signed-off-by: John Fastabend
---
0 files changed
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 683f6ec..8ab7933 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -206,33 +206,22 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool
*validate
Signed-off-by: John Fastabend
---
0 files changed
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index c7addf3..d0a240e 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -142,6 +142,11 @@ static inline int skb_array_consume_batched_bh(struct
This adds a peek routine to skb_array.h for use with qdisc.
Signed-off-by: John Fastabend
---
0 files changed
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index 8621ffd..c7addf3 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -72,6 +72,11
clever but seems to work
for now.
Signed-off-by: John Fastabend
---
0 files changed
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 84c4ea1..683f6ec 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -26,6 +26,7 @@
#include
#include
#include
+#include
andle this case add a check when calculating
stats and aggregate the per cpu stats if needed.
Signed-off-by: John Fastabend
---
0 files changed
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index b85885a9..24474d0 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c