In the verifier function adjust_scalar_min_max_vals,
when src_known is false and the opcode is BPF_LSH/BPF_RSH,
the function returns early. So remove the branch
handling BPF_LSH/BPF_RSH when src_known is false.
Signed-off-by: Yonghong Song
---
kernel/bpf/verifier.c | 11
(id=0,umax_value=800,var_off=(0x0; 0x3ff))
R1=inv0 R6=ctx(id=0,off=0,imm=0)
R7=map_value(id=0,off=0,ks=4,vs=1600,imm=0)
R8=inv(id=0,umax_value=800,var_off=(0x0; 0x3ff)) R9=inv800
R10=fp0,call_-1
58: (bf) r2 = r7
59: (0f) r2 += r8
60: (1f) r9 -= r8
61: (bf) r1 = r6
Starovoitov
Signed-off-by: Yonghong Song
---
samples/bpf/Makefile| 11 +-
samples/bpf/bpf_load.c | 63 --
samples/bpf/bpf_load.h | 7 --
samples/bpf/offwaketime_user.c | 1 +
samples/bpf/sampleip_user.c
id's must be the same.
Acked-by: Alexei Starovoitov
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_progs.c | 70 --
.../selftests/bpf/test_stacktrace_build_id.c | 20 ++-
tools/testing/selftests/bpf/test_stacktrace_map.c | 19
available, the user space
application will check to ensure that the kernel function
for raw_tracepoint ___bpf_prog_run is part of the stack.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/Makefile | 3 +-
tools/testing/selftests/bpf/test_get_stack_rawtp.c | 102
This patch does not incur a functionality change. The function prototype
is changed so that the same function can be reused later.
Signed-off-by: Yonghong Song
---
kernel/bpf/stackmap.c | 13 +
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/k
The tools header file bpf.h is synced with kernel uapi bpf.h.
The new helper is also added to bpf_helpers.h.
Signed-off-by: Yonghong Song
---
tools/include/uapi/linux/bpf.h| 19 +--
tools/testing/selftests/bpf/bpf_helpers.h | 2 ++
2 files changed, 19 insertions
The test_verifier already has a few ARSH test cases.
This patch adds a new test case which takes advantage of newly
improved verifier behavior for bpf_get_stack and ARSH.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_verifier.c | 45 +
1 file
Hi,
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
net-next function skb_segment, line 3667.
3472 struct sk_buff *skb_segment(struct sk_buff *head_skb,
3473 netdev_features_t features)
3474 {
3475 struct sk_buff *segs = NULL;
3476
On 3/12/18 11:04 PM, Eric Dumazet wrote:
On 03/12/2018 10:45 PM, Yonghong Song wrote:
Hi,
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
net-next function skb_segment, line 3667.
3472 struct sk_buff *skb_segment(struct sk_buff *head_skb,
3473
Adding additional cc's:
Saeed Mahameed as this is most likely mlx5 driver related.
Diptanu Gon Choudhury who initially reported the issue.
On 3/13/18 1:44 AM, Steffen Klassert wrote:
On Mon, Mar 12, 2018 at 11:25:09PM -0700, Eric Dumazet wrote:
On 03/12/2018 11:08 PM, Yonghong
On 3/13/18 4:45 PM, Omar Sandoval wrote:
On Tue, Mar 13, 2018 at 04:16:27PM -0700, Howard McLauchlan wrote:
Error injection is a useful mechanism to fail arbitrary kernel
functions. However, it is often hard to guarantee an error propagates
appropriately to user space programs. By injecting in
-4.2$ git show
commit 41681ab51f85b4a0ea3416a0a62d6bde74f3af4b
Author: Yonghong Song
Date: Fri Mar 16 15:10:02 2018 -0700
[hack] hack test_bpf module to trigger BUG_ON in skb_segment.
"modprobe test_bpf" will have the following errors:
...
[ 98.149165]
On 3/16/18 4:03 PM, Eric Dumazet wrote:
On 03/16/2018 03:37 PM, Yonghong Song wrote:
Eric and Daniel,
I have tried to fix this issue but have not really been successful.
I tried two hacks:
. if skb_headlen(list_skb) is not 0, we just pull
skb_headlen(list_skb) from the skb to make
ed before the list_skb->frags.
Patch #2 provides a test case in test_bpf module which
constructs a skb and calls skb_segment() directly. The test
case is able to trigger the BUG_ON without Patch #1.
Yonghong Song (2):
net: permit skb_segment on head_frag frag_list skb
net: bpf: add a test for skb_
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
pected in
most cases. A one-element frag array is created for the list_skb head
and processed before list_skb->frags are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 42 ++
1 file changed, 30 insertions
On 3/19/18 10:30 PM, Yuan, Linyu (NSB - CN/Shanghai) wrote:
-----Original Message-----
From: netdev-ow...@vger.kernel.org [mailto:netdev-ow...@vger.kernel.org]
On Behalf Of Yonghong Song
Sent: Tuesday, March 20, 2018 1:16 PM
To: eduma...@google.com; a...@fb.com; dan...@iogearbox.net;
dipt
ed before the list_skb->frags.
Patch #2 provides a test case in test_bpf module which
constructs a skb and calls skb_segment() directly. The test
case is able to trigger the BUG_ON without Patch #1.
Changelog:
v1 -> v2:
. Removed never-hit BUG_ON, spotted by Linyu Yuan.
Yonghong Song (2):
n
pected in
most cases. A one-element frag array is created for the list_skb head
and processed before list_skb->frags are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 42 +-
1 file changed, 29 insertions
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
On 3/20/18 5:58 AM, Thadeu Lima de Souza Cascardo wrote:
Function bpf_fill_maxinsns11 is designed to not be able to be JITed on
x86_64. So, it fails when CONFIG_BPF_JIT_ALWAYS_ON=y, and
commit 09584b406742 ("bpf: fix selftests/bpf test_kmod.sh failure when
CONFIG_BPF_JIT_ALWAYS_ON=y") makes sur
On 3/20/18 10:00 AM, Thadeu Lima de Souza Cascardo wrote:
On Tue, Mar 20, 2018 at 09:05:15AM -0700, Yonghong Song wrote:
On 3/20/18 5:58 AM, Thadeu Lima de Souza Cascardo wrote:
Function bpf_fill_maxinsns11 is designed to not be able to be JITed on
x86_64. So, it fails when
read_value
from tracepoint func prototype.
Fixes: 4bebdc7a85aa ("bpf: add helper bpf_perf_prog_read_value")
Reported-by: Alexei Starovoitov
Signed-off-by: Yonghong Song
---
kernel/trace/bpf_trace.c | 68
1 file changed, 40 inserti
On 3/20/18 11:08 AM, Alexander Duyck wrote:
On Tue, Mar 20, 2018 at 8:55 AM, Yonghong Song wrote:
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
function skb_segment(), line 3667. The bpf program attaches to
clsact ingress, calls bpf_skb_change_proto to change protocol
-> v2:
. Removed never-hit BUG_ON, spotted by Linyu Yuan.
Yonghong Song (2):
net: permit skb_segment on head_frag frag_list skb
net: bpf: add a test for skb_segment in test_bpf module
lib/test_bpf.c| 71 ++-
net/core/skbuff.c | 51
ted in
most cases. The head frag is processed before list_skb->frags
are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 51 +--
1 file changed, 37 insertions(+), 14 deletions(-)
diff --git a
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
On 3/20/18 4:50 PM, Alexander Duyck wrote:
On Tue, Mar 20, 2018 at 4:21 PM, Yonghong Song wrote:
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
function skb_segment(), line 3667. The bpf program attaches to
clsact ingress, calls bpf_skb_change_proto to change protocol
On 3/20/18 5:44 PM, Eric Dumazet wrote:
On 03/20/2018 04:21 PM, Yonghong Song wrote:
Without the previous commit,
"modprobe test_bpf" will have the following errors:
...
[ 98.149165] ------------[ cut here ]------------
[ 98.159362] kernel BUG at net/core/skbuff.c:3667!
[
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
ted in
most cases. The head frag is processed before list_skb->frags
are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 36 +---
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/net/core/skbu
the skb,
from Alexander Duyck.
v1 -> v2:
. Removed never-hit BUG_ON, spotted by Linyu Yuan.
Yonghong Song (2):
net: permit skb_segment on head_frag frag_list skb
net: bpf: add a test for skb_segment in test_bpf module
lib/test_bpf.c| 91 ++
On 3/21/18 7:59 AM, Alexander Duyck wrote:
On Tue, Mar 20, 2018 at 10:02 PM, Yonghong Song wrote:
On 3/20/18 4:50 PM, Alexander Duyck wrote:
On Tue, Mar 20, 2018 at 4:21 PM, Yonghong Song wrote:
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
function skb_segment
On 3/21/18 8:26 AM, Eric Dumazet wrote:
On 03/20/2018 11:47 PM, Yonghong Song wrote:
+static __init int test_skb_segment(void)
+{
+ netdev_features_t features;
+ struct sk_buff *skb;
+ int ret = -1;
+
+ features = NETIF_F_SG | NETIF_F_GSO_PARTIAL | NETIF_F_IP_CSUM
ted in
most cases. The head frag is processed before list_skb->frags
are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 26 --
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/net/core/skbuff.c b/ne
cate skb, proper function
argument for skb_add_rx_frag and not freeing skb, etc.,
from Eric.
v2 -> v3:
. Use starting frag index -1 (instead of 0) to
special process head_frag before other frags in the skb,
from Alexander Duyck.
v1 -> v2:
. Removed never-hit BUG_ON,
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
On 3/21/18 2:51 PM, Alexander Duyck wrote:
On Wed, Mar 21, 2018 at 1:36 PM, Yonghong Song wrote:
One of our in-house projects, bpf-based NAT, hits a kernel BUG_ON at
function skb_segment(), line 3667. The bpf program attaches to
clsact ingress, calls bpf_skb_change_proto to change protocol
test_bpf(+)
...
which triggers the bug the previous commit intends to fix.
The skbs are constructed to mimic what mlx5 may generate.
The packet size/header may not mimic real cases in production. But
the processing flow is similar.
Signed-off-by: Yonghong Song
---
lib/test_
ted in
most cases. The head frag is processed before list_skb->frags
are processed.
Reported-by: Diptanu Gon Choudhury
Signed-off-by: Yonghong Song
---
net/core/skbuff.c | 27 ++-
1 file changed, 22 insertions(+), 5 deletions(-)
diff --git a/net/core/skbuff.c b/ne
ead of 0) to
special process head_frag before other frags in the skb,
from Alexander Duyck.
v1 -> v2:
. Removed never-hit BUG_ON, spotted by Linyu Yuan.
Yonghong Song (2):
net: permit skb_segment on head_frag frag_list skb
net: bpf: add a test for skb_segment in test_bpf module
lib/test
On 12/6/17 5:16 AM, Peter Zijlstra wrote:
On Wed, Dec 06, 2017 at 12:56:36PM +0100, Peter Zijlstra wrote:
On Tue, Dec 05, 2017 at 10:31:28PM -0800, Yonghong Song wrote:
Commit e87c6bc3852b ("bpf: permit multiple bpf attachments
for a single perf event") added support to attach mu
-> v2:
- Rebase on top of net-next.
- Use existing bpf_prog_array_length function instead of
implementing the same functionality in function
bpf_prog_array_copy_info.
Yonghong Song (2):
bpf/tracing: allow user space to query prog array on the same tp
bpf/tracing: add a bpf tes
ry.prog_cnt is the number of available progs,
* number of progs in ids: (ids_len == 0) ? 0 : query.prog_cnt
*/
} else if (errno == ENOSPC) {
/* query.ids_len number of progs copied,
* query.prog_cnt is the number of available progs
*/
} else {
/* other errors */
}
Added a subtest in test_progs. The tracepoint is
sched/sched_switch. Multiple bpf programs are attached to
this tracepoint and the query interface is exercised.
Signed-off-by: Yonghong Song
Acked-by: Alexei Starovoitov
---
tools/include/uapi/linux/perf_event.h | 22 +
tools
Added a subtest in test_progs. The tracepoint is
sched/sched_switch. Multiple bpf programs are attached to
this tracepoint and the query interface is exercised.
Signed-off-by: Yonghong Song
Acked-by: Alexei Starovoitov
Acked-by: Peter Zijlstra (Intel)
---
tools/include/uapi/linux/perf_event.h
ase on top of net-next.
- Use existing bpf_prog_array_length function instead of
implementing the same functionality in function
bpf_prog_array_copy_info.
Yonghong Song (2):
bpf/tracing: allow user space to query prog array on the same tp
bpf/tracing: add a bpf test for new ioctl
ry.prog_cnt is the number of available progs,
* number of progs in ids: (ids_len == 0) ? 0 : query.prog_cnt
*/
} else if (errno == ENOSPC) {
/* query.ids_len number of progs copied,
* query.prog_cnt is the number of available progs
*/
} else {
/* other errors */
ot defined:
kernel/events/core.o: In function `perf_ioctl':
core.c:(.text+0x98c4): undefined reference to `bpf_event_query_prog_array'
This patch fixed this error.
Fixes: f371b304f12e ("bpf/tracing: allow user space to query prog array on the
same tp")
Reported-by: Stephen Rot
On 12/13/17 7:50 AM, Alexei Starovoitov wrote:
On 12/13/17 7:44 AM, Daniel Borkmann wrote:
On 12/13/2017 08:42 AM, Yonghong Song wrote:
Commit f371b304f12e ("bpf/tracing: allow user space to
query prog array on the same tp") introduced a perf
ioctl command to query prog array attac
so the definition is in proximity to
other prog_array related functions.
Fixes: f371b304f12e ("bpf/tracing: allow user space to query prog array on the
same tp")
Reported-by: Stephen Rothwell
Signed-off-by: Yonghong Song
---
include/linux/bpf.h | 1 -
include/linux/trace_even
rejects
any non u32 access.
This patch permits the field access_type to be accessible
with type u16 and u8 as well.
Signed-off-by: Yonghong Song
Tested-by: Roman Gushchin
---
include/uapi/linux/bpf.h | 3 ++-
kernel/bpf/cgroup.c | 15 +--
2 files changed, 15 insertions
ration not permitted
libbpf: failed to load object 'test_tcpbpf_kern.o'
FAILED: load_bpf_file failed for: test_tcpbpf_kern.o
Changing the default rlimit RLIMIT_MEMLOCK to unlimited makes
the test always pass.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_tcpbpf_user.
On 2/13/18 5:11 PM, Daniel Borkmann wrote:
Hi Yonghong,
On 02/12/2018 10:58 PM, Yonghong Song wrote:
There is a memory leak happening in lpm_trie map_free callback
function trie_free. The trie structure itself does not get freed.
Also, trie_free function did not do synchronize_rcu before
by: Alexei Starovoitov
Tested-by: Mathieu Malaterre
Signed-off-by: Yonghong Song
---
kernel/bpf/lpm_trie.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
v1->v2:
Make comments more precise and make label name more appropriate,
as suggested by Daniel
diff --git a/kernel/b
On 2/21/18 7:40 PM, Eric Dumazet wrote:
On Tue, 2018-02-13 at 19:17 -0800, Alexei Starovoitov wrote:
On Tue, Feb 13, 2018 at 07:00:21PM -0800, Yonghong Song wrote:
There is a memory leak happening in lpm_trie map_free callback
function trie_free. The trie structure itself does not get freed
atch simply converted all
rcu protected pointer access to normal access, which removed the
above warning.
Fixes: 9a3efb6b661f ("bpf: fix memory leak in lpm_trie map_free callback
function")
Reported-by: Eric Dumazet
Signed-off-by: Yonghong Song
---
kernel/bpf/lpm_trie.c | 11 +---
On 2/22/18 5:37 AM, Eric Dumazet wrote:
On Wed, 2018-02-21 at 22:38 -0800, Yonghong Song wrote:
Commit 9a3efb6b661f ("bpf: fix memory leak in lpm_trie map_free callback
function")
fixed a memory leak and removed unnecessary locks in map_free callback function.
Unfortunately, it in
acing
rcu_dereference_protected(*slot, lockdep_is_held(&trie->lock))
with
rcu_dereference_protected(*slot, 1)
fixed the issue.
Fixes: 9a3efb6b661f ("bpf: fix memory leak in lpm_trie map_free callback
function")
Reported-by: Eric Dumazet
Suggested-by: Eric Dumazet
Signed-off-by: Y
some special meaning
while it doesn't. The new output:
...
<...>-1799 [002] 25.953576: 0: mmap
<...>-1799 [002] 25.953865: 0: read(fd=0, buf=053936b5,
size=512)
...
Signed-off-by: Yonghong Song
---
kernel/trace/bpf_trace.c | 2 +-
1 file change
A test case is added in tools/testing/selftests/bpf/test_lpm_map.c
for MAP_GET_NEXT_KEY command. A four node trie, which
is described in kernel/bpf/lpm_trie.c, is built and the
MAP_GET_NEXT_KEY results are checked.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_lpm_map.c
.
Yonghong Song (2):
bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map
tools/bpf: add a testcase for MAP_GET_NEXT_KEY command of LPM_TRIE map
kernel/bpf/lpm_trie.c | 95 +-
tools/testing/selftests/bpf/test_lpm_map.c | 122
.
Otherwise, the next key will be returned.
In this implementation, key enumeration follows a postorder
traversal of the internal trie. Given a sequence of
MAP_GET_NEXT_KEY syscalls, more specific keys are
returned before less specific ones.
Signed-off-by: Yonghong Song
---
kernel/bpf/lpm_trie.c | 95
...
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_dev_cgroup.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_dev_cgroup.c
b/tools/testing/selftests/bpf/test_dev_cgroup.c
index 02c85d6..c1535b3 100644
--- a/tools/t
On 12/20/17 12:19 PM, Roman Gushchin wrote:
Bpftool determines its own version based on the kernel
version, which is picked from the linux/version.h header.
It's strange to use the version of the installed kernel
headers, and makes much more sense to use the version
of the actual source tree,
both have
the same set of keys.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/Makefile | 2 +-
tools/testing/selftests/bpf/test_progs.c | 127 ++
tools/testing/selftests/bpf/test_stacktrace_map.c | 62 +++
3 files changed, 190
The patch set implements bpf syscall command BPF_MAP_GET_NEXT_KEY
for stacktrace map. Patch #1 is the core implementation
and Patch #2 implements a bpf test at tools/testing/selftests/bpf
directory. Please see individual patch comments for details.
Yonghong Song (2):
bpf: implement syscall
pointer, the first key is returned. Otherwise,
the first valid key after the input parameter "key"
is returned, or -ENOENT if no valid key can be found.
Signed-off-by: Yonghong Song
---
kernel/bpf/stackmap.c | 23 +--
1 file changed, 21 insertions(+), 2 deletions(-)
di
On 1/4/18 1:08 PM, Jakub Kicinski wrote:
On Wed, 3 Jan 2018 23:27:45 -0800, Yonghong Song wrote:
Currently, bpf syscall command BPF_MAP_GET_NEXT_KEY is not
supported for stacktrace map. However, there are use cases where
user space wants to enumerate all stacktrace map entries where
invalid key, the first key is returned.
Otherwise, the first valid key after the input parameter "key"
is returned, or -ENOENT if no valid key can be found.
Signed-off-by: Yonghong Song
---
kernel/bpf/stackmap.c | 28 ++--
1 file changed, 26 insertions(+), 2
both have
the same set of keys.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/Makefile | 2 +-
tools/testing/selftests/bpf/test_progs.c | 127 ++
tools/testing/selftests/bpf/test_stacktrace_map.c | 62 +++
3 files changed, 190
key pointer is non-NULL), sets next_key to be the first
valid key.
Yonghong Song (2):
bpf: implement syscall command BPF_MAP_GET_NEXT_KEY for stacktrace map
tools/bpf: add a bpf selftest for stacktrace
kernel/bpf/stackmap.c | 28 -
tools/testing/selftests/
On 1/22/18 7:06 AM, Arnaldo Carvalho de Melo wrote:
Em Wed, Nov 22, 2017 at 10:42:22AM -0800, Gianluca Borello escreveu:
On Tue, Nov 21, 2017 at 2:31 PM, Alexei Starovoitov
wrote:
yeah sorry about this hack. Gianluca reported this issue as well.
Yonghong fixed it for bpf_probe_read only. We
On 1/22/18 11:28 AM, Eric Dumazet wrote:
On Thu, 2018-01-18 at 15:08 -0800, Yonghong Song wrote:
Current LPM_TRIE map type does not implement MAP_GET_NEXT_KEY
command. This command is handy when users want to enumerate
keys. Otherwise, a different map which supports key
enumeration may be
0. However, the actual return value is 1.
As a result, the test failed. The fix is to correctly set
the return value in the test structure.
Fixes: 111e6b45315c ("selftests/bpf: make test_verifier run most programs")
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_verifier.
rcu read lock region requirements. This patch fixed the issue
by using GFP_ATOMIC instead to avoid blocking kmalloc. Tested with
CONFIG_DEBUG_ATOMIC_SLEEP=y as suggested by Eric Dumazet.
Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map")
Signed-off-by: Yonghong Song
On 1/25/18 8:47 PM, Eric Dumazet wrote:
On Thu, 2018-01-18 at 15:08 -0800, Yonghong Song wrote:
+find_leftmost:
+	/* Find the leftmost non-intermediate node; all intermediate nodes
+	 * have exactly two children, so this function will never return NULL.
+	 */
syzbot [1
e in tools/testing/selftests/bpf/test_lpm_map.
Yonghong Song (2):
bpf: fix kernel page fault in lpm map trie_get_next_key
tools/bpf: add a multithreaded stress test in bpf selftests
test_lpm_map
kernel/bpf/lpm_trie.c | 26
tools/testing/selftests/bpf/Makefile
The new test will spawn four threads, doing map update, delete, lookup
and get_next_key in parallel. It is able to reproduce the issue in the
previous commit found by syzbot and Eric Dumazet.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/Makefile | 2 +-
tools/testing
pointer instead of *(&trie->root) later on.
Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map")
Reported-by: syzbot
Reported-by: Eric Dumazet
Signed-off-by: Yonghong Song
---
kernel/bpf/lpm_trie.c | 26 +++---
1 file changed, 11 in
("selftests/bpf: add a test for overlapping packet range
checks")
Fixes: 9d1f15941967 ("bpf: move cgroup_helpers from samples/bpf/ to
tools/testing/selftesting/bpf/")
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile
The added documentation explains how generated codes may differ
between clang bpf target and default target, and when to use
each target.
Signed-off-by: Yonghong Song
---
Documentation/bpf/bpf_devel_QA.txt | 30 ++
1 file changed, 30 insertions(+)
diff --git a
CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.
This patch fixed the failure by marking Test #297 as expected failure
when CONFIG_BPF_JIT_ALWAYS_ON is defined.
Fixes: 290af86629b2 ("bpf: introduce BPF_JIT_ALWAYS_ON config")
Signed-off-by: Yonghong Song
---
lib/test_bpf.c | 31 +++
c8852624fc ("bpf: improve selftests and add tests for meta pointer")
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_xdp_meta.sh | 1 +
tools/testing/selftests/bpf/test_xdp_redirect.sh | 2 ++
2 files changed, 3 insertions(+)
diff --git a/tools/testing/selftests/bpf
by: Alexei Starovoitov
Tested-by: Mathieu Malaterre
Signed-off-by: Yonghong Song
---
kernel/bpf/lpm_trie.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 7b469d1..9b41ea4 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/
On 4/27/18 4:48 PM, Alexei Starovoitov wrote:
On Wed, Apr 25, 2018 at 12:29:05PM -0700, Yonghong Song wrote:
When helpers like bpf_get_stack returns an int value
and later on used for arithmetic computation, the LSH and ARSH
operations are often required to get proper sign extension into
64
v2:
. fixed compilation error when CONFIG_PERF_EVENTS is not enabled
Yonghong Song (10):
bpf: change prototype for stack_map_get_build_id_offset
bpf: add bpf_get_stack helper
bpf/verifier: refine retval R0 state for bpf_get_stack helper
bpf: remove never-hit branches in verifier adju
: Yonghong Song
---
include/linux/bpf.h | 1 +
include/linux/filter.h | 3 ++-
include/uapi/linux/bpf.h | 42 --
kernel/bpf/core.c| 5
kernel/bpf/stackmap.c| 67
kernel/bpf/verifier.c| 19
In the verifier function adjust_scalar_min_max_vals,
when src_known is false and the opcode is BPF_LSH/BPF_RSH,
the function returns early. So remove the branch
handling BPF_LSH/BPF_RSH when src_known is false.
Acked-by: Alexei Starovoitov
Signed-off-by: Yonghong Song
---
kernel
The tools header file bpf.h is synced with kernel uapi bpf.h.
The new helper is also added to bpf_helpers.h.
Signed-off-by: Yonghong Song
---
tools/include/uapi/linux/bpf.h| 42 +--
tools/testing/selftests/bpf/bpf_helpers.h | 2 ++
2 files changed, 42
The test_verifier already has a few ARSH test cases.
This patch adds a new test case which takes advantage of newly
improved verifier behavior for bpf_get_stack and ARSH.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_verifier.c | 45 +
1 file
This patch does not incur a functionality change. The function prototype
is changed so that the same function can be reused later.
Signed-off-by: Yonghong Song
---
kernel/bpf/stackmap.c | 13 +
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/k
ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
..
Without improving ARSH value range tracking, the register representing
"max_len - usize" will have smin_value equal to S64_MIN and will be
rejected by verifier.
Signed-off-by: Yonghong Song
---
in
Starovoitov
Signed-off-by: Yonghong Song
---
samples/bpf/Makefile| 11 +-
samples/bpf/bpf_load.c | 63 --
samples/bpf/bpf_load.h | 7 --
samples/bpf/offwaketime_user.c | 1 +
samples/bpf/sampleip_user.c
available, the user space
application will check to ensure that the kernel function
for raw_tracepoint ___bpf_prog_run is part of the stack.
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/Makefile | 4 +-
tools/testing/selftests/bpf/test_get_stack_rawtp.c | 102
id's must be the same.
Acked-by: Alexei Starovoitov
Signed-off-by: Yonghong Song
---
tools/testing/selftests/bpf/test_progs.c | 70 --
.../selftests/bpf/test_stacktrace_build_id.c | 20 ++-
tools/testing/selftests/bpf/test_stacktrace_map.c | 19
(id=0,umax_value=800,var_off=(0x0; 0x3ff))
R1=inv0 R6=ctx(id=0,off=0,imm=0)
R7=map_value(id=0,off=0,ks=4,vs=1600,imm=0)
R8=inv(id=0,umax_value=800,var_off=(0x0; 0x3ff)) R9=inv800
R10=fp0,call_-1
58: (bf) r2 = r7
59: (0f) r2 += r8
60: (1f) r9 -= r8
61: (bf) r1
On 4/28/18 12:06 PM, Alexei Starovoitov wrote:
On Sat, Apr 28, 2018 at 11:17:30AM -0700, Y Song wrote:
On Sat, Apr 28, 2018 at 9:56 AM, Alexei Starovoitov
wrote:
On Sat, Apr 28, 2018 at 12:02:04AM -0700, Yonghong Song wrote:
The test attached a raw_tracepoint program to sched/sched_switch
Starovoitov
Signed-off-by: Yonghong Song
---
samples/bpf/Makefile| 11 +-
samples/bpf/bpf_load.c | 63 --
samples/bpf/bpf_load.h | 7 --
samples/bpf/offwaketime_user.c | 1 +
samples/bpf/sampleip_user.c