On 3/31/16 11:46 AM, Naveen N. Rao wrote:
It's failing this way on powerpc? Odd.
This fails for me on x86_64 too -- RHEL 7.1.
Indeed. It fails on CentOS 7.1, whereas CentOS 6.7 is fine.
On 3/31/16 11:51 AM, Naveen N. Rao wrote:
On 2016/03/31 10:49AM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
Make BPF samples build depend on CONFIG_SAMPLE_BPF. We still don't add a
Kconfig option since that will add a dependency on llvm for allyesconfig
builds which may not be desirable.
fixed this to work with x86_64 and arm64, but not s390.
Cc: Alexei Starovoitov
Cc: David S. Miller
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
...
+
+#ifdef __powerpc__
+#define BPF_KPROBE_READ_RET_IP(ip, ctx) { (ip) = (ctx)->link; }
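On non-powerpc targets the return address is not sitting in a register at function entry, so it has to be fetched from the stack instead. A sketch of what the fallback could look like, assuming the bpf_probe_read() and PT_REGS_RET() helpers from samples/bpf/bpf_helpers.h:

    #ifdef __powerpc__
    #define BPF_KPROBE_READ_RET_IP(ip, ctx) { (ip) = (ctx)->link; }
    #else
    /* sketch: read the return address from the stack slot that
     * PT_REGS_RET() points at on entry to the probed function */
    #define BPF_KPROBE_READ_RET_IP(ip, ctx) {                     \
            bpf_probe_read(&(ip), sizeof(ip),                     \
                           (void *)PT_REGS_RET(ctx)); }
    #endif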
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
Make BPF samples build depend on CONFIG_SAMPLE_BPF. We still don't add a
Kconfig option since that will add a dependency on llvm for allyesconfig
builds which may not be desirable.
Those who need to build the BPF samples can now just do:
make CONFIG_SAMP
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
While at it, fix some typos in the comment.
Cc: Alexei Starovoitov
Cc: David S. Miller
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
samples/bpf/Makefile | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
};
^
Fix this by including the necessary header file.
Cc: Alexei Starovoitov
Cc: David S. Miller
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
samples/bpf/map_perf_test_user.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/samples/bpf
On Tue, Mar 29, 2016 at 10:01:24AM +0800, Wangnan (F) wrote:
>
>
> On 2016/3/28 14:41, Wang Nan wrote:
>
> [SNIP]
>
> >
> >To prevent this problem, we need to find a way to ensure the ring buffer
> >is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is
> >suggested because its overhea
or
> the reading is unreliable.
>
> Signed-off-by: Wang Nan
> Cc: He Kuang
> Cc: Alexei Starovoitov
> Cc: Arnaldo Carvalho de Melo
> Cc: Brendan Gregg
> Cc: Jiri Olsa
> Cc: Masami Hiramatsu
> Cc: Namhyung Kim
> Cc: Peter Zijlstra
> Cc: Zefan Li
> Cc: pi3or
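The intended usage pattern of the proposed ioctl is simple; a sketch, assuming perf_fd is the perf event whose overwritable ring buffer is being read:

    /* sketch: keep the ring buffer stable while walking it */
    ioctl(perf_fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1);
    /* ... read events from the now-frozen ring buffer ... */
    ioctl(perf_fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 0);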
On Mon, Mar 28, 2016 at 02:56:47PM -0700, Kees Cook wrote:
> From: Dave Anderson
>
> Fixes a copy-paste-o in the BPF opcode table: "neg" takes no arguments
> and thus has no addressing modes.
>
> Signed-off-by: Dave Anderson
> Signed-off-by: Kees Cook
Acked-by: Alexei Starovoitov
@ 3.60GHz
> Kernel : v4.5.0
>          MEAN          STDVAR
> BASE     800214.950    2853.083
> PRE1     2253846.700   9997.014
> PRE2     2257495.540   8516.293
> POST     2250896.100   8933.921
>
> Where 'BASE' is pure p
s test result after this
> patch. See [4] for detail experimental setup.
>
> Considering the stdvar, this patch doesn't hurt performance.
>
> For the detail of testing method, please refer to [2].
>
> [1] http://lkml.kernel.org/g/56f52e83.70...@huawei.com
> [2] http://lkml.
On Thu, Mar 24, 2016 at 11:48:54AM +0800, Wangnan (F) wrote:
>
> >>http://lkml.iu.edu/hypermail/linux/kernel/1601.2/03966.html
> >Wang, when you respin, please add all perf analysis that you've
> >done and the reasons to do it this way to the commit log
> >to make sure it stays in git history.
> >
> >
On Wed, Mar 23, 2016 at 06:08:41PM +0800, Wangnan (F) wrote:
>
>
> On 2016/3/23 17:50, Peter Zijlstra wrote:
> >On Mon, Mar 14, 2016 at 09:59:43AM +, Wang Nan wrote:
> >>Convert perf_output_begin to __perf_output_begin and make the later
> >>function able to write records from the end of the
On 3/11/16 10:02 AM, Daniel Borkmann wrote:
Would strscpy() help in this case (see 30035e45753b ("string: provide
strscpy()"))?
I've looked at it too, but 990486c8af04 scared me a little,
it's not easily backport-able and mainly I don't think
it's faster than strlcpy for small strings like comm
On 3/11/16 2:24 AM, Daniel Borkmann wrote:
On 03/10/2016 05:02 AM, Alexei Starovoitov wrote:
Lots of places in the kernel use memcpy(buf, comm, TASK_COMM_LEN); but
the result is typically passed to print("%s", buf) and extra bytes
after zero don't cause any harm.
In bp
: introduce current->pid, tgid, uid, gid, comm accessors")
Reported-by: Tobias Waldekranz
Signed-off-by: Alexei Starovoitov
---
Targeting net-next, since it's too late for net.
I think it makes sense for stable as well.
kernel/bpf/helpers.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
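Given the one-insertion, one-deletion diffstat, the change under discussion plausibly amounts to swapping the copy routine inside bpf_get_current_comm() in kernel/bpf/helpers.c; a sketch (the exact line is an assumption):

    -	memcpy(buf, task->comm, min_t(size_t, size, sizeof(task->comm)));
    +	strlcpy(buf, task->comm, min_t(size_t, size, sizeof(task->comm)));

strlcpy() stops at the terminating NUL, so whatever junk sits in task->comm past it is never copied out.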
On Thu, Mar 10, 2016 at 02:43:42AM +0100, Arnd Bergmann wrote:
> Changing the bpf syscall to use the new bpf_stackmap_copy() helper for
> BPF_MAP_TYPE_STACK_TRACE causes a link error when CONFIG_PERF_EVENTS
> is disabled:
>
> kernel/built-in.o: In function `map_lookup_elem':
> :(.text+0x7fca4): un
performance tests for hash map and per-cpu hash map
with and without pre-allocation
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile | 4 +
samples/bpf/map_perf_test_kern.c | 100 +
samples/bpf/map_perf_test_user.c | 155
increase stress by also calling bpf_get_stackid() from
various *spin* functions
Signed-off-by: Alexei Starovoitov
---
samples/bpf/spintest_kern.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/samples/bpf/spintest_kern.c b/samples/bpf/spintest_kern.c
index ef8ac33bb2e9
walking and deleting map elements.
Note that due to the nature of bpf_load.c, the earlier kprobe+bpf programs are
already active while loader loads new programs, creates new kprobes and
attaches them.
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile| 4 +++
samples/bpf/spintest_kern.c
On Tue, Mar 08, 2016 at 03:31:10PM -0500, David Miller wrote:
...
> > Patch 10: stress test for hash map infra. It attaches to spin_lock
> > functions and bpf_map_update/delete are called from different contexts
> > Patch 11: stress for bpf_get_stackid
> > Patch 12: map performance test
> >
> > Re
On 3/8/16 1:13 AM, Daniel Wagner wrote:
Some time back Daniel Wagner reported crashes when a bpf hash map is
used to compute time intervals between preempt_disable->preempt_enable,
and recently Tom Zanussi reported a deadlock in the iovisor/bcc/funccount
tool if it's used to count the number of invo
n the same cpu.
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/Makefile | 2 +-
kernel/bpf/percpu_freelist.c | 100 +++
kernel/bpf/percpu_freelist.h | 31 ++
3 files changed, 132 insertions(+), 1 deletion(-)
create mode 100644 kerne
move ksym search from offwaketime into library to be reused
in other tests
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_load.c | 62 ++
samples/bpf/bpf_load.h | 6
samples/bpf/offwaketime_user.c | 67
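The resulting library interface is small; a sketch of its likely shape, assuming the names used by offwaketime_user.c:

    struct ksym {
            long addr;
            char *name;
    };

    int load_kallsyms(void);               /* parse /proc/kallsyms once */
    struct ksym *ksym_search(long key);    /* address -> nearest symbol */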
map creation is typically the first operation to fail when rlimits are
too low, memory is short, etc.
Make this failure scenario more verbose.
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_load.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/samples/bpf/bpf_load.c b
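The added verbosity boils down to printing errno at the failure site; a sketch against the map-loading loop in samples/bpf/bpf_load.c (field and parameter names assumed):

    map_fd[i] = bpf_create_map(maps[i].type, maps[i].key_size,
                               maps[i].value_size, maps[i].max_entries);
    if (map_fd[i] < 0) {
            /* commonly RLIMIT_MEMLOCK: say why creation failed */
            printf("failed to create a map: %d %s\n",
                   errno, strerror(errno));
            return 1;
    }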
can be large
and the number of map updates/deletes per second is low, it may make
sense to use it.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 2 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 240 +--
kernel/bpf/sysca
helpers don't have this problem,
since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursion check and is OK as well.
Signed-off-by: Alexei Starovoitov
Acked-by: Daniel Borkmann
---
include/linux/bpf.h | 3 +++
kernel/bpf/sys
lookup and kernel side updates
is also present in hashmap, but it's not a new race. bpf programs were
always allowed to modify hash and array map elements while user space
is copying them.
Fixes: d5a3b1f69186 ("bpf: introduce BPF_MAP_TYPE_STACK_TRACE")
Signed-off-by: Alexei Starovoitov
Suggested-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/arraymap.c | 2 +-
kernel/bpf/stackmap.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index bd3bdf2486a7..76d5a794e426 100644
--- a/kernel/bpf
extend test coverage to include pre-allocated and run-time allocated maps
Signed-off-by: Alexei Starovoitov
---
samples/bpf/test_maps.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/samples/bpf/test_maps.c b/samples/bpf/test_maps.c
index 7bd9edd02d9b..47bf0858f9e4
Note: the old loader is compatible with the new kernel;
map_flags are optional
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_helpers.h | 1 +
samples/bpf/bpf_load.c | 3 ++-
samples/bpf/fds_example.c | 2 +-
samples/bpf/libbpf.c| 5 +++--
samples/bpf/libbpf.h| 2
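With the optional map_flags field in place, a program opts out of pre-allocation per map; a sketch, assuming struct bpf_map_def from samples/bpf/bpf_load.h and the BPF_F_NO_PREALLOC flag added by this series:

    struct bpf_map_def SEC("maps") hash_map = {
            .type        = BPF_MAP_TYPE_HASH,
            .key_size    = sizeof(u32),
            .value_size  = sizeof(long),
            .max_entries = 10000,
            .map_flags   = BPF_F_NO_PREALLOC, /* allocate at update time */
    };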
ess test for hash map infra. It attaches to spin_lock
functions and bpf_map_update/delete are called from different contexts
Patch 11: stress for bpf_get_stackid
Patch 12: map performance test
Reported-by: Daniel Wagner
Reported-by: Tom Zanussi
Alexei Starovoitov (12):
bpf: prevent kprobe+bpf deadl
Seeing a ton of these errors on net-next with KASAN on.
Likely an old bug though.
[ 373.705691] BUG: KASAN: slab-out-of-bounds in memcpy+0x28/0x40 at addr 8811ada62cb0
[ 373.707137] Write of size 28 by task bash/7059
[ 373.708177]
=
On 3/7/16 3:08 AM, Daniel Borkmann wrote:
On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
[...]
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 264 ++-
kernel/bpf/syscall.c | 2 +-
4
On 3/7/16 2:33 AM, Daniel Borkmann wrote:
On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
Introduce a simple percpu_freelist to keep a single list of elements
spread across per-cpu singly linked lists.
/* push element into the list */
void pcpu_freelist_push(struct pcpu_freelist *, struct
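A minimal sketch of what the push side can look like, assuming each cpu owns one singly linked list headed by its own raw spinlock (field names are assumptions):

    void pcpu_freelist_push(struct pcpu_freelist *s,
                            struct pcpu_freelist_node *node)
    {
            struct pcpu_freelist_head *head = this_cpu_ptr(s->freelist);

            /* pushers on different cpus touch different lists,
             * so they normally don't contend on the lock */
            raw_spin_lock(&head->lock);
            node->next = head->first;
            head->first = node;
            raw_spin_unlock(&head->lock);
    }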
large
and the number of map updates/deletes per second is low, it may make
sense to use it.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 264 ++-
kernel/bpf/syscall.c
n the same cpu.
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/Makefile | 2 +-
kernel/bpf/percpu_freelist.c | 81
kernel/bpf/percpu_freelist.h | 31 +
3 files changed, 113 insertions(+), 1 deletion(-)
create mode 100644 k
extend test coverage to include pre-allocated and run-time allocated maps
Signed-off-by: Alexei Starovoitov
---
samples/bpf/test_maps.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/samples/bpf/test_maps.c b/samples/bpf/test_maps.c
index af02f7518c0a..d1e63f48e39c
checks
Patches 4-7: prepare test infra
Patch 8: stress test for hash map infra. It attaches to spin_lock
functions and bpf_map_update/delete are called from different contexts
(except nmi, which is unsupported by bpf still)
Patch 9: map performance test
Reported-by: Daniel Wagner
Reported-by: To
helpers don't have this problem,
since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursion check and is OK as well.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 3 +++
kernel/bpf/syscall.c | 13 +
kernel/tr
On 2/25/16 8:47 AM, Peter Zijlstra wrote:
On Wed, Feb 17, 2016 at 07:58:57PM -0800, Alexei Starovoitov wrote:
+static inline int perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
{
+	if (entry->nr < PERF_MAX_STACK_DEPTH) {
		entry->ip[entry->nr++] = ip;
On 2/25/16 6:18 AM, Peter Zijlstra wrote:
On Wed, Feb 17, 2016 at 07:58:57PM -0800, Alexei Starovoitov wrote:
. avoid walking the stack when there is no room left in the buffer
. generalize get_perf_callchain() to be called from bpf helper
If it does two things, it should be two patches.
On 2/25/16 6:23 AM, Peter Zijlstra wrote:
+ id = hash & (smap->n_buckets - 1);
Its not at all clear where the corresponding rcu_read_lock() is at.
>+ bucket = rcu_dereference(smap->buckets[id]);
bpf programs of all types are always executing under rcu_read_lock().
This is fundamental
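The pattern being referred to is visible at the program invocation sites; roughly (cf. trace_call_bpf() in kernel/trace/bpf_trace.c):

    rcu_read_lock();
    ret = BPF_PROG_RUN(prog, ctx);
    rcu_read_unlock();

so any map or bucket the program dereferences stays valid for the duration of the run.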
")
> Signed-off-by: Sasha Levin
thank you.
Acked-by: Alexei Starovoitov
On Thu, Feb 18, 2016 at 03:27:18PM -0600, Tom Zanussi wrote:
> On Tue, 2016-02-16 at 20:51 -0800, Alexei Starovoitov wrote:
> > On Tue, Feb 16, 2016 at 04:35:27PM -0600, Tom Zanussi wrote:
> > > On Sun, 2016-02-14 at 01:02 +0100, Alexei Starovoitov wrote:
> > > > On F
On Thu, Feb 18, 2016 at 09:56:22PM -0500, Sasha Levin wrote:
> bpf_percpu_hash_update() expects rcu lock to be held and warns if it's not,
> which pointed out a missing rcu read lock.
>
> Fixes: 15a07b338 ("bpf: add lookup/update support for per-cpu hash and array
> maps")
> Signed-off-by: Sasha
y were woken
up. The combined stacks, task names, and total time are summarized in kernel
context for efficiency.
Example:
$ sudo ./offwaketime | flamegraph.pl > demo.svg
Open demo.svg in a browser as a FlameGraph visualization.
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefi
hash_futex
1.05% sched_bench [kernel.vmlinux][k] do_futex
1.05% sched_bench [kernel.vmlinux][k] get_futex_key_refs.isra.13
The hottest part of bpf_get_stackid() is the inlined jhash2, so we may consider
using some faster hash in the future, but it's good enough for now.
Alexe
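The keying step being measured is, roughly (a sketch; ips is assumed to hold the dumped stack of trace_len bytes, smap the stack map):

    hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
    id = hash & (smap->n_buckets - 1);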
ata (including other stackid) and used as a key into maps.
Userspace will access stackmap using standard lookup/delete syscall commands to
retrieve full stack trace for given stackid.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 21 +
ker
. avoid walking the stack when there is no room left in the buffer
. generalize get_perf_callchain() to be called from bpf helper
Signed-off-by: Alexei Starovoitov
---
arch/x86/include/asm/stacktrace.h | 2 +-
arch/x86/kernel/cpu/perf_event.c | 4 ++--
arch/x86/kernel/dumpstack.c | 6
On Tue, Feb 16, 2016 at 04:35:27PM -0600, Tom Zanussi wrote:
> On Sun, 2016-02-14 at 01:02 +0100, Alexei Starovoitov wrote:
> > On Fri, Feb 12, 2016 at 10:11:18AM -0600, Tom Zanussi wrote:
> > this hist triggers belong in the kernel. BPF already can do
> > way more co
On Fri, Feb 12, 2016 at 10:11:18AM -0600, Tom Zanussi wrote:
> Hi,
>
> As promised in previous threads, this patchset shares some common
> functionality with the hist triggers code and enables trace events to
> be accessed from eBPF programs.
great that you've started working on BPF!
> It needs
On Fri, Jan 29, 2016 at 03:28:40AM -0800, tip-bot for Alexei Starovoitov wrote:
> Commit-ID: e03e7ee34fdd1c3ef494949a75cb8c61c7265fa9
> Gitweb: http://git.kernel.org/tip/e03e7ee34fdd1c3ef494949a75cb8c61c7265fa9
> Author: Alexei Starovoitov
> AuthorDate: Mon, 25 Jan 2016 20
Commit-ID: e03e7ee34fdd1c3ef494949a75cb8c61c7265fa9
Gitweb: http://git.kernel.org/tip/e03e7ee34fdd1c3ef494949a75cb8c61c7265fa9
Author: Alexei Starovoitov
AuthorDate: Mon, 25 Jan 2016 20:59:49 -0800
Committer: Ingo Molnar
CommitDate: Fri, 29 Jan 2016 08:35:25 +0100
perf/bpf: Convert
On Wed, Jan 27, 2016 at 11:54:41AM -0500, Mathieu Desnoyers wrote:
> Expose a new system call allowing threads to register one userspace
> memory area where to store the CPU number on which the calling thread is
> running. Scheduler migration sets the TIF_NOTIFY_RESUME flag on the
> current thread.
On Wed, Jan 27, 2016 at 10:58:22AM +0100, Peter Zijlstra wrote:
>
> > Meaning there gotta be always a user space process
> > that will be holding perf_event FDs.
>
> By using fget() the BPF array thing will hold the FDs, right? I mean
> once you do a full fget() userspace can go and kill itself,
On Tue, Jan 26, 2016 at 09:51:54PM -0800, Joel Fernandes wrote:
> Hi Brendan, Alexei,
>
> I noticed your patch fixing the $subject issue.
>
> https://patchwork.ozlabs.org/patch/471118/
>
> However, I still see make samples/bpf/ using gcc instead of clang.
>
> Here's a verbose kbuild output of m
On Tue, Jan 26, 2016 at 06:24:25PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 26, 2016 at 05:16:37PM +0100, Peter Zijlstra wrote:
> > > +struct file *perf_event_get(unsigned int fd)
> > > {
> > > + struct file *file;
> > >
> > > + file = fget_raw(fd);
> >
> > fget_raw() to guarantee the return
++---
> 2 files changed, 150 insertions(+), 157 deletions(-)
I think I understand what you're trying to do and
the patch looks good to me.
As far as BPF side I did the following...
does it match the model you outlined above?
I did basic testing and it looks fine.
On Mon, Jan 25, 2016 at 08:33:48AM +, Wang Nan wrote:
> This is the v3 of this series. Compare with v2, tailsize method is
> removed, ioctl command PERF_EVENT_IOC_PAUSE_OUTPUT is changed to
> _IOW('$', 9, __u32) since it has an input value, commit message
> is slightly adjusted.
>
> New test r
On Fri, Jan 22, 2016 at 01:38:47PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 22, 2016 at 01:35:40PM +0200, Alexander Shishkin wrote:
> > Peter Zijlstra writes:
> >
> > > So I think there's a number of problems still :-(
I've been looking at how perf_event->owner is handled and couldn't
figure o
On Fri, Jan 22, 2016 at 12:13:48PM +, Wang Nan wrote:
> This is v2 of this series.
>
> Compare with v1:
>
> Fixes several bugs in v1.
>
> Corresponsing perf has finished and can be found from:
>
> https://git.kernel.org/cgit/linux/kernel/git/pi3orama/linux.git/
> branch: perf/overwrite-b
off all events attached to this ring buffer.
> This patch is for supporting overwritable ring buffer with TAILSIZE
> selected.
TAILSIZE is dropped. Please adjust the commit log.
> Signed-off-by: Wang Nan
> Cc: He Kuang
> Cc: Alexei Starovoitov
> Cc: Arnaldo Carvalho de Melo
> Cc:
On Fri, Jan 22, 2016 at 12:35:42PM -0500, Adam Jackson wrote:
> On Fri, 2016-01-22 at 14:22 -0300, Arnaldo Carvalho de Melo wrote:
>
> > the 'bpf' target for clang is being used together with perf to
> > build scriptlets into object code that then gets uploaded to the kernel
> > via sys_bpf(),
On Fri, Jan 22, 2016 at 11:36:14AM -0600, Josh Poimboeuf wrote:
> On Fri, Jan 22, 2016 at 09:18:23AM -0800, Alexei Starovoitov wrote:
> > On Fri, Jan 22, 2016 at 09:58:04AM -0600, Josh Poimboeuf wrote:
> > > On Thu, Jan 21, 2016 at 08:18:46PM -0800, Alexei Starovoitov wrote:
>
On Fri, Jan 22, 2016 at 03:30:00PM +0900, Daniel Sangorrin wrote:
> This patch allows applications to restrict the order in which
> its system calls may be requested. In order to do that, we
> provide seccomp-BPF scripts with information about the
> previous system call requested.
>
> An example u
On Thu, Jan 21, 2016 at 10:13:02PM -0600, Josh Poimboeuf wrote:
> On Thu, Jan 21, 2016 at 06:55:41PM -0800, Alexei Starovoitov wrote:
> > On Thu, Jan 21, 2016 at 04:49:35PM -0600, Josh Poimboeuf wrote:
> > > stacktool reports the following false positive warnings:
> > >
On Fri, Jan 22, 2016 at 09:58:04AM -0600, Josh Poimboeuf wrote:
> On Thu, Jan 21, 2016 at 08:18:46PM -0800, Alexei Starovoitov wrote:
> > On Thu, Jan 21, 2016 at 09:55:31PM -0600, Josh Poimboeuf wrote:
> > > On Thu, Jan 21, 2016 at 06:44:28PM -0800, Alexei Starovoitov wrote:
>
return -LIBBPF_ERRNO__RELOC;
> + }
Maybe 'pr_err' instead of 'pr_warning', since such a program will fail
to be loaded by the kernel anyway. Looks good otherwise.
Acked-by: Alexei Starovoitov
On Fri, Jan 22, 2016 at 12:40:50PM -0300, Arnaldo Carvalho de Melo wrote:
> [root@jouet ~]# llc --version
> LLVM (http://llvm.org/):
> LLVM version 3.7.0
> Optimized build.
> Built Dec 4 2015 (15:49:18).
> Default target: x86_64-redhat-linux-gnu
> Host CPU: broadwell
>
> Registered Ta
On Thu, Jan 21, 2016 at 09:55:31PM -0600, Josh Poimboeuf wrote:
> On Thu, Jan 21, 2016 at 06:44:28PM -0800, Alexei Starovoitov wrote:
> > On Thu, Jan 21, 2016 at 04:49:27PM -0600, Josh Poimboeuf wrote:
> > > bpf_jit.S has several callable non-leaf functions
On Fri, Jan 22, 2016 at 10:21:19AM +0800, Wangnan (F) wrote:
>
>
> On 2016/1/21 14:51, Wangnan (F) wrote:
> >
> >
> >On 2016/1/20 10:20, Alexei Starovoitov wrote:
> >>On Wed, Jan 20, 2016 at 09:37:42AM +0800, Wangnan (F) wrote:
> >>>
> >>&
On Thu, Jan 21, 2016 at 04:49:35PM -0600, Josh Poimboeuf wrote:
> stacktool reports the following false positive warnings:
>
> stacktool: kernel/bpf/core.o: __bpf_prog_run()+0x5c: sibling call from
> callable instruction with changed frame pointer
> stacktool: kernel/bpf/core.o: __bpf_prog_ru
E_POINTER is enabled.
>
> Signed-off-by: Josh Poimboeuf
> Cc: Alexei Starovoitov
> Cc: net...@vger.kernel.org
> ---
> arch/x86/net/bpf_jit.S | 9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit.S b/arch/x86/net/bpf_jit.S
>
On Wed, Jan 20, 2016 at 09:32:22AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 19, 2016 at 01:58:19PM -0800, Alexei Starovoitov wrote:
> > On Tue, Jan 19, 2016 at 09:05:58PM +0100, Peter Zijlstra wrote:
>
> > > The most obvious place that generates such magical references w
> Acked-by: Daniel Borkmann
Good catch indeed.
Classic BPF JITs didn't have much love. Great to see this work.
Acked-by: Alexei Starovoitov
On Fri, Dec 18, 2015 at 03:04:00PM +0800, Wangnan (F) wrote:
>
> >>However, linux/err.h is not a part of uapi. To make libbpf work, one has to
> >>create its
> >>own err.h.
> >Why tools/include/linux/err.h is not suitable for everyone?
> >
> >>Now I'm thinking provide LIBBPF_{IS_ERR,PTR_ERR}(), i
On Wed, Dec 16, 2015 at 02:58:08PM +0800, Ming Lei wrote:
> On Wed, Dec 16, 2015 at 1:01 PM, Yang Shi wrote:
>
> >
> > I recalled Steven confirmed raw_spin_lock has the lockdep benefit too in the
> > patch review for changing to raw lock.
> >
> > Please check this thread out
> > http://lists.ope
On Fri, Dec 18, 2015 at 09:47:11AM +0800, Wangnan (F) wrote:
>
> This is a limitation in tools/lib/bpf/libbpf.h, which has a #include
> <linux/err.h> in its header.
>
> libbpf.h requires this include because its API uses ERR_PTR() to encode
> error code.
> For example, when calling bpf_object__open(), calle
On Thu, Dec 17, 2015 at 05:23:12AM +, Wang Nan wrote:
> We are going to uses libbpf to replace old libbpf.[ch] and
> bpf_load.[ch]. This is the first patch of this work. In this patch,
> several macros and helpers in libbpf.[ch] and bpf_load.[ch] are
> merged into utils.[ch]. utils.[ch] utilize
On Tue, Dec 15, 2015 at 07:21:03PM +0800, Ming Lei wrote:
> kmalloc() is often a bit time-consuming, and an atomic
> counter has to be used to track the total number of
> allocated elements, which is also not good.
>
> This patch pre-allocates element pool in htab_map_alloc(),
> then use percpu_ida to all
On Tue, Dec 15, 2015 at 07:21:02PM +0800, Ming Lei wrote:
> Both htab_map_update_elem() and htab_map_delete_elem() can be
> called from an eBPF program, and they may be in a kernel hot path,
> so it isn't efficient to use a per-hashtable lock in these two
> helpers.
>
> The per-hashtable spinlock is use
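The direction discussed here is to narrow the lock scope from the whole table to a single bucket; a sketch of the data-structure change (names assumed):

    struct bucket {
            struct hlist_head head;
            raw_spinlock_t lock;    /* per-bucket instead of per-table */
    };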
On Mon, Dec 14, 2015 at 12:39:40PM +0800, Wangnan (F) wrote:
>
> And what do you think about the BPF function prototypes? Should we put them
> into kernel headers? What about:
> +#define DEFINE_BPF_FUNC(rettype, name, arglist...) static rettype
> (*name)(arglist) = (void *)BPF_FUNC_##name
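With such a macro, the open-coded pointer declarations quoted below would collapse to, e.g. (hypothetical usage; note that BPF_FUNC_##name implies the pointer is declared without the bpf_ prefix):

    DEFINE_BPF_FUNC(u64, ktime_get_ns, void);
    DEFINE_BPF_FUNC(int, trace_printk, const char *fmt, int fmt_size, ...);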
On Mon, Dec 14, 2015 at 11:27:36AM +0800, Wangnan (F) wrote:
>
>
> On 2015/12/12 2:21, Alexei Starovoitov wrote:
> >On Fri, Dec 11, 2015 at 08:39:35PM +0800, pi3orama wrote:
> >>>static u64 (*bpf_ktime_get_ns)(void) =
> >>> (void *)5;
> >>>
On Fri, Dec 11, 2015 at 08:39:35PM +0800, pi3orama wrote:
> > static u64 (*bpf_ktime_get_ns)(void) =
> > (void *)5;
> > static int (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) =
> > (void *)6;
> > static int (*bpf_get_smp_processor_id)(void) =
> > (void *)8;
> > static int (*
On Fri, Dec 11, 2015 at 01:12:56PM -0500, Steven Rostedt wrote:
> On Fri, 11 Dec 2015 18:35:59 +0100
> Julia Lawall wrote:
>
> > This bpf_verifier_ops structure is never modified, like the other
> > bpf_verifier_ops structures, so declare it as const.
> >
> > Done with the help of Coccinelle.
>
On Fri, Dec 11, 2015 at 06:35:59PM +0100, Julia Lawall wrote:
> This bpf_verifier_ops structure is never modified, like the other
> bpf_verifier_ops structures, so declare it as const.
>
> Done with the help of Coccinelle.
>
> Signed-off-by: Julia Lawall
Acked-by: Alexei
On Thu, Dec 10, 2015 at 10:02:51AM +0100, Peter Zijlstra wrote:
> On Wed, Dec 09, 2015 at 07:54:35PM -0800, Alexei Starovoitov wrote:
> > Freeing memory is a requirement regardless.
> > Even when kernel running with kasan, there must be a way to stop
> > stack collection
On Wed, Dec 09, 2015 at 10:17:17AM +0100, Dmitry Vyukov wrote:
>
> We would happily share this code with other subsystems, or even better
> reuse an existing solution. But to the best of my knowledge there is
> no such existing solution, and I still know basically nothing about
> what you were ha
On Wed, Dec 09, 2015 at 10:41:38AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Dec 09, 2015 at 11:10:48AM +0900, Masami Hiramatsu escreveu:
> > Hi Arnaldo,
> >
> > Here is a series of patches for perf refcnt debugger and
> > some fixes.
> >
> > In this series I've replaced all atomic referen
On Tue, Dec 08, 2015 at 07:35:20PM +0100, Dmitry Vyukov wrote:
> On Tue, Dec 8, 2015 at 7:05 PM, Alexei Starovoitov
> wrote:
> > On Tue, Dec 08, 2015 at 06:56:23PM +0100, Dmitry Vyukov wrote:
> >> On Tue, Dec 8, 2015 at 6:54 PM, Alexei Starovoitov
> >> wrote:
> &
On Tue, Dec 08, 2015 at 06:56:23PM +0100, Dmitry Vyukov wrote:
> On Tue, Dec 8, 2015 at 6:54 PM, Alexei Starovoitov
> wrote:
> > On Tue, Dec 08, 2015 at 05:12:04PM +0100, Dmitry Vyukov wrote:
> >> On Tue, Dec 8, 2015 at 4:24 AM, Alexei Starovoitov
> >> wrote:
> &
On Tue, Dec 08, 2015 at 05:12:04PM +0100, Dmitry Vyukov wrote:
> On Tue, Dec 8, 2015 at 4:24 AM, Alexei Starovoitov
> wrote:
> > On Mon, Dec 07, 2015 at 05:09:21PM +0100, Dmitry Vyukov wrote:
> >> > So it would be _awesome_ if we could somehow extend this callchain to
>
On Mon, Dec 07, 2015 at 05:09:21PM +0100, Dmitry Vyukov wrote:
> > So it would be _awesome_ if we could somehow extend this callchain to
> > include the site that calls call_rcu().
>
> We have a patch for KASAN in works that adds so-called stack depot
> which allows to map a stack trace onto uint3