On Sat, Aug 27, 2016 at 12:30:36AM -0700, Andy Lutomirski wrote:
> > cgroup is the common way to group multiple tasks.
> > Without cgroups, only a parent<->child relationship will be possible,
> > which will limit the usability of such an LSM to a master task that
> > controls its children. Such api restricti
On Sat, Aug 27, 2016 at 04:06:38PM +0200, Mickaël Salaün wrote:
>
> On 27/08/2016 01:05, Alexei Starovoitov wrote:
> > On Fri, Aug 26, 2016 at 05:10:40PM +0200, Mickaël Salaün wrote:
> >>
> >>>
> >>> - I don't think such 'for' loop c
e_data without affecting bpf programs.
New fields can be added to the end of struct bpf_perf_event_data
in the future.
Signed-off-by: Alexei Starovoitov
---
include/linux/perf_event.h | 5
include/uapi/linux/Kbuild | 1 +
include/uapi/linux/bpf.h | 1 +
includ
Make sure that BPF_PROG_TYPE_PERF_EVENT programs only use
preallocated hash maps, since doing memory allocation
in overflow_handler can crash depending on where nmi got triggered.
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/verifier.c | 22 +-
1 file changed, 21
d xdp programs.
They check for 4-byte only ctx access before these conditions are hit.
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/verifier.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index abb61f3f6900..c1c9e441f0f5 1
From: Brendan Gregg
sample instruction pointer and frequency count in a BPF map
Signed-off-by: Brendan Gregg
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile | 4 +
samples/bpf/sampleip_kern.c | 38 +
samples/bpf/sampleip_user.c | 196
YCLES for current process and inherited perf_events to
children
- PERF_COUNT_SW_CPU_CLOCK on all cpus
- PERF_COUNT_SW_CPU_CLOCK for current process
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile | 4 +
samples/bpf/bpf_helpers.h | 2 +
samples/bpf/bpf_load.c
.
Patches 5 and 6 are tests/examples from myself and Brendan.
Thanks!
Alexei Starovoitov (5):
bpf: support 8-byte metafield access
bpf: introduce BPF_PROG_TYPE_PERF_EVENT program type
bpf: perf_event progs should only use preallocated maps
perf, bpf: add perf events core support for
>prog, since it's
assigned only once before it's accessed.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 4 +++
include/linux/perf_event.h | 2 ++
kernel/events/core.c | 82 +-
3 files changed, 87 inserti
On Fri, Aug 26, 2016 at 05:10:40PM +0200, Mickaël Salaün wrote:
>
trimming cc list again. When it's too big, vger will consider it spam.
> On 26/08/2016 04:14, Alexei Starovoitov wrote:
> > On Thu, Aug 25, 2016 at 12:32:44PM +0200, Mickaël Salaün wrote:
> >
On Thu, Aug 25, 2016 at 12:32:44PM +0200, Mickaël Salaün wrote:
> Add an eBPF function bpf_landlock_cmp_cgroup_beneath(opt, map, map_op)
> to compare the current process cgroup with a cgroup handle. The handle
> can match the current cgroup if it is the same or a child. This allows
> to make condit
On Fri, Aug 05, 2016 at 12:52:09PM +0200, Peter Zijlstra wrote:
> > > > Currently overflow_handler is set at event alloc time. If we start
> > > > changing it on the fly with atomic xchg(), afaik things shouldn't
> > > > break, since each overflow_handler is run to completion and doesn't
> > > > ch
On Thu, Aug 04, 2016 at 09:13:16PM -0700, Brendan Gregg wrote:
> On Thu, Aug 4, 2016 at 6:43 PM, Alexei Starovoitov
> wrote:
> > On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> >> On Wed, Aug 03, 2016 at 11:57:05AM -0700, Brendan Gregg wrote:
> >>
On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 03, 2016 at 11:57:05AM -0700, Brendan Gregg wrote:
>
> > As for pmu tracepoints: if I were to instrument it (although I wasn't
> > planning to), I'd put a tracepoint in perf_event_overflow() called
> > "perf:perf_overflo
On Wed, Dec 31, 2014 at 08:38:49PM -0500, kan.li...@intel.com wrote:
>
> Changes since V1:
> - Using work queue to set Rx network flow classification rules and search
> available NET policy objects asynchronously.
> - Using RCU lock to replace read-write lock
> - Redo performance test and upd
On Tue, Aug 02, 2016 at 11:15:34PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, Aug 02, 2016 at 02:03:33PM -0700, Alexei Starovoitov escreveu:
> > On Tue, Aug 02, 2016 at 04:51:02PM -0300, Arnaldo Carvalho de Melo wrote:
> > > Hi Wang,
> > >
> > > Someth
On Tue, Aug 02, 2016 at 04:51:02PM -0300, Arnaldo Carvalho de Melo wrote:
> Hi Wang,
>
> Something changed and a function used in a perf test for BPF is
> not anymore appearing on vmlinux, albeit still available on
> /proc/kallsyms:
>
> # readelf -wi /lib/modules/4.7.0+/build/vmlinux | grep
On Mon, Aug 01, 2016 at 01:18:43AM -0400, valdis.kletni...@vt.edu wrote:
> On Sun, 31 Jul 2016 21:42:22 -0700, Alexei Starovoitov said:
>
> > and at least 2 other such patches for other files...
> > Is there a single warning where -Woverride-init was useful?
> > May
On Mon, Aug 01, 2016 at 12:33:30AM -0400, Valdis Kletnieks wrote:
> Building with W=1 generates some 350 lines of warnings of the form:
>
> kernel/bpf/core.c: In function '__bpf_prog_run':
> kernel/bpf/core.c:476:33: warning: initialized field overwritten
> [-Woverride-init]
>[BPF_ALU | BPF_A
On Sun, Jul 24, 2016 at 06:50:47PM +0100, Colin King wrote:
> From: Colin Ian King
>
> file f needs to be closed, fixes resource leak.
>
> Signed-off-by: Colin Ian King
have been travelling. sorry for delay.
Acked-by: Alexei Starovoitov
On Sat, Jul 23, 2016 at 09:01:39PM -0700, Sargun Dhillon wrote:
> In kernel/bpf/syscall.c we restrict programs loading bpf kprobe programs so
> attr.kern_version must be exactly equal to what the user is running at the
> moment. This makes a lot of sense because kprobes can touch lots of
> unstab
at
> uses it, in one the intended ways to divert execution.
>
> Thanks to Alexei Starovoitov and Daniel Borkmann for review; I've made
> changes based on their recommendations.
>
> This helper should be considered experimental, so we print a warning
> to dmesg when it i
On Sat, Jul 23, 2016 at 05:39:42PM -0700, Sargun Dhillon wrote:
> The example has been modified to act like a test in the follow up set. It
> tests
> for the positive case (Did the helper work or not) as opposed to the negative
> case (is the helper able to violate the safety constraints we set
On Sat, Jul 23, 2016 at 05:44:11PM -0700, Sargun Dhillon wrote:
> This example shows using a kprobe to act as a dnat mechanism to divert
> traffic for arbitrary endpoints. It rewrites the arguments to a syscall
> while they're still in userspace, and before the syscall has a chance
> to copy the arg
memory!",
> + current->comm, task_pid_nr(current));
I think checkpatch should have complained here.
current->comm line should start under "
No other nits for this patch :)
Once fixed, feel free to add my Acked-by: Alexei Starovoitov
On Fri, Jul 22, 2016 at 05:05:27PM -0700, Sargun Dhillon wrote:
> It was tested with the tracex7 program on x86-64.
it's my fault for starting the tracexN tradition that turned out to be
cumbersome; let's not continue it. Instead, could you rename it
to something meaningful, like test_probe_write_user?
Ri
On Fri, Jul 22, 2016 at 11:53:52AM +0200, Daniel Borkmann wrote:
> On 07/22/2016 04:14 AM, Alexei Starovoitov wrote:
> >On Thu, Jul 21, 2016 at 06:09:17PM -0700, Sargun Dhillon wrote:
> >>This allows user memory to be written to during the course of a kprobe.
> >>It sho
hing
> the system, we print a warning on invocation.
>
> It was tested with the tracex7 program on x86-64.
>
> Signed-off-by: Sargun Dhillon
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> ---
> include/uapi/linux/bpf.h | 12
> kern
On Wed, Jul 20, 2016 at 01:19:51AM +0200, Daniel Borkmann wrote:
> On 07/19/2016 06:34 PM, Alexei Starovoitov wrote:
> >On Tue, Jul 19, 2016 at 01:17:53PM +0200, Daniel Borkmann wrote:
> >>>+ return -EINVAL;
> >>>+
> >>>+ /* Is this a use
On Tue, Jul 19, 2016 at 01:17:53PM +0200, Daniel Borkmann wrote:
> >+return -EINVAL;
> >+
> >+/* Is this a user address, or a kernel address? */
> >+if (!access_ok(VERIFY_WRITE, to, size))
> >+return -EINVAL;
> >+
> >+return probe_kernel_write(to, from, size);
>
On Mon, Jul 18, 2016 at 03:57:17AM -0700, Sargun Dhillon wrote:
>
>
> On Sun, 17 Jul 2016, Alexei Starovoitov wrote:
>
> >On Sun, Jul 17, 2016 at 03:19:13AM -0700, Sargun Dhillon wrote:
> >>
> >>+static u64 bpf_copy_to_user(u64 r1, u64 r2, u64 r3, u64 r4, u64
On Mon, Jul 18, 2016 at 06:01:08AM +, Wang Nan wrote:
> New LLVM will issue newly assigned EM_BPF machine code. The new code
> will be propagated to glibc and libelf.
>
> This patch introduces the new machine code to libbpf.
>
> Signed-off-by: Wang Nan
> Cc: Ale
On Sun, Jul 17, 2016 at 03:19:13AM -0700, Sargun Dhillon wrote:
>
> +static u64 bpf_copy_to_user(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
> +{
> + void *to = (void *) (long) r1;
> + void *from = (void *) (long) r2;
> + int size = (int) r3;
> +
> + /* check if we're in a user contex
On Fri, Jul 15, 2016 at 07:16:01PM -0700, Sargun Dhillon wrote:
>
>
> On Thu, 14 Jul 2016, Alexei Starovoitov wrote:
>
> >On Wed, Jul 13, 2016 at 01:31:57PM -0700, Sargun Dhillon wrote:
> >>
> >>
> >>On Wed, 13 Jul 2016, Alexei Starovoitov wrote:
On Wed, Jul 13, 2016 at 01:31:57PM -0700, Sargun Dhillon wrote:
>
>
> On Wed, 13 Jul 2016, Alexei Starovoitov wrote:
>
> > On Wed, Jul 13, 2016 at 03:36:11AM -0700, Sargun Dhillon wrote:
> >> Provides BPF programs attached to kprobes a safe way to write to
>
On Wed, Jul 13, 2016 at 03:36:11AM -0700, Sargun Dhillon wrote:
> Provides BPF programs attached to kprobes a safe way to write to
> memory referenced by probes. This is done by making probe_kernel_write
> accessible to bpf functions via the bpf_probe_write helper.
not quite :)
> Signed-off-by:
On Sat, Jul 09, 2016 at 01:31:40AM +0200, Eric Dumazet wrote:
> On Fri, 2016-07-08 at 17:52 +0200, Michal Kubecek wrote:
> > If socket filter truncates an udp packet below the length of UDP header
> > in udpv6_queue_rcv_skb() or udp_queue_rcv_skb(), it will trigger a
> > BUG_ON in skb_pull_rcsum().
On Wed, Jun 29, 2016 at 06:35:12PM +0800, Wangnan (F) wrote:
>
>
> On 2016/6/29 18:15, Hekuang wrote:
> >hi
> >
> >在 2016/6/28 22:57, Alexei Starovoitov 写道:
> >>
> >> return 0;
> >> }
> >>@@ -465,7 +465,7 @@ EXPORT_SYMBOL_
On Tue, Jun 28, 2016 at 07:47:53PM +0800, Hekuang wrote:
>
>
> 在 2016/6/27 4:48, Alexei Starovoitov 写道:
> >On Sun, Jun 26, 2016 at 11:20:52AM +, He Kuang wrote:
> >> bounds check just like ubpf library does.
> >hmm. I don't think I suggested to hack b
ally adjust period of different events. Policy is defined
> by user.
> """
>
> and modified by following the reviewers' suggestions.
>
> v1-v2:
>
> - Split bpf vm part out of kernel/bpf/core.c and link to it instead
> of using ubpf libr
;sk_cgrp_data), cgrp);
if you'd need to respin the patch for other reasons, please add kdoc
to bpf.h for this new helper, similar to the other helpers,
saying that a return value of 0 or 1 indicates the cg2 descendant relation
and < 0 indicates an error.
Acked-by: Alexei Starovoitov
and
> give enough debug info if things did not go well.
>
> Signed-off-by: Martin KaFai Lau
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
> ---
> samples/bpf/Makefile | 3 +
> samples/bpf/bpf_helpers.h |
> Signed-off-by: Martin KaFai Lau
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
Acked-by: Alexei Starovoitov
On 6/21/16 7:47 AM, Thadeu Lima de Souza Cascardo wrote:
The calling convention is different with ABIv2 and so we'll need changes
in bpf_slow_path_common() and sk_negative_common().
How big would those changes be? Do we know?
How come no one reported this was broken previously? This is the fi
On Mon, Jun 20, 2016 at 11:38:18AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Jun 20, 2016 at 11:29:13AM +0800, Wangnan (F) escreveu:
> > On 2016/6/17 0:48, Arnaldo Carvalho de Melo wrote:
> > >Em Thu, Jun 16, 2016 at 08:02:41AM +, Wang Nan escreveu:
> > >>With '--dry-run', 'perf record'
.
>
> Cc: Matt Evans
> Cc: Denis Kirjanov
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: "David S. Miller"
> Cc: Ananth N Mavinakayanahalli
> Signed-off-by: Naveen N. Rao
> ---
> arch/powerp
On Tue, May 24, 2016 at 12:04 PM, Tejun Heo wrote:
> Hello,
>
> Alexei, can you please verify this patch? Map extension got rolled
> into balance work so that there's no sync issues between the two async
> operations.
tests look good. No uaf, and the basic bpf tests exercising the per-cpu map are fine.
>
seen.
Tested-by: Alexei Starovoitov
>
> Thanks.
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 0c59684..bd2df70 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -162,7 +162,7 @@ static struct pcpu_chunk *pcpu_reserved_chunk;
> static int pcpu_reserved_chunk_limi
does the same thing, but all the time.
yeah. good point. there is no actual 'order' here.
The whole thing looks good to me.
Acked-by: Alexei Starovoitov
a little slower, but
> that may be well within the noise.
>
> The third run shows that discarding all events only took 1.3 seconds. This
> is a speed up of 23%! The discard is much faster than even the commit.
>
> The one downside is shown in the last run.
On Wed, Apr 27, 2016 at 11:00:23AM +0800, Wangnan (F) wrote:
>
>
> On 2016/4/27 10:46, Florian Fainelli wrote:
> >Le 24/04/2016 19:34, Florian Fainelli a écrit :
> >>Hi all,
> >>
> >>Two trivial patches that were flagged by Coverity.
> >>
> >>Thanks!
> >Ping! Did I send this to the correct mailin
On Tue, Apr 26, 2016 at 06:38:28PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 25, 2016 at 10:24:31PM -0300, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Apr 25, 2016 at 10:03:58PM -0300, Arnaldo Carvalho de Melo escreveu:
> > > I now need to continue investigation why this doesn't seem to work from
On Mon, Apr 25, 2016 at 09:29:28PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 05:07:26PM -0700, Alexei Starovoitov escreveu:
> > > + {
> > > + .procname = "perf_event_max_stack",
> > > + .data
On Mon, Apr 25, 2016 at 08:41:39PM -0300, Arnaldo Carvalho de Melo wrote:
>
> +int sysctl_perf_event_max_stack __read_mostly = PERF_MAX_STACK_DEPTH;
> +
> +static inline size_t perf_callchain_entry__sizeof(void)
> +{
> + return (sizeof(struct perf_callchain_entry) +
> + sizeof(__u
On Mon, Apr 25, 2016 at 05:17:50PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 01:06:48PM -0700, Alexei Starovoitov escreveu:
> > On Mon, Apr 25, 2016 at 04:22:29PM -0300, Arnaldo Carvalho de Melo wrote:
> > > Em Mon, Apr 25, 2016 at 01:27:06PM -0300, Arnald
On Mon, Apr 25, 2016 at 04:22:29PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 01:27:06PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Mon, Apr 25, 2016 at 01:14:25PM -0300, Arnaldo Carvalho de Melo escreveu:
> > > Em Fri, Apr 22, 2016 at 03:18:
On Fri, Apr 22, 2016 at 04:05:31PM -0600, David Ahern wrote:
> On 4/22/16 2:52 PM, Arnaldo Carvalho de Melo wrote:
> >Em Wed, Apr 20, 2016 at 04:04:12PM -0700, Alexei Starovoitov escreveu:
> >>On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> >
>
On Fri, Apr 22, 2016 at 05:52:32PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Apr 20, 2016 at 04:04:12PM -0700, Alexei Starovoitov escreveu:
> > On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
>
> > Nice. I like it. That's a great a
On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> The default remains 127, which is good for most cases, and not even hit
> most of the time, but then for some cases, as reported by Brendan, 1024+
> deep frames are appearing on the radar for things like groovy, ruby.
>
On Wed, Apr 20, 2016 at 06:01:40PM +, Wang Nan wrote:
> This patch set allows perf to invoke some user space BPF scripts at certain
> points. uBPF scripts and kernel BPF scripts reside in one BPF object.
> They communicate with each other with BPF maps. uBPF scripts can invoke
> helper functions pr
On 4/19/16 3:09 AM, Philip Li wrote:
On Tue, Apr 19, 2016 at 10:33:34AM +0800, Fengguang Wu wrote:
Fengguang, any idea why the build-bot is sometimes silent?
Sorry I went off for some time.. Philip, would you help have a check?
Hi Alexei, i have done some investigation for this. Fengguang, pls corre
perf tracepoints.
Suggested-by: Steven Rostedt
Signed-off-by: Alexei Starovoitov
---
include/linux/trace_events.h | 5 +
include/trace/perf.h | 13 +++--
kernel/events/core.c | 20 +++-
3 files changed, 27 insertions(+), 11 deletions(-)
diff --git a
On 4/18/16 3:16 PM, Steven Rostedt wrote:
On Mon, 18 Apr 2016 14:43:07 -0700
Alexei Starovoitov wrote:
I was worried about this too, but a single 'if' and two calls
(as in commit 98b5c2c65c295) is a better way, since it's faster, cleaner
and doesn't need t
On 4/18/16 1:29 PM, Steven Rostedt wrote:
On Mon, 4 Apr 2016 21:52:48 -0700
Alexei Starovoitov wrote:
introduce BPF_PROG_TYPE_TRACEPOINT program type and allow it to be
attached to tracepoints.
The tracepoint will copy the arguments in the per-cpu buffer and pass
it to the bpf program as its
On 4/18/16 1:47 PM, Steven Rostedt wrote:
On Mon, 18 Apr 2016 12:51:43 -0700
Alexei Starovoitov wrote:
yeah, it could be added to ftrace as well, but it won't be as effective
as perf_trace, since the cost of trace_event_buffer_reserve() in
trace_event_raw_event_() handler is signific
On 4/18/16 9:13 AM, Steven Rostedt wrote:
On Mon, 4 Apr 2016 21:52:46 -0700
Alexei Starovoitov wrote:
Hi Steven, Peter,
last time we discussed bpf+tracepoints it was a year ago [1] and the reason
we didn't proceed with that approach was that bpf would make arguments
arg1, arg2 to tra
On Sun, Apr 17, 2016 at 12:58:21PM -0400, Sasha Levin wrote:
> Hi all,
>
> I've hit the following while fuzzing with syzkaller inside a KVM tools guest
> running the latest -next kernel:
thanks for the report. Adding Tejun...
if I read the report correctly it's not about bpf, but rather points to
to be the same size as a pointer.
>
> Signed-off-by: Arnd Bergmann
> Fixes: 9940d67c93b5 ("bpf: support bpf_get_stackid() and
> bpf_perf_event_output() in tracepoint programs")
Thanks.
Acked-by: Alexei Starovoitov
I guess I started to rely on 0-day build-bot too much.
Th
equent call to perf_arch_fetch_caller_regs initializes the same fields on
all archs,
so we can safely drop memset from all of the above cases and move it into
perf_ftrace_function_call that calls it with stack allocated pt_regs.
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Alexei Starovoitov
---
dumped to user space via perf ring buffer
and broken applications access it directly without consulting tracepoint/format.
Same rule applies here: static tracepoint fields should only be accessed
in a format defined in tracepoint/format. The order of fields and
field sizes are not an ABI.
Sig
ate bpf program is generated on the fly.
[1] http://thread.gmane.org/gmane.linux.kernel.api/8127/focus=8165
[2] https://github.com/iovisor/bcc/blob/master/tools/tplist.py
[3] https://github.com/iovisor/bcc/blob/master/tools/argdist.py
Alexei Starovoitov (10):
perf: optimize perf_fetch_caller_regs
perf: re
now all calls to perf_trace_buf_submit() pass 0 as the 4th
argument, which will be repurposed in the next patch that
changes the meaning of the 1st arg of perf_tp_event() to event_type
Signed-off-by: Alexei Starovoitov
---
include/trace/perf.h | 7 ++-
include/trace/trace_events.h | 3
o -fno-strict-aliasing
Signed-off-by: Alexei Starovoitov
---
include/linux/perf_event.h | 2 +-
include/linux/trace_events.h | 8
include/trace/perf.h | 8
kernel/events/core.c | 6 --
kernel/trace/trace_event_perf.c | 39 +
Recognize "tracepoint/" section name prefix and attach the program
to that tracepoint.
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_load.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/
);
}
and on 4 cpus in parallel:
reads per sec
base (no tracepoints, no kprobes) 300k
with kprobe at urandom_read() 279k
with tracepoint at random:urandom_read 290k
bpf progs attached to kprobe and tracepoint are noop.
Signed-off-by: Alexei Starovoi
needs two wrapper functions to fetch 'struct pt_regs *' to convert
tracepoint bpf context into kprobe bpf context to reuse existing
helper functions
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
kernel/bpf/stackmap.c | 2 +-
kernel/trace/bpf_tr
igned-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
include/linux/trace_events.h | 1 +
kernel/bpf/verifier.c | 6 +-
kernel/events/core.c | 8
kernel/trace/trace_events.c | 18 ++
5 files changed, 33 insertions(+), 1 deletion(-)
modify offwaketime to work with sched/sched_switch tracepoint
instead of kprobe into finish_task_switch
Signed-off-by: Alexei Starovoitov
---
samples/bpf/offwaketime_kern.c | 26 ++
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/samples/bpf
register tracepoint bpf program type and let it call the same set
of helper functions as BPF_PROG_TYPE_KPROBE
Signed-off-by: Alexei Starovoitov
---
kernel/trace/bpf_trace.c | 45 +++--
1 file changed, 43 insertions(+), 2 deletions(-)
diff --git a/kernel
On 4/5/16 11:16 AM, Peter Zijlstra wrote:
On Tue, Apr 05, 2016 at 11:09:30AM -0700, Alexei Starovoitov wrote:
@@ -67,6 +69,14 @@ perf_trace_##call(void *__data, proto
On 4/5/16 7:18 AM, Peter Zijlstra wrote:
On Mon, Apr 04, 2016 at 09:52:48PM -0700, Alexei Starovoitov wrote:
introduce BPF_PROG_TYPE_TRACEPOINT program type and allow it to be
attached to tracepoints.
More specifically the perf tracepoint handler, not tracepoints directly.
yes. perf
On 4/5/16 5:06 AM, Peter Zijlstra wrote:
On Mon, Apr 04, 2016 at 09:52:47PM -0700, Alexei Starovoitov wrote:
avoid memset in perf_fetch_caller_regs, since it's the critical path of all
tracepoints.
It's called from perf_sw_event_sched, perf_event_task_sched_in and all of
perf_tr
ew tests passed with x64 jit?
Acked-by: Alexei Starovoitov
On 4/5/16 3:02 AM, Naveen N. Rao wrote:
BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values that results in
32-bit overflow.
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: "David S. Miller"
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Cc: Paul Mackerras
Signed-off-
On 4/5/16 3:02 AM, Naveen N. Rao wrote:
Unsigned Jump-if-Greater-Than.
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: "David S. Miller"
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Cc: Paul Mackerras
Signed-off-by: Naveen N. Rao
I think some of the tests already cov
On 4/5/16 3:02 AM, Naveen N. Rao wrote:
JMP_JSET tests incorrectly used BPF_JNE. Fix the same.
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: "David S. Miller"
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Cc: Paul Mackerras
Signed-off-by: Naveen N. Rao
Good ca
needs two wrapper functions to fetch 'struct pt_regs *' to convert
tracepoint bpf context into kprobe bpf context to reuse existing
helper functions
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
kernel/bpf/stackmap.c | 2 +-
kernel/trace/bpf_tr
register tracepoint bpf program type and let it call the same set
of helper functions as BPF_PROG_TYPE_KPROBE
Signed-off-by: Alexei Starovoitov
---
kernel/trace/bpf_trace.c | 45 +++--
1 file changed, 43 insertions(+), 2 deletions(-)
diff --git a/kernel
modify offwaketime to work with sched/sched_switch tracepoint
instead of kprobe into finish_task_switch
Signed-off-by: Alexei Starovoitov
---
samples/bpf/offwaketime_kern.c | 26 ++
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/samples/bpf
am is generated on the fly.
[1] http://thread.gmane.org/gmane.linux.kernel.api/8127/focus=8165
[2] https://github.com/iovisor/bcc/blob/master/tools/tplist.py
[3] https://github.com/iovisor/bcc/blob/master/tools/argdist.py
Alexei Starovoitov (8):
perf: optimize perf_fetch_caller_regs
perf, bpf:
igned-off-by: Alexei Starovoitov
---
include/linux/bpf.h | 1 +
include/linux/trace_events.h | 1 +
kernel/bpf/verifier.c | 6 +-
kernel/events/core.c | 8
kernel/trace/trace_events.c | 18 ++
5 files changed, 33 insertions(+), 1 deletion(-)
Recognize "tracepoint/" section name prefix and attach the program
to that tracepoint.
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_load.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/
equent call to perf_arch_fetch_caller_regs initializes the same fields on
all archs,
so we can safely drop memset from all of the above cases and move it into
perf_ftrace_function_call that calls it with stack allocated pt_regs.
Signed-off-by: Alexei Starovoitov
---
include/linux/perf_event.h | 2 --
);
}
and on 4 cpus in parallel:
reads per sec
base (no tracepoints, no kprobes) 300k
with kprobe at urandom_read() 279k
with tracepoint at random:urandom_read 290k
bpf progs attached to kprobe and tracepoint are noop.
Signed-off-by: Alexei Starovoi
ser space via perf ring buffer
and some applications access it directly without consulting tracepoint/format.
Same rule applies here: static tracepoint fields should only be accessed
in a format defined in tracepoint/format. The order of fields and
field sizes are not an ABI.
Signed-off-by: Alexei S
REGS_IP() to access the instruction pointer.
>
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: David S. Miller
> Cc: Ananth N Mavinakayanahalli
> Cc: Michael Ellerman
> Signed-off-by: Naveen N. Rao
Acked-by: Alexei Starovoitov
On Mon, Apr 04, 2016 at 10:31:33PM +0530, Naveen N. Rao wrote:
> While at it, remove the generation of .s files and fix some typos in the
> related comment.
>
> Cc: Alexei Starovoitov
> Cc: David S. Miller
> Cc: Daniel Borkmann
> Cc: Ananth N Mavinakayanahalli
> Cc: Mi
implementing BPF tail calls and skb loads.
Cc: Matt Evans
Cc: Michael Ellerman
Cc: Paul Mackerras
Cc: Alexei Starovoitov
Cc: "David S. Miller"
Cc: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/ppc-opcode.h | 19 +-
arch/powerpc/net/Makefile
On 4/1/16 7:41 AM, Naveen N. Rao wrote:
On 2016/03/31 10:52AM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
...
+
+#ifdef __powerpc__
+#define BPF_KPROBE_READ_RET_IP(ip, ctx) { (ip) = (ctx)->link; }
+#define BPF_KRETPROBE_READ_RET_IP(ip,
On 4/1/16 7:37 AM, Naveen N. Rao wrote:
On 2016/03/31 08:19PM, Daniel Borkmann wrote:
On 03/31/2016 07:46 PM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
clang $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused