Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-22 Thread Song Liu
> On Jul 22, 2020, at 8:40 AM, Peter Zijlstra wrote:
>
> On Tue, Jul 21, 2020 at 10:40:19PM +, Song Liu wrote:
>> We only need to block precise_ip >= 2. precise_ip == 1 is OK.
>
> Uuuh, how? Anything PEBS would have the same problem. Sure, precise_ip
> == 1 will not correct the IP, bu

Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-22 Thread Peter Zijlstra
On Tue, Jul 21, 2020 at 10:40:19PM +, Song Liu wrote:
> We only need to block precise_ip >= 2. precise_ip == 1 is OK.

Uuuh, how? Anything PEBS would have the same problem. Sure, precise_ip == 1
will not correct the IP, but the stack will not match regardless. You need
IP,SP(,BP) to be a con

Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-21 Thread Song Liu
> On Jul 21, 2020, at 3:43 PM, Alexei Starovoitov wrote:
>
> On Tue, Jul 21, 2020 at 3:40 PM Song Liu wrote:
>>
>> We only need to block precise_ip >= 2. precise_ip == 1 is OK.
>
> Are you sure?
> intel_pmu_hw_config() has:
> if (event->attr.precise_ip) {
>     if (event->attr.sample_type

Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-21 Thread Alexei Starovoitov
On Tue, Jul 21, 2020 at 3:40 PM Song Liu wrote:
>
> We only need to block precise_ip >= 2. precise_ip == 1 is OK.

Are you sure?
intel_pmu_hw_config() has:

    if (event->attr.precise_ip) {
        if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
            event->attr.sample_type |= __PERF_SAMPLE_CA

Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-21 Thread Song Liu
> On Jul 21, 2020, at 12:10 PM, Alexei Starovoitov wrote:
>
> On Thu, Jul 16, 2020 at 03:59:32PM -0700, Song Liu wrote:
>> +
>> +BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
>> +       struct bpf_map *, map, u64, flags)
>> +{
>> +    struct perf_event *event = ctx-

Re: [PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-21 Thread Alexei Starovoitov
On Thu, Jul 16, 2020 at 03:59:32PM -0700, Song Liu wrote:
> +
> +BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
> +       struct bpf_map *, map, u64, flags)
> +{
> +    struct perf_event *event = ctx->event;
> +    struct perf_callchain_entry *trace;
> +    bool has_kern

[PATCH v3 bpf-next 1/2] bpf: separate bpf_get_[stack|stackid] for perf events BPF

2020-07-16 Thread Song Liu
Calling get_perf_callchain() on perf_events from PEBS entries may cause unwinder errors. To fix this issue, the callchain is fetched early. Such perf_events are marked with __PERF_SAMPLE_CALLCHAIN_EARLY. Similarly, calling bpf_get_[stack|stackid] on perf_events from PEBS may also cause unwinder er