On Jul 22, 2020, at 8:40 AM, Peter Zijlstra wrote:

> On Tue, Jul 21, 2020 at 10:40:19PM +, Song Liu wrote:
>
>> We only need to block precise_ip >= 2. precise_ip == 1 is OK.
>
> Uuuh, how? Anything PEBS would have the same problem. Sure, precise_ip
> == 1 will not correct the IP, but the stack will not match regardless.
> You need IP,SP(,BP) to be a con[…]
On Jul 21, 2020, at 3:43 PM, Alexei Starovoitov wrote:

> On Tue, Jul 21, 2020 at 3:40 PM Song Liu wrote:
>
>> We only need to block precise_ip >= 2. precise_ip == 1 is OK.
>
> Are you sure?
> intel_pmu_hw_config() has:
>
>	if (event->attr.precise_ip) {
>		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
>			event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
On Jul 21, 2020, at 12:10 PM, Alexei Starovoitov wrote:

> On Thu, Jul 16, 2020 at 03:59:32PM -0700, Song Liu wrote:
>
>> +
>> +BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
>> +	   struct bpf_map *, map, u64, flags)
>> +{
>> +	struct perf_event *event = ctx->event;
>> +	struct perf_callchain_entry *trace;
>> +	bool has_kern[…]
Calling get_perf_callchain() on perf_events from PEBS entries may cause
unwinder errors. To fix this issue, the callchain is fetched early. Such
perf_events are marked with __PERF_SAMPLE_CALLCHAIN_EARLY.

Similarly, calling bpf_get_[stack|stackid] on perf_events from PEBS may
also cause unwinder errors. […]