Re: [PATCH] perf/core: Fix creating kernel counters for PMUs that override event->cpu

2019-07-24 Thread Mark Rutland
On Wed, Jul 24, 2019 at 03:53:24PM +0300, Leonard Crestez wrote:
> Some hardware PMU drivers will override perf_event.cpu inside their
> event_init callback. This causes a lockdep splat when initialized through
> the kernel API:
> 
> WARNING: CPU: 0 PID: 250 at kernel/events/core.c:2917 ctx_sched_out+0x78/0x208
> CPU: 0 PID: 250 Comm: systemd-udevd Not tainted 5.3.0-rc1-next-20190723-00024-g94e04593c88a #80
> Hardware name: FSL i.MX8MM EVK board (DT)
> pstate: 4085 (nZcv daIf -PAN -UAO)
> pc : ctx_sched_out+0x78/0x208
> lr : ctx_sched_out+0x78/0x208
> sp : 127a3750
> x29: 127a3750 x28: 
> x27: 1162bf20 x26: 08cf3310
> x25: 127a3de0 x24: 115ff808
> x23: 7dffbff851b8 x22: 0004
> x21: 7dffbff851b0 x20: 
> x19: 7dffbffc51b0 x18: 0010
> x17: 0001 x16: 0007
> x15: 2e8ba2e8ba2e8ba3 x14: 5114
> x13: 117d5e30 x12: 11898378
> x11:  x10: 117d5000
> x9 : 0045 x8 : 
> x7 : 10168194 x6 : 117d59d0
> x5 : 0001 x4 : 80007db56128
> x3 : 80007db56128 x2 : 0d9c118347a77600
> x1 :  x0 : 0024
> Call trace:
>  ctx_sched_out+0x78/0x208
>  __perf_install_in_context+0x160/0x248
>  remote_function+0x58/0x68
>  generic_exec_single+0x100/0x180
>  smp_call_function_single+0x174/0x1b8
>  perf_install_in_context+0x178/0x188
>  perf_event_create_kernel_counter+0x118/0x160
> 
> Fix by calling perf_install_in_context() with event->cpu, just like
> perf_event_open() does.

Ouch; good spot!

> 
> Signed-off-by: Leonard Crestez 
> ---
> I don't understand why PMUs outside the core are bound to a CPU anyway;
> all this patch does is attempt to satisfy the assumptions made by
> __perf_install_in_context and ctx_sched_out at init time so that lockdep
> no longer complains.

If you care about the background:

It's necessary because portions of the perf core code rely on
serialization that can only be ensured when all management of the PMU
occurs on the same CPU, e.g. for the per-cpu ringbuffers.

There are also some system/uncore PMUs that exist for groups of CPUs
(e.g. clusters or sockets), but are exposed as a single logical PMU,
associated with one CPU per group.
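
As a concrete illustration (only a sketch, with hypothetical foo_pmu
names rather than any particular in-tree driver), such a PMU's
event_init typically rejects per-task events and retargets event->cpu
to the PMU's designated management CPU, roughly like so:

  static int foo_pmu_event_init(struct perf_event *event)
  {
          struct foo_pmu *foo = to_foo_pmu(event->pmu);

          if (event->attr.type != event->pmu->type)
                  return -ENOENT;

          /* System/uncore counters cannot follow a task. */
          if (event->cpu < 0)
                  return -EOPNOTSUPP;

          /*
           * Retarget the event to the CPU that manages this PMU.
           * This is the override that the kernel-counter path has to
           * honour when installing the event.
           */
          event->cpu = foo->cpu;

          return 0;
  }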

> 
> ctx_sched_out asserts ctx->lock which seems to be taken by
> __perf_install_in_context:
> 
>   struct perf_event_context *ctx = event->ctx;
>   struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
>   ...
>   raw_spin_lock(&cpuctx->ctx.lock);
> 
> The lockdep warning happens when ctx != &cpuctx->ctx which can happen if
> __perf_install_in_context is called on a cpu other than event->cpu.
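
For context, here's a sketch of the kernel-API caller side (hypothetical
foo_* names, placeholder attr values): the cpu argument passed in is only
a starting point, since event_init() may rewrite event->cpu as above,
which is why the install has to follow event->cpu rather than cpu.

  #include <linux/perf_event.h>

  static struct perf_event *foo_create_counter(int cpu, u64 config)
  {
          struct perf_event_attr attr = {
                  /* Placeholder; a real caller would use the PMU's type. */
                  .type   = PERF_TYPE_RAW,
                  .size   = sizeof(attr),
                  .config = config,
                  .pinned = 1,
          };

          /*
           * After event_init() runs, event->cpu may differ from the cpu
           * passed here, so perf_event_create_kernel_counter() must
           * install the event on event->cpu; that is what this patch
           * fixes.
           */
          return perf_event_create_kernel_counter(&attr, cpu, NULL,
                                                  NULL, NULL);
  }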
> 
> Found while working on this patch:
> https://patchwork.kernel.org/patch/11056785/
> 
>  kernel/events/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 026a14541a38..0463c1151bae 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -11272,11 +11272,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
>   if (!exclusive_event_installable(event, ctx)) {
>   err = -EBUSY;
>   goto err_unlock;
>   }
>  
> - perf_install_in_context(ctx, event, cpu);
> + perf_install_in_context(ctx, event, event->cpu);
>   perf_unpin_context(ctx);
>   mutex_unlock(&ctx->mutex);
>  
>   return event;

This matches what we do in a regular perf_event_open() syscall, and I
believe this is sane. I think we should also update the comment a few
lines above that refers to @cpu, since that's potentially misleading.
Could we change that from:

  Check if the @cpu we're creating an event for is online.

... to:

  Check if the new event's CPU is online.

With that:

Reviewed-by: Mark Rutland 

Thanks,
Mark.

