On Friday 18 May 2018 01:05 PM, Anju T Sudhakar wrote:
Call trace observed while running perf-fuzzer:
[ 329.228068] CPU: 43 PID: 9088 Comm: perf_fuzzer Not tainted
4.13.0-32-generic #35~lp1746225
[ 329.228070] task: c03f776ac900 task.stack: c03f77728000
[ 329.228071] NIP: c0299b70 LR: c02a4534 CTR: c029bb80
[ 329.228073] REGS: c03f7772b760 TRAP: 0700 Not tainted
(4.13.0-32-generic)
[ 329.228073] MSR: 9282b033
[ 329.228079] CR: 24008822 XER:
[ 329.228080] CFAR: c0299a70 SOFTE: 0
GPR00: c02a4534 c03f7772b9e0 c1606200 c03fef858908
GPR04: c03f776ac900 0001 003fee73
GPR08: c11220d8 0002
GPR12: c029bb80 c7a3d900
GPR16:
GPR20: c03f776ad090 c0c71354
GPR24: c03fef716780 003fee73 c03fe69d4200 c03f776ad330
GPR28: c11220d8 0001 c14c6108 c03fef858900
[ 329.228098] NIP [c0299b70] perf_pmu_sched_task+0x170/0x180
[ 329.228100] LR [c02a4534] __perf_event_task_sched_in+0xc4/0x230
[ 329.228101] Call Trace:
[ 329.228102] [c03f7772b9e0] [c02a0678]
perf_iterate_sb+0x158/0x2a0 (unreliable)
[ 329.228105] [c03f7772ba30] [c02a4534]
__perf_event_task_sched_in+0xc4/0x230
[ 329.228107] [c03f7772bab0] [c01396dc]
finish_task_switch+0x21c/0x310
[ 329.228109] [c03f7772bb60] [c0c71354] __schedule+0x304/0xb80
[ 329.228111] [c03f7772bc40] [c0c71c10] schedule+0x40/0xc0
[ 329.228113] [c03f7772bc60] [c01033f4] do_wait+0x254/0x2e0
[ 329.228115] [c03f7772bcd0] [c0104ac0] kernel_wait4+0xa0/0x1a0
[ 329.228117] [c03f7772bd70] [c0104c24] SyS_wait4+0x64/0xc0
[ 329.228121] [c03f7772be30] [c000b184] system_call+0x58/0x6c
[ 329.228121] Instruction dump:
[ 329.228123] 3beafea0 7faa4800 409eff18 e8010060 eb610028 ebc10040 7c0803a6
38210050
[ 329.228127] eb81ffe0 eba1ffe8 ebe1fff8 4e800020 <0fe0> 4bbc 6000
6042
[ 329.228131] ---[ end trace 8c46856d314c1811 ]---
[ 375.755943] hrtimer: interrupt took 31601 ns
The context-switch callbacks for thread-imc are defined in the sched_task function.
So when thread-imc events are grouped with software PMU events,
perf_pmu_sched_task hits the WARN_ON_ONCE condition, since software PMUs are
assumed not to have a sched_task callback defined.
This patch moves the thread-imc enable/disable OPAL calls from the sched_task
callback to the event_[add/del] functions.
Changes look fine to me.
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Anju T Sudhakar
---
arch/powerpc/perf/imc-pmu.c | 108 +---
1 file changed, 51 insertions(+), 57 deletions(-)
diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index d7532e7..71d9ba7 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -866,59 +866,6 @@ static int thread_imc_cpu_init(void)
ppc_thread_imc_cpu_offline);
}
-void thread_imc_pmu_sched_task(struct perf_event_context *ctx,
-				bool sched_in)
-{
-	int core_id;
-	struct imc_pmu_ref *ref;
-
-	if (!is_core_imc_mem_inited(smp_processor_id()))
-		return;
-
-	core_id = smp_processor_id() / threads_per_core;
-	/*
-	 * imc pmus are enabled only when it is used.
-	 * See if this is triggered for the first time.
-	 * If yes, take the mutex lock and enable the counters.
-	 * If not, just increment the count in ref count struct.
-	 */
-	ref = &core_imc_refc[core_id];
-	if (!ref)
-		return;
-
-	if (sched_in) {
-		mutex_lock(&ref->lock);
-		if (ref->refc == 0) {
-			if (opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE,
-			    get_hard_smp_processor_id(smp_processor_id()))) {
-				mutex_unlock(&ref->lock);
-				pr_err("thread-imc: Unable to start the counter\
-							for core %d\n", core_id);
-				return;
-			}
-		}
-		++ref->refc;
-		mutex_unlock(&ref->lock);
-	} else {
-		mutex_lock(&ref->lock);
-		ref->refc--;
-		if (ref->refc == 0) {
-			if (opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE,
-			    get_hard_smp_processor_id(smp_processor_id()))) {
-				mutex_unlock(&ref->lock);
-				pr_err("thread-imc: Unable to stop the counters\