On Tue, Mar 8, 2016 at 1:13 PM, Stephane Eranian <eran...@google.com> wrote:
> Hi,
>
> On Tue, Mar 8, 2016 at 1:07 PM, Peter Zijlstra <pet...@infradead.org> wrote:
>> On Tue, Mar 08, 2016 at 12:59:23PM -0800, Stephane Eranian wrote:
>>> hi,
>>>
>>> On Mon, Mar 7, 2016 at 12:25 PM, Peter Zijlstra <pet...@infradead.org> wrote:
>>> >
>>> > On Mon, Mar 07, 2016 at 07:27:31PM +0100, Jiri Olsa wrote:
>>> > > On Mon, Mar 07, 2016 at 01:18:40PM +0100, Peter Zijlstra wrote:
>>> > > > On Mon, Mar 07, 2016 at 11:24:13AM +0100, Peter Zijlstra wrote:
>>> > > >
>>> > > > > I suspect Andi is having something along:
>>> > > > >
>>> > > > >   lkml.kernel.org/r/1445458568-16956-1-git-send-email-a...@firstfloor.org
>>> > > > >
>>> > > > > applied to his tree.
>>> > > >
>>> > > > OK, I munged a bunch of patches together, please have a hard look at the
>>> > > > end result found in:
>>> > > >
>>> > > >   git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/core
>>> > > >
>>>
>>> I ran this kernel on Haswell. Even with Andi's fixes the problem I
>>> identified is still there, so my patch is still needed.
>>
>> Right, your patch should be included in that kernel, or did I make a
>> royal mess of things?
>>
> No, it is as expected for the OVF PMI fix.
>
>> I put Andi's late status ack on top of your patch.
>>
Ok, I ran into a problem on Broadwell with your branch plus Andi's
patches. I am seeing an issue that had disappeared since Sandy Bridge:
[11551.128422] ------------[ cut here ]------------
[11551.128435] WARNING: CPU: 3 PID: 12114 at arch/x86/events/intel/core.c:1868 intel_pmu_handle_irq+0x2da/0x4b0()
[11551.128437] perfevents: irq loop stuck!
[11551.128469] <NMI>  [<ffffffff81663975>] dump_stack+0x4d/0x63
[11551.128479]  [<ffffffff810b8657>] warn_slowpath_common+0x97/0xe0
[11551.128482]  [<ffffffff810b8756>] warn_slowpath_fmt+0x46/0x50
[11551.128486]  [<ffffffff8100b6ca>] intel_pmu_handle_irq+0x2da/0x4b0
[11551.128491]  [<ffffffff81004569>] perf_event_nmi_handler+0x39/0x60
[11551.128494]  [<ffffffff8107be61>] nmi_handle+0x61/0x110
[11551.128497]  [<ffffffff8107c684>] default_do_nmi+0x44/0x110
[11551.128500]  [<ffffffff8107c827>] do_nmi+0xd7/0x140
[11551.128504]  [<ffffffff8166e127>] end_repeat_nmi+0x1a/0x1e
[11551.128507]  [<ffffffff81009dd6>] ? native_write_msr+0x6/0x30
[11551.128510]  [<ffffffff81009dd6>] ? native_write_msr+0x6/0x30
[11551.128514]  [<ffffffff81009dd6>] ? native_write_msr+0x6/0x30
[11551.128515] <<EOE>>  [<ffffffff8100b385>] ? intel_pmu_enable_event+0x215/0x230
[11551.128520]  [<ffffffff81005a0d>] x86_pmu_start+0x8d/0x120
[11551.128523]  [<ffffffff810061db>] x86_pmu_enable+0x27b/0x2f0
[11551.128527]  [<ffffffff8118d63d>] perf_pmu_enable+0x1d/0x30
[11551.128530]  [<ffffffff81191bca>] ctx_resched+0x5a/0x70
[11551.128532]  [<ffffffff81191d8c>] __perf_event_enable+0x1ac/0x210
[11551.128537]  [<ffffffff81188f81>] event_function+0xa1/0x170
[11551.128540]  [<ffffffff811899b0>] ? perf_duration_warn+0x70/0x70
[11551.128543]  [<ffffffff811899f7>] remote_function+0x47/0x60
[11551.128547]  [<ffffffff8112a178>] generic_exec_single+0xa8/0xb0
[11551.128550]  [<ffffffff811899b0>] ? perf_duration_warn+0x70/0x70
[11551.128553]  [<ffffffff811899b0>] ? perf_duration_warn+0x70/0x70
[11551.128555]  [<ffffffff8112a298>] smp_call_function_single+0xa8/0x100
[11551.128559]  [<ffffffff8118aec4>] event_function_call+0x84/0x100
[11551.128561]  [<ffffffff81191be0>] ? ctx_resched+0x70/0x70
[11551.128564]  [<ffffffff81191be0>] ? ctx_resched+0x70/0x70
[11551.128566]  [<ffffffff81188ee0>] ? perf_ctx_lock+0x30/0x30
[11551.128570]  [<ffffffff8118b050>] _perf_event_enable+0x60/0x80
[11551.128572]  [<ffffffff8118fc61>] perf_ioctl+0x271/0x3e0

The infinite loop in the irq handler again! But here it seems to be a
race with a perf_events ioctl(), most likely one resetting the period.
I am not using the perf tool here, just running a self-monitoring task
(a rough sketch of what I mean is appended at the end of this mail).

>> Also note, Ingo merged most of those patches today, all except the top
>> 3, because Andi wanted to double check something.
>
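For reference, the self-monitoring pattern I am talking about looks
roughly like the following. This is only an illustrative sketch, not my
actual test case: the event choice, the period values, and the
disable/period/enable sequence are assumptions, but they exercise the
same perf_ioctl() -> _perf_event_enable() path that shows up in the
trace above.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* glibc has no wrapper for perf_event_open(), so invoke it directly */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t period = 100000;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = period;
	attr.disabled = 1;

	/* self-monitoring: pid 0 = calling thread, cpu -1 = any cpu */
	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	for (;;) {
		volatile long i;

		/* burn cycles so the counter overflows and PMIs fire */
		for (i = 0; i < 10000000; i++)
			;

		/*
		 * Reprogram the period from user space while PMIs are
		 * still arriving; the enable below goes through
		 * perf_ioctl() -> _perf_event_enable(), which is what
		 * appears to race with the NMI handler.
		 */
		period = (period == 100000) ? 200000 : 100000;
		ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
		ioctl(fd, PERF_EVENT_IOC_PERIOD, &period);
		ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	}

	return 0;
}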