On Sat, Nov 09, 2013 at 04:22:57PM +0100, Frederic Weisbecker wrote:
> > ---
> >  kernel/events/core.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 4dc078d18929..a3ad40f347c4 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -5289,6 +5289,16 @@ static void perf_log_throttle(struct perf_event *event, int enable)
> >  	perf_output_end(&handle);
> >  }
> >
> > +static inline void perf_pending(struct perf_event *event)
> > +{
> > +	if (in_nmi()) {
> > +		irq_work_pending(&event->pending);
>
> I guess you mean irq_work_queue()?
Uhm yah.

> But there are many more reasons than just being in NMI to async the
> wakeups, signal sending, etc... The fact that an event can happen
> anywhere (rq lock acquire or whatever) makes perf events all fragile
> enough to always require irq_work for these.

Fair enough :/

> Probably what we need is rather some limit. Maybe we can't seriously
> apply recursion checks here, but perhaps the simple fact that we raise
> an irq work from an irq work should trigger an alarm of some sort.

I think irq_work was designed explicitly to allow this -- Oleg had some
use case for it.

So my initial approach was to detect whether there was a fasync signal
pending and break out of the loop in that case; but fasync gives me a
bloody headache. It looks like you cannot even determine the signum you
need to test pending without acquiring locks, let alone find all the
tasks it would be raised against.