On Fri, Nov 18, 2016 at 01:33:25PM +0100, Jiri Olsa wrote:
> On Fri, Nov 18, 2016 at 12:28:50PM +0100, Peter Zijlstra wrote:
> > On Fri, Nov 18, 2016 at 01:15:28AM +0100, Jiri Olsa wrote:
> > > The current uncore_validate_group code expects all events within
> > > the group to have the same pmu.
> > > 
> > > This leads to the constraint code using the wrong boxes, which in
> > > my case results in touching uninitialized spinlocks, but it could
> > > probably be worse, depending on the type and box details.
> > > 
> > > I get the lockdep warning below for the following perf stat:
> > >   # perf stat -vv -e '{uncore_cbox_0/config=0x0334/,uncore_qpi_0/event=1/}' -a sleep 1
> > 
> > Hurm, we shouldn't be allowing that in the first place I think.
> > 
> > 
> > Let me stare at the generic group code, the intent was to only allow
> > software events to mix with hw events, nothing else.
> 
> yep, that's what's happening now.. but after the event_init callback

Ah yes indeed. It's the is_uncore_event() test in uncore_collect_events()
that's too lenient; it allows us to mix events from various uncore
boxes.
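
For context, a rough sketch of the flow in question (modeled on
arch/x86/events/intel/uncore.c; an illustration of the shape, not the
exact source): uncore_pmu_event_init() validates a group by allocating
a fake box for _its_ pmu and collecting the whole group into it, so
with the lenient test a sibling from another uncore pmu gets its
constraints evaluated against a box of the wrong type.

static int uncore_validate_group(struct intel_uncore_pmu *pmu,
				 struct perf_event *event)
{
	struct perf_event *leader = event->group_leader;
	struct intel_uncore_box *fake_box;
	int ret = -EINVAL, n;

	/* fake box of _this_ pmu's type, used only for validation */
	fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
	if (!fake_box)
		return -ENOMEM;

	fake_box->pmu = pmu;

	/*
	 * Collect the leader and its existing siblings, then the new
	 * event, and simulate the scheduling.  With is_uncore_event()
	 * as the filter, siblings that belong to a different uncore
	 * pmu are collected here as well, and their constraint
	 * callbacks then run against this (wrong) box.
	 */
	n = uncore_collect_events(fake_box, leader, true);
	if (n < 0)
		goto out;
	fake_box->n_events = n;

	n = uncore_collect_events(fake_box, event, false);
	if (n < 0)
		goto out;
	fake_box->n_events = n;

	ret = uncore_assign_events(fake_box, NULL, n);
out:
	kfree(fake_box);
	return ret;
}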

Would something like so fix things too? Because that is the point of
is_uncore_event() in collect(), to only collect events for _that_ pmu.
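
As an aside, the grouping intent mentioned above can be modeled roughly
like this (a user-space sketch of the rule, not the kernel's actual
check in kernel/events/core.c; may_group() and the structs are made up
for illustration): software events may join any group because they
never fail to schedule, while everything else in a group is expected to
end up on one pmu.

#include <stdbool.h>

struct pmu;				/* opaque for this sketch */

struct event {
	const struct pmu *pmu;
	bool software;			/* software events always schedule */
};

/* made-up helper: may 'ev' join a group led by 'leader'? */
static bool may_group(const struct event *leader, const struct event *ev)
{
	/* software events may mix with any group */
	if (ev->software || leader->software)
		return true;

	/* everything else must stay on one pmu */
	return ev->pmu == leader->pmu;
}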

---
 arch/x86/events/intel/uncore.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index efca2685d876..7b1b34576886 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -319,9 +319,9 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type,
  */
 static int uncore_pmu_event_init(struct perf_event *event);
 
-static bool is_uncore_event(struct perf_event *event)
+static bool is_box_event(struct intel_uncore_box *box, struct perf_event *event)
 {
-       return event->pmu->event_init == uncore_pmu_event_init;
+       return box->pmu == event->pmu;
 }
 
 static int
@@ -340,7 +340,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
 
        n = box->n_events;
 
-       if (is_uncore_event(leader)) {
+       if (is_box_event(box, leader)) {
                box->event_list[n] = leader;
                n++;
        }
@@ -349,7 +349,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
                return n;
 
        list_for_each_entry(event, &leader->sibling_list, group_entry) {
-               if (!is_uncore_event(event) ||
+               if (!is_box_event(box, event) ||
                    event->state <= PERF_EVENT_STATE_OFF)
                        continue;
 
