On Wed, May 20, 2015 at 05:25:34PM +0200, Peter Zijlstra wrote:

> > 
> > OK, so if you have the watchdog enabled, that's 1 event, and having a
> > max of 2 GP events, adding another 2 events is fail.
> 
> Hmm, so we count all events, including those scheduled on fixed purpose
> counters.
> 
> Lemme see if I can cure that.


Stephane, does something like the below make sense?

---
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 3998131..7e779e9 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2008,27 +2039,15 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
        xl = &excl_cntrs->states[tid];
        xlo = &excl_cntrs->states[o_tid];
 
-       /*
-        * do not allow scheduling of more than max_alloc_cntrs
-        * which is set to half the available generic counters.
-        * this helps avoid counter starvation of sibling thread
-        * by ensuring at most half the counters cannot be in
-        * exclusive mode. There is not designated counters for the
-        * limits. Any N/2 counters can be used. This helps with
-        * events with specifix counter constraints
-        */
-       if (xl->num_alloc_cntrs++ == xl->max_alloc_cntrs)
-               return &emptyconstraint;
-
        cx = c;
 
        /*
-        * because we modify the constraint, we need
+        * Because we modify the constraint, we need
         * to make a copy. Static constraints come
         * from static const tables.
         *
-        * only needed when constraint has not yet
-        * been cloned (marked dynamic)
+        * Only needed when constraint has not yet
+        * been cloned (marked dynamic).
         */
        if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
 
@@ -2062,6 +2081,22 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
         */
 
        /*
+        * Do not allow scheduling of more than max_alloc_cntrs,
+        * which is set to half the available generic counters.
+        *
+        * This helps avoid counter starvation of the sibling thread
+        * by ensuring at most half the counters cannot be in
+        * exclusive mode. There are no designated counters for the
+        * limits; any N/2 counters can be used. This helps with
+        * events with specific counter constraints.
+        */
+       if (xl->num_alloc_cntrs++ >= xl->max_alloc_cntrs) {
+               /* wipe the GP counters */
+               cx->idxmsk64 &= ~((1ULL << INTEL_PMC_IDX_FIXED) - 1);
+               goto done;
+       }
+
+       /*
         * Modify static constraint with current dynamic
         * state of thread
         *
@@ -2086,6 +2121,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
                        __clear_bit(i, cx->idxmsk);
        }
 
+done:
        /*
         * recompute actual bit weight for scheduling algorithm
         */
--