On Fri, Oct 02, 2020 at 09:16:11AM -0400, Liang, Kan wrote:
> Tested-by: Kan Liang <kan.li...@linux.intel.com>

---
Subject: perf/x86: Fix n_metric for cancelled txn
From: Peter Zijlstra <pet...@infradead.org>
Date: Mon Oct  5 10:10:24 CEST 2020

When a group that has TopDown members fails to be scheduled, any
later TopDown group will not return valid values.

Here is an example.

A background perf session occupies all the GP counters and the fixed
counter 1.
 $perf stat -e "{cycles,cycles,cycles,cycles,cycles,cycles,cycles,
                 cycles,cycles}:D" -a

A user monitors a TopDown group. It works well, because the fixed
counter 3 and the PERF_METRICS are available.
 $perf stat -x, --topdown -- ./workload
   retiring,bad speculation,frontend bound,backend bound,
   18.0,16.1,40.4,25.5,

Then the user tries to monitor a group that has TopDown members.
Because of the cycles event, the group fails to be scheduled.
 $perf stat -x, -e '{slots,topdown-retiring,topdown-be-bound,
                     topdown-fe-bound,topdown-bad-spec,cycles}'
                     -- ./workload
    <not counted>,,slots,0,0.00,,
    <not counted>,,topdown-retiring,0,0.00,,
    <not counted>,,topdown-be-bound,0,0.00,,
    <not counted>,,topdown-fe-bound,0,0.00,,
    <not counted>,,topdown-bad-spec,0,0.00,,
    <not counted>,,cycles,0,0.00,,

The user tries to monitor a TopDown group again. It doesn't work anymore.
 $perf stat -x, --topdown -- ./workload

    ,,,,,

In a txn, cancel_txn() truncates the event_list for a cancelled
group and updates the number of events added in this transaction.
However, the number of TopDown events added in this transaction is not
updated, so the kernel may subsequently fail to add new TopDown events.

Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics")
Reported-by: Andi Kleen <a...@linux.intel.com>
Reported-by: Kan Liang <kan.li...@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Tested-by: Kan Liang <kan.li...@linux.intel.com>
---
 arch/x86/events/core.c       |    3 +++
 arch/x86/events/perf_event.h |    1 +
 2 files changed, 4 insertions(+)

--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1066,6 +1066,7 @@ static int add_nr_metric_event(struct cp
                if (cpuc->n_metric == INTEL_TD_METRIC_NUM)
                        return -EINVAL;
                cpuc->n_metric++;
+               cpuc->n_txn_metric++;
        }
 
        return 0;
@@ -2065,6 +2066,7 @@ static void x86_pmu_start_txn(struct pmu
        perf_pmu_disable(pmu);
        __this_cpu_write(cpu_hw_events.n_txn, 0);
        __this_cpu_write(cpu_hw_events.n_txn_pair, 0);
+       __this_cpu_write(cpu_hw_events.n_txn_metric, 0);
 }
 
 /*
@@ -2091,6 +2093,7 @@ static void x86_pmu_cancel_txn(struct pm
        __this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn));
        __this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn));
        __this_cpu_sub(cpu_hw_events.n_pair, __this_cpu_read(cpu_hw_events.n_txn_pair));
+       __this_cpu_sub(cpu_hw_events.n_metric, __this_cpu_read(cpu_hw_events.n_txn_metric));
        perf_pmu_enable(pmu);
 }
 
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -236,6 +236,7 @@ struct cpu_hw_events {
        int                     n_txn;    /* the # last events in the below arrays;
                                             added in the current transaction */
        int                     n_txn_pair;
+       int                     n_txn_metric;
        int                     assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
        u64                     tags[X86_PMC_IDX_MAX];
 