> On 05-Mar-2021, at 11:20 AM, Athira Rajeev
> wrote:
>
>
>
>> On 24-Feb-2021, at 5:51 PM, Thadeu Lima de Souza Cascardo
>> wrote:
>>
>> EBB events must be under exclusive groups, so there is no mix of EBB and
>> non-EBB events on the same PMU.
in case it is not applicable for the particular arch.
Signed-off-by: Athira Rajeev
---
tools/perf/arch/powerpc/util/event.c | 7 +++
tools/perf/util/event.h | 1 +
tools/perf/util/sort.c | 19 +++
3 files changed, 27 insertions(+)
diff --git a/
Changelog:
Changes from v1 -> v2
Addressed Jiri's review comments:
- Display the new sort dimension 'p_stage_cyc' only
on supported architectures.
- Check for the arch-specific header string when matching
the sort order.
bit weight field.
If the sample type is PERF_SAMPLE_WEIGHT_STRUCT, the memory subsystem
latency is stored in the low 32 bits of the perf_sample_weight structure.
Also, for CPU_FTR_ARCH_31, capture the two cycle counter values in the
two 16-bit fields of the perf_sample_weight structure.
Signed-off-by: Ath
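For reference, a simplified sketch of the layout described above (it mirrors the perf_sample_weight union from the perf UAPI, little-endian variant; the packing helper is hypothetical):

#include <linux/types.h>

/* Simplified mirror of union perf_sample_weight from the perf UAPI
 * (little-endian layout shown; the real definition also carries a
 * big-endian variant). */
union sample_weight {
	__u64 full;
	struct {
		__u32 var1_dw;	/* e.g. memory subsystem latency */
		__u16 var2_w;	/* e.g. one 16-bit cycle counter */
		__u16 var3_w;	/* e.g. second cycle counter (ISA v3.1) */
	};
};

/* Hypothetical helper showing how the three fields could be packed. */
static void pack_weight(union sample_weight *w,
			__u32 mem_latency, __u16 cyc1, __u16 cyc2)
{
	w->var1_dw = mem_latency;
	w->var2_w = cyc1;
	w->var3_w = cyc2;
}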
'var3_w' field of perf_sample_weight.
Add a new sort function, 'Pipeline Stage Cycle', and include it in
default_mem_sort_order[]. This new sort function may be used to denote
some other pipeline stage on another architecture. So add this to the
list of sort entries that can have a dynamic header string. If
the architecture does not have this function, fall back to the
default header string value.
Signed-off-by: Athira Rajeev
---
tools/perf/util/event.h | 1 +
tools/perf/util/sort.c | 19 ++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/event.h b/too
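The fallback behaviour described in that change can be pictured as a weak arch hook; the sketch below is illustrative only (hook name and header strings are assumptions, and the two definitions would live in the generic and powerpc source files respectively):

#include <string.h>

/* Generic definition (common sort code): weak, so an architecture can
 * override it. */
const char * __weak sort_header_for_arch(const char *se_header)
{
	return se_header;	/* default: keep the built-in header */
}

/* powerpc override (arch-specific util code): rename the column that
 * carries the pipeline-stage cycles. */
const char *sort_header_for_arch(const char *se_header)
{
	if (!strcmp(se_header, "Pipeline Stage Cycle"))
		return "Dispatch Cyc";	/* hypothetical arch-specific name */
	return se_header;
}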
lower 32 bits to sample->weight. If the sample type
is 'PERF_SAMPLE_WEIGHT', store the full 64-bit value to sample->weight.
Signed-off-by: Athira Rajeev
---
tools/perf/arch/powerpc/util/Build | 2 ++
tools/perf/arch/powerpc/util/event.c | 32
tools/per
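For context, the weight parsing described above boils down to something like the following sketch (illustrative helper, assuming the perf_sample_weight union from the perf UAPI):

#include <linux/perf_event.h>

/* Illustrative helper: extract the sample weight according to the
 * sample_type bits (union perf_sample_weight is from the perf UAPI). */
static __u64 parse_weight(__u64 raw, __u64 sample_type)
{
	union perf_sample_weight w = { .full = raw };

	if (sample_type & PERF_SAMPLE_WEIGHT)
		return w.full;		/* full 64-bit weight */

	/* PERF_SAMPLE_WEIGHT_STRUCT: only the lower 32 bits hold the weight */
	return w.var1_dw;
}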
> On 12-Mar-2021, at 6:26 PM, Jiri Olsa wrote:
>
> On Tue, Mar 09, 2021 at 09:04:00AM -0500, Athira Rajeev wrote:
>> The pipeline stage cycle details can be recorded on powerpc from
>> the contents of Performance Monitor Unit (PMU) registers. On
>> ISA v3.1 p
> On 12-Mar-2021, at 6:27 PM, Jiri Olsa wrote:
>
> On Tue, Mar 09, 2021 at 09:03:58AM -0500, Athira Rajeev wrote:
>> Currently the header string for different columns in perf report
>> is fixed. Some fields of the perf sample could have a different meaning
>> for different architectures.
'var3_w' field of perf_sample_weight.
Add a new sort function, 'Pipeline Stage Cycle', and include it in
default_mem_sort_order[]. This new sort function may be used to denote
some other pipeline stage on another architecture. So add this to the
list of sort entries that can have a dynamic header string.
lower 32 bits to sample->weight. If the sample type
is 'PERF_SAMPLE_WEIGHT', store the full 64-bit value to sample->weight.
Signed-off-by: Athira Rajeev
---
tools/perf/arch/powerpc/util/Build | 2 ++
tools/perf/arch/powerpc/util/event.c | 32
tools/per
If the architecture does not have this function, fall back to the
default header string value.
Signed-off-by: Athira Rajeev
---
tools/perf/util/event.h | 1 +
tools/perf/util/sort.c | 19 ++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/event
bit weight field.
If the sample type is PERF_SAMPLE_WEIGHT_STRUCT, the memory subsystem
latency is stored in the low 32 bits of the perf_sample_weight structure.
Also, for CPU_FTR_ARCH_31, capture the two cycle counter values in the
two 16-bit fields of the perf_sample_weight structure.
Signed-off-by: Ath
perf_event_exec [kernel.vmlinux] [k] 0xc007ffdd3288
Athira Rajeev (4):
powerpc/p
> On 24-Feb-2021, at 5:51 PM, Thadeu Lima de Souza Cascardo
> wrote:
>
> EBB events must be under exclusive groups, so there is no mix of EBB and
> non-EBB events on the same PMU. This requirement worked fine as perf core
> would not allow other pinned events to be scheduled together with exclusive events.
> Error:
> Invalid --fields key: `srcline_from'
>
> After patch:
>
> $ ./perf report -b -F +srcline_from --stdio
> # Samples: 8K of event 'cycles'
> # Event count (approx.): 8784
> ...
>
> Reported-by: Athira Rajeev
> Fixes: aa6b3c99236b ("perf
> On 03-Mar-2021, at 1:40 AM, Liang, Kan wrote:
>
>
>
> On 3/2/2021 12:08 PM, Thomas Richter wrote:
>> On 3/2/21 4:23 PM, Liang, Kan wrote:
>>>
>>>
>>> On 3/2/2021 9:48 AM, Thomas Richter wrote:
>>>> On 3/2/21 3:03 PM, Liang,
memory policy. The patch adds a fix to dynamically allocate the size of the
two arrays and the bitmask value based on the number of nodes available in the
system. With the fix, the perf numa benchmark works with the node configuration
of any system and the static MAX_NR_NODES value is removed.
Si
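A rough illustration of the dynamic sizing described in that fix, assuming libnuma (array name and helper are made up for the example):

#include <numa.h>
#include <stdlib.h>

/* Illustrative only: size per-node state from the nodes actually
 * present instead of a compile-time MAX_NR_NODES constant. */
static int nr_nodes;
static int *node_cpu_count;

static int alloc_node_state(void)
{
	nr_nodes = numa_max_node() + 1;	/* libnuma: highest node number */
	node_cpu_count = calloc(nr_nodes, sizeof(*node_cpu_count));
	return node_cpu_count ? 0 : -1;
}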
> On 05-Feb-2021, at 8:21 PM, Liang, Kan wrote:
>
>
>
> On 2/5/2021 7:55 AM, Athira Rajeev wrote:
>>>> Because in other archs, the var2_w of 'perf_sample_weight' could be used
>>>> to capture something other than the Local INSTR Latency.
>>>
> On 04-Feb-2021, at 8:49 PM, Liang, Kan wrote:
>
>
>
> On 2/4/2021 8:11 AM, Athira Rajeev wrote:
>>> On 03-Feb-2021, at 1:39 AM, kan.li...@linux.intel.com wrote:
>>>
>>> From: Kan Liang
>>>
>>> The instruction latency information
> @@ -1365,6 +1365,49 @@ struct sort_entry sort_global_weight = {
> .se_width_idx = HISTC_GLOBAL_WEIGHT,
> };
>
> +static u64 he_ins_lat(struct hist_entry *he)
> +{
> + return he->stat.nr_events ? he->stat.ins_lat / he->stat.nr_events : 0;
> +}
> On 13-Jan-2021, at 12:43 AM, Liang, Kan wrote:
>
>
>
> On 1/12/2021 12:24 AM, Athira Rajeev wrote:
>>> On 06-Jan-2021, at 1:27 AM, kan.li...@linux.intel.com wrote:
>>>
>>> From: Kan Liang
>>>
>>> Changes since V3:
>>>
> On 06-Jan-2021, at 1:27 AM, kan.li...@linux.intel.com wrote:
>
> From: Kan Liang
>
> Changes since V3:
> - Rebase on top of acme's perf/core branch
> commit c07b45a355ee ("perf record: Tweak "Lowering..." warning in
> record_opts__config_freq")
>
> Changes since V2:
> - Rebase on top of
> arch/powerpc/perf/power9-pmu.c:101:5: warning: symbol 'p9_dd21_bl_ev'
> was not declared. Should it be static?
> arch/powerpc/perf/power9-pmu.c:115:5: warning: symbol 'p9_dd22_bl_ev'
> was not declared. Should it be static?
>
> Those symbols are used only in the files that define them, so we declare
> them as static to fix the warnings.
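For illustration, the kind of change implied by those warnings (the element type of the array is assumed here, not taken from the file):

/* Before: the array has external linkage, so sparse warns
 * "symbol 'p9_dd21_bl_ev' was not declared. Should it be static?" */
int p9_dd21_bl_ev[] = { /* event codes */ };

/* After: linkage limited to the defining file, warning gone. */
static int p9_dd21_bl_ev[] = { /* event codes */ };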
> On 21-Sep-2020, at 4:55 PM, Wang Wensheng wrote:
>
> Build kernel with `C=2`:
> arch/powerpc/perf/isa207-common.c:24:18: warning: symbol
> 'isa207_pmu_format_attr' was not declared. Should it be static?
> arch/powerpc/perf/power9-pmu.c:101:5: warning: symbol 'p9_dd21_bl_ev'
> was not declared. Should it be static?
> On 28-Jul-2020, at 9:33 PM, Arnaldo Carvalho de Melo wrote:
>
> Em Tue, Jul 28, 2020 at 05:43:47PM +0200, Jiri Olsa escreveu:
>> On Tue, Jul 28, 2020 at 01:57:30AM -0700, Ian Rogers wrote:
>>> From: David Sharp
>>>
>>> evsel__config() would only set PERF_RECORD_PERIOD if it set attr->freq
> On 27-Jul-2020, at 12:29 PM, Ian Rogers wrote:
>
> From: David Sharp
>
> evsel__config() would only set PERF_RECORD_SAMPLE if it set attr->freq
Hi Ian,
The commit message says PERF_RECORD_SAMPLE. But since we are setting the period
here, shouldn't it say "PERF_SAMPLE_PERIOD"?
Thanks
Athira
>
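For context on the naming question above: PERF_RECORD_SAMPLE is a record type, while PERF_SAMPLE_PERIOD is a sample_type bit. A minimal sketch of setting that bit when sampling with a fixed period (illustrative only, not the patch under discussion):

#include <linux/perf_event.h>

/* Illustrative only: with a fixed period (attr->freq == 0), the period
 * can still be included in each sample record by setting the
 * PERF_SAMPLE_PERIOD bit in sample_type. */
static void config_fixed_period(struct perf_event_attr *attr, __u64 period)
{
	attr->freq = 0;
	attr->sample_period = period;
	attr->sample_type |= PERF_SAMPLE_PERIOD;
}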
n see
>>> the above relies on preempt_count() already having been incremented with
>>> NMI_MASK.
>>
>> Hmm. My patch seems simpler.
>
> And your patches fix my error while Peter's do not:
>
>
> IRQs not enabled as expected
> WARNING: CPU: 0 PID:
sample. Hence decide the mask value based on the processor
version.
Signed-off-by: Anju T Sudhakar
[Decide extended mask at run time based on platform]
Signed-off-by: Athira Rajeev
Reviewed-by: Madhavan Srinivasan
---
tools/arch/powerpc/include/uapi/asm/perf_regs.h | 14 ++-
tools/perf/arch
mmcr0 0x82008090
mmcr1 0x1e00
mmcr2 0x0
... thread: perf:4784
Signed-off-by: Anju T Sudhakar
[Defined PERF_REG_EXTENDED_MASK at run time to add support for different
platforms ]
Signed-off-by: Athira Rajeev
Reviewed-by: Madhavan Srinivasan
---
arch/powerpc/include/asm
Patch set to add support for the perf extended register capability in
powerpc. The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to
indicate a PMU which supports extended registers. The generic code
defines the mask of extended registers as 0 for unsupported architectures.
Patch 1/2 defines the
Patch set to add support for the perf extended register capability in
powerpc. The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to
indicate a PMU which supports extended registers. The generic code
defines the mask of extended registers as 0 for unsupported architectures.
Patch 1/2 defines the
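A rough sketch of the runtime mask selection the cover letter describes (the feature checks are real powerpc kernel helpers; the mask macro names and values are placeholders, not the actual defines):

/* Illustrative only: choose the extended-regs mask by processor version. */
#define EXT_REGS_MASK_ISA300	0x0ffULL	/* assumed value */
#define EXT_REGS_MASK_ISA31	0x3ffULL	/* assumed value */

static u64 get_ext_regs_mask(void)
{
	if (cpu_has_feature(CPU_FTR_ARCH_31))
		return EXT_REGS_MASK_ISA31;
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		return EXT_REGS_MASK_ISA300;
	return 0;	/* extended regs not supported */
}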
mmcr0 0x82008090
mmcr1 0x1e00
mmcr2 0x0
... thread: perf:4784
Signed-off-by: Anju T Sudhakar
[Defined PERF_REG_EXTENDED_MASK at run time to add support for different
platforms ]
Signed-off-by: Athira Rajeev
---
arch/powerpc/include/asm/perf_event_server.h | 8 +++
arch
sample. Hence decide the mask value based on the processor
version.
Signed-off-by: Anju T Sudhakar
[Decide extended mask at run time based on platform]
Signed-off-by: Athira Rajeev
---
tools/arch/powerpc/include/uapi/asm/perf_regs.h | 14 ++-
tools/perf/arch/powerpc/include/perf_regs.h
0x1e00
mmcr2 0x0
... thread: perf:4784
Signed-off-by: Anju T Sudhakar
[Defined PERF_REG_EXTENDED_MASK at run time to add support for different
platforms ]
Signed-off-by: Athira Rajeev
---
Changes from v1 -> v2
- PERF_REG_EXTENDED_MASK is defined at runtime in the kernel
based on the platform.