On 8/10/2020 6:47 PM, Peter Zijlstra wrote:
On Mon, Aug 10, 2020 at 06:38:35PM -0400, Liang, Kan wrote:
On 8/10/2020 5:47 PM, Dave Hansen wrote:
It's probably best if we very carefully define up front what is getting
reported here. For instance, I believe we already have some fun cases
On 8/10/2020 5:47 PM, Dave Hansen wrote:
On 8/10/20 2:24 PM, Kan Liang wrote:
+static u64 __perf_get_page_size(struct mm_struct *mm, unsigned long addr)
+{
+ struct page *page;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+
On 8/10/2020 5:41 PM, Peter Zijlstra wrote:
On Mon, Aug 10, 2020 at 02:24:23PM -0700, Kan Liang wrote:
From: Stephane Eranian
When studying code layout, it is useful to capture the page size of the
sampled code address.
Add a new sample type for code page size.
The new sample type
On 8/10/2020 5:40 PM, Peter Zijlstra wrote:
On Mon, Aug 10, 2020 at 02:24:22PM -0700, Kan Liang wrote:
The new sample type, PERF_SAMPLE_DATA_PAGE_SIZE, requires the virtual
address. Update the data->addr if the sample type is set.
The large PEBS is disabled with the sample type, because
On 8/10/2020 5:39 PM, Peter Zijlstra wrote:
On Mon, Aug 10, 2020 at 02:24:21PM -0700, Kan Liang wrote:
Current perf can report both virtual addresses and physical addresses,
but not the page size. Without the page size information of the utilized
page, users cannot decide whether to
On 7/30/2020 12:44 PM, pet...@infradead.org wrote:
On Thu, Jul 30, 2020 at 11:54:35AM -0400, Liang, Kan wrote:
On 7/30/2020 8:58 AM, pet...@infradead.org wrote:
On Thu, Jul 30, 2020 at 05:38:15AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The counter value of a perf task may
On 7/30/2020 8:58 AM, pet...@infradead.org wrote:
On Thu, Jul 30, 2020 at 05:38:15AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The counter value of a perf task may leak to another RDPMC task.
Sure, but nowhere did you explain why that is a problem.
The RDPMC instruction
On 7/28/2020 9:44 AM, pet...@infradead.org wrote:
On Tue, Jul 28, 2020 at 09:28:39AM -0400, Liang, Kan wrote:
On 7/28/2020 9:09 AM, Peter Zijlstra wrote:
On Fri, Jul 24, 2020 at 03:10:52PM -0400, Liang, Kan wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
On 7/28/2020 9:09 AM, Peter Zijlstra wrote:
On Fri, Jul 24, 2020 at 03:10:52PM -0400, Liang, Kan wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 6cb079e0c9d9..010ac74afc09 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
On 7/24/2020 12:07 PM, Liang, Kan wrote:
On 7/24/2020 11:27 AM, pet...@infradead.org wrote:
On Fri, Jul 24, 2020 at 03:19:06PM +0200, pet...@infradead.org wrote:
On Thu, Jul 23, 2020 at 10:11:11AM -0700, kan.li...@linux.intel.com
wrote:
@@ -3375,6 +3428,72 @@ static int
On 7/24/2020 12:43 PM, pet...@infradead.org wrote:
On Fri, Jul 24, 2020 at 04:59:34PM +0200, Peter Zijlstra wrote:
On Fri, Jul 24, 2020 at 07:46:32AM -0700, Andi Kleen wrote:
Something that seems to 'work' is:
'{cycles,cpu/instructions,period=5/}', so maybe you can make the
group
On 7/24/2020 11:27 AM, pet...@infradead.org wrote:
On Fri, Jul 24, 2020 at 03:19:06PM +0200, pet...@infradead.org wrote:
On Thu, Jul 23, 2020 at 10:11:11AM -0700, kan.li...@linux.intel.com wrote:
@@ -3375,6 +3428,72 @@ static int intel_pmu_hw_config(struct perf_event *event)
if
On 7/24/2020 9:54 AM, Peter Zijlstra wrote:
On Fri, Jul 24, 2020 at 09:43:44AM -0400, Liang, Kan wrote:
On 7/24/2020 7:46 AM, pet...@infradead.org wrote:
On Fri, Jul 24, 2020 at 12:55:43PM +0200, pet...@infradead.org wrote:
+ event_sched_out(event, cpuctx, ctx
On 7/24/2020 7:46 AM, pet...@infradead.org wrote:
On Fri, Jul 24, 2020 at 12:55:43PM +0200, pet...@infradead.org wrote:
+ event_sched_out(event, cpuctx, ctx);
+ perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+}
Ah, so the problem here is that ERROR is actually recoverable
On 7/21/2020 9:10 AM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:51AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
Users fail to sample-read the slots and metrics events, e.g.,
perf record -e '{slots, topdown-retiring}:S'.
When reading the metrics event, the fixed
On 7/20/2020 1:33 PM, Cyrill Gorcunov wrote:
On Mon, Jul 20, 2020 at 06:50:51AM -0700, kan.li...@linux.intel.com wrote:
...
static unsigned int __init get_xsave_size(void)
{
unsigned int eax, ebx, ecx, edx;
@@ -710,7 +741,7 @@ static int __init init_xstate_size(void)
On 7/21/2020 10:31 AM, pet...@infradead.org wrote:
On Tue, Jul 21, 2020 at 10:23:36AM -0400, Liang, Kan wrote:
Patch 13 forces the slots event to be part of a metric group. In patch 7,
for a metric group, we only update the values once with slots event.
I think the normal case mentioned
On 7/21/2020 8:40 AM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:49AM -0700, kan.li...@linux.intel.com wrote:
+static inline u64 icl_get_metrics_event_value(u64 metric, u64 slots, int idx)
+{
+ u32 val;
+
+ /*
+* The metric is reported as an 8bit integer
On 7/21/2020 5:43 AM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:47AM -0700, kan.li...@linux.intel.com wrote:
@@ -1031,6 +1034,35 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int
n, int *assign)
return unsched ? -EINVAL : 0;
}
+static int
On 7/20/2020 12:22 PM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:46AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
Many items are checked in the intel_pmu_disable/enable_event. More items
will be added later, e.g. perf metrics events.
Use switch, which is more
On 7/20/2020 12:20 PM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:43AM -0700, kan.li...@linux.intel.com wrote:
/*
+ * There is no event-code assigned to the fixed-mode PMCs.
+ *
+ * For a fixed-mode PMC, which has an equivalent event on a general-purpose
+ * PMC, the event-code of
On 7/20/2020 1:41 PM, Peter Zijlstra wrote:
On Fri, Jul 17, 2020 at 07:05:47AM -0700, kan.li...@linux.intel.com wrote:
For the event mapping, a special 0x00 event code is used, which is
reserved for fake events. The metric events start from umask 0x10.
+#define INTEL_PMC_IDX_METRIC_BASE
On 7/9/2020 7:00 PM, Dave Hansen wrote:
On 7/8/20 2:51 AM, tip-bot2 for Kan Liang wrote:
diff --git a/arch/x86/include/asm/cpufeatures.h
b/arch/x86/include/asm/cpufeatures.h
index 02dabc9..72ba4c5 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@
On 7/7/2020 3:48 PM, Bjorn Helgaas wrote:
[+cc Stephane in case he has thoughts on the perf driver claim issue]
On Thu, Jul 02, 2020 at 10:05:11AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
On Snow Ridge server, several performance monitoring counters are added
in the Root
On 7/3/2020 4:59 PM, Liang, Kan wrote:
On 7/3/2020 3:50 PM, Peter Zijlstra wrote:
On Fri, Jul 03, 2020 at 05:49:19AM -0700, kan.li...@linux.intel.com
wrote:
+static void intel_pmu_store_lbr(struct cpu_hw_events *cpuc,
+ struct lbr_entry *entries)
+{
+ struct
On 7/6/2020 6:25 AM, Peter Zijlstra wrote:
On Fri, Jul 03, 2020 at 04:59:49PM -0400, Liang, Kan wrote:
On 7/3/2020 3:50 PM, Peter Zijlstra wrote:
If I'm not mistaken, this correctly deals with LBR_FORMAT_INFO, so can't
we also use the intel_pmu_arch_lbr_read() function for that case
On 7/3/2020 3:50 PM, Peter Zijlstra wrote:
On Fri, Jul 03, 2020 at 05:49:19AM -0700, kan.li...@linux.intel.com wrote:
+static void intel_pmu_store_lbr(struct cpu_hw_events *cpuc,
+ struct lbr_entry *entries)
+{
+ struct perf_branch_entry *e;
+ struct
On 7/2/2020 3:40 AM, Peter Zijlstra wrote:
On Sat, Jun 13, 2020 at 04:09:45PM +0800, Like Xu wrote:
Like Xu (10):
perf/x86/core: Refactor hw->idx checks and cleanup
perf/x86/lbr: Add interface to get LBR information
perf/x86: Add constraint to create guest LBR event without hw
On 6/30/2020 11:49 AM, Peter Zijlstra wrote:
On Fri, Jun 26, 2020 at 11:20:11AM -0700, kan.li...@linux.intel.com wrote:
+ if (boot_cpu_has(X86_FEATURE_ARCH_LBR))
+ intel_pmu_arch_lbr_init();
+static inline bool is_lbr_call_stack_bit_set(u64 config)
+{
+ if
On 6/30/2020 11:01 AM, Peter Zijlstra wrote:
On Fri, Jun 26, 2020 at 11:20:05AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The LBR capabilities of Architecture LBR are retrieved from the CPUID
enumeration once at boot time. The capabilities have to be saved for
future usage.
On 6/30/2020 10:57 AM, Peter Zijlstra wrote:
On Fri, Jun 26, 2020 at 11:20:06AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The KVM may not support the MSRs of Architecture LBR. Accessing the
MSRs may cause #GP and crash the guest.
The MSRs have to be checked at guest boot
On 6/26/2020 2:19 PM, kan.li...@linux.intel.com wrote:
From: Kan Liang
CPUID.(EAX=07H, ECX=0):EDX[19] indicates whether the Intel CPU supports
Architectural LBRs.
The Architectural Last Branch Records (LBR) feature enables recording
of software path history by logging taken branches and other
On 6/22/2020 2:49 PM, Cyrill Gorcunov wrote:
On Fri, Jun 19, 2020 at 07:04:09AM -0700, kan.li...@linux.intel.com wrote:
...
+static void intel_pmu_arch_lbr_read_xsave(struct cpu_hw_events *cpuc)
+{
+ struct x86_perf_task_context_arch_lbr_xsave *xsave = cpuc->lbr_xsave;
+ struct
On 6/22/2020 2:05 PM, Dave Hansen wrote:
On 6/22/20 10:47 AM, Liang, Kan wrote:
I'm wondering if we should just take these copy_*regs_to_*() functions
and uninline them. Yeah, they are basically wrapping one instruction,
but it might literally be the most heavyweight instruction
On 6/22/2020 11:02 AM, Dave Hansen wrote:
On 6/22/20 7:52 AM, Liang, Kan wrote:
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -58,6 +58,7 @@ static short xsave_cpuid_features[] __initdata = {
* XSAVE buffer, both supervisor and user xstates.
*/
u64
On 6/19/2020 3:31 PM, Peter Zijlstra wrote:
On Fri, Jun 19, 2020 at 07:04:05AM -0700, kan.li...@linux.intel.com wrote:
KVM includes the header file fpu/internal.h. To avoid an 'undefined
xfeatures_mask_all' compile error, xfeatures_mask_all has to be
exported.
diff --git
On 6/19/2020 3:41 PM, Peter Zijlstra wrote:
On Fri, Jun 19, 2020 at 07:04:08AM -0700, kan.li...@linux.intel.com wrote:
The XSAVE instruction requires 64-byte alignment for state buffers. A
64-byte aligned kmem_cache is created for architecture LBR.
+ pmu->task_ctx_cache =
On 6/19/2020 3:08 PM, Peter Zijlstra wrote:
On Fri, Jun 19, 2020 at 07:04:00AM -0700, kan.li...@linux.intel.com wrote:
+static void intel_pmu_arch_lbr_enable(bool pmi)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ u64 debugctl, lbr_ctl = 0, orig_debugctl;
+
+
On 6/19/2020 2:40 PM, Peter Zijlstra wrote:
On Fri, Jun 19, 2020 at 07:03:59AM -0700, kan.li...@linux.intel.com wrote:
- if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
+ if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map || x86_pmu.lbr_ctl_map) {
+ union {
+
On 5/28/2020 10:02 AM, Andi Kleen wrote:
+
+ pr_warn_once("perf uncore: Access invalid address of %s.\n",
+box->pmu->type->name);
Pretty hard to debug without the invalid offset.
I will dump the box->io_addr and offset for debugging.
Please don't overengineer.
On 5/28/2020 9:33 AM, David Laight wrote:
From: kan.li...@linux.intel.com
Sent: 28 May 2020 14:15
...
+static inline bool is_valid_mmio_offset(struct intel_uncore_box *box,
+ unsigned long offset)
You need a better name; it needs to start with 'uncore_' and
On 5/28/2020 9:30 AM, Andi Kleen wrote:
On Thu, May 28, 2020 at 06:15:27AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
An oops will be triggered if perf tries to access an invalid address
which exceeds the mapped area.
Check the address before the actual access to MMIO space
On 5/28/2020 9:29 AM, Andi Kleen wrote:
On Thu, May 28, 2020 at 06:15:26AM -0700, kan.li...@linux.intel.com wrote:
- box->io_addr = ioremap(addr, SNB_UNCORE_PCI_IMC_MAP_SIZE);
+ if (!type->mmio_map_size) {
+ pr_warn("perf uncore: Cannot ioremap for %s. Size of map
On 5/27/2020 11:17 AM, David Laight wrote:
From: Liang, Kan
Sent: 27 May 2020 16:01
On 5/27/2020 10:51 AM, David Laight wrote:
From: Liang, Kan
Sent: 27 May 2020 15:47
On 5/27/2020 8:59 AM, David Laight wrote:
From: kan.li...@linux.intel.com
Sent: 27 May 2020 13:31
From: Kan Liang
On 5/27/2020 10:51 AM, David Laight wrote:
From: Liang, Kan
Sent: 27 May 2020 15:47
On 5/27/2020 8:59 AM, David Laight wrote:
From: kan.li...@linux.intel.com
Sent: 27 May 2020 13:31
From: Kan Liang
When counting IMC uncore events on some TGL machines, an oops will be
triggered
On 5/27/2020 8:59 AM, David Laight wrote:
From: kan.li...@linux.intel.com
Sent: 27 May 2020 13:31
From: Kan Liang
When counting IMC uncore events on some TGL machines, an oops will be
triggered.
[ 393.101262] BUG: unable to handle page fault for address:
b45200e15858
[
Hi Peter,
Could you please take a look the patch, and apply the patch if it's OK?
Thanks,
Kan
On 4/2/2020 3:52 PM, kan.li...@linux.intel.com wrote:
From: Kan Liang
The uncore subsystem on Comet Lake is similar to Sky Lake.
The only difference is the new PCI IDs for IMC.
Share the perf code
On 10/22/2019 5:39 AM, Peter Zijlstra wrote:
On Mon, Oct 21, 2019 at 01:03:02PM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
In LBR call stack mode, the depth of the reconstructed LBR call stack is
limited to the number of LBR registers. With LBR Top-of-Stack (TOS) information,
perf
On 10/16/2019 5:50 AM, Alexey Budankov wrote:
Implement intel_pmu_lbr_sync_task_ctx() method that updates counter
of the events that requested LBR callstack data on a sample.
The counter can be zero for the case when task context belongs to
a thread that has just come from a block on a
On 10/8/2019 10:38 AM, Peter Zijlstra wrote:
On Tue, Oct 08, 2019 at 09:53:24AM -0400, Liang, Kan wrote:
On 10/8/2019 4:31 AM, Peter Zijlstra wrote:
On Mon, Oct 07, 2019 at 10:59:01AM -0700, kan.li...@linux.intel.com wrote:
diff --git a/include/linux/perf_event.h b/include/linux
On 10/8/2019 4:31 AM, Peter Zijlstra wrote:
On Mon, Oct 07, 2019 at 10:59:01AM -0700, kan.li...@linux.intel.com wrote:
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 61448c19a132..ee9ef0c4cb08 100644
--- a/include/linux/perf_event.h
+++
On 10/7/2019 2:24 PM, Ingo Molnar wrote:
* kan.li...@linux.intel.com wrote:
Performance impact:
The processing time may increase with the LBR stitching approach
enabled. The impact depends on the number of samples with stitched LBRs.
For sqlite's tcltest,
perf record --call-graph lbr --
On 10/7/2019 8:01 AM, Paolo Bonzini wrote:
On 30/09/19 09:22, Like Xu wrote:
-static int perf_event_period(struct perf_event *event, u64 __user *arg)
+static int _perf_event_period(struct perf_event *event, u64 value)
__perf_event_period or perf_event_period_locked would be more consistent
On 9/30/2019 11:52 AM, Peter Zijlstra wrote:
On Mon, Sep 16, 2019 at 06:41:22AM -0700, kan.li...@linux.intel.com wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 71f3086a8adc..7ec0f350d2ac 100644
--- a/arch/x86/events/intel/core.c
+++
On 9/30/2019 12:21 PM, Peter Zijlstra wrote:
{
int idx = event->hwc.idx;
if (is_metric_idx(idx))
return;
// must be FIXED_SLOTS
The FIXED_SLOTS may not be in the group.
Argh.. can we mandate that it is? that is, if you want a metric thing,
you have
On 9/30/2019 10:53 AM, Peter Zijlstra wrote:
After that, I think we can simply do something like:
icl_update_topdown_event(..)
We should call this function in x86_pmu_commit_txn()?
In intel_pmu_read_event(), we simply return when TXN_READ is set
and is_topdown_count().
If so, it
Hi Peter,
Could you please take a look at the patch set?
Thanks,
Kan
On 9/16/2019 9:41 AM, kan.li...@linux.intel.com wrote:
From: Kan Liang
Icelake has support for measuring the level 1 TopDown metrics
directly in hardware. This is implemented by an additional METRICS
register, and a new
On 8/31/2019 5:19 AM, Peter Zijlstra wrote:
Then there is no mucking about with that odd counter/metrics msr pair
reset nonsense. Because that really stinks.
You have to write them to reset the internal counters.
But not for every read, only on METRIC_OVF.
The precision is lost if the
On 8/29/2019 9:52 AM, Peter Zijlstra wrote:
On Thu, Aug 29, 2019 at 09:31:37AM -0400, Liang, Kan wrote:
On 8/28/2019 11:19 AM, Peter Zijlstra wrote:
+static int icl_set_topdown_event_period(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ s64 left = local64_r
On 8/28/2019 11:19 AM, Peter Zijlstra wrote:
+static int icl_set_topdown_event_period(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ s64 left = local64_read(&hwc->period_left);
+
+ /*
+* Clear PERF_METRICS and Fixed counter 3 in initialization.
+
On 8/28/2019 11:02 AM, Peter Zijlstra wrote:
Reset
==
The PERF_METRICS and Fixed counter 3 have to be reset for each read,
because:
- The 8bit metrics ratio values lose precision when the measurement
period gets longer.
So it mustn't be too hot,
- The PERF_METRICS may report wrong
On 8/28/2019 3:52 AM, Peter Zijlstra wrote:
On Mon, Aug 26, 2019 at 07:47:34AM -0700, kan.li...@linux.intel.com wrote:
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 81b005e4c7d9..54534ff00940 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1033,18
On 8/28/2019 5:02 AM, Peter Zijlstra wrote:
On Wed, Aug 28, 2019 at 10:44:16AM +0200, Peter Zijlstra wrote:
Let me clean up this mess for you.
Here, how's that. Now we don't check is_metric_idx() _3_ times on the
enable/disable path and all the topdown crud is properly placed in the
fixed
+ /*
+* The new group must be schedulable
+* together with current pinned events.
+* Otherwise, it will never get a chance
+* to be scheduled later.
That's wrapped short; also I don't think it is sufficient; what if you
happen to have a pinned event on
On 8/20/2019 10:10 AM, Peter Zijlstra wrote:
On Fri, Aug 16, 2019 at 10:49:10AM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
perf stat -M metrics relies on weak groups to reject unschedulable
groups and run them as non-groups.
This uses the group validation code in the kernel.
On 8/14/2019 11:59 PM, Haiyan Song wrote:
Add a Intel event file for perf.
Signed-off-by: Haiyan Song
Reviewed-by: Kan Liang
Thanks,
Kan
---
tools/perf/pmu-events/arch/x86/mapfile.csv | 1 +
tools/perf/pmu-events/arch/x86/tremontx/cache.json | 111 ++
Hi Peter,
Any comments for this series?
Thanks,
Kan
On 7/24/2019 1:17 PM, kan.li...@linux.intel.com wrote:
From: Kan Liang
Icelake has support for measuring the level 1 TopDown metrics
directly in hardware. This is implemented by an additional METRICS
register, and a new Fixed Counter 3
On 7/23/2019 9:45 PM, Eric Biggers wrote:
Title: WARNING in perf_reg_value
Last occurred: 25 days ago
Reported: 34 days ago
Branches: Mainline and others
Signed-off-by: Yunying Sun
Thanks Yunying.
Reviewed-by: Kan Liang
Kan
---
arch/x86/events/intel/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9e911a96972b..b35519cbc8b4 100644
--- a/arch
On 7/24/2019 2:32 AM, Haiyan Song wrote:
Hi,
This patch contains lines longer than 998 characters.
I've sent it with 'git send-email', but applying it fails with
"error: corrupt patch at line 2558".
https://lkml.org/lkml/2019/6/24/1278
I checked the line at 2558, it is
On 6/26/2019 9:47 AM, Arnaldo Carvalho de Melo wrote:
On Wed, Jun 26, 2019 at 08:04:36PM +0900, Masanari Iida wrote:
This patch fix some spelling typo in x86/*/floating-point.json
These are auto-generated files, glad that you CCed your fixes to the
Intel folks, hopefully they will in
On 6/20/2019 8:50 AM, Peter Zijlstra wrote:
On Mon, Jun 17, 2019 at 09:41:37PM +0800, Zhang Rui wrote:
After S3 suspend/resume, "perf stat -I 1000 -e power/energy-pkg/ -a"
reports an insane value for the very first sampling period after resume
as shown below.
19.278989977
On 6/19/2019 4:07 PM, Vince Weaver wrote:
On Wed, 19 Jun 2019, syzbot wrote:
syzbot found the following crash on:
HEAD commit: 0011572c Merge branch 'for-5.2-fixes' of git://git.kernel...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12c38d66a0
On 6/17/2019 11:56 AM, Arnaldo Carvalho de Melo wrote:
On Mon, Jun 17, 2019 at 04:21:56PM +0200, Geert Uytterhoeven wrote:
- Do not use apostrophes for plurals,
- Insert commas before "and",
- Spelling s/statisfied/satisfied/.
I think these files are generated from some other
On 6/14/2019 3:10 PM, Stephane Eranian wrote:
On Thu, Jun 13, 2019 at 9:13 AM Liang, Kan wrote:
On 6/1/2019 4:27 AM, Ian Rogers wrote:
Currently perf_rotate_context assumes that if the context's nr_events !=
nr_active a rotation is necessary for perf event multiplexing. With
cgroups
On 6/13/2019 9:48 PM, Haiyan Song wrote:
diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv
b/tools/perf/pmu-events/arch/x86/mapfile.csv
index d6984a3017e0..f8357a79641a 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -33,4
On 6/14/2019 7:28 AM, Jiri Olsa wrote:
hi,
the HPE server can do POST tracing and have enabled LBR
tracing during the boot, which makes check_msr fail falsely.
It looks like check_msr code was added only to check on guests
MSR access, would it be then ok to disable check_msr for real
On 6/1/2019 4:27 AM, Ian Rogers wrote:
Currently perf_rotate_context assumes that if the context's nr_events !=
nr_active a rotation is necessary for perf event multiplexing. With
cgroups, nr_events is the total count of events for all cgroups and
nr_active will not include events in a cgroup
On 6/6/2019 4:08 PM, Arnaldo Carvalho de Melo wrote:
On Thu, Jun 06, 2019 at 04:12:10PM -0300, Arnaldo Carvalho de Melo wrote:
On Tue, Jun 04, 2019 at 03:50:41PM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
With the new CPUID.1F, a new level type of CPU topology, 'die',
On 5/29/2019 12:58 PM, Peter Zijlstra wrote:
On Wed, May 29, 2019 at 10:42:10AM -0400, Liang, Kan wrote:
On 5/29/2019 3:54 AM, Peter Zijlstra wrote:
cd09c0c40a97 ("perf events: Enable raw event support for Intel
unhalted_reference_cycles event")
We used the fake event=0x00,
On 6/3/2019 12:36 PM, Jiri Olsa wrote:
On Thu, May 30, 2019 at 07:53:47AM -0700, kan.li...@linux.intel.com wrote:
SNIP
+
static int perf_env__get_core(struct cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int core = -1, cpu = perf_env__get_cpu(env,
On 6/3/2019 12:34 PM, Peter Zijlstra wrote:
On Tue, Apr 30, 2019 at 05:53:42PM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The patch series intends to enable perf uncore support for Snow Ridge
server.
Here is the link for the uncore document.
On 6/3/2019 11:47 AM, Peter Zijlstra wrote:
On Mon, Jun 03, 2019 at 06:41:21AM -0700, kan.li...@linux.intel.com wrote:
@@ -4962,7 +4965,9 @@ __init int intel_pmu_init(void)
x86_pmu.cpu_events = get_icl_events_attrs();
x86_pmu.rtm_abort_event =
On 5/29/2019 3:57 AM, Peter Zijlstra wrote:
On Tue, May 28, 2019 at 02:24:56PM -0400, Liang, Kan wrote:
On 5/28/2019 9:48 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
diff --git a/include/linux/perf_event.h b/include/linux
On 5/29/2019 3:34 AM, Peter Zijlstra wrote:
+ wrmsrl(MSR_PERF_METRICS, 0);
+ wrmsrl(MSR_CORE_PERF_FIXED_CTR3, 0);
I don't get this, overflow happens on when we flip sign, so why is
programming 0 a sane thing to do?
Reset the counters (programming
On 5/29/2019 3:54 AM, Peter Zijlstra wrote:
On Tue, May 28, 2019 at 02:24:38PM -0400, Liang, Kan wrote:
On 5/28/2019 9:43 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
@@ -3287,6 +3304,13 @@ static int core_pmu_hw_config(struct
On 5/29/2019 3:28 AM, Peter Zijlstra wrote:
On Tue, May 28, 2019 at 02:21:49PM -0400, Liang, Kan wrote:
On 5/28/2019 8:15 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:48PM -0700, kan.li...@linux.intel.com wrote:
+/*
+ * We model PERF_METRICS as more magic fixed-mode PMCs, one
On 5/28/2019 5:00 AM, Jiri Olsa wrote:
On Thu, May 23, 2019 at 01:41:19PM -0700, kan.li...@linux.intel.com wrote:
SNIP
diff --git a/tools/perf/util/cputopo.c b/tools/perf/util/cputopo.c
index ece0710..f6e7db7 100644
--- a/tools/perf/util/cputopo.c
+++ b/tools/perf/util/cputopo.c
@@ -1,5
On 5/28/2019 4:59 AM, Jiri Olsa wrote:
On Thu, May 23, 2019 at 01:41:21PM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
The "sibling cores" actually shows the sibling CPUs of a socket.
The name "sibling cores" is very misleading.
Rename "sibling cores" to "sibling sockets"
by
On 5/28/2019 9:52 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:53PM -0700, kan.li...@linux.intel.com wrote:
From: Kan Liang
To get correct PERF_METRICS value, the fixed counter 3 must start from
0. It would bring problems when sampling read slots and topdown events.
For example,
On 5/28/2019 9:48 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index b980b9e95d2a..0d7081434d1d 100644
--- a/include/linux/perf_event.h
+++
On 5/28/2019 9:43 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
@@ -3287,6 +3304,13 @@ static int core_pmu_hw_config(struct perf_event *event)
return intel_pmu_bts_config(event);
}
+#define EVENT_CODE(e) (e->attr.config &
On 5/28/2019 9:30 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
+static u64 icl_metric_update_event(struct perf_event *event, u64 val)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc =
On 5/28/2019 8:43 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:50PM -0700, kan.li...@linux.intel.com wrote:
The 8bit metrics ratio values lose precision when the measurement period
gets longer.
To avoid this we always reset the metric value when reading, as we
already accumulate
On 5/28/2019 8:20 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:49PM -0700, kan.li...@linux.intel.com wrote:
From: Andi Kleen
The internal counters used for the metrics can overflow. If this happens
an overflow is triggered on the SLOTS fixed counter. Add special code
that resets
On 5/28/2019 8:15 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:48PM -0700, kan.li...@linux.intel.com wrote:
+/*
+ * We model PERF_METRICS as more magic fixed-mode PMCs, one for each metric
+ * and another for the whole slots counter
+ *
+ * Internally they all map to Fixed Ctr 3
On 5/28/2019 8:05 AM, Peter Zijlstra wrote:
On Tue, May 21, 2019 at 02:40:48PM -0700, kan.li...@linux.intel.com wrote:
From: Andi Kleen
Metrics counters (hardware counters containing multiple metrics)
are modeled as separate registers for each TopDown metric events,
with an extra reg being
On 5/28/2019 10:05 AM, Peter Zijlstra wrote:
On Tue, May 28, 2019 at 09:33:40AM -0400, Liang, Kan wrote:
Uncore PMU doesn't support sampling. It will return -EINVAL.
There is no regs support for counting. The request will be ignored.
I think current check for uncore is good enough
On 5/28/2019 4:56 AM, Peter Zijlstra wrote:
On Mon, May 27, 2019 at 12:07:55PM -0700, kan.li...@linux.intel.com wrote:
diff --git a/arch/x86/include/uapi/asm/perf_regs.h
b/arch/x86/include/uapi/asm/perf_regs.h
index ac67bbe..3a96971 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++