Commit-ID: ada2f634cd50d050269b67b4e2966582387e7c27
Gitweb: http://git.kernel.org/tip/ada2f634cd50d050269b67b4e2966582387e7c27
Author: Vikas Shivappa
AuthorDate: Thu, 10 Mar 2016 15:32:08 -0800
Committer: Ingo Molnar
CommitDate: Mon, 21 Mar 2016 09:08:19 +0100
perf/x86/cqm: Fix CQM
Commit-ID: a223c1c7ab4cc64537dc4b911f760d851683768a
Gitweb: http://git.kernel.org/tip/a223c1c7ab4cc64537dc4b911f760d851683768a
Author: Vikas Shivappa
AuthorDate: Thu, 10 Mar 2016 15:32:07 -0800
Committer: Ingo Molnar
CommitDate: Mon, 21 Mar 2016 09:08:18 +0100
perf/x86/cqm: Fix CQM
Commit-ID: 2d4de8376ff1d94a5070cfa9092c59bfdc4e693e
Gitweb: http://git.kernel.org/tip/2d4de8376ff1d94a5070cfa9092c59bfdc4e693e
Author: Vikas Shivappa
AuthorDate: Thu, 10 Mar 2016 15:32:11 -0800
Committer: Ingo Molnar
CommitDate: Mon, 21 Mar 2016 09:08:20 +0100
perf/x86/mbm: Implement
Commit-ID: 33c3cc7acfd95968d74247f1a4e1b0727a07ed43
Gitweb: http://git.kernel.org/tip/33c3cc7acfd95968d74247f1a4e1b0727a07ed43
Author: Vikas Shivappa
AuthorDate: Thu, 10 Mar 2016 15:32:09 -0800
Committer: Ingo Molnar
CommitDate: Mon, 21 Mar 2016 09:08:19 +0100
perf/x86/mbm: Add Intel
Please see if the branch below works for you:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/core
You mean I test the mbm patches on top of this?
I see you applied the mbm patches here already
Thanks,
Vikas
On Fri, 11 Mar 2016, Peter Zijlstra wrote:
On Thu, Mar 10, 2016 at 03:32:06PM -0800, Vikas Shivappa wrote:
The patch series has two preparatory patches for cqm and then 4 MBM
patches. Patches are based on tip perf/core.
They were not (or at least not a recent copy of it); all the files got
Fixes the hotcpu notifier leak and other global variable memory leaks
during cqm (cache quality of service monitoring) initialization.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_inte
The patch series has two preparatory patches for cqm and then 4 MBM
patches. Patches are based on tip perf/core.
Thanks to Thomas and PeterZ for the feedback on V5; we have tried to
implement it in this version.
Memory bandwidth monitoring (MBM) provides OS/VMM a way to monitor
bandwidth from one
a flag in the perf_event.hw which has other cqm related
fields. The field is updated at event creation and during grouping.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_inte
is deallocated we need to update the ->count
variable.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 32 ++
1 file changed, 28 insertions(
by calibrating on the system. The overflow is really a function
of the max memory b/w that the socket can support, max counter value and
scaling factor.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event
with a Resource Monitoring ID (RMID), just like in
cqm, and the OS uses an MSR write to indicate the RMID of the task during
scheduling.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/include/asm/cpufeature.h | 2 +
arc
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 130 +++--
1 file changed, 125 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.
and uses the
IA32_PQR_ASSOC_MSR to associate the RMID with the task. The tasks have a
common RMID for cqm (cache quality of service monitoring) and MBM. Hence
most of the scheduling code is reused from cqm.
Reviewed-by: Tony Luck
Signed-off-by: Tony Luck
Signed-off-by: Vikas Shivappa
---
arch
On Mon, 7 Mar 2016, Peter Zijlstra wrote:
On Tue, Mar 01, 2016 at 03:48:26PM -0800, Vikas Shivappa wrote:
A lot of the scheduling code was taken from Tony's patch, and 3-4
lines of change were added in intel_cqm_event_read. Since the timer
is no longer added on every context switch
On Tue, 8 Mar 2016, Peter Zijlstra wrote:
On Mon, Mar 07, 2016 at 11:27:26PM +0000, Luck, Tony wrote:
+ bytes = mbm_current->interval_bytes * MSEC_PER_SEC;
+ do_div(bytes, diff_time);
+ mbm_current->bandwidth = bytes;
+
On Mon, 7 Mar 2016, Peter Zijlstra wrote:
On Tue, Mar 01, 2016 at 03:48:23PM -0800, Vikas Shivappa wrote:
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -121,6 +121,7 @@ struct hw_perf_event {
struct { /* intel_cqm */
int
On Tue, 8 Mar 2016, Thomas Gleixner wrote:
On Thu, 3 Mar 2016, Thomas Gleixner wrote:
On Thu, 3 Mar 2016, Vikas Shivappa wrote:
On Wed, 2 Mar 2016, Thomas Gleixner wrote:
On Wed, 2 Mar 2016, Vikas Shivappa wrote:
+ if (cqm_enabled && mbm
On Tue, 8 Mar 2016, Thomas Gleixner wrote:
On Wed, 2 Mar 2016, Vikas Shivappa wrote:
Please fix the subject line prefix: "x86/perf/intel/cqm:"
Will fix..
Fixes the hotcpu notifier leak and other global variable memory leaks
during cqm (cache quality of service monitoring) init
On Wed, 2 Mar 2016, Thomas Gleixner wrote:
On Wed, 2 Mar 2016, Vikas Shivappa wrote:
+ if (cqm_enabled && mbm_enabled)
+ intel_cqm_events_group.attrs = intel_cmt_mbm_events_attr;
+ else if (!cqm_enabled && mbm_enabled)
+ intel_cqm_ev
by calibrating on the system. The overflow is really a function
of the max memory b/w that the socket can support, max counter value and
scaling factor.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
Fixed mbm_timers leak in the intel_cqm_init function.
local b/w
intel_cqm_llc/total_bw - current total b/w
The tasks are associated with a Resource Monitoring ID (RMID), just like in
cqm, and the OS uses an MSR write to indicate the RMID of the task during
scheduling.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
Fixes the hotcpu notifier leak and other global variable memory leaks
during cqm (cache quality of service monitoring) initialization.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
Fixed the memory leak for cqm_rmid_ptrs as per Thomas' feedback.
On Wed, 2 Mar 2016, Vikas Shivappa wrote:
On Wed, 2 Mar 2016, Thomas Gleixner wrote:
Leaks mbm_local and mbm_total
Will fix. Thanks for pointing out. I missed the ones which are done at the
next level of calls from the init. Will do a check on all the globals as
well.
Vikas
On Wed, 2 Mar 2016, Thomas Gleixner wrote:
On Tue, 1 Mar 2016, Vikas Shivappa wrote:
@@ -1397,8 +1543,11 @@ static int __init intel_cqm_init(void)
__perf_cpu_notifier(intel_cqm_cpu_notifier);
out:
cpu_notifier_register_done();
- if (ret)
+ if (ret
On Wed, 2 Mar 2016, Thomas Gleixner wrote:
On Tue, 1 Mar 2016, Vikas Shivappa wrote:
Fixes the hotcpu notifier leak and a memory leak during cqm (cache
quality of service monitoring) initialization.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vi
Fixes the hotcpu notifier leak and a memory leak during cqm (cache
quality of service monitoring) initialization.
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 17
The patch series has two preparatory patches for cqm and then 4 MBM
patches. Patches are based on tip perf/core.
Thanks to Thomas for the feedback on V4; we have tried to implement his
feedback in this version.
Memory bandwidth monitoring (MBM) provides OS/VMM a way to monitor
bandwidth from one level
Reviewed-by: Tony Luck <tony.l...@intel.com>
Signed-off-by: Vikas Shivappa <vikas.shiva...@linux.intel.com>
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 158 -
1 file changed, 154 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.
cqm.
A lot of the scheduling code was taken from Tony's patch, and 3-4
lines of change were added in intel_cqm_event_read. This change was made
because the timer is no longer added on every context switch.
Reviewed-by: Tony Luck
Signed-off-by: Tony Luck
Signed-off-by: Vikas Shivappa
---
arch
On Wed, 24 Feb 2016, Thomas Gleixner wrote:
On Wed, 24 Feb 2016, Vikas Shivappa wrote:
On Wed, 24 Feb 2016, Thomas Gleixner wrote:
You really should register the notifier _AFTER_ registering the pmu. That
needs to be fixed anyway, because the existing code leaks the notifier AND
memory
On Wed, 24 Feb 2016, Thomas Gleixner wrote:
On Wed, 10 Feb 2016, Vikas Shivappa wrote:
+static enum hrtimer_restart mbm_hrtimer_handle(struct hrtimer *hrtimer)
+{
+ if (list_empty(_groups))
+ goto out;
+
+ list_for_each_entry(iter, _groups, hw.cqm_groups_entry
On Wed, 24 Feb 2016, Thomas Gleixner wrote:
On Wed, 10 Feb 2016, Vikas Shivappa wrote:
+static int intel_mbm_init(void)
+{
+ int ret = 0, array_size, maxid = cqm_max_rmid + 1;
+
+ mbm_socket_max = cpumask_weight(_cpumask);
This should use the new topology_max_packages
On Thu, 18 Feb 2016, Thomas Gleixner wrote:
On Wed, 17 Feb 2016, Thomas Gleixner wrote:
On Wed, 17 Feb 2016, Vikas Shivappa wrote:
Please stop top posting, finally!
But we have an extra static - static to avoid having it in the stack..
It's not about the cpu mask on the stack
calculation, init, lot of unnecessary
and confusing constants and code..
http://lkml.kernel.org/r/alpine.DEB.2.11.1508192243081.3873@nanos
Thanks,
Vikas
On Wed, 10 Feb 2016, Vikas Shivappa wrote:
The V4 version of MBM is almost a complete rewrite of the prior
versions. It tries to address all
On Wed, 17 Feb 2016, Thomas Gleixner wrote:
On Wed, 17 Feb 2016, Vikas Shivappa wrote:
Yes, please resend the rapl one. perf_uncore is a different trainwreck which I
fixed already:
lkml.kernel.org/r/20160217132903.767990...@linutronix.de
Ok, will resend the rapl separately.
the fix
On Wed, 17 Feb 2016, Thomas Gleixner wrote:
On Wed, 17 Feb 2016, Thomas Gleixner wrote:
CQM is a strict per package facility. Use the proper cpumasks to lookup the
readers.
Sorry for the noise. PEBKAC: quilt refresh missing. Correct version below.
Thanks,
tglx
8<--
a flag in the perf_event.hw which has other cqm related
fields. The field is updated at event creation and during grouping.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 13 ++---
include/linux/perf_event.h | 1 +
2 files changed, 11
is deallocated we need to update the ->count
variable.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 27 +--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
b/arch/x86/kernel/
by calibrating on the system. The overflow is really a function
of the max memory b/w that the socket can support, max counter value and
scaling factor.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 111 -
1 file changed, 110 insertions(+), 1
intel_cqm_llc/total_bw - current total b/w
The tasks are associated with a Resource Monitoring ID (RMID), just like in
cqm, and the OS uses an MSR write to indicate the RMID of the task during
scheduling.
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm/cpufeature.h | 2 +
arch/x86/kernel/cpu
The V4 version of MBM is almost a complete rewrite of the prior
versions. It tries to address all of Thomas' earlier
comments.
The patch series has one preparatory patch for cqm and then 4 MBM
patches. *Patches apply on 4.5-rc1*.
Memory bandwidth monitoring (MBM) provides OS/VMM a way to monitor
cqm.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 159 -
1 file changed, 155 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index e45f5aa..b1c9663
and see if the bits stick. The probe is only done after
confirming that the CPU is HSW server. Other hardcoded values are:
- L3 cache bit mask must be at least two bits.
- Maximum CLOSids supported is always 4.
- Maximum bits support in cache bit mask is always 20.
Signed-off-by: Vikas Shivappa
Add documentation on using the cache allocation cgroup interface with
examples.
Signed-off-by: Vikas Shivappa
---
Documentation/cgroups/rdt.txt | 133 ++
1 file changed, 133 insertions(+)
create mode 100644 Documentation/cgroups/rdt.txt
diff --git
e time increase linearly.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_rapl.c | 35 ++---
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
index 5
. By default the child cgroups inherit
the capacity bitmask (CBM) from the parent. The user can change the CBM,
specified in hex, for each cgroup. Each unique bitmask is associated with
a class of service ID, and -ENOSPC is returned once we run out of
closids.
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm
'. This feature is used when allocating a line in
the cache, i.e. when pulling new data into the cache. The programming of
the hardware is done via MSRs (model specific registers).
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm/cpufeature.h | 6 +-
arch/x86/include/asm/processor.h
in and to indicate the cache
capacity associated with the CLOSid. Currently cache allocation is
supported for L3 cache.
More information can be found in the Intel SDM June 2015, Volume 3,
section 17.16.
Signed-off-by: Vikas Shivappa
---
Documentation/x86/intel_rdt.txt | 109
packages. Other APIs are to read and write entries to the
clos_cbm_table.
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm/intel_rdt.h | 4 ++
arch/x86/kernel/cpu/intel_rdt.c | 122 +++
2 files changed, 126 insertions(+)
diff --git a/arch/x86/include/asm
with the values of existing MSRs. Also the software cache
for the IA32_PQR_ASSOC MSR is reset during hot cpu notifications.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/intel_rdt.c | 72 +
1 file changed, 72 insertions(+)
diff --git a/arch/x86/kernel/cpu
if the task groups are bound to be
scheduled on a set of CPUs, the number of MSR writes is greatly
reduced.
- A per-CPU cache of CLOSids is maintained to do the check so that we
don't have to do an rdmsr, which actually costs a lot of cycles.
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm
field closid to task_struct
to keep track of the same.
Signed-off-by: Vikas Shivappa
---
arch/x86/include/asm/intel_rdt.h | 12 ++
arch/x86/kernel/cpu/intel_rdt.c | 85 +++-
include/linux/sched.h| 3 ++
3 files changed, 98 insertions(+), 2
ensive and also the time increases linearly.
Signed-off-by: Vikas Shivappa
---
arch/x86/kernel/cpu/perf_event_intel_cqm.c | 34 +++---
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
b/arch/x86/kernel/cpu/perf_event_intel
There was a push back from cgroup maintainer Tejun on cgroup interface
usage during the previous version of patches. This patch series splits
the prior V13 patches to separate out the CQoS framework parts, which just
provide APIs for closid/cbm management, scheduling, hot cpu etc.,
and the