o say we've detected big cores but are
> using small cores for scheduling.
Thanks for making the print more meaningful.
>
> Signed-off-by: Michael Neuling
FWIW,
Acked-by: Gautham R. Shenoy
> ---
> arch/powerpc/kernel/smp.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deleti
lerman
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
> ---
> arch/powerpc/include/asm/topology.h | 10
onfig.
> "error: _numa_cpu_lookup_table_ undeclared"
>
> No functional change
>
> Cc: linuxppc-dev
> Cc: Michael Ellerman
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
>
On Fri, Jul 17, 2020 at 01:49:26PM +0530, Gautham R Shenoy wrote:
> > +int cpu_to_coregroup_id(int cpu)
> > +{
> > + return cpu_to_core_id(cpu);
> > +}
>
>
> So, if has_coregroup_support() returns true, then since the core_group
> identification i
c: linuxppc-dev
> Cc: Michael Ellerman
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
This looks good to me from the discove
new sched
> domain.
>
> Cc: linuxppc-dev
> Cc: Michael Ellerman
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronam
Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
Good fix.
Reviewed-by: Gautham R. Shenoy
> ---
> arch/powerpc/kernel/smp.c |
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
Reviewed-by: Gautham R. Shenoy
> ---
> arch/powerpc/kernel/smp.c | 1
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
> ---
> arch/powerpc/kernel/smp.c | 48 +++
> 1 file changed, 34 inserti
Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
I don't see a problem with this.
However, since we are now going to be maintaining a single
smallest coregroup, which currently
> corresponds to the penultimate domain in the device-tree.
>
> Cc: linuxppc-dev
> Cc: Michael Ellerman
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautha
loran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
We need this documented in
Documentation/admin-guide/kernel-parameters.txt.
Other than that, the patch
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Signed-off-by: Srikar Dronamraju
> ---
> arch/powerpc/kernel/smp.c | 28 +++
> domain.
>
> Cc: linuxppc-dev
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Cc: Jordan Niethe
> Signed-off-by: Srikar Dronamraju
>
E_FULL_CONTEXT set; in other places the kernel uses the
> "deep" states as terminology. Hence renaming the variable to be coherent
> with its semantics.
>
> Signed-off-by: Pratik Rajesh Sampat
Acked-by: Gautham R. Shenoy
> ---
> arch/powerpc/platforms/powernv/idle.c | 18 +++
ask if and only if shared_caches is set.
>
> Cc: linuxppc-dev
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Bl
nar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Cc: Jordan Niethe
> Signed-off-by: Srikar Dronamraj
Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
> Cc: Jordan Niethe
> Signed-off-by: Srikar Dronamraju
Looks good to me.
Reviewed-by: Gautham R. Shenoy
> ---
> Changelog v1 -> v2:
> pow
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Nick Piggin
> Cc: Oliver OHalloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Anton Blanchard
> Cc: Gautham R Shenoy
> Cc: Vaidyanathan Srinivasan
>
From: "Gautham R. Shenoy"
We are currently assuming that CEDE(0) has exit latency 10us, since
there is no way for us to query it from the platform. However, if the
wakeup latency of an Extended CEDE state is smaller than 10us, then we
can be sure that the exit latency of CEDE(0) cann
From: "Gautham R. Shenoy"
This is a v3 of the patch series to parse the extended CEDE
information in the pseries-cpuidle driver.
The previous two versions of the patches can be found here:
v2:
https://lore.kernel.org/lkml/1596005254-25753-1-git-send-email-...@linux.vnet.ibm.com/
From: "Gautham R. Shenoy"
As per the PAPR, each H_CEDE call is associated with a latency-hint to
be passed in the VPA field "cede_latency_hint". The CEDE state that
we were implicitly entering so far is CEDE with latency-hint = 0.
This patch explicitly sets the latenc
From: "Gautham R. Shenoy"
Currently we use CEDE with latency-hint 0 as the only other idle state
on a dedicated LPAR apart from the polling "snooze" state.
The platform might support additional extended CEDE idle states, which
can be discovered through the "ibm,get-sy
signed int nid;
> +
> chips[i].id = chip[i];
> - cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
> + /*
> + * On powernv platforms the firmware group id is the same as the chip id.
But doesn't hurt to be safe :-)
Reviewed-by: Gautham R. Shenoy
>
y cleanup.")
> Cc: sta...@vger.kernel.org # v3.14
> Signed-off-by: Joel Stanley
Sorry I missed this v2.
The patch looks good to me.
Acked-by: Gautham R. Shenoy
> --
> v2:
> Use pr_warn instead of WARN
> Reword and print process name with pid in message
> Leave CPU
here, localize the sibling_mask variable to within the if
> condition.
>
> Cc: linuxppc-dev
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Sheno
x7/cpumask
> 0
>
> Signed-off-by: Kajol Jain
This patch looks good to me.
Reviewed-by: Gautham R. Shenoy
> ---
> .../sysfs-bus-event_source-devices-hv_24x7| 7
> arch/powerpc/perf/hv-24x7.c | 36 +--
> 2 files changed, 41 inse
the
> check for machines lower than Power9
>
> Signed-off-by: Pratik Rajesh Sampat
Nice catch.
Reviewed-by: Gautham R. Shenoy
> ---
> arch/powerpc/platforms/powernv/idle.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/platfo
On Fri, Jul 03, 2020 at 06:16:40PM +0530, Pratik Rajesh Sampat wrote:
> Additional registers DAWR0, DAWRX0 may be lost on Power 10 for
> stop levels < 4.
Adding Ravi Bangoria to the cc.
> Therefore save the values of these SPRs before entering a "stop"
> state and restore their values on
On Tue, Jul 07, 2020 at 05:17:55PM +1000, Michael Neuling wrote:
> On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
> > PowerISA v3.1 has a few updates for the Branch History Rolling Buffer (BHRB).
> > First is the addition of BHRB disable bit and second new filtering
> > modes for BHRB.
> >
From: "Gautham R. Shenoy"
Hi,
On pseries Dedicated Linux LPARs, apart from the polling snooze idle
state, we currently have the CEDE idle state which cedes the CPU to
the hypervisor with latency-hint = 0.
However, the PowerVM hypervisor supports additional extended CEDE
states,
Hi,
On Tue, Jul 07, 2020 at 04:41:34PM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy"
>
> Hi,
>
>
>
>
> Gautham R. Shenoy (5):
> cpuidle-pseries: Set the latency-hint before entering CEDE
> cpuidle-pseries: Add function to parse e
From: "Gautham R. Shenoy"
As per the PAPR, each H_CEDE call is associated with a latency-hint to
be passed in the VPA field "cede_latency_hint". The CEDE state that
we were implicitly entering so far is CEDE with latency-hint = 0.
This patch explicitly sets the latenc
From: "Gautham R. Shenoy"
Currently we use CEDE with latency-hint 0 as the only other idle state
on a dedicated LPAR apart from the polling "snooze" state.
The platform might support additional extended CEDE idle states, which
can be discovered through the "ibm,get-sy
From: "Gautham R. Shenoy"
We are currently assuming that CEDE(0) has exit latency 10us, since
there is no way for us to query it from the platform. However, if the
wakeup latency of an Extended CEDE state is smaller than 10us, then we
can be sure that the exit latency of CEDE(0) cann
From: "Gautham R. Shenoy"
This patch exposes those extended CEDE states to the cpuidle framework
which are responsive to external interrupts and do not need an H_PROD.
Since, as per the PAPR, all the extended CEDE states are non-responsive
to timers, we indicate this to the cpuidle sub
From: "Gautham R. Shenoy"
The Extended CEDE state with latency-hint = 1 is only different from
normal CEDE (with latency-hint = 0) in that a CPU in Extended CEDE(1)
does not wakeup on timer events. Both CEDE and Extended CEDE(1) map to
the same hardware idle state. Since we alrea
On Mon, Jul 13, 2020 at 03:23:21PM +1000, Nicholas Piggin wrote:
> Excerpts from Pratik Rajesh Sampat's message of July 10, 2020 3:22 pm:
> > Changelog v1 --> v2:
> > 1. Save-restore DAWR and DAWRX unconditionally as they are lost in
> > shallow idle states too
> > 2. Rename
Hi Kajol,
On Wed, Jun 24, 2020 at 03:47:54PM +0530, Kajol Jain wrote:
> Patch here adds a cpumask attr to hv_24x7 pmu along with ABI documentation.
>
> command:# cat /sys/devices/hv_24x7/cpumask
> 0
Since this sysfs interface is read-only, and the user cannot change
the CPU which will be making
designate
> count data.
>
> The offline function tests and clears the corresponding cpu in a cpumask
> and updates the cpumask to any other active cpu.
>
> Signed-off-by: Kajol Jain
Otherwise, looks good to me.
Reviewed-by: Gautham R. Shenoy
> ---
> arch/powerpc/perf/hv-24x7.
node.
>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Cc: Michal Hocko
> Cc: Mel Gorman
> Cc: Vlastimil Babka
> Cc: "Kirill A. Shutemov"
> Cc: Christopher Lameter
> Cc: Michael Ellerman
> Cc: Andrew Morton
: Michael Ellerman
> Cc: Andrew Morton
> Cc: Linus Torvalds
> Cc: Gautham R Shenoy
> Cc: Satheesh Rajendran
> Cc: David Hildenbrand
> Signed-off-by: Srikar Dronamraju
This patch looks good to me.
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
On Wed, Jun 24, 2020 at 05:58:31PM +0530, Madhavan Srinivasan wrote:
>
>
> On 6/24/20 4:26 PM, Gautham R Shenoy wrote:
> >Hi Kajol,
> >
> >On Wed, Jun 24, 2020 at 03:47:54PM +0530, Kajol Jain wrote:
> >>Patch here adds a cpumask attr to hv_24x7 pmu along with
is the "ppc64_cpu" which
uses the presence of this file to assume that the system is SMT
capable.
Since we have "/sys/devices/system/cpu/smt/" these days, perhaps the
userspace utility can use that and we can get rid of the file
altogether ?
FWIW,
Acked-by: Gautham R. Shenoy
On Thu, Jun 18, 2020 at 05:57:13PM +0530, Kajol Jain wrote:
> Patch here adds a cpumask attr to hv_24x7 pmu along with ABI documentation.
>
> command:# cat /sys/devices/hv_24x7/cpumask
> 0
>
> Signed-off-by: Kajol Jain
> ---
> .../sysfs-bus-event_source-devices-hv_24x7| 6
>
Hello Kajol,
On Thu, Jun 18, 2020 at 05:57:12PM +0530, Kajol Jain wrote:
> Patch here adds cpu hotplug functions to hv_24x7 pmu.
> A new cpuhp_state "CPUHP_AP_PERF_POWERPC_HV_24x7_ONLINE" enum
> is added.
>
> The online function updates the cpumask only if it is NULL.
> As the primary intention for
>
> Fixes: 3aa565f53c39 ("powerpc/pseries: Add hooks to put the CPU into an
> appropriate offline state")
>
> Signed-off-by: Nathan Lynch
The patch looks good to me.
Reviewed-by: Gautham R. Shenoy
> ---
> Documentation/core-api/cpu_hotplug.rst|
From: "Gautham R. Shenoy"
Currently we use CEDE with latency-hint 0 as the only other idle state
on a dedicated LPAR apart from the polling "snooze" state.
The platform might support additional extended CEDE idle states, which
can be discovered through the "ibm,get-sy
From: "Gautham R. Shenoy"
As per the PAPR, each H_CEDE call is associated with a latency-hint to
be passed in the VPA field "cede_latency_hint". The CEDE state that
we were implicitly entering so far is CEDE with latency-hint = 0.
This patch explicitly sets the latenc
From: "Gautham R. Shenoy"
Hi,
This is a v2 of the patch series to parse the extended CEDE
information in the pseries-cpuidle driver.
The v1 of this patchset can be found here :
https://lore.kernel.org/linuxppc-dev/1594120299-31389-1-git-send-email-...@linux.vnet.ibm.com/
The chan
From: "Gautham R. Shenoy"
We are currently assuming that CEDE(0) has exit latency 10us, since
there is no way for us to query it from the platform. However, if the
wakeup latency of an Extended CEDE state is smaller than 10us, then we
can be sure that the exit latency of CEDE(0) cann
Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Jordan Niethe
> Signed
Hello Rafael,
On Mon, Jul 27, 2020 at 04:14:12PM +0200, Rafael J. Wysocki wrote:
> On Tue, Jul 7, 2020 at 1:32 PM Gautham R Shenoy
> wrote:
> >
> > Hi,
> >
> > On Tue, Jul 07, 2020 at 04:41:34PM +0530, Gautham R. Shenoy wrote:
> > > Fr
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Jordan Niethe
> Sign
Hi Srikar,
On Mon, Jul 20, 2020 at 11:18:16AM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-07-17 13:56:53]:
>
> > On Tue, Jul 14, 2020 at 10:06:23AM +0530, Srikar Dronamraju wrote:
> > > Lookup the coregroup id from the associativity array.
> >
ncrease the resolution of IPI wakeup]
>
> Signed-off-by: Pratik Rajesh Sampat
The debugfs module looks good to me.
Reviewed-by: Gautham R. Shenoy
> ---
> drivers/cpuidle/Makefile | 1 +
> drivers/cpuidle/test-cpuidle_latency.c | 150 +
>
On Mon, Jul 20, 2020 at 11:49:11AM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-07-17 12:07:55]:
>
> > On Tue, Jul 14, 2020 at 10:06:19AM +0530, Srikar Dronamraju wrote:
> > > Currently "CACHE" domain happens to be the 2nd sched domain as per
>
ie, on explicit need from user. Also save/restore MMCRA in the
> restore path of state-loss idle state to make sure we keep BHRB disabled
> if it was not enabled on request at runtime.
>
> Signed-off-by: Athira Rajeev
For arch/powerpc/platforms/powernv/idle.c
Reviewed-by: Gauth
Hi Srikar,
On Mon, Jul 20, 2020 at 12:15:04PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-07-17 11:30:11]:
>
> > Hi Srikar,
> >
> > On Tue, Jul 14, 2020 at 10:06:18AM +0530, Srikar Dronamraju wrote:
> > > Current code assumes that cpumas
Hi Pratik,
On Fri, Jul 17, 2020 at 02:48:01PM +0530, Pratik Rajesh Sampat wrote:
> This patch adds support to trace IPI based and timer based wakeup
> latency from idle states
>
> Latches onto the test-cpuidle_latency kernel module using the debugfs
> interface to send IPIs or schedule a timer
Hi,
On Wed, Jul 22, 2020 at 12:37:41AM +1000, Nicholas Piggin wrote:
> Excerpts from Pratik Sampat's message of July 21, 2020 8:29 pm:
> >
> >
> > On 20/07/20 5:27 am, Nicholas Piggin wrote:
> >> Excerpts from Pratik Rajesh Sampat's message of July 18, 2020 4:53 am:
> >>> Replace the variable
> domain.
>
> Cc: linuxppc-dev
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Va
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Jordan Niethe
> Signed-off-by: Srikar Dronamraju
Reviewed-by: Gautham R.
On Wed, Jul 22, 2020 at 12:27:47PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-07-22 11:51:14]:
>
> > Hi Srikar,
> >
> > > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> > > index 72f16dc0cb26..57468877499a 100644
>
ask if and only if shared_caches is set.
>
> Cc: linuxppc-dev
> Cc: LKML
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijl
Cc: Anton Blanchard
> Cc: Oliver O'Halloran
> Cc: Nathan Lynch
> Cc: Michael Neuling
> Cc: Gautham R Shenoy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Valentin Schneider
> Cc: Jordan Niethe
> Signed-off-by: Srikar Dronamraju
Reviewed-by: Gautham R. Shenoy
>
Hi Srikar, Valentin,
On Wed, Jul 29, 2020 at 11:43:55AM +0530, Srikar Dronamraju wrote:
> * Valentin Schneider [2020-07-28 16:03:11]:
>
[..snip..]
> At this time the current topology would be good enough i.e BIGCORE would
> always be equal to a MC. However in future we could have chips that
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the po
From: "Gautham R. Shenoy"
The "ibm,thread-groups" device-tree property is an array that is used
to indicate if groups of threads within a core share certain
properties. It provides details of which property is being shared by
which groups of threads. This array can enco
From: "Gautham R. Shenoy"
The "ibm,thread-groups" device-tree property is an array that is used
to indicate if groups of threads within a core share certain
properties. It provides details of which property is being shared by
which groups of threads. This array can enco
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t
Hello Srikar,
Thanks for taking a look at the patch.
On Mon, Dec 07, 2020 at 05:40:42PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:45]:
>
> > From: "Gautham R. Shenoy"
>
>
>
> >
> > static int
Hello Srikar,
On Mon, Dec 07, 2020 at 06:10:39PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:46]:
>
> > From: "Gautham R. Shenoy"
> >
> > On POWER systems, groups of threads within a core sharing the L2-cache
> > can be indic
On Mon, Dec 07, 2020 at 06:41:38PM +0530, Srikar Dronamraju wrote:
> * Gautham R. Shenoy [2020-12-04 10:18:47]:
>
> > From: "Gautham R. Shenoy"
> >
> >
> > Signed-off-by: Gautham R. Shenoy
> > ---
> >
> > +extern bool thread_group
On Wed, Dec 09, 2020 at 02:05:41PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-12-08 22:55:40]:
>
> > >
> > > NIT:
> > > tglx mentions in one of his recent comments to try keep a reverse fir tree
> > > ordering of variables where po
On Wed, Dec 09, 2020 at 02:09:21PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy [2020-12-08 23:26:47]:
>
> > > The drawback of this is even if cpus 0,2,4,6 are released L1 cache will
> > > not
> > > be released. Is this as expected?
> >
> >
From: "Gautham R. Shenoy"
Hi,
This is the v2 of the patchset to extend parsing of "ibm,thread-groups" property
to discover the Shared-L2 cache information.
The v1 can be found here :
https://lore.kernel.org/linuxppc-dev/1607057327-29822-1-git-send-email-...@
From: "Gautham R. Shenoy"
The "ibm,thread-groups" device-tree property is an array that is used
to indicate if groups of threads within a core share certain
properties. It provides details of which property is being shared by
which groups of threads. This array can enco
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t
From: "Gautham R. Shenoy"
init_thread_group_l1_cache_map() initializes the per-cpu cpumask
thread_group_l1_cache_map with the core-siblings which share L1 cache
with the CPU. Make this function generic to the cache-property (L1 or
L2) and update a suitable mask. This is a prepara
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the po
From: "Gautham R. Shenoy"
On platforms which have the "ibm,thread-groups" property, the per-cpu
variable cpu_l1_cache_map keeps track of which group of threads
within the same core share the L1 cache, Instruction and Data flow.
This patch renames the variable to "
From: "Gautham R. Shenoy"
The "ibm,thread-groups" device-tree property is an array that is used
to indicate if groups of threads within a core share certain
properties. It provides details of which property is being shared by
which groups of threads. This array can enco
From: "Gautham R. Shenoy"
Hi,
This is the v2 of the patchset to extend parsing of "ibm,thread-groups" property
to discover the Shared-L2 cache information.
The previous versions can be found here :
v2 :
https://lore.kernel.org/linuxppc-dev/1607533700-5
From: "Gautham R. Shenoy"
On platforms which have the "ibm,thread-groups" property, the per-cpu
variable cpu_l1_cache_map keeps track of which group of threads
within the same core share the L1 cache, Instruction and Data flow.
This patch renames the variable to "
From: "Gautham R. Shenoy"
init_thread_group_l1_cache_map() initializes the per-cpu cpumask
thread_group_l1_cache_map with the core-siblings which share L1 cache
with the CPU. Make this function generic to the cache-property (L1 or
L2) and update a suitable mask. This is a prepara
From: "Gautham R. Shenoy"
On POWER platforms where only some groups of threads within a core
share the L2-cache (indicated by the ibm,thread-groups device-tree
property), we currently print the incorrect shared_cpu_map/list for
L2-cache in the sysfs.
This patch reports t
From: "Gautham R. Shenoy"
On POWER systems, groups of threads within a core sharing the L2-cache
can be indicated by the "ibm,thread-groups" property array with the
identifier "2".
This patch adds support for detecting this, and when present, populate
the po
frequency is throttled due to 'OCC Reset'.
>
> The sysfs attributes representing different throttle reasons
> like
> powercap, overtemp, supply_fault, overcurrent and occ_reset map
> to
This hunk for the powernv cpufreq driver looks good to me.
For these two hunks,
Reviewed-by: Gautham R. Shenoy
From: "Gautham R. Shenoy"
The helper function get_shared_cpu_map() was added in
'commit 500fe5f550ec ("powerpc/cacheinfo: Report the correct
shared_cpu_map on big-cores")'
and subsequently expanded upon in
'commit 0be47634db0b ("powerpc/cacheinfo: Print correct cach
From: "Gautham R. Shenoy"
Currently the cacheinfo code on powerpc indexes the "cache" objects
(modelling the L1/L2/L3 caches) where the key is the device-tree node
corresponding to that cache. On some of the POWER server platforms
thread-groups within the core share differen
From: "Gautham R. Shenoy"
Hi,
Currently the cacheinfo code on powerpc indexes the "cache" objects
(modelling the L1/L2/L3 caches) where the key is the device-tree node
corresponding to that cache. On some of the POWER server platforms
thread-groups within the core share differen
On Wed, Jun 16, 2021 at 07:12:40PM +0530, Pratik R. Sampat wrote:
> Adds a generic interface to represent the energy and frequency related
> PAPR attributes on the system using the new H_CALL
> "H_GET_ENERGY_SCALE_INFO".
>
> H_GET_EM_PARMS H_CALL was previously responsible for exporting this
>
Hello Pratik,
On Tue, Jun 15, 2021 at 10:39:49AM +0530, Pratik R. Sampat wrote:
> In the numa=off kernel command-line configuration init_chip_info() loops
> around the number of chips and attempts to copy the cpumask of that node
> which is NULL for all iterations after the first chip.
Thanks
esi_buf;
> + num_attrs = be64_to_cpu(esi_hdr->num_attrs);
Shouldn't we check for the esi_hdr->data_header_version here?
Currently we are only aware of version 1. If we happen to run this
kernel code on a future platform which supports a different version,
wouldn't it be safer to bail out here ?
Otherwise this patch looks good to me.
Reviewed-by: Gautham R. Shenoy
--
Thanks and Regards
gautham.
From: "Gautham R. Shenoy"
Commit d947fb4c965c ("cpuidle: pseries: Fixup exit latency for
CEDE(0)") sets the exit latency of CEDE(0) based on the latency values
of the Extended CEDE states advertised by the platform.
On POWER9 LPARs, the firmwares advertise a very low value o
From: "Gautham R. Shenoy"
Commit d947fb4c965c ("cpuidle: pseries: Fixup exit latency for
CEDE(0)") sets the exit latency of CEDE(0) based on the latency values
of the Extended CEDE states advertised by the platform.
On some of the POWER9 LPARs, the older firmwares advert
anathan Srinivasan wrote:
> > > > * Michal Suchánek [2021-04-23 19:45:05]:
> > > >
> > > > > On Fri, Apr 23, 2021 at 09:29:39PM +0530, Vaidyanathan Srinivasan
> > > > > wrote:
> > > > > > * Michal Suchánek [2021-04-23 09:35:5
Hello Michal,
On Wed, Apr 28, 2021 at 10:03:26AM +0200, Michal Suchánek wrote:
> >
> That's a nice detailed explanation. Maybe you could summarize it in the
> commit message so that people looking at the patch in the future can
> tell where the value comes from.
Sure, I will do that and send a