isphere, and finally across hemispheres), do you have any
suggestions on how we could handle the same in the core scheduler?
--
Thanks and Regards
Srikar Dronamraju
* Vincent Guittot [2021-03-08 14:52:39]:
> On Fri, 26 Feb 2021 at 17:41, Srikar Dronamraju
> wrote:
> >
Thanks Vincent for your review comments.
> > +static int prefer_idler_llc(int this_cpu, int prev_cpu, int sync)
> > +{
> > + struct sched_domain_shared
Gautham R Shenoy
Signed-off-by: Gautham R Shenoy
Co-developed-by: Parth Shah
Signed-off-by: Parth Shah
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
v1:
http://lore.kernel.org/lkml/20210226164029.122432-1-sri...@linux.vnet.ibm.com/t/#u
- Make WA_WAKER default (Suggested by Rik)
-
* Peter Zijlstra [2021-03-02 10:10:32]:
> On Tue, Mar 02, 2021 at 01:09:46PM +0530, Srikar Dronamraju wrote:
> > > Oh, could be, I didn't grep :/ We could have core code keep track of the
> > > smt count I suppose.
> >
> > Could we use cpumask_
* Dietmar Eggemann [2021-03-02 10:53:06]:
> On 26/02/2021 17:40, Srikar Dronamraju wrote:
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b13634..d49bfcdc4a19 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/
* Peter Zijlstra [2021-03-01 18:18:28]:
> On Mon, Mar 01, 2021 at 10:36:01PM +0530, Srikar Dronamraju wrote:
> > * Peter Zijlstra [2021-03-01 16:44:42]:
> >
> > > On Sat, Feb 27, 2021 at 02:56:07PM -0500, Rik van Riel wrote:
> > > > On Fri, 2021-02-26 at
* Peter Zijlstra [2021-03-01 16:40:33]:
> On Fri, Feb 26, 2021 at 10:10:29PM +0530, Srikar Dronamraju wrote:
> > +static int prefer_idler_llc(int this_cpu, int prev_cpu, int sync)
> > +{
> > + struct sched_domain_shared *tsds, *psds;
> > + int pnr_busy, pllc_size
* Peter Zijlstra [2021-03-01 16:44:42]:
> On Sat, Feb 27, 2021 at 02:56:07PM -0500, Rik van Riel wrote:
> > On Fri, 2021-02-26 at 22:10 +0530, Srikar Dronamraju wrote:
>
> > > + if (sched_feat(WA_WAKER) && tnr_busy < tllc_size)
> > > + return
we need to be conservative, especially if we want to make WA_WAKER on by
default. I would still like to hear from other people if they think it's ok
to enable it by default. I wonder if enabling it by default can cause some
load imbalances leading to more active load balancing down the line. I
haven't benchmarked with WA_WAKER enabled.
Thanks Rik for your inputs.
--
Thanks and Regards
Srikar Dronamraju
r Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Co-developed-by: Gautham R Shenoy
Signed-off-by: Gautham R Shenoy
Co-developed-by: Parth Shah
Signed-off-by: Parth Shah
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 41 +++--
kernel/sched/featu
ailed.
> Aborted (core dumped)
> <<>>
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
* Yue Hu [2021-02-03 18:10:19]:
> On Wed, 3 Feb 2021 15:22:56 +0530
> Srikar Dronamraju wrote:
>
> > * Yue Hu [2021-02-03 12:20:10]:
> >
> >
> > sched_debug() would only be present in CONFIG_SCHED_DEBUG. Right?
> > In which case there would
hed_debug()) {
Same as above.
> pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
> --
> 1.9.1
>
--
Thanks and Regards
Srikar Dronamraju
y is being shared by
> which groups of threads. This array can encode information about
> multiple properties being shared by different thread-groups within the
> core.
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
0004 0006 0001
> 0003 0005 0007 0002
> 0002 0004 0002
> 0004 0006 0001 0003
> 0005 0007
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
00006 0001 0003
> 0005 0007
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
nstruction and Data flow.
>
> This patch renames the variable to "thread_group_l1_cache_map" to make
> it consistent with a subsequent patch which will introduce
> thread_group_l2_cache_map.
>
> This patch introduces no functional change.
>
Looks good to me.
R
property (L1 or
> L2) and update a suitable mask. This is a preparatory patch for the
> next patch where we will introduce discovery of thread-groups that
> share L2-cache.
>
> No functional change.
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
> Signed-off-by: Ga
n't we want to enforce that the siblings sharing L1 be a subset of
> the siblings sharing L2 ? Or do you recommend putting in a check for
> that somewhere ?
>
I didn't think about the case where the device-tree could show L2 to be a
subset of L1.
How about initializing thread_group_l2_cache_map itself with
cpu_l1_cache_map? It would be a simple one-time operation and reduce the
overhead here on every CPU online.
And it would help in your subsequent patch too. We don't want the cacheinfo
for L1 showing CPUs not present in L2.
--
Thanks and Regards
Srikar Dronamraju
just that there is
still something more left to be done.
--
Thanks and Regards
Srikar Dronamraju
> + zalloc_cpumask_var_node(mask, GFP_KERNEL, cpu_to_node(cpu));
> > >
> >
> > This hunk (and the next hunk) should be moved to next patch.
> >
>
> The next patch is only about introducing THREAD_GROUP_SHARE_L2. Hence
> I put in any other code in this
he first place. For example: for a P9
core with CPUs 0-7, the cache->shared_cpu_map for L1 would have 0-7 but
would display 0,2,4,6.
The drawback of this is that even if CPUs 0,2,4,6 are released, the L1
cache will not be released. Is this as expected?
--
Thanks and Regards
Srikar Dronamraju
for_each_cpu(i, *mask) {
> + if (!cpu_online(i))
> + continue;
> + set_cpus_related(i, cpu, cpu_l2_cache_mask);
> + }
> +
> + return true;
> + }
> +
Ah this can be simplified to:
if (thread_group_shares_l2) {
	cpumask_set_cpu(cpu, cpu_l2_cache_mask(cpu));

	for_each_cpu(i, per_cpu(thread_group_l2_cache_map, cpu)) {
		if (cpu_online(i))
			set_cpus_related(i, cpu, cpu_l2_cache_mask);
	}
}
No?
> l2_cache = cpu_to_l2cache(cpu);
> if (!l2_cache || !*mask) {
> /* Assume only core siblings share cache with this CPU */
--
Thanks and Regards
Srikar Dronamraju
int i_group_start = get_cpu_thread_group_start(i, tg);
>
> if (unlikely(i_group_start == -1)) {
> WARN_ON_ONCE(1);
> @@ -843,7 +881,7 @@ static int init_cpu_l1_cache_map(int cpu)
> }
>
> if (i_group_start == cpu_group_start)
> - cpumask_set_cpu(i, per_cpu(cpu_l1_cache_map, cpu));
> + cpumask_set_cpu(i, *mask);
> }
>
> out:
> @@ -924,7 +962,7 @@ static int init_big_cores(void)
> int cpu;
>
> for_each_possible_cpu(cpu) {
> - int err = init_cpu_l1_cache_map(cpu);
> + int err = init_cpu_cache_map(cpu, THREAD_GROUP_SHARE_L1);
>
> if (err)
> return err;
> --
> 1.9.4
>
--
Thanks and Regards
Srikar Dronamraju
; Fixes: 2b1444983508 ("uprobes, mm, x86: Add the ability to install and remove
> uprobes breakpoints")
> Cc: sta...@vger.kernel.org
> Reported-by: Kees Cook
> Signed-off-by: Masami Hiramatsu
Looks good to me.
Reviewed-by: Srikar Dronamraju
> ---
> arch/x86/kernel/upro
: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/kvm_guest.h | 10 ++
arch/powerpc/include/asm
: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/paravirt.h | 18 ++
1 file
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
Changelog:
v1->v2:
v1:
https://lore.kernel.org/linuxppc-dev/20201028123512.871051-1-sri...@linux.vnet.ibm.com/t/#u
- Moved a hunk to fix a "no previous prototype" warning reported by:
l...@intel.com
https://lists.01.
: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/kvm_guest.h | 4 ++--
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/kernel/firmware.c | 2 +-
arch/powerpc/platforms/pseries/smp.c | 2 +-
4 files
las Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Srikar Dronamraju (4):
powerpc: Refactor is_kvm_guest declaration to new header
powerpc: Rename is_kvm_guest to check_kvm_guest
powerpc: Reintrod
7;
> > powerpc64-linux-ld: mm/khugepaged.o:(.toc+0x0): undefined reference to
> > `node_reclaim_distance'
>
> Hm, OK.
> CONFIG_NUMA=y
> # CONFIG_SMP is not set
>
> Michael, Gautham, does anyone care about this config combination?
>
I can add #ifdef CONFIG_SMP where coregroup_enabled is being accessed,
but I do feel CONFIG_NUMA with !CONFIG_SMP may not be a valid combination.
>
> Thanks.
--
Thanks and Regards
Srikar Dronamraju
* Waiman Long [2020-10-28 20:01:30]:
> > Srikar Dronamraju (4):
> >powerpc: Refactor is_kvm_guest declaration to new header
> >powerpc: Rename is_kvm_guest to check_kvm_guest
> >powerpc: Reintroduce is_kvm_guest
> >powerpc/paravirt: Use is_
is_kvm_guest() will be reused in a subsequent patch in a new avatar. Hence
rename is_kvm_guest to check_kvm_guest. No additional changes.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter
Only code/declaration movement, in anticipation of doing a kvm-aware
vcpu_is_preempted. No additional changes.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin
If it's a shared LPAR but not a KVM guest, then see if the vCPU is
related to the calling vCPU. On PowerVM, only cores can be preempted.
So if one vCPU is in a non-preempted state, we can decipher that all other
vCPUs sharing the same core are in a non-preempted state.
Signed-off-by: Srikar Dronamraju
Introduce a static branch that would be set during boot if the OS
happens to be a KVM guest. Subsequent checks to see if we are on KVM
will rely on this static branch. This static branch would be used in
vcpu_is_preempted in a subsequent patch.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Srikar Dronamraju (4):
powerpc: Refactor is_kvm_guest declaration to new header
powerpc: Rename is_kvm_guest to check_kvm_guest
powerpc: Reintroduce is_kvm_guest
powerpc/paravirt: Use is_kvm_guest
imize update_coregroup_mask")
Fixes: 3ab33d6dc3e9 ("powerpc/smp: Optimize update_mask_by_l2")
Reported-by: Qian Cai
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Sch
ack the same submask. Remove sibling_mask in favour of submask_fn.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Qian Cai
---
arch/powerpc/kernel/smp.c |
1. 1st patch was not part of the previous posting.
2. Updated 2nd patch based on comments from Michael Ellerman
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Qian Cai
Srikar Dronamraju (2):
powe
owerpc/smp: Optimize update_mask_by_l2")
Reported-by: Qian Cai
Suggested-by: Qian Cai
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Qian Cai
---
arch
update_mask_by_l2 is called only once, but it passes cpu_l2_cache_mask
as a parameter. Instead of passing cpu_l2_cache_mask, use it directly in
update_mask_by_l2.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver
: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
---
Changelog v2->v3:
Use GFP_ATOMIC instead of GFP_KERNEL since allocations need to be
atomic at the time of CPU hotplug
Reported by Qian Cai
arch/powerpc/kern
ed on the current platform is set to the previous domain.
Instead of waiting for the scheduler to degenerate them, try to consolidate
based on their masks and sd_flags. This is done just before setting
the scheduler topology.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ell
share l2_cache
mask. Also, instead of setting one CPU at a time into cpu_l2_cache_mask,
copy the SMT4/sub mask in one shot.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Micha
Currently on hotplug/hotunplug, a CPU iterates through all the CPUs in
its core to find threads in its thread group. However, this info is
already captured in cpu_l1_cache_map. Hence reduce the iterations and
clean up the add_cpu_to_smallcore_masks function.
Signed-off-by: Srikar Dronamraju
Tested-by
Move the logic for updating the coregroup mask of a CPU to its own
function. This will help in reworking the coregroup mask update in a
subsequent patch.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver
Now that cpu_core_mask has been removed and topology_core_cpumask has
been updated to use cpu_cpu_mask, we no longer need
get_physical_package_id.
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton
through a smaller but relevant cpumap.
If shared_cache is set, cpu_l2_cache_map is the relevant one; otherwise
cpu_sibling_map is.
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc
point to cpu_cpu_mask.
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc:
Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Qian Cai
Srikar Dronamraju (11):
powerpc/topology: Update topology_core_cpumask
p
On Power, cpu_core_mask and cpu_cpu_mask refer to the same set of CPUs.
cpu_cpu_mask is needed by the scheduler, hence look at deprecating
cpu_core_mask. Before deleting cpu_core_mask, ensure its only user
is moved to cpu_cpu_mask.
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
All the arch-specific topology cpumasks are within a node/DIE.
However, when setting these per-CPU cpumasks, the system traverses
all the online CPUs. This is redundant.
Reduce the traversal to only the CPUs that are online in the node to which
the CPU belongs.
Signed-off-by: Srikar Dronamraju
* Qian Cai [2020-10-07 09:05:42]:
Hi Qian,
Thanks for testing and reporting the failure.
> On Mon, 2020-09-21 at 15:26 +0530, Srikar Dronamraju wrote:
> > All threads of a SMT4 core can either be part of this CPU's l2-cache
> > mask or not related to this CPU l2-cache mas
h
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/kernel/smp.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --
h
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 32 +++-
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a
: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Raje
: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 30 ++
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/s
utham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/kernel/smp.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/sm
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/k
: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 51 ++-
1 file changed, 45 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/ke
Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 26 ++
1 file c
Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/includ
uling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/include/asm/topology.h | 5 -
arch/powerpc/kernel/smp.c | 20
2
v
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Signed-off-by: Srikar Dronamraju
Changelog v1-&g
ff-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
---
arch/powerpc/include/asm/smp.h | 5 -
arch/powerpc/kernel/smp.c | 33 +++--
2 files changed, 7 insertions(+), 31 deletions(-)
diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/
Fix to make it work where CPUs don't have an l2-cache element.
>8-8<-
>From b25d47b01b7195b1df19083a4043fa6a87a901a3 Mon Sep 17 00:00:00 2001
From: Srikar Dronamraju
Date: Thu, 9 Jul 2020 13:33:38 +0530
Subject: [P
* Michael Ellerman [2020-09-13 11:46:41]:
> Srikar Dronamraju writes:
> > * Michael Ellerman [2020-09-11 21:55:23]:
> >
> >> Srikar Dronamraju writes:
> >> > Current code assumes that cpumask of cpus sharing a l2-cache mask will
> >&g
* Michael Ellerman [2020-09-11 21:55:23]:
> Srikar Dronamraju writes:
> > Current code assumes that cpumask of cpus sharing a l2-cache mask will
> > always be a superset of cpu_sibling_mask.
> >
> > Let's drop that assumption. cpu_l2_cache_mask is a superset of
>
pull task
> from B since B's nr_running is larger than min_imbalance. But the code
> is saying imbalance=0 by finding A's nr_running is smaller than
> min_imbalance.
>
> Will share more test data if you need.
>
> >
> > --
> > Mel Gorman
> > SUSE Labs
>
> Thanks
> Barry
--
Thanks and Regards
Srikar Dronamraju
oves the tertiary condition added as part of that
> commit and added a check for NULL and -EAGAIN.
>
> Fixes: 2ed6edd33a21("perf: Add cond_resched() to task_function_call()")
> Signed-off-by: Kajol Jain
> Reported-by: Srikar Dronamraju
Tested-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
* xunlei [2020-08-25 10:11:24]:
> On 2020/8/24 PM9:38, Srikar Dronamraju wrote:
> > * Xunlei Pang [2020-08-24 20:30:19]:
> >
> >> We've met problems that occasionally tasks with full cpumask
> >> (e.g. by putting it into a cpuset or setting to full affini
l the cpus in this sched_domain_span(sd)
cpu_smt_mask(target) would already be limited to the sched_domain_span(sd),
so I am not sure how this can help?
--
Thanks and Regards
Srikar Dronamraju
* Michal Hocko [2020-08-18 09:37:12]:
> On Tue 18-08-20 09:32:52, David Hildenbrand wrote:
> > On 12.08.20 08:01, Srikar Dronamraju wrote:
> > > Hi Andrew, Michal, David
> > >
> > > * Andrew Morton [2020-08-06 21:32:11]:
> > >
> > >
->vm_flags & VM_LOCKED) && !PageCompound(old_page))
> munlock_vma_page(old_page);
> put_page(old_page);
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
number.
>
> This update removes the described assumption by simply calling
> numa_node_to_cpus() interface and using the returned mask for
> binding CPUs to nodes.
>
> Also, variable types and names made consistent in functions
> using cpumask.
>
> Cc: Satheesh Rajendran
> }
> }
> + numa_free_cpumask(cpu);
>
> - return false; /* lets fall back to nocpus safely */
> + return ret;
> }
>
> static cpu_set_t bind_to_cpu(int target_cpu)
> --
> 1.8.3.1
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
Hi Andrew, Michal, David
* Andrew Morton [2020-08-06 21:32:11]:
> On Fri, 3 Jul 2020 18:28:23 +0530 Srikar Dronamraju
> wrote:
>
> > > The memory hotplug changes that somehow because you can hotremove numa
> > > nodes and therefore make the nodemask sparse but that
: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v1
Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v2 -> v3:
Removed node caching part. Rewrote the Commit msg (Michael Ellerman)
Renamed to powerpc/
chael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 104 +++---
1 file ch
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Nick Piggin
Cc: Oliver OHalloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Anton Blanchard
Cc: Gautham R Shenoy
Cc: Vaidyan
Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4 ->v5:
Updated commit msg to specify actual implementation of
cpu_to_coregroup_id is in a subsequent patch (Michael Ellerman)
Changelog v3 ->v4:
if coregroup_support doesn
Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changel
vasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4 ->v5:
Updated commit msg on why cpumask need not be freed.
(Michael Ellerman)
arch/powerpc/kernel/smp.c | 7 +++
1 file changed, 3 insertions(+), 4 d
h
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4->v5:
Updated commit msg with current abstract natur
Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
Changelog v4
ned-off-by: Srikar Dronamraju
---
Changelog v1 -> v2:
Move coregroup_enabled before getting associativity (Gautham)
arch/powerpc/mm/numa.c | 20
1 file changed, 20 insertions(+)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0d57779e7942..8b3b3e
henoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v2 -> v3:
Rewrote changelog (Gautham)
Renamed to powerpc/smp: Move topology fixups
SD_SHARE_CPUCAPACITY?
> /*
>* Buddy candidates are cache hot:
> */
> --
> 2.28.0.163.g6104cc2f0b6-goog
>
--
Thanks and Regards
Srikar Dronamraju
: LKML
Cc: Michael Ellerman
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
Modified com
uling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
Update the commit msg
of reboot
they would only have the older P8 topology. After reboot the kernel topology
would change, but userspace is made to believe that it is running on an
SMT8 core by way of keeping the sibling_cpumask at the SMT8 core level.
--
Thanks and Regards
Srikar Dronamraju
Zijlstra (Intel)
>
> An updated Changelog that recaps some of this discussion might also be
> nice.
Okay, will surely do the needful.
--
Thanks and Regards
Srikar Dronamraju
* pet...@infradead.org [2020-08-04 12:45:20]:
> On Tue, Aug 04, 2020 at 09:03:06AM +0530, Srikar Dronamraju wrote:
> > cpu_smt_mask tracks topology_sibling_cpumask. This would be good for
> > most architectures. One of the users of cpu_smt_mask(), would be to
> > identify
: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/cputhreads.h | 1 -
arch/powerpc/include/asm/smp.h
chael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
include/linux/topology.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
eflect your LLC situation via this
> flag to make cpus_share_cache() work properly.
I detect whether the LLC is shared at BIGCORE, and if it is, I dynamically
rename the domain as CACHE and enable
SD_SHARE_PKG_RESOURCES in that domain.
>
> [1]: https://linuxplumbersconf.org/event/4/contributions/484/
Thanks for the pointer.
--
Thanks and Regards
Srikar Dronamraju
* Michael Ellerman [2020-07-31 18:02:21]:
> Srikar Dronamraju writes:
> > Lookup the coregroup id from the associativity array.
>
Thanks Michael for all your comments and inputs.
> It's slightly strange that this is called in patch 9, but only properly
> imple
* Michael Ellerman [2020-07-31 17:52:15]:
> Srikar Dronamraju writes:
> > If allocated earlier and the search fails, then the cpumask needs to be
> > freed. However cpu_l1_cache_map can be allocated after we search the thread
> > group.
>
> It's not freed anywhere