Re: [PATCH] sched/fair: add protection for delta of wait time

2021-01-17 Thread Vincent Guittot
On Sun, 17 Jan 2021 at 13:31, Jiang Biao wrote: > > From: Jiang Biao > > delta in update_stats_wait_end() might be negative, which would > make following statistics go wrong. Could you describe the use case that generates a negative delta? rq_clock is always increasing so this should not lead
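For context, the guard under discussion can be modeled in plain C: with u64 arithmetic a "negative" delta does not stay negative, it wraps around to a huge value and corrupts the statistics. A minimal userspace sketch of the proposed protection (names hypothetical, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the concern in this thread: the wait-time delta is computed
 * as "now - wait_start" in u64. If wait_start were ever ahead of the
 * clock, the unsigned subtraction would wrap; the proposed protection
 * clamps such a delta to 0 instead. */
static uint64_t safe_wait_delta(uint64_t now, uint64_t wait_start)
{
    int64_t delta = (int64_t)(now - wait_start);

    return delta < 0 ? 0 : (uint64_t)delta;
}
```

Vincent's point is that rq_clock() is monotonic, so `now < wait_start` should be impossible and the clamp would only hide a real bug elsewhere.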

Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()

2021-01-14 Thread Vincent Guittot
On Thu, 14 Jan 2021 at 14:53, Mel Gorman wrote: > > On Thu, Jan 14, 2021 at 02:25:32PM +0100, Vincent Guittot wrote: > > On Thu, 14 Jan 2021 at 10:35, Mel Gorman > > wrote: > > > > > > On Wed, Jan 13, 2021 at 06:03:00PM +0100, Vincent Guittot wrote: > >

Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()

2021-01-14 Thread Vincent Guittot
On Thu, 14 Jan 2021 at 10:35, Mel Gorman wrote: > > On Wed, Jan 13, 2021 at 06:03:00PM +0100, Vincent Guittot wrote: > > > @@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, > > > struct sched_domain *sd, int t > > > fo

[tip: sched/core] sched/fair: Skip idle cfs_rq

2021-01-14 Thread tip-bot2 for Vincent Guittot
The following commit has been merged into the sched/core branch of tip: Commit-ID: fc488ffd4297f661b3e9d7450dcdb9089a53df7c Gitweb: https://git.kernel.org/tip/fc488ffd4297f661b3e9d7450dcdb9089a53df7c Author: Vincent Guittot AuthorDate: Thu, 07 Jan 2021 11:33:23 +01:00

[tip: sched/core] sched/fair: Reduce cases for active balance

2021-01-14 Thread tip-bot2 for Vincent Guittot
The following commit has been merged into the sched/core branch of tip: Commit-ID: e9b9734b74656abb585a7f6fabf1d30ce00e51ea Gitweb: https://git.kernel.org/tip/e9b9734b74656abb585a7f6fabf1d30ce00e51ea Author: Vincent Guittot AuthorDate: Thu, 07 Jan 2021 11:33:25 +01:00

[tip: sched/core] sched/fair: Don't set LBF_ALL_PINNED unnecessarily

2021-01-14 Thread tip-bot2 for Vincent Guittot
The following commit has been merged into the sched/core branch of tip: Commit-ID: 8a41dfcda7a32ed4435c00d98a9dc7156b08b671 Gitweb: https://git.kernel.org/tip/8a41dfcda7a32ed4435c00d98a9dc7156b08b671 Author: Vincent Guittot AuthorDate: Thu, 07 Jan 2021 11:33:24 +01:00

Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()

2021-01-13 Thread Vincent Guittot
On Mon, 11 Jan 2021 at 16:50, Mel Gorman wrote: > > Both select_idle_core() and select_idle_cpu() do a loop over the same > cpumask. Observe that by clearing the already visited CPUs, we can > fold the iteration and iterate a core at a time. > > All we need to do is remember any non-idle CPU we

Re: [PATCH 3/5] sched/fair: Make select_idle_cpu() proportional to cores

2021-01-13 Thread Vincent Guittot
On Mon, 11 Jan 2021 at 16:50, Mel Gorman wrote: > > From: Peter Zijlstra (Intel) > > Instead of calculating how many (logical) CPUs to scan, compute how > many cores to scan. > > This changes behaviour for anything !SMT2. > > Signed-off-by: Peter Zijlstra (Intel) > Signed-off-by: Mel Gorman >

Re: pmwg/integ bisection: baseline.login on rk3328-rock64

2021-01-13 Thread Vincent Guittot
On Wed, 13 Jan 2021 at 15:49, Arnd Bergmann wrote: > > On Tue, Jan 12, 2021 at 2:46 PM Vincent Guittot > wrote: > > On Tue, 12 Jan 2021 at 12:25, Guillaume Tucker > > wrote: > > > > > > Some details can be found here: > > > > > > h

Re: [PATCH] sched: pull tasks when CPU is about to run SCHED_IDLE tasks

2021-01-13 Thread Vincent Guittot
On Wed, 13 Jan 2021 at 04:14, chin wrote: > > > > > At 2021-01-12 16:18:51, "Vincent Guittot" wrote: > >On Tue, 12 Jan 2021 at 07:59, chin wrote: > >> > >> > >> > >> > >> At 2021-01-11 19:04:19, &qu

Re: pmwg/integ bisection: baseline.login on rk3328-rock64

2021-01-12 Thread Vincent Guittot
Hi Guillaume On Tue, 12 Jan 2021 at 12:25, Guillaume Tucker wrote: > > Hi Vincent, > > Please see the bisection report below about a boot failure on > rk3328-rock64 with the pmwg/integ branch. > > Reports aren't automatically sent to the public while we're > trialin

Re: [PATCH v3 1/1] can: dev: add software tx timestamps

2021-01-12 Thread Vincent MAILHOL
On Tue. 12 Jan 2021 at 16:58, Marc Kleine-Budde wrote: > > On 1/12/21 1:00 AM, Vincent MAILHOL wrote: > [...] > > > Marc: do you want me to send a v4 of that patch with above > > comment removed or can you directly do the change in your testing > > branch? > >

Re: [PATCH] sched: pull tasks when CPU is about to run SCHED_IDLE tasks

2021-01-12 Thread Vincent Guittot
On Tue, 12 Jan 2021 at 07:59, chin wrote: > > > > > At 2021-01-11 19:04:19, "Vincent Guittot" wrote: > >On Mon, 11 Jan 2021 at 09:27, chin wrote: > >> > >> > >> At 2020-12-23 19:30:26, "Vincent Guittot" > >>

Re: [PATCH v3 1/1] can: dev: add software tx timestamps

2021-01-11 Thread Vincent MAILHOL
On Tue. 12 Jan 2021 at 11:14, Richard Cochran wrote: > > On Tue, Jan 12, 2021 at 09:00:33AM +0900, Vincent MAILHOL wrote: > > Out of curiosity, which programs do you use? I guess wireshark > > but please let me know if you use any other programs (I just use > > to write

Re: [PATCH v3 1/1] can: dev: add software tx timestamps

2021-01-11 Thread Vincent MAILHOL
On Tue. 12 Jan 2021 at 02:11, Richard Cochran wrote: > > On Sun, Jan 10, 2021 at 09:49:03PM +0900, Vincent Mailhol wrote: > > * The hardware rx timestamp of a local loopback message is the > > hardware tx timestamp. This means that there are no needs to

[PATCH 3/4] cpu/hotplug: Add cpuhp_invoke_callback_range()

2021-01-11 Thread vincent . donnefort
From: Vincent Donnefort Factorizing and unifying cpuhp callback range invocations, especially for the hotunplug path, where two different ways of decrementing were used. The first one decrements before the callback is called: cpuhp_thread_fun() state = st->state; st->

[PATCH 2/4] cpu/hotplug: CPUHP_BRINGUP_CPU exception in fail injection

2021-01-11 Thread vincent . donnefort
From: Vincent Donnefort The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are triggered by the CPUHP_BRINGUP_CPU step. If the latter doesn't run, none of the atomic states can. Hence, rollback is not possible after a hotunplug CPUHP_BRINGUP_CPU step failure and the "fail"

[PATCH 4/4] cpu/hotplug: Fix CPU down rollback

2021-01-11 Thread vincent . donnefort
From: Vincent Donnefort After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish the job. The steps left are as follows: +--------------------+ | CPUHP_TEARDOWN_CPU | -> If it fails, state is CPUHP_TEARDOWN_CPU +--------------------+ | ATOMIC STATES | ->

[PATCH 0/4] cpu/hotplug: rollback and "fail" interface fixes

2021-01-11 Thread vincent . donnefort
From: Vincent Donnefort This patch-set intends mainly to fix HP rollback, which is currently broken, due to an inconsistent "state" usage and an issue with CPUHP_AP_ONLINE_IDLE. It also improves the "fail" interface, which can now be reset and will reject CPUHP_BRINGUP_CP

[PATCH 1/4] cpu/hotplug: Allowing to reset fail injection

2021-01-11 Thread vincent . donnefort
From: Vincent Donnefort Currently, the only way of resetting this file is to actually try to run a hotplug, hotunplug or both. This is quite annoying for testing and, as the default value for this file is -1, it seems quite natural to let a user write it. Signed-off-by: Vincent Donnefort

Re: [RFC][PATCH 1/5] sched/fair: Fix select_idle_cpu()s cost accounting

2021-01-11 Thread Vincent Guittot
On Fri, 8 Jan 2021 at 20:49, Peter Zijlstra wrote: > > On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote: > > Another thing that worries me, is that we use the avg_idle of the > > local cpu, which is obviously not idle otherwise it would have been > > sele

Re: [RFC][PATCH 1/5] sched/fair: Fix select_idle_cpu()s cost accounting

2021-01-11 Thread Vincent Guittot
On Fri, 8 Jan 2021 at 20:45, Peter Zijlstra wrote: > > On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote: > > Also, there is another problem (that I'm investigating) which is that > > this_rq()->avg_idle is stalled when your cpu is busy. Which means that >

Re: [RFC][PATCH 1/5] sched/fair: Fix select_idle_cpu()s cost accounting

2021-01-11 Thread Vincent Guittot
On Fri, 8 Jan 2021 at 17:14, Mel Gorman wrote: > > On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote: > > > > Trying to bias the avg_scan_cost with: loops <<= 2; > > > > will just make avg_scan_cost lost any kind of meaning because it

Re: [PATCH] sched: pull tasks when CPU is about to run SCHED_IDLE tasks

2021-01-11 Thread Vincent Guittot
On Mon, 11 Jan 2021 at 09:27, chin wrote: > > > At 2020-12-23 19:30:26, "Vincent Guittot" wrote: > >On Wed, 23 Dec 2020 at 09:32, wrote: > >> > >> From: Chen Xiaoguang > >> > >> Before a CPU switches from running SCHED_NORMAL task to

[PATCH v3 1/1] can: dev: add software tx timestamps

2021-01-10 Thread Vincent Mailhol
/lkml/2021/1/10/54 Signed-off-by: Vincent Mailhol --- drivers/net/can/dev.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c index 3486704c8a95..850759c7677f 100644 --- a/drivers/net/can/dev.c +++ b/drivers/net/can/dev.c @@ -481,6 +481,7 @@ int

[PATCH v3 0/1] Add software TX timestamps to the CAN devices

2021-01-10 Thread Vincent Mailhol
between the kernel tx software timestamp and the userland tx software timestamp). v2 was a mistake, please ignore it (forgot to do git add, changes were not reflected...) v3 reflects the comments that Jeroen made in https://lkml.org/lkml/2021/1/10/54 Vincent Mailhol (1): can: dev: add software

[PATCH v2 1/1] can: dev: add software tx timestamps

2021-01-10 Thread Vincent Mailhol
/lkml/2021/1/10/54 Signed-off-by: Vincent Mailhol --- drivers/net/can/dev.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c index 3486704c8a95..3904e0874543 100644 --- a/drivers/net/can/dev.c +++ b/drivers/net/can/dev.c @@ -484,6 +484,8 @@ int

[PATCH v2 0/1] Add software TX timestamps to the CAN devices

2021-01-10 Thread Vincent Mailhol
between the kernel tx software timestamp and the userland tx software timestamp). v2 reflects the comments that Jeroen made in https://lkml.org/lkml/2021/1/10/54 Vincent Mailhol (1): can: dev: add software tx timestamps drivers/net/can/dev.c | 2 ++ 1 file changed, 2 insertions(+) -- 2.26.2

Re: [PATCH 1/1] can: dev: add software tx timestamps

2021-01-10 Thread Vincent MAILHOL
Hello Jeroen, On Sun. 10 Jan 2021 at 20:29, Jeroen Hofstee wrote: > > Hello Vincent, > > On 1/10/21 11:35 AM, Vincent Mailhol wrote: > > Call skb_tx_timestamp() within can_put_echo_skb() so that a software > > tx timestamp gets attached on the skb. > > > [..] >

[PATCH 0/1] Add software TX timestamps to the CAN devices

2021-01-10 Thread Vincent Mailhol
between the kernel tx software timestamp and the userland tx software timestamp). Vincent Mailhol (1): can: dev: add software tx timestamps drivers/net/can/dev.c | 2 ++ 1 file changed, 2 insertions(+) -- 2.26.2

[PATCH 1/1] can: dev: add software tx timestamps

2021-01-10 Thread Vincent Mailhol
for the error queue in CAN RAW sockets (which is needed for tx timestamps) was introduced in: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=eb88531bdbfaafb827192d1fc6c5a3fcc4fadd96 Signed-off-by: Vincent Mailhol --- drivers/net/can/dev.c | 2 ++ 1 file changed, 2

Re: [RFC][PATCH 1/5] sched/fair: Fix select_idle_cpu()s cost accounting

2021-01-08 Thread Vincent Guittot
On Fri, 8 Jan 2021 at 15:41, Mel Gorman wrote: > > On Fri, Jan 08, 2021 at 02:41:19PM +0100, Vincent Guittot wrote: > > > 1. avg_scan_cost is now based on the average scan cost of a rq but > > >avg_idle is still scaled to the domain size. This is a bit problemat

Re: [RFC][PATCH 1/5] sched/fair: Fix select_idle_cpu()s cost accounting

2021-01-08 Thread Vincent Guittot
On Fri, 8 Jan 2021 at 11:27, Mel Gorman wrote: > > On Tue, Dec 15, 2020 at 08:59:11AM +0100, Peter Zijlstra wrote: > > On Tue, Dec 15, 2020 at 11:36:35AM +0800, Li, Aubrey wrote: > > > On 2020/12/15 0:48, Peter Zijlstra wrote: > > > > We compute the average cost of the total scan, but then use it

Re: [PATCH 3/3 v2] sched/fair: reduce cases for active balance

2021-01-08 Thread Vincent Guittot
On Thu, 7 Jan 2021 at 18:40, Valentin Schneider wrote: > > On 07/01/21 13:20, Vincent Guittot wrote: > > On Thu, 7 Jan 2021 at 12:26, Valentin Schneider > > wrote: > >> > @@ -9499,13 +9499,32 @@ asym_active_balance(struct lb_env *env) > >>

Re: [PATCH 2/3 v2] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-07 Thread Vincent Guittot
On Thu, 7 Jan 2021 at 16:08, Tao Zhou wrote: > > Hi Vincent, > > On Thu, Jan 07, 2021 at 11:33:24AM +0100, Vincent Guittot wrote: > > Setting LBF_ALL_PINNED during active load balance is only valid when there > > is only 1 running task on the rq otherwise this ends up in

Re: [PATCH 3/3 v2] sched/fair: reduce cases for active balance

2021-01-07 Thread Vincent Guittot
On Thu, 7 Jan 2021 at 12:26, Valentin Schneider wrote: > > On 07/01/21 11:33, Vincent Guittot wrote: > > Active balance is triggered for a number of voluntary cases like misfit > > or pinned tasks cases but also after that a number of load balance > > attempts

[PATCH 3/3 v2] sched/fair: reduce cases for active balance

2021-01-07 Thread Vincent Guittot
)) and the waiting task will end up to be selected after a number of attempts. Signed-off-by: Vincent Guittot --- kernel/sched/fair.c | 45 +++-- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c

[PATCH 1/3 v2] sched/fair: skip idle cfs_rq

2021-01-07 Thread Vincent Guittot
Don't waste time checking whether an idle cfs_rq could be the busiest queue. Furthermore, this can end up selecting a cfs_rq with a high load but being idle in case of migrate_load. Signed-off-by: Vincent Guittot Reviewed-by: Valentin Schneider --- kernel/sched/fair.c | 5 - 1 file changed

[PATCH 0/3 v2] Reduce number of active LB

2021-01-07 Thread Vincent Guittot
: change how LBF_ALL_PINNED is managed as proposed by Valentin - patch 3: updated comment and fix typos Vincent Guittot (3): sched/fair: skip idle cfs_rq sched/fair: don't set LBF_ALL_PINNED unnecessarily sched/fair: reduce cases for active balance kernel/sched/fair.c | 57

[PATCH 2/3 v2] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-07 Thread Vincent Guittot
set it by default. It is then cleared when we find one task that can be pulled when calling detach_tasks() or during active migration. Signed-off-by: Vincent Guittot --- kernel/sched/fair.c | 7 +-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fair.c b/kernel

Re: [PATCH 1/1] sched/fair:Avoid unnecessary assignment to cfs_rq->on_list

2021-01-07 Thread Vincent Guittot
On Thu, 7 Jan 2021 at 02:57, wrote: > > From: jun qian > > Obviously, cfs_rq->on_list is already equal to 1 when cfs_rq->on_list > is assigned a value of 1, so an else branch is needed to avoid unnecessary > assignment operations. > > Signed-off-by: jun qian > --- > kernel/sched/fair.c | 4
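The quoted patch proposes skipping the store when cfs_rq->on_list is already 1. The shape of that change can be modeled in a few lines of userspace C (names hypothetical, not the kernel's cfs_rq):

```c
#include <assert.h>
#include <stdbool.h>

struct cfs_rq_model { int on_list; };

/* Model of the proposed change: only write on_list when it is not
 * already set, and report whether a store actually happened. */
static bool mark_on_list(struct cfs_rq_model *cfs_rq)
{
    if (cfs_rq->on_list)
        return false;   /* already listed: skip the redundant write */
    cfs_rq->on_list = 1;
    return true;
}
```

Whether avoiding a redundant store to an already-hot cacheline is worth the extra branch is exactly the kind of trade-off this thread debates.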

Re: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters

2021-01-06 Thread Vincent Guittot
outside the cluster: > target cpu > 19 -> 17 > 13 -> 15 > 23 -> 20 > 23 -> 20 > 19 -> 17 > 13 -> 15 > 16 -> 17 > 19 -> 17 > 7 -> 5 > 10 -> 11 > 23 -> 20 > *23 -> 4 > ... > > Signed-off-by: Barr

Re: [PATCH 2/3] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-06 Thread Vincent Guittot
On Wed, 6 Jan 2021 at 16:13, Valentin Schneider wrote: > > On 06/01/21 14:34, Vincent Guittot wrote: > > Setting LBF_ALL_PINNED during active load balance is only valid when there > > is only 1 running task on the rq otherwise this ends up increasing the > > balance inte

Re: [PATCH 2/3] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-06 Thread Vincent Guittot
On Wed, 6 Jan 2021 at 16:32, Peter Zijlstra wrote: > > On Wed, Jan 06, 2021 at 04:20:55PM +0100, Vincent Guittot wrote: > > > This case here is : > > we have 2 tasks TA and TB on the rq. > > The waiting one TB can't migrate for a reason other than the pinned c

Re: [PATCH 3/3] sched/fair: reduce cases for active balance

2021-01-06 Thread Vincent Guittot
On Wed, 6 Jan 2021 at 16:13, Peter Zijlstra wrote: > > On Wed, Jan 06, 2021 at 02:34:19PM +0100, Vincent Guittot wrote: > > Active balance is triggered for a number of voluntary case like misfit or > cases > > pinned

Re: [PATCH 2/3] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-06 Thread Vincent Guittot
On Wed, 6 Jan 2021 at 16:10, Peter Zijlstra wrote: > > On Wed, Jan 06, 2021 at 02:34:18PM +0100, Vincent Guittot wrote: > > Setting LBF_ALL_PINNED during active load balance is only valid when there > > is only 1 running task on the rq otherwise this ends up increasing the &g

[PATCH 3/3] sched/fair: reduce cases for active balance

2021-01-06 Thread Vincent Guittot
. The threshold on the upper limit of the task's load will decrease with the number of failed LB until the task has migrated. Signed-off-by: Vincent Guittot --- kernel/sched/fair.c | 43 +-- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/kernel

[PATCH 2/3] sched/fair: don't set LBF_ALL_PINNED unnecessarily

2021-01-06 Thread Vincent Guittot
Setting LBF_ALL_PINNED during active load balance is only valid when there is only 1 running task on the rq otherwise this ends up increasing the balance interval whereas other tasks could migrate after the next interval once they become cache-cold as an example. Signed-off-by: Vincent Guittot
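The flag logic described here (assume everything is pinned, then clear the flag as soon as one migratable task is found) can be sketched as a small userspace model (the flag value and function are hypothetical mirrors of the kernel's load-balance code):

```c
#include <assert.h>
#include <stdbool.h>

#define LBF_ALL_PINNED 0x01  /* hypothetical mirror of the kernel flag */

/* Model: start pessimistic (all pinned) and clear the flag the moment
 * any task on the rq is allowed to migrate. Only if the flag survives
 * the whole scan does the balance interval get doubled. */
static unsigned int scan_tasks(const bool *can_migrate, int nr)
{
    unsigned int flags = LBF_ALL_PINNED;

    for (int i = 0; i < nr; i++) {
        if (can_migrate[i]) {
            flags &= ~LBF_ALL_PINNED;
            break;
        }
    }
    return flags;
}
```

The bug the patch fixes is setting the flag in a path (active balance) where the scan above never runs, so it could never be cleared.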

[PATCH 1/3] sched/fair: skip idle cfs_rq

2021-01-06 Thread Vincent Guittot
Don't waste time checking whether an idle cfs_rq could be the busiest queue. Furthermore, this can end up selecting a cfs_rq with a high load but being idle in case of migrate_load. Signed-off-by: Vincent Guittot --- kernel/sched/fair.c | 5 - 1 file changed, 4 insertions(+), 1 deletion

[PATCH 0/3] Reduce number of active LB

2021-01-06 Thread Vincent Guittot
Few improvements related to active LB and the increase of LB interval. I haven't seen any performance impact on various benchmarks except for stress-ng mmapfork: +4.54% on my octo-core arm64 But this was somewhat expected as the changes impact mainly corner cases. Vincent Guittot (3): sched

Re: Is there a reason not to use -@ to compile devicetrees ?

2021-01-04 Thread Vincent Pelletier
Ping? On Mon, 21 Dec 2020 14:47:07 +, Vincent Pelletier wrote: > Distro: https://raspi.debian.net/ (sid) > Hardware: Raspberry Pi Zero W > Kernel version: 5.9.11 (linux-image-5.9.0-4-rpi) > > To access a device connected to my pi, I need the spi0 bus, and would > like to

Re: [PATCH v2 5/5] interconnect: qcom: Add MSM8939 interconnect provider driver

2021-01-02 Thread Vincent Knecht
On Friday 4 December 2020 at 15:53 +0800, Jun Nie wrote: > Add driver for the Qualcomm interconnect buses found in MSM8939 based > platforms. The topology consists of four NoCs that are controlled by > a remote processor that collects the aggregated bandwidth for each > master-slave pairs. >

Re: [RFC][PATCH 2/5] sched/fair: Make select_idle_cpu() proportional to cores

2020-12-23 Thread Vincent Guittot
On Mon, 14 Dec 2020 at 18:07, Peter Zijlstra wrote: > > Instead of calculating how many (logical) CPUs to scan, compute how > many cores to scan. > > This changes behaviour for anything !SMT2. > > Signed-off-by: Peter Zijlstra (Intel) > --- > kernel/sched/core.c | 19 ++- >

Re: [RFC][PATCH 0/5] select_idle_sibling() wreckage

2020-12-23 Thread Vincent Guittot
On Wed, 16 Dec 2020 at 19:07, Vincent Guittot wrote: > > On Wed, 16 Dec 2020 at 14:00, Li, Aubrey wrote: > > > > Hi Peter, > > > > On 2020/12/15 0:48, Peter Zijlstra wrote: > > > Hai, here them patches Mel asked for. They've not (yet) been through the >

Re: [PATCH] sched: pull tasks when CPU is about to run SCHED_IDLE tasks

2020-12-23 Thread Vincent Guittot
On Wed, 23 Dec 2020 at 09:32, wrote: > > From: Chen Xiaoguang > > Before a CPU switches from running SCHED_NORMAL task to > SCHED_IDLE task, trying to pull SCHED_NORMAL tasks from other Could you explain more in detail why you only care about this use case in particular and not the general

Is there a reason not to use -@ to compile devicetrees ?

2020-12-21 Thread Vincent Pelletier
and use spi0 with no further change. So now I wonder why this option is not enabled while there are these sections which seem not to be usable without an overlay? And further, why does it not seem possible to enable it with a kernel config option? I must be missing something obvious, but I'm still failing to see it. Regards, -- Vincent Pelletier
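For readers unfamiliar with the flag under discussion: dtc's -@ option emits a __symbols__ node in the output blob, which is what lets an overlay resolve labels defined in the base tree. A typical invocation might look like this (file names hypothetical; a sketch, not the Debian build recipe):

```shell
# Build the base tree with symbols retained (-@) so overlays can
# reference its labels, then build the overlay itself.
dtc -@ -I dts -O dtb -o base.dtb    base.dts
dtc -@ -I dts -O dtb -o spi0-on.dtbo spi0-on.dts
```

The cost Vincent alludes to is that -@ grows the dtb (every label becomes a retained symbol), which is presumably why it is not the default.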

Re: [PATCH v3] sched/fair: Avoid stale CPU util_est value for schedutil in task dequeue

2020-12-18 Thread Vincent Guittot
1). > maybe add a Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT") > Signed-off-by: Xuewen Yan > Reviewed-by: Dietmar Eggemann Reviewed-by: Vincent Guittot > --- > Changes since v2: > -modify the comment > -move util_est_dequeue above within_margin

Re: [RFC][PATCH 0/5] select_idle_sibling() wreckage

2020-12-16 Thread Vincent Guittot
On Wed, 16 Dec 2020 at 14:00, Li, Aubrey wrote: > > Hi Peter, > > On 2020/12/15 0:48, Peter Zijlstra wrote: > > Hai, here them patches Mel asked for. They've not (yet) been through the > > robots, so there might be some build fail for configs I've not used. > > > > Benchmark time :-) > > > > Here

Re: [PATCH] fair/util_est: Separate util_est_dequeue() for cfs_rq_util_change

2020-12-15 Thread Vincent Guittot
On Mon, 14 Dec 2020 at 19:46, Dietmar Eggemann wrote: > > On 11/12/2020 13:03, Ryan Y wrote: > > Hi Dietmar, > > > > Yes! That's exactly what I meant. > > > >> The issue is that sugov_update_[shared\|single] -> sugov_get_util() -> > >> cpu_util_cfs() operates on an old

[PATCH v2] net: korina: fix return value

2020-12-14 Thread Vincent Stehlé
-by: Jakub Kicinski Signed-off-by: Vincent Stehlé Cc: David S. Miller Cc: Florian Fainelli --- Changes since v1: - Keep freeing the packet but return NETDEV_TX_OK, as suggested by Jakub drivers/net/ethernet/korina.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/d

Re: [PATCH] net: korina: remove busy skb free

2020-12-14 Thread Vincent Stehlé
change the return value to NETDEV_TX_OK instead. Hi Jakub, Thanks for the review. Ok, if this is the preferred fix I will respin the patch this way. Best regards, Vincent.

Re: [PATCH] net: korina: remove busy skb free

2020-12-14 Thread Vincent Stehlé
On Mon, Dec 14, 2020 at 11:03:12AM +0100, Julian Wiedmann wrote: > On 13.12.20 18:20, Vincent Stehlé wrote: ... > > @@ -216,7 +216,6 @@ static int korina_send_packet(struct sk_buff *skb, > > struct net_device *dev) > > netif_stop_queue(dev);

Re: [RFC PATCH v7] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-14 Thread Vincent Guittot
On Fri, 11 Dec 2020 at 18:45, Peter Zijlstra wrote: > > On Thu, Dec 10, 2020 at 12:58:33PM +, Mel Gorman wrote: > > The prequisite patch to make that approach work was rejected though > > as on its own, it's not very helpful and Vincent didn't like that the > > load

Re: [RFC PATCH v7] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-14 Thread Vincent Guittot
ance of still being > > idle vs one we checked earlier/longer-ago. > > > > I suppose we benchmark both and see which is liked best. > > > > I originally did something like that on purpose too but Vincent called > it out so it is worth mentioning now to avoid surprise

[PATCH] powerpc/ps3: use dma_mapping_error()

2020-12-13 Thread Vincent Stehlé
The DMA address returned by dma_map_single() should be checked with dma_mapping_error(). Fix the ps3stor_setup() function accordingly. Fixes: 80071802cb9c ("[POWERPC] PS3: Storage Driver Core") Signed-off-by: Vincent Stehlé Cc: Geoff Levand Cc: Geert Uytterhoeven --- drivers/ps3/ps3
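The pattern this fix applies — never use a DMA address without passing it through dma_mapping_error() first — can be modeled in userspace with stubbed types (dma_addr_t, DMA_MAPPING_ERROR, and both functions below are hypothetical stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;
#define DMA_MAPPING_ERROR ((dma_addr_t)~0ULL)

/* Stub of the check: the kernel's dma_mapping_error() is the only
 * valid way to test the address returned by dma_map_single(). */
static int model_dma_mapping_error(dma_addr_t addr)
{
    return addr == DMA_MAPPING_ERROR;
}

/* Model of the fixed setup path: bail out before the bad address is
 * ever handed to the device. */
static int model_setup(dma_addr_t mapped)
{
    if (model_dma_mapping_error(mapped))
        return -1;
    return 0;
}
```

The point of the real helper is that the error encoding is platform-specific, so callers must not compare against 0 or ~0 themselves.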

[PATCH] net: korina: remove busy skb free

2020-12-13 Thread Vincent Stehlé
The ndo_start_xmit() method must not attempt to free the skb to transmit when returning NETDEV_TX_BUSY. Fix the korina_send_packet() function accordingly. Fixes: ef11291bcd5f ("Add support the Korina (IDT RC32434) Ethernet MAC") Signed-off-by: Vincent Stehlé Cc: David S. Miller
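The ownership rule being fixed here is worth spelling out: on NETDEV_TX_BUSY the core requeues the skb and will resubmit it, so a driver that frees it causes a use-after-free. A userspace model of the contract (all names hypothetical, not the netdev API):

```c
#include <assert.h>
#include <stddef.h>

#define NETDEV_TX_OK   0
#define NETDEV_TX_BUSY 1  /* hypothetical mirrors of the return codes */

struct skb { int len; };

/* Model of the ndo_start_xmit() contract: on BUSY the skb is left
 * untouched and the caller keeps ownership (it will retry later);
 * only on OK does the driver take the skb. */
static int model_xmit(struct skb **tx_slot, struct skb *skb, int ring_full)
{
    if (ring_full)
        return NETDEV_TX_BUSY;  /* do NOT free or consume skb here */
    *tx_slot = skb;             /* queued: driver now owns the skb */
    return NETDEV_TX_OK;
}
```

Jakub's later suggestion (see the v2 thread above) was the other legal option: consume the skb, drop it, and return NETDEV_TX_OK.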

Re: [PATCH 0/4] Reduce scanning of runqueues in select_idle_sibling

2020-12-12 Thread Vincent Guittot
On Fri, 11 Dec 2020 at 11:23, Mel Gorman wrote: > > On Fri, Dec 11, 2020 at 10:51:17AM +0100, Vincent Guittot wrote: > > On Thu, 10 Dec 2020 at 12:04, Mel Gorman > > wrote: > > > > > > On Thu, Dec 10, 2020 at 10:38:37AM +0100, Vincent Guittot wrote: &

Re: [RFC PATCH v8] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-11 Thread Vincent Guittot
On Fri, 11 Dec 2020 at 16:19, Li, Aubrey wrote: > > On 2020/12/11 23:07, Vincent Guittot wrote: > > On Thu, 10 Dec 2020 at 02:44, Aubrey Li wrote: > >> > >> Add idle cpumask to track idle cpus in sched domain. Every time > >> a CPU enters idle, the CP

Re: [RFC PATCH v8] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-11 Thread Vincent Guittot
iting path > - set SCHED_IDLE cpu in idle cpumask to allow it as a wakeup target > > v1->v2: > - idle cpumask is updated in the nohz routines, by initializing idle > cpumask with sched_domain_span(sd), nohz=off case remains the original > behavior > > Cc: Peter Zijlstra >

Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during late hotplug

2020-12-11 Thread Vincent Donnefort
On Fri, Dec 11, 2020 at 01:13:35PM +, Valentin Schneider wrote: > On 11/12/20 12:51, Valentin Schneider wrote: > >> In that case maybe we should check for the cpu_active_mask here too ? > > > > Looking at it again, I think we might need to. > > > > IIUC you can end up with pools bound to a

Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during late hotplug

2020-12-11 Thread Vincent Donnefort
come > up. > + */ Does this comment still stand? IIUC, we should always be in the POOL_DISASSOCIATED case if the CPU from cpumask is offline. Unless a pool->attrs->cpumask can have several CPUs. In that case maybe we should check for the cpu_active_mask here too? -- Vincent > + set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); > + } > > list_add_tail(&worker->node, &pool->workers); > worker->pool = pool; > -- > 2.27.0 >

Re: [PATCH 0/4] Reduce scanning of runqueues in select_idle_sibling

2020-12-11 Thread Vincent Guittot
On Thu, 10 Dec 2020 at 12:04, Mel Gorman wrote: > > On Thu, Dec 10, 2020 at 10:38:37AM +0100, Vincent Guittot wrote: > > > while testing your patchset and Aubrey one on top of tip, I'm facing > > > some perf regression on my arm64 numa system on hackbench and reaim. &g

Re: [PATCH 0/4] Reduce scanning of runqueues in select_idle_sibling

2020-12-10 Thread Vincent Guittot
On Thu, 10 Dec 2020 at 09:00, Vincent Guittot wrote: > > On Wed, 9 Dec 2020 at 15:37, Mel Gorman wrote: > > > > On Tue, Dec 08, 2020 at 03:34:57PM +, Mel Gorman wrote: > > > Changelog since v1 > > > o Drop single-pass patch

Re: [PATCH 3/4] sched/fair: Do not replace recent_used_cpu with the new target

2020-12-10 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 17:14, Vincent Guittot wrote: > > On Tue, 8 Dec 2020 at 16:35, Mel Gorman wrote: > > > > After select_idle_sibling, p->recent_used_cpu is set to the > > new target. However on the next wakeup, prev will be the same as > > recent_used_cpu u

Re: [PATCH 0/4] Reduce scanning of runqueues in select_idle_sibling

2020-12-10 Thread Vincent Guittot
On Wed, 9 Dec 2020 at 15:37, Mel Gorman wrote: > > On Tue, Dec 08, 2020 at 03:34:57PM +, Mel Gorman wrote: > > Changelog since v1 > > o Drop single-pass patch > > (vincent) > > o Scope

Re: [RFC PATCH v7] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-09 Thread Vincent Guittot
On Wed, 9 Dec 2020 at 11:58, Li, Aubrey wrote: > > On 2020/12/9 16:15, Vincent Guittot wrote: > > On Wednesday 9 Dec 2020 at 14:24:04 (+0800), Aubrey Li wrote: > >> Add idle cpumask to track idle cpus in sched domain. Every time > >> a CPU enters idle,

Re: [RFC PATCH v7] sched/fair: select idle cpu from idle cpumask for task wakeup

2020-12-09 Thread Vincent Guittot
updated in the nohz routines, by initializing idle > cpumask with sched_domain_span(sd), nohz=off case remains the original > behavior. > > Cc: Peter Zijlstra > Cc: Mel Gorman > Cc: Vincent Guittot > Cc: Qais Yousef > Cc: Valentin Schneider > Cc: Jiang Biao

Re: [PATCH 4/4] sched/fair: Return an idle cpu if one is found after a failed search for an idle core

2020-12-08 Thread Vincent Guittot
.92%* > > Note that there is a significant corner case. As the SMT scan may be > terminated early, not all CPUs have been visited and select_idle_cpu() > is still called for a full scan. This case is handled in the next > patch. > > Signed-off-by: Mel Gorman Reviewed-by: V

Re: [PATCH 3/4] sched/fair: Do not replace recent_used_cpu with the new target

2020-12-08 Thread Vincent Guittot
ts cover low utilisation to over saturation. > > If graphed over time, the graphs show that the sched domain is only > scanned at negligible rates until the machine is fully busy. With > low utilisation, the "Fast Success Rate" is almost 100% until the > machine is fully busy

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-08 Thread Vincent Guittot
three years. As the intent of SIS_PROP is to reduce > the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus > on SIS_PROP as a throttling mechanism. > > Signed-off-by: Mel Gorman Reviewed-by: Vincent Guittot > --- > kernel/sched/fair.c | 20 +--

Re: [PATCH 2/4] sched/fair: Move avg_scan_cost calculations under SIS_PROP

2020-12-08 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 16:35, Mel Gorman wrote: > > As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP > even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP > check and while we are at it, exclude the cost of initialising the CPU > mask f

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-08 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 16:12, Mel Gorman wrote: > > On Tue, Dec 08, 2020 at 03:47:40PM +0100, Vincent Guittot wrote: > > > I considered it but made the choice to exclude the cost of cpumask_and() > > > from the avg_scan_cost instead. It's minor but when doing the orig

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-08 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 14:54, Mel Gorman wrote: > > On Tue, Dec 08, 2020 at 02:43:10PM +0100, Vincent Guittot wrote: > > On Tue, 8 Dec 2020 at 14:36, Mel Gorman wrote: > > > > > > On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote: > > > >

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-08 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 14:36, Mel Gorman wrote: > > On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote: > > > > Nitpick: > > > > > > > > Since now avg_cost and avg_idle are only used w/ SIS_PROP, they could go > > > > completely in

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-08 Thread Vincent Guittot
On Tue, 8 Dec 2020 at 11:59, Mel Gorman wrote: > > On Tue, Dec 08, 2020 at 11:07:19AM +0100, Dietmar Eggemann wrote: > > On 07/12/2020 10:15, Mel Gorman wrote: > > > SIS_AVG_CPU was introduced as a means of avoiding a search when the > > > average search cost indicated that the search would

Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters

2020-12-07 Thread Vincent Guittot
On Mon, 7 Dec 2020 at 10:59, Song Bao Hua (Barry Song) wrote: > > > > > -Original Message- > > From: Vincent Guittot [mailto:vincent.guit...@linaro.org] > > Sent: Thursday, December 3, 2020 10:39 PM > > To: Song Bao Hua (Barry Song) > > C

Re: [PATCH 3/4] sched/fair: Return an idle cpu if one is found after a failed search for an idle core

2020-12-07 Thread Vincent Guittot
.92%* > > Note that there is a significant corner case. As the SMT scan may be > terminated early, not all CPUs have been visited and select_idle_cpu() > is still called for a full scan. This case is handled in the next > patch. > > Signed-off-by: Mel Gorman Reviewed-by: V

Re: [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU

2020-12-07 Thread Vincent Guittot
On Mon, 7 Dec 2020 at 10:15, Mel Gorman wrote: > > SIS_AVG_CPU was introduced as a means of avoiding a search when the > average search cost indicated that the search would likely fail. It > was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make > select_idle_cpu() more
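A minimal user-space sketch of the SIS_AVG_CPU idea the patch removes: skip the idle-CPU scan when the CPU's average idle time is shorter than the average cost of a previous scan, on the theory that the search would likely fail or outlast the idle window. The struct layout and function names below are illustrative, not the kernel's actual code.

```c
#include <stdbool.h>

struct sd_stats {
	unsigned long long avg_idle;      /* ns the CPU stays idle on average */
	unsigned long long avg_scan_cost; /* ns a previous scan cost on average */
};

/* A scan is only worthwhile if the expected idle time can
 * amortize the cost of performing the scan itself. */
static bool should_scan_for_idle_cpu(const struct sd_stats *sd)
{
	return sd->avg_idle >= sd->avg_scan_cost;
}
```

The patch's argument is that this blunt on/off gate was superseded by SIS_PROP, which scales the scan depth proportionally instead of skipping it outright.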

Re: [RFC PATCH 0/4] Reduce worst-case scanning of runqueues in select_idle_sibling

2020-12-07 Thread Vincent Guittot
On Mon, 7 Dec 2020 at 10:15, Mel Gorman wrote: > > This is a minimal series to reduce the amount of runqueue scanning in > select_idle_sibling in the worst case. > > Patch 1 removes SIS_AVG_CPU because it's unused. > > Patch 2 improves the hit rate of p->recent_used_cpu to reduce the amount >
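Patch 2's p->recent_used_cpu fast path can be sketched as follows: before paying for a full scan, try the CPU the task used recently, if it is idle and shares a cache with the target. This is a hedged user-space sketch; the predicates are stand-ins for the kernel's checks, and the LLC grouping is invented for illustration.

```c
#include <stdbool.h>

/* Stand-in for cpus_share_cache(): pretend CPUs 0-3, 4-7, ... share an LLC. */
static bool cpus_share_cache_stub(int a, int b)
{
	return a / 4 == b / 4;
}

/* Return the recently used CPU if it is a cheap hit, else -1 so the
 * caller falls through to the full scan. */
static int try_recent_used_cpu(int target, int prev, int recent, bool recent_is_idle)
{
	if (recent != prev && recent != target &&
	    cpus_share_cache_stub(recent, target) && recent_is_idle)
		return recent;	/* cheap hit: no scan needed */
	return -1;		/* fall through to the full scan */
}
```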

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Fri, 4 Dec 2020 at 16:40, Mel Gorman wrote: > > On Fri, Dec 04, 2020 at 04:23:48PM +0100, Vincent Guittot wrote: > > On Fri, 4 Dec 2020 at 15:31, Mel Gorman wrote: > > > > > > On Fri, Dec 04, 2020 at 02:47:48PM +0100, Vincent Guittot wrote: > > > > &g

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Fri, 4 Dec 2020 at 15:31, Mel Gorman wrote: > > On Fri, Dec 04, 2020 at 02:47:48PM +0100, Vincent Guittot wrote: > > > IIUC, select_idle_core and select_idle_cpu share the same > > > cpumask(select_idle_mask)? > > > If the target's sibling is r

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Fri, 4 Dec 2020 at 14:40, Li, Aubrey wrote: > > On 2020/12/4 21:17, Vincent Guittot wrote: > > On Fri, 4 Dec 2020 at 14:13, Vincent Guittot > > wrote: > >> > >> On Fri, 4 Dec 2020 at 12:30, Mel Gorman > >> wrote: > >>> > >>

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Fri, 4 Dec 2020 at 14:13, Vincent Guittot wrote: > > On Fri, 4 Dec 2020 at 12:30, Mel Gorman wrote: > > > > On Fri, Dec 04, 2020 at 11:56:36AM +0100, Vincent Guittot wrote: > > > > The intent was that the sibling might still be an idle candidate. In > > &g

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Fri, 4 Dec 2020 at 12:30, Mel Gorman wrote: > > On Fri, Dec 04, 2020 at 11:56:36AM +0100, Vincent Guittot wrote: > > > The intent was that the sibling might still be an idle candidate. In > > > the current draft of the series, I do not even clear this so th

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-04 Thread Vincent Guittot
On Thu, 3 Dec 2020 at 18:52, Mel Gorman wrote: > > On Thu, Dec 03, 2020 at 05:38:03PM +0100, Vincent Guittot wrote: > > On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote: > > > > > > The target CPU is definitely not idle in both select_idle_core and > > >

Re: [PATCH 06/10] sched/fair: Clear the target CPU from the cpumask of CPUs searched

2020-12-03 Thread Vincent Guittot
On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote: > > The target CPU is definitely not idle in both select_idle_core and > select_idle_cpu. For select_idle_core(), the SMT is potentially > checked unnecessarily as the core is definitely not idle if the > target is busy. For select_idle_cpu(), the
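The optimization described above can be sketched in user-space C, representing a cpumask as a 64-bit word: build the candidate set from the domain span restricted to the task's affinity, then clear the target CPU, which the caller has already checked and found busy. Names and the mask representation are illustrative only.

```c
typedef unsigned long long cpumask_t;

/* Candidate CPUs to scan for idleness: the sched-domain span,
 * restricted to the task's affinity, minus the known-busy target. */
static cpumask_t candidates(cpumask_t domain_span, cpumask_t affinity, int target)
{
	cpumask_t cpus = domain_span & affinity;

	cpus &= ~(1ULL << target);	/* target is known busy: don't rescan it */
	return cpus;
}
```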

Re: [PATCH 04/10] sched/fair: Return an idle cpu if one is found after a failed search for an idle core

2020-12-03 Thread Vincent Guittot
On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote: > > select_idle_core is called when SMT is active and there is likely a free > core available. It may find idle CPUs but this information is simply > discarded and the scan starts over again with select_idle_cpu. > > This patch caches information on
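A hedged sketch of the caching idea in this patch: while scanning each core's SMT siblings for a fully idle core, remember any idle CPU seen, so that a failed core search can still return an idle CPU instead of discarding that information and rescanning. The topology, array layout, and names below are invented for illustration.

```c
#define NCPUS	8
#define SMT	2	/* siblings per core */

static int idle[NCPUS];	/* 1 if CPU is idle (stand-in for available_idle_cpu()) */

static int select_idle_core_sketch(void)
{
	int idle_candidate = -1;

	for (int core = 0; core < NCPUS; core += SMT) {
		int whole_core_idle = 1;

		for (int cpu = core; cpu < core + SMT; cpu++) {
			if (idle[cpu])
				idle_candidate = cpu;	/* cache for the fallback */
			else
				whole_core_idle = 0;
		}
		if (whole_core_idle)
			return core;	/* fully idle core: best case */
	}
	/* No idle core, but maybe an idle SMT sibling was seen. */
	return idle_candidate;
}
```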

Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters

2020-12-03 Thread Vincent Guittot
On Thu, 3 Dec 2020 at 10:39, Vincent Guittot wrote: > > On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song) > wrote: > > > > > > > > > -----Original Message----- > > > Sent:

Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters

2020-12-03 Thread Vincent Guittot
On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song) wrote: > > > > > -----Original Message----- > > From: Vincent Guittot [mailto:vincent.guit...@linaro.org] > > Sent: Thursday, December 3, 2020 10:04 PM > > To: Song Bao Hua (Barry Song) > > C

Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters

2020-12-03 Thread Vincent Guittot
On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song) wrote: > > > > > Sorry. Please ignore this. I added some printk here while testing > > one numa. Will update you the data in another email. > > Re-tested in one NUMA node(cpu0-cpu23): > > g=1 > Running in threaded mode with 1 groups using 40
