On Sun, 17 Jan 2021 at 13:31, Jiang Biao wrote:
>
> From: Jiang Biao
>
> delta in update_stats_wait_end() might be negative, which would
> make following statistics go wrong.
Could you describe the use case that generates a negative delta?
rq_clock is always increasing so this should not lead
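The patch under discussion guards against a negative wait delta. A minimal user-space sketch of that clamp (names mirror the kernel's update_stats_wait_end(), but the types and helper here are simplified stand-ins, not the real scheduler code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;
typedef int64_t s64;

/* Clamp a clock delta to zero if the subtraction came out negative,
 * mirroring the guard proposed for update_stats_wait_end(). */
static u64 wait_delta(u64 rq_clock, u64 wait_start)
{
    s64 delta = (s64)(rq_clock - wait_start);
    return delta < 0 ? 0 : (u64)delta;
}
```

Vincent's question stands regardless: since rq_clock is monotonic, the clamp only matters if wait_start can somehow be ahead of the clock being read.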
On Thu, 14 Jan 2021 at 14:53, Mel Gorman wrote:
>
> On Thu, Jan 14, 2021 at 02:25:32PM +0100, Vincent Guittot wrote:
> > On Thu, 14 Jan 2021 at 10:35, Mel Gorman
> > wrote:
> > >
> > > On Wed, Jan 13, 2021 at 06:03:00PM +0100, Vincent Guittot wrote:
> >
On Thu, 14 Jan 2021 at 10:35, Mel Gorman wrote:
>
> On Wed, Jan 13, 2021 at 06:03:00PM +0100, Vincent Guittot wrote:
> > > @@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > > fo
The following commit has been merged into the sched/core branch of tip:
Commit-ID: fc488ffd4297f661b3e9d7450dcdb9089a53df7c
Gitweb:
https://git.kernel.org/tip/fc488ffd4297f661b3e9d7450dcdb9089a53df7c
Author: Vincent Guittot
AuthorDate: Thu, 07 Jan 2021 11:33:23 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: e9b9734b74656abb585a7f6fabf1d30ce00e51ea
Gitweb:
https://git.kernel.org/tip/e9b9734b74656abb585a7f6fabf1d30ce00e51ea
Author: Vincent Guittot
AuthorDate: Thu, 07 Jan 2021 11:33:25 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 8a41dfcda7a32ed4435c00d98a9dc7156b08b671
Gitweb:
https://git.kernel.org/tip/8a41dfcda7a32ed4435c00d98a9dc7156b08b671
Author: Vincent Guittot
AuthorDate: Thu, 07 Jan 2021 11:33:24 +01:00
On Mon, 11 Jan 2021 at 16:50, Mel Gorman wrote:
>
> Both select_idle_core() and select_idle_cpu() do a loop over the same
> cpumask. Observe that by clearing the already visited CPUs, we can
> fold the iteration and iterate a core at a time.
>
> All we need to do is remember any non-idle CPU we
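The folded iteration described above can be sketched in miniature with a plain bitmask: walk the candidate mask one core at a time, clearing every sibling of the core just visited so no CPU is scanned twice. This is a toy model, not the kernel code; it assumes SMT2 with core i owning CPUs {2i, 2i+1}:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the folded select_idle_core()/select_idle_cpu() scan:
 * visit one core per iteration, clearing both siblings from the
 * candidate mask whether or not the core turned out to be idle. */
static int find_idle_cpu(uint64_t cpus, uint64_t idle)
{
    while (cpus) {
        int cpu = __builtin_ctzll(cpus);    /* first remaining CPU */
        uint64_t core = 3ULL << (cpu & ~1); /* both SMT2 siblings  */

        if ((idle & core) == core)          /* whole core is idle  */
            return cpu & ~1;
        cpus &= ~core;                      /* fold: skip siblings */
    }
    return -1;
}
```

The real patch additionally remembers individual idle CPUs seen along the way as a fallback, which this sketch omits.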
On Mon, 11 Jan 2021 at 16:50, Mel Gorman wrote:
>
> From: Peter Zijlstra (Intel)
>
> Instead of calculating how many (logical) CPUs to scan, compute how
> many cores to scan.
>
> This changes behaviour for anything !SMT2.
>
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Mel Gorman
>
On Wed, 13 Jan 2021 at 15:49, Arnd Bergmann wrote:
>
> On Tue, Jan 12, 2021 at 2:46 PM Vincent Guittot
> wrote:
> > On Tue, 12 Jan 2021 at 12:25, Guillaume Tucker
> > wrote:
> > >
> > > Some details can be found here:
> > >
> > > h
On Wed, 13 Jan 2021 at 04:14, chin wrote:
>
>
>
>
> At 2021-01-12 16:18:51, "Vincent Guittot" wrote:
> >On Tue, 12 Jan 2021 at 07:59, chin wrote:
> >>
> >>
> >>
> >>
> >> At 2021-01-11 19:04:19, "
Hi Guillaume
On Tue, 12 Jan 2021 at 12:25, Guillaume Tucker
wrote:
>
> Hi Vincent,
>
> Please see the bisection report below about a boot failure on
> rk3328-rock64 with the pwmg/integ branch.
>
> Reports aren't automatically sent to the public while we're
> trialin
On Tue, 12 Jan 2021 at 16:58, Marc Kleine-Budde wrote:
>
> On 1/12/21 1:00 AM, Vincent MAILHOL wrote:
> [...]
>
> > Mark: do you want me to send a v4 of that patch with above
> > comment removed or can you directly do the change in your testing
> > branch?
>
>
On Tue, 12 Jan 2021 at 07:59, chin wrote:
>
>
>
>
> At 2021-01-11 19:04:19, "Vincent Guittot" wrote:
> >On Mon, 11 Jan 2021 at 09:27, chin wrote:
> >>
> >>
> >> At 2020-12-23 19:30:26, "Vincent Guittot"
> >>
On Tue, 12 Jan 2021 at 11:14, Richard Cochran wrote:
>
> On Tue, Jan 12, 2021 at 09:00:33AM +0900, Vincent MAILHOL wrote:
> > Out of curiosity, which programs do you use? I guess wireshark
> > but please let me know if you use any other programs (I just use
> > to write
On Tue, 12 Jan 2021 at 02:11, Richard Cochran wrote:
>
> On Sun, Jan 10, 2021 at 09:49:03PM +0900, Vincent Mailhol wrote:
> > * The hardware rx timestamp of a local loopback message is the
> > hardware tx timestamp. This means that there are no needs to
From: Vincent Donnefort
Factorizing and unifying cpuhp callback range invocations, especially for
the hotunplug path, where two different ways of decrementing were used. The
first one, decrements before the callback is called:
cpuhp_thread_fun()
state = st->state;
st->
From: Vincent Donnefort
The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
triggered by the CPUHP_BRINGUP_CPU step. If the latter doesn't run, none
of the atomic can. Hence, rollback is not possible after a hotunplug
CPUHP_BRINGUP_CPU step failure and the "fail"
From: Vincent Donnefort
After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish
the job. The steps left are as follows:
+--------------------+
| CPUHP_TEARDOWN_CPU |  -> If it fails, state is CPUHP_TEARDOWN_CPU
+--------------------+
| ATOMIC STATES      |  ->
From: Vincent Donnefort
This patch-set intends mainly to fix HP rollback, which is currently broken,
due to an inconsistent "state" usage and an issue with CPUHP_AP_ONLINE_IDLE.
It also improves the "fail" interface, which can now be reset and will reject
CPUHP_BRINGUP_CP
From: Vincent Donnefort
Currently, the only way of resetting this file is to actually try to run
a hotplug, hotunplug or both. This is quite annoying for testing and, as
the default value for this file is -1, it seems quite natural to let a
user write it.
Signed-off-by: Vincent Donnefort
On Fri, 8 Jan 2021 at 20:49, Peter Zijlstra wrote:
>
> On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote:
> > Another thing that worries me, is that we use the avg_idle of the
> > local cpu, which is obviously not idle otherwise it would have been
> > sele
On Fri, 8 Jan 2021 at 20:45, Peter Zijlstra wrote:
>
> On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote:
> > Also, there is another problem (that I'm investigating) which is that
> > this_rq()->avg_idle is stalled when your cpu is busy. Which means that
>
On Fri, 8 Jan 2021 at 17:14, Mel Gorman wrote:
>
> On Fri, Jan 08, 2021 at 04:10:51PM +0100, Vincent Guittot wrote:
> > > > Trying to bias the avg_scan_cost with: loops <<= 2;
> > > > will just make avg_scan_cost lost any kind of meaning because it
On Mon, 11 Jan 2021 at 09:27, chin wrote:
>
>
> At 2020-12-23 19:30:26, "Vincent Guittot" wrote:
> >On Wed, 23 Dec 2020 at 09:32, wrote:
> >>
> >> From: Chen Xiaoguang
> >>
> >> Before a CPU switches from running SCHED_NORMAL task to
/lkml/2021/1/10/54
Signed-off-by: Vincent Mailhol
---
drivers/net/can/dev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
index 3486704c8a95..850759c7677f 100644
--- a/drivers/net/can/dev.c
+++ b/drivers/net/can/dev.c
@@ -481,6 +481,7 @@ int
between the kernel tx
software timestamp and the userland tx software timestamp).
v2 was a mistake, please ignore it (forgot to do git add, changes were
not reflected...)
v3 reflects the comments that Jeroen made in
https://lkml.org/lkml/2021/1/10/54
Vincent Mailhol (1):
can: dev: add software
/lkml/2021/1/10/54
Signed-off-by: Vincent Mailhol
---
drivers/net/can/dev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
index 3486704c8a95..3904e0874543 100644
--- a/drivers/net/can/dev.c
+++ b/drivers/net/can/dev.c
@@ -484,6 +484,8 @@ int
between the kernel tx
software timestamp and the userland tx software timestamp).
v2 reflects the comments that Jeroen made in
https://lkml.org/lkml/2021/1/10/54
Vincent Mailhol (1):
can: dev: add software tx timestamps
drivers/net/can/dev.c | 2 ++
1 file changed, 2 insertions(+)
--
2.26.2
Hello Jeroen,
On Sun, 10 Jan 2021 at 20:29, Jeroen Hofstee wrote:
>
> Hello Vincent,
>
> On 1/10/21 11:35 AM, Vincent Mailhol wrote:
> > Call skb_tx_timestamp() within can_put_echo_skb() so that a software
> > tx timestamp gets attached on the skb.
> >
> [..]
>
between the kernel tx
software timestamp and the userland tx software timestamp).
Vincent Mailhol (1):
can: dev: add software tx timestamps
drivers/net/can/dev.c | 2 ++
1 file changed, 2 insertions(+)
--
2.26.2
for the error queue in CAN RAW sockets (which is needed for tx
timestamps) was introduced in:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=eb88531bdbfaafb827192d1fc6c5a3fcc4fadd96
Signed-off-by: Vincent Mailhol
---
drivers/net/can/dev.c | 2 ++
1 file changed, 2
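The change described above is a single call added to can_put_echo_skb(). A toy user-space model of the resulting behavior (struct and helpers here are stand-ins; the real code lives in drivers/net/can/dev.c and uses the kernel's skb_tx_timestamp()):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the kernel's sk_buff: just enough state to show that
 * the echo skb now carries a software tx timestamp. */
struct sk_buff { bool tx_timestamped; bool echoed; };

static void skb_tx_timestamp(struct sk_buff *skb)
{
    skb->tx_timestamped = true;  /* kernel: raise the SW tx timestamp */
}

static void can_put_echo_skb(struct sk_buff *skb)
{
    skb_tx_timestamp(skb);       /* the added call */
    skb->echoed = true;          /* existing echo bookkeeping */
}
```

Placing the call in can_put_echo_skb() covers every CAN driver that uses the echo mechanism, rather than patching each driver's xmit path.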
On Fri, 8 Jan 2021 at 15:41, Mel Gorman wrote:
>
> On Fri, Jan 08, 2021 at 02:41:19PM +0100, Vincent Guittot wrote:
> > > 1. avg_scan_cost is now based on the average scan cost of a rq but
> > >avg_idle is still scaled to the domain size. This is a bit problemat
On Fri, 8 Jan 2021 at 11:27, Mel Gorman wrote:
>
> On Tue, Dec 15, 2020 at 08:59:11AM +0100, Peter Zijlstra wrote:
> > On Tue, Dec 15, 2020 at 11:36:35AM +0800, Li, Aubrey wrote:
> > > On 2020/12/15 0:48, Peter Zijlstra wrote:
> > > > We compute the average cost of the total scan, but then use it
On Thu, 7 Jan 2021 at 18:40, Valentin Schneider
wrote:
>
> On 07/01/21 13:20, Vincent Guittot wrote:
> > On Thu, 7 Jan 2021 at 12:26, Valentin Schneider
> > wrote:
> >> > @@ -9499,13 +9499,32 @@ asym_active_balance(struct lb_env *env)
> >>
On Thu, 7 Jan 2021 at 16:08, Tao Zhou wrote:
>
> Hi Vincent,
>
> On Thu, Jan 07, 2021 at 11:33:24AM +0100, Vincent Guittot wrote:
> > Setting LBF_ALL_PINNED during active load balance is only valid when there
> > is only 1 running task on the rq otherwise this ends up in
On Thu, 7 Jan 2021 at 12:26, Valentin Schneider
wrote:
>
> On 07/01/21 11:33, Vincent Guittot wrote:
> > Active balance is triggered for a number of voluntary cases like misfit
> > or pinned tasks cases but also after that a number of load balance
> > attempts
)) and the waiting task will end up to be selected after a
number of attempts.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 45 +++--
1 file changed, 23 insertions(+), 22 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
Don't waste time checking whether an idle cfs_rq could be the busiest
queue. Furthermore, this can end up selecting a cfs_rq with a high load
but being idle in case of migrate_load.
Signed-off-by: Vincent Guittot
Reviewed-by: Valentin Schneider
---
kernel/sched/fair.c | 5 -
1 file changed
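The idea can be sketched as an early filter in the busiest-queue loop (reduced, hypothetical types; the real code operates on struct rq inside find_busiest_queue()):

```c
#include <assert.h>

struct rq { int nr_running; long load; };

/* Pick the highest-loaded queue while skipping idle ones entirely:
 * an idle cfs_rq can never be the busiest, and under migrate_load it
 * could otherwise win on a high-but-stale load value alone. */
static int find_busiest(const struct rq *rqs, int n)
{
    int busiest = -1;
    long max_load = 0;

    for (int i = 0; i < n; i++) {
        if (rqs[i].nr_running == 0)  /* the added early skip */
            continue;
        if (rqs[i].load > max_load) {
            max_load = rqs[i].load;
            busiest = i;
        }
    }
    return busiest;
}
```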
: change how LBF_ALL_PINNED is managed as proposed by Valentin
- patch 3: updated comment and fix typos
Vincent Guittot (3):
sched/fair: skip idle cfs_rq
sched/fair: don't set LBF_ALL_PINNED unnecessarily
sched/fair: reduce cases for active balance
kernel/sched/fair.c | 57
set it by default. It is then cleared
when we find one task that can be pulled when calling detach_tasks() or
during active migration.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
On Thu, 7 Jan 2021 at 02:57, wrote:
>
> From: jun qian
>
> Obviously, cfs_rq->on_list is already equal to 1 when cfs_rq->on_list
> is assigned a value of 1, so an else branch is needed to avoid unnecessary
> assignment operations.
>
> Signed-off-by: jun qian
> ---
> kernel/sched/fair.c | 4
utside the cluster:
> target cpu
> 19 -> 17
> 13 -> 15
> 23 -> 20
> 23 -> 20
> 19 -> 17
> 13 -> 15
> 16 -> 17
> 19 -> 17
> 7 -> 5
> 10 -> 11
> 23 -> 20
> *23 -> 4
> ...
>
> Signed-off-by: Barr
On Wed, 6 Jan 2021 at 16:13, Valentin Schneider
wrote:
>
> On 06/01/21 14:34, Vincent Guittot wrote:
> > Setting LBF_ALL_PINNED during active load balance is only valid when there
> > is only 1 running task on the rq otherwise this ends up increasing the
> > balance inte
On Wed, 6 Jan 2021 at 16:32, Peter Zijlstra wrote:
>
> On Wed, Jan 06, 2021 at 04:20:55PM +0100, Vincent Guittot wrote:
>
> > This case here is :
> > we have 2 tasks TA and TB on the rq.
> > The waiting one TB can't migrate for a reason other than the pinned c
On Wed, 6 Jan 2021 at 16:13, Peter Zijlstra wrote:
>
> On Wed, Jan 06, 2021 at 02:34:19PM +0100, Vincent Guittot wrote:
> > Active balance is triggered for a number of voluntary case like misfit or
> cases
> > pinned
On Wed, 6 Jan 2021 at 16:10, Peter Zijlstra wrote:
>
> On Wed, Jan 06, 2021 at 02:34:18PM +0100, Vincent Guittot wrote:
> > Setting LBF_ALL_PINNED during active load balance is only valid when there
> > is only 1 running task on the rq otherwise this ends up increasing the
. The
threshold on the upper limit of the task's load will decrease with the
number of failed LB until the task has migrated.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 43 +--
1 file changed, 21 insertions(+), 22 deletions(-)
diff --git a/kernel
Setting LBF_ALL_PINNED during active load balance is only valid when there
is only 1 running task on the rq otherwise this ends up increasing the
balance interval whereas other tasks could migrate after the next interval
once they become cache-cold as an example.
Signed-off-by: Vincent Guittot
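The rule stated above can be modeled in a few lines (a sketch with stand-in names, not the scheduler's actual flag plumbing): the "all pinned" outcome, which grows the balance interval, is only reported when the pinned task was genuinely the sole candidate.

```c
#include <assert.h>
#include <stdbool.h>

#define LBF_ALL_PINNED 0x01

/* Toy model of the fix: during active balance, only report "all tasks
 * pinned" when the single running task is the only candidate. With
 * more tasks on the rq, others may become migratable (e.g. once
 * cache-cold) before the next interval, so the flag must stay clear. */
static int active_balance_flags(int nr_running, bool task_pinned)
{
    if (task_pinned && nr_running == 1)
        return LBF_ALL_PINNED;
    return 0;
}
```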
Don't waste time checking whether an idle cfs_rq could be the busiest
queue. Furthermore, this can end up selecting a cfs_rq with a high load
but being idle in case of migrate_load.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 5 -
1 file changed, 4 insertions(+), 1 deletion
A few improvements related to active LB and the increase of the LB interval.
I haven't seen any performance impact on various benchmarks except for
stress-ng mmapfork: +4.54% on my octo-core arm64.
But this was somewhat expected as the changes mainly impact corner cases.
Vincent Guittot (3):
sched
Ping ?
On Mon, 21 Dec 2020 14:47:07 +, Vincent Pelletier
wrote:
> Distro: https://raspi.debian.net/ (sid)
> Hardware: Raspberry Pi Zero W
> Kernel version: 5.9.11 (linux-image-5.9.0-4-rpi)
>
> To access a device connected to my pi, I need the spi0 bus, and would
> like to
On Friday, 4 December 2020 at 15:53 +0800, Jun Nie wrote:
> Add driver for the Qualcomm interconnect buses found in MSM8939 based
> platforms. The topology consists of four NoCs that are controlled by
> a remote processor that collects the aggregated bandwidth for each
> master-slave pairs.
>
On Mon, 14 Dec 2020 at 18:07, Peter Zijlstra wrote:
>
> Instead of calculating how many (logical) CPUs to scan, compute how
> many cores to scan.
>
> This changes behaviour for anything !SMT2.
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> kernel/sched/core.c | 19 ++-
>
On Wed, 16 Dec 2020 at 19:07, Vincent Guittot
wrote:
>
> On Wed, 16 Dec 2020 at 14:00, Li, Aubrey wrote:
> >
> > Hi Peter,
> >
> > On 2020/12/15 0:48, Peter Zijlstra wrote:
> > > Hai, here them patches Mel asked for. They've not (yet) been through the
>
On Wed, 23 Dec 2020 at 09:32, wrote:
>
> From: Chen Xiaoguang
>
> Before a CPU switches from running SCHED_NORMAL task to
> SCHED_IDLE task, trying to pull SCHED_NORMAL tasks from other
Could you explain more in detail why you only care about this use case
in particular and not the general
and use spi0 with no further change.
So now I wonder why this option is not enabled while there are these
sections which seem not to be usable without an overlay?
And further, why does it not seem to be possible to enable it with a
kernel config option?
I must be missing something obvious, but I'm still failing to see it.
Regards,
--
Vincent Pelletier
1).
>
maybe add a
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
> Signed-off-by: Xuewen Yan
> Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
> ---
> Changes since v2:
> -modify the comment
> -move util_est_dequeue above within_margin
On Wed, 16 Dec 2020 at 14:00, Li, Aubrey wrote:
>
> Hi Peter,
>
> On 2020/12/15 0:48, Peter Zijlstra wrote:
> > Hai, here them patches Mel asked for. They've not (yet) been through the
> > robots, so there might be some build fail for configs I've not used.
> >
> > Benchmark time :-)
> >
>
> Here
On Mon, 14 Dec 2020 at 19:46, Dietmar Eggemann wrote:
>
> On 11/12/2020 13:03, Ryan Y wrote:
> > Hi Dietmar,
> >
> > Yes! That's exactly what I meant.
> >
> >> The issue is that sugov_update_[shared\|single] -> sugov_get_util() ->
> >> cpu_util_cfs() operates on an old
-by: Jakub Kicinski
Signed-off-by: Vincent Stehlé
Cc: David S. Miller
Cc: Florian Fainelli
---
Changes since v1:
- Keep freeing the packet but return NETDEV_TX_OK, as suggested by Jakub
drivers/net/ethernet/korina.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/d
change the return value to NETDEV_TX_OK instead.
Hi Jakub,
Thanks for the review.
Ok, if this is the preferred fix I will respin the patch this way.
Best regards,
Vincent.
On Mon, Dec 14, 2020 at 11:03:12AM +0100, Julian Wiedmann wrote:
> On 13.12.20 18:20, Vincent Stehlé wrote:
...
> > @@ -216,7 +216,6 @@ static int korina_send_packet(struct sk_buff *skb,
> > struct net_device *dev)
> > netif_stop_queue(dev);
On Fri, 11 Dec 2020 at 18:45, Peter Zijlstra wrote:
>
> On Thu, Dec 10, 2020 at 12:58:33PM +, Mel Gorman wrote:
> > The prequisite patch to make that approach work was rejected though
> > as on its own, it's not very helpful and Vincent didn't like that the
> > load
ance of still being
> > idle vs one we checked earlier/longer-ago.
> >
> > I suppose we benchmark both and see which is liked best.
> >
>
> I originally did something like that on purpose too but Vincent called
> it out so it is worth mentioning now to avoid surprise
The DMA address returned by dma_map_single() should be checked with
dma_mapping_error(). Fix the ps3stor_setup() function accordingly.
Fixes: 80071802cb9c ("[POWERPC] PS3: Storage Driver Core")
Signed-off-by: Vincent Stehlé
Cc: Geoff Levand
Cc: Geert Uytterhoeven
---
drivers/ps3/ps3
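The pattern the patch adds is the standard DMA API idiom: the handle returned by dma_map_single() can be invalid, and dma_mapping_error() is the only portable way to detect that. A self-contained user-space model (the two DMA helpers are stubs standing in for the kernel API):

```c
#include <assert.h>
#include <stdint.h>

#define DMA_MAPPING_ERROR ((uint64_t)-1)

/* Stub: the real dma_map_single() can fail; callers must never assume
 * the returned bus address is valid without checking it. */
static uint64_t dma_map_single(void *buf, int fail)
{
    return fail ? DMA_MAPPING_ERROR : (uint64_t)(uintptr_t)buf;
}

static int dma_mapping_error(uint64_t addr)
{
    return addr == DMA_MAPPING_ERROR;
}

/* Models the check added to ps3stor_setup(): bail out instead of
 * handing an invalid bus address to the hardware. */
static int setup(void *buf, int fail)
{
    uint64_t bus = dma_map_single(buf, fail);
    if (dma_mapping_error(bus))
        return -1;               /* -ENOMEM in the kernel */
    return 0;
}
```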
The ndo_start_xmit() method must not attempt to free the skb to transmit
when returning NETDEV_TX_BUSY. Fix the korina_send_packet() function
accordingly.
Fixes: ef11291bcd5f ("Add support the Korina (IDT RC32434) Ethernet MAC")
Signed-off-by: Vincent Stehlé
Cc: David S. Miller
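The contract being fixed can be expressed as a small validity check (a sketch, not driver code): NETDEV_TX_BUSY means "try again later, skb untouched", so freeing the skb on that path, as korina did, hands a dangling buffer back to the stack. Note that the v2 of this patch, per Jakub's suggestion, took the other legal route: keep the free but report NETDEV_TX_OK, i.e. a drop.

```c
#include <assert.h>
#include <stdbool.h>

#define NETDEV_TX_OK   0
#define NETDEV_TX_BUSY 1

/* Check a driver's (return code, freed skb?) pair against the
 * ndo_start_xmit() contract. The one invalid combination is the
 * original korina bug: freeing the skb while returning TX_BUSY. */
static bool xmit_result_valid(int ret, bool skb_freed)
{
    if (ret == NETDEV_TX_BUSY)
        return !skb_freed;       /* core will requeue this skb */
    return ret == NETDEV_TX_OK;  /* driver took ownership */
}
```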
On Fri, 11 Dec 2020 at 11:23, Mel Gorman wrote:
>
> On Fri, Dec 11, 2020 at 10:51:17AM +0100, Vincent Guittot wrote:
> > On Thu, 10 Dec 2020 at 12:04, Mel Gorman
> > wrote:
> > >
> > > On Thu, Dec 10, 2020 at 10:38:37AM +0100, Vincent Guittot wrote:
On Fri, 11 Dec 2020 at 16:19, Li, Aubrey wrote:
>
> On 2020/12/11 23:07, Vincent Guittot wrote:
> > On Thu, 10 Dec 2020 at 02:44, Aubrey Li wrote:
> >>
> >> Add idle cpumask to track idle cpus in sched domain. Every time
> >> a CPU enters idle, the CP
iting path
> - set SCHED_IDLE cpu in idle cpumask to allow it as a wakeup target
>
> v1->v2:
> - idle cpumask is updated in the nohz routines, by initializing idle
> cpumask with sched_domain_span(sd), nohz=off case remains the original
> behavior
>
> Cc: Peter Zijlstra
>
On Fri, Dec 11, 2020 at 01:13:35PM +, Valentin Schneider wrote:
> On 11/12/20 12:51, Valentin Schneider wrote:
> >> In that case maybe we should check for the cpu_active_mask here too?
> >
> > Looking at it again, I think we might need to.
> >
> > IIUC you can end up with pools bound to a
come
> up.
> + */
Does this comment still stand ? IIUC, we should always be in the
POOL_DISASSOCIATED case if the CPU from cpumask is offline. Unless a
pool->attrs->cpumask can have several CPUs. In that case maybe we should check
for the cpu_active_mask here too?
--
Vincent
> + set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> + }
>
> list_add_tail(&worker->node, &pool->workers);
> worker->pool = pool;
> --
> 2.27.0
>
On Thu, 10 Dec 2020 at 12:04, Mel Gorman wrote:
>
> On Thu, Dec 10, 2020 at 10:38:37AM +0100, Vincent Guittot wrote:
> > > while testing your patchset and Aubrey one on top of tip, I'm facing
> > > some perf regression on my arm64 numa system on hackbench and reaim.
On Thu, 10 Dec 2020 at 09:00, Vincent Guittot
wrote:
>
> On Wed, 9 Dec 2020 at 15:37, Mel Gorman wrote:
> >
> > On Tue, Dec 08, 2020 at 03:34:57PM +, Mel Gorman wrote:
> > > Changelog since v1
> > > o Drop single-pass patch
On Tue, 8 Dec 2020 at 17:14, Vincent Guittot wrote:
>
> On Tue, 8 Dec 2020 at 16:35, Mel Gorman wrote:
> >
> > After select_idle_sibling, p->recent_used_cpu is set to the
> > new target. However on the next wakeup, prev will be the same as
> > recent_used_cpu u
On Wed, 9 Dec 2020 at 15:37, Mel Gorman wrote:
>
> On Tue, Dec 08, 2020 at 03:34:57PM +, Mel Gorman wrote:
> > Changelog since v1
> > o Drop single-pass patch
> > (vincent)
> > o Scope
On Wed, 9 Dec 2020 at 11:58, Li, Aubrey wrote:
>
> On 2020/12/9 16:15, Vincent Guittot wrote:
> > On Wednesday, 9 December 2020 at 14:24:04 (+0800), Aubrey Li wrote:
> >> Add idle cpumask to track idle cpus in sched domain. Every time
> >> a CPU enters idle,
updated in the nohz routines, by initializing idle
> cpumask with sched_domain_span(sd), nohz=off case remains the original
> behavior.
>
> Cc: Peter Zijlstra
> Cc: Mel Gorman
> Cc: Vincent Guittot
> Cc: Qais Yousef
> Cc: Valentin Schneider
> Cc: Jiang Biao
.92%*
>
> Note that there is a significant corner case. As the SMT scan may be
> terminated early, not all CPUs have been visited and select_idle_cpu()
> is still called for a full scan. This case is handled in the next
> patch.
>
> Signed-off-by: Mel Gorman
Reviewed-by: V
ts cover low utilisation to over saturation.
>
> If graphed over time, the graphs show that the sched domain is only
> scanned at negligible rates until the machine is fully busy. With
> low utilisation, the "Fast Success Rate" is almost 100% until the
> machine is fully busy
hree years. As the intent of SIS_PROP is to reduce
> the time complexity of select_idle_cpu(), lets drop SIS_AVG_CPU and focus
> on SIS_PROP as a throttling mechanism.
>
> Signed-off-by: Mel Gorman
Reviewed-by: Vincent Guittot
> ---
> kernel/sched/fair.c | 20 +--
On Tue, 8 Dec 2020 at 16:35, Mel Gorman wrote:
>
> As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
> even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
> check and while we are at it, exclude the cost of initialising the CPU
> mask f
On Tue, 8 Dec 2020 at 16:12, Mel Gorman wrote:
>
> On Tue, Dec 08, 2020 at 03:47:40PM +0100, Vincent Guittot wrote:
> > > I considered it but made the choice to exclude the cost of cpumask_and()
> > > from the avg_scan_cost instead. It's minor but when doing the orig
On Tue, 8 Dec 2020 at 14:54, Mel Gorman wrote:
>
> On Tue, Dec 08, 2020 at 02:43:10PM +0100, Vincent Guittot wrote:
> > On Tue, 8 Dec 2020 at 14:36, Mel Gorman wrote:
> > >
> > > On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > >
On Tue, 8 Dec 2020 at 14:36, Mel Gorman wrote:
>
> On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > > Nitpick:
> > > >
> > > > Since now avg_cost and avg_idle are only used w/ SIS_PROP, they could go
> > > > completely in
On Tue, 8 Dec 2020 at 11:59, Mel Gorman wrote:
>
> On Tue, Dec 08, 2020 at 11:07:19AM +0100, Dietmar Eggemann wrote:
> > On 07/12/2020 10:15, Mel Gorman wrote:
> > > SIS_AVG_CPU was introduced as a means of avoiding a search when the
> > > average search cost indicated that the search would
On Mon, 7 Dec 2020 at 10:59, Song Bao Hua (Barry Song)
wrote:
>
>
>
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guit...@linaro.org]
> > Sent: Thursday, December 3, 2020 10:39 PM
> > To: Song Bao Hua (Barry Song)
> > C
.92%*
>
> Note that there is a significant corner case. As the SMT scan may be
> terminated early, not all CPUs have been visited and select_idle_cpu()
> is still called for a full scan. This case is handled in the next
> patch.
>
> Signed-off-by: Mel Gorman
Reviewed-by: V
On Mon, 7 Dec 2020 at 10:15, Mel Gorman wrote:
>
> SIS_AVG_CPU was introduced as a means of avoiding a search when the
> average search cost indicated that the search would likely fail. It
> was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> select_idle_cpu() more
On Mon, 7 Dec 2020 at 10:15, Mel Gorman wrote:
>
> This is a minimal series to reduce the amount of runqueue scanning in
> select_idle_sibling in the worst case.
>
> Patch 1 removes SIS_AVG_CPU because it's unused.
>
> Patch 2 improves the hit rate of p->recent_used_cpu to reduce the amount
>
On Fri, 4 Dec 2020 at 16:40, Mel Gorman wrote:
>
> On Fri, Dec 04, 2020 at 04:23:48PM +0100, Vincent Guittot wrote:
> > On Fri, 4 Dec 2020 at 15:31, Mel Gorman wrote:
> > >
> > > On Fri, Dec 04, 2020 at 02:47:48PM +0100, Vincent Guittot wrote:
> > > > &g
On Fri, 4 Dec 2020 at 15:31, Mel Gorman wrote:
>
> On Fri, Dec 04, 2020 at 02:47:48PM +0100, Vincent Guittot wrote:
> > > IIUC, select_idle_core and select_idle_cpu share the same
> > > cpumask(select_idle_mask)?
> > > If the target's sibling is r
On Fri, 4 Dec 2020 at 14:40, Li, Aubrey wrote:
>
> On 2020/12/4 21:17, Vincent Guittot wrote:
> > On Fri, 4 Dec 2020 at 14:13, Vincent Guittot
> > wrote:
> >>
> >> On Fri, 4 Dec 2020 at 12:30, Mel Gorman
> >> wrote:
> >>>
> >>
On Fri, 4 Dec 2020 at 14:13, Vincent Guittot wrote:
>
> On Fri, 4 Dec 2020 at 12:30, Mel Gorman wrote:
> >
> > On Fri, Dec 04, 2020 at 11:56:36AM +0100, Vincent Guittot wrote:
> > > > The intent was that the sibling might still be an idle candidate. In
> > &g
On Fri, 4 Dec 2020 at 12:30, Mel Gorman wrote:
>
> On Fri, Dec 04, 2020 at 11:56:36AM +0100, Vincent Guittot wrote:
> > > The intent was that the sibling might still be an idle candidate. In
> > > the current draft of the series, I do not even clear this so th
On Thu, 3 Dec 2020 at 18:52, Mel Gorman wrote:
>
> On Thu, Dec 03, 2020 at 05:38:03PM +0100, Vincent Guittot wrote:
> > On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote:
> > >
> > > The target CPU is definitely not idle in both select_idle_core and
> > >
On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote:
>
> The target CPU is definitely not idle in both select_idle_core and
> select_idle_cpu. For select_idle_core(), the SMT is potentially
> checked unnecessarily as the core is definitely not idle if the
> target is busy. For select_idle_cpu(), the
On Thu, 3 Dec 2020 at 15:11, Mel Gorman wrote:
>
> select_idle_core is called when SMT is active and there is likely a free
> core available. It may find idle CPUs but this information is simply
> discarded and the scan starts over again with select_idle_cpu.
>
> This patch caches information on
On Thu, 3 Dec 2020 at 10:39, Vincent Guittot wrote:
>
> On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guit...@linaro.org]
> > > Sent:
On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
wrote:
>
>
>
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guit...@linaro.org]
> > Sent: Thursday, December 3, 2020 10:04 PM
> > To: Song Bao Hua (Barry Song)
> > C
On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
wrote:
>
> >
> > Sorry. Please ignore this. I added some printk here while testing
> > one numa. Will update you the data in another email.
>
> Re-tested in one NUMA node(cpu0-cpu23):
>
> g=1
> Running in threaded mode with 1 groups using 40