The following commit has been merged into the sched/core branch of tip:
Commit-ID: b0fb1eb4f04ae4768231b9731efb1134e22053a4
Gitweb: https://git.kernel.org/tip/b0fb1eb4f04ae4768231b9731efb1134e22053a4
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:33 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 5e23e474431529b7d1480f649ce33d0e9c1b2e48
Gitweb: https://git.kernel.org/tip/5e23e474431529b7d1480f649ce33d0e9c1b2e48
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:32 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 11f10e5420f6cecac7d4823638bff040c257aba9
Gitweb: https://git.kernel.org/tip/11f10e5420f6cecac7d4823638bff040c257aba9
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:36 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912
Gitweb: https://git.kernel.org/tip/0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:31 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 57abff067a084889b6e06137e61a3dc3458acd56
Gitweb: https://git.kernel.org/tip/57abff067a084889b6e06137e61a3dc3458acd56
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:38 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 490ba971d8b498ba3a47999ab94c6a0d1830ad41
Gitweb: https://git.kernel.org/tip/490ba971d8b498ba3a47999ab94c6a0d1830ad41
Author: Vincent Guittot
AuthorDate: Fri, 18 Oct 2019 15:26:28 +02:00
On Mon, 21 Oct 2019 at 09:50, Ingo Molnar wrote:
>
>
> * Vincent Guittot wrote:
>
> > Several wrong task placements have been raised with the current load
> > balance algorithm, but their fixes are not always straightforward and
> > end up using biased values
On Fri, 18 Oct 2019 at 16:44, Douglas Raillard wrote:
>
>
>
> On 10/18/19 1:07 PM, Peter Zijlstra wrote:
> > On Fri, Oct 18, 2019 at 12:46:25PM +0100, Douglas Raillard wrote:
> >
> >>> What I don't see is how that difference makes sense as input to:
> >>>
> >>> cost(x) : (1 + x) * cost_j
of using the load.
Signed-off-by: Vincent Guittot
Acked-by: Valentin Schneider
---
kernel/sched/fair.c | 11 +++
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9b8e20d..670856d 100644
--- a/kernel/sched/fair.c
+++ b/kernel
.
- find_busiest_group() checks if there is an imbalance between local and
busiest group.
- calculate_imbalance() decides what has to be moved.
Finally, the now unused field total_running of struct sd_lb_stats has been
removed.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 611
being conservative and taking into account the sleeping
tasks that might wakeup on the cpu.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 24 ++--
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e09fe12b
().
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 384 ++--
1 file changed, 256 insertions(+), 128 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ed1800d..fbaafae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched
Clean up load_balance() and remove meaningless calculations and fields before
adding the new algorithm.
Signed-off-by: Vincent Guittot
Acked-by: Rik van Riel
---
kernel/sched/fair.c | 105 +---
1 file changed, 1 insertion(+), 104 deletions(-)
diff
of nr_running in the statistics and use it to detect such
situation.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5ae5281..e09fe12b 100644
--- a/kernel/sched/fair.c
before comparing
runnable load, and it's worth aligning the wake-up path with
load_balance().
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 670856d
find_idlest_group() now reads CPU's load_avg in 2 different ways.
Consolidate the function to read and use load_avg only once and simplify
the algorithm to only look for the group with lowest load_avg.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 50
When there is only 1 cpu per group, using the idle cpus to evenly spread
tasks doesn't make sense and nr_running is a better metric.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 40
1 file changed, 28 insertions(+), 12 deletions(-)
diff
Rename sum_nr_running to sum_h_nr_running because it effectively tracks
cfs->h_nr_running so we can use sum_nr_running to track rq->nr_running
when needed.
There are no functional changes.
Signed-off-by: Vincent Guittot
Acked-by: Rik van Riel
Reviewed-by: Valentin Schneider
---
kernel
order code
- some minor code fixes
- optimize find_idlest_group()
Not covered in this patchset:
- Better detection of overloaded and fully busy state, especially for cases
when nr_running > nr CPUs.
Vincent Guittot (11):
sched/fair: clean up asym packing
sched/fair: rename s
the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
Acked-by: Rik van Riel
---
kernel/sched/fair.c | 63 ++---
1 file changed, 16 insertions(+), 47 deletions(-)
diff --git
Hi Thara,
On Thu, 17 Oct 2019 at 18:40, Thara Gopinath wrote:
>
> On 10/17/2019 04:44 AM, Vincent Guittot wrote:
> > Hi Thara,
> >
> > On Wed, 16 Oct 2019 at 23:22, Thara Gopinath
> > wrote:
> >>
> >> Hi Vincent,
> >>
> >>
Hi Thara,
On Wed, 16 Oct 2019 at 23:22, Thara Gopinath wrote:
>
> Hi Vincent,
>
> Thanks for the review
> On 10/14/2019 11:50 AM, Vincent Guittot wrote:
> > Hi Thara,
> >
> > On Mon, 14 Oct 2019 at 02:58, Thara Gopinath
> > wrote:
> >>
>
On Wed, 16 Oct 2019 at 09:21, Parth Shah wrote:
>
>
>
> On 9/19/19 1:03 PM, Vincent Guittot wrote:
>
> [...]
>
> > Signed-off-by: Vincent Guittot
> > ---
> > kernel/sched/fair.c | 585
> > ++--
> >
Hi Parth,
On Wed, 16 Oct 2019 at 09:21, Parth Shah wrote:
>
>
>
> On 9/19/19 1:03 PM, Vincent Guittot wrote:
> > Several wrong task placements have been raised with the current load
> > balance algorithm, but their fixes are not always straightforward and
> > e
Hi Thara,
On Mon, 14 Oct 2019 at 02:58, Thara Gopinath wrote:
>
> Add thermal.c and thermal.h files that provides interface
> APIs to initialize, update/average, track, accumulate and decay
> thermal pressure per cpu basis. A per cpu structure max_capacity_info is
> introduced to keep track of
On Mon, 14 Oct 2019 at 16:52, Peter Zijlstra wrote:
>
>
> The energy aware schedutil patches reminded me this was still pending.
>
> On Fri, Aug 02, 2019 at 10:47:25AM +0100, Patrick Bellasi wrote:
> > Hi Peter, Vincent,
> > is there anything different I can do on thi
Hi Thara,
On Mon, 14 Oct 2019 at 02:58, Thara Gopinath wrote:
>
> Extrapolating on the exisitng framework to track rt/dl utilization using
s/exisitng/existing/
> pelt signals, add a similar mechanism to track thermal pressue. The
s/pessure/pressure/
> difference here from rt/dl utilization
On Mon, 14 Oct 2019 at 14:16, Quentin Perret wrote:
>
> Hi Valentin,
>
> On Monday 14 Oct 2019 at 12:47:10 (+0100), Valentin Schneider wrote:
> > While the static key is correctly initialized as being disabled, it will
> > remain forever enabled once turned on. This means that if we start with an
een hotplugged out.
>
> Disable the static key when destroying domains, and let
> build_sched_domains() (re) enable it as needed.
>
> Cc:
> Fixes: df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU
> capacity optimizations")
> Signed-off-by: Valentin
On Wed, 9 Oct 2019 at 21:33, Phil Auld wrote:
>
> On Tue, Oct 08, 2019 at 05:53:11PM +0200 Vincent Guittot wrote:
> > Hi Phil,
> >
>
> ...
>
> > While preparing v4, I have noticed that I have probably oversimplified
> > the end of find_idlest
On Wed, 9 Oct 2019 at 11:23, Parth Shah wrote:
>
>
>
> On 10/8/19 6:58 PM, Hillf Danton wrote:
> >
> > On Mon, 7 Oct 2019 14:00:49 +0530 Parth Shah wrote:
> >> +/*
> >> + * Try to find a non idle core in the system based on few heuristics:
> >> + * - Keep track of overutilized (>80% util) and
On Tue, 8 Oct 2019 at 19:55, Peter Zijlstra wrote:
>
> On Thu, Sep 19, 2019 at 09:33:35AM +0200, Vincent Guittot wrote:
> > + if (busiest->group_type == group_asym_packing) {
> > + /*
> > + * In case of asym capacity, we w
On Tue, 8 Oct 2019 at 19:39, Peter Zijlstra wrote:
>
> On Tue, Oct 08, 2019 at 05:30:02PM +0200, Vincent Guittot wrote:
>
> > This is how I plan to get rid of the problem:
> > + if (busiest->group_weight == 1 || sds->prefer_sibling) {
> > +
On Mon, 7 Oct 2019 at 18:54, Parth Shah wrote:
>
>
>
> On 10/7/19 5:49 PM, Vincent Guittot wrote:
> > On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
> >>
> >> The algorithm finds the first non idle core in the system and tries to
> >> place a task in
Hi Phil,
On Tue, 8 Oct 2019 at 16:33, Phil Auld wrote:
>
> Hi Vincent,
>
> On Thu, Sep 19, 2019 at 09:33:31AM +0200 Vincent Guittot wrote:
> > Several wrong task placements have been raised with the current load
> > balance algorithm but their fixes are not always straigh
On Tuesday 08 Oct 2019 at 15:34:04 (+0100), Valentin Schneider wrote:
> On 08/10/2019 15:16, Peter Zijlstra wrote:
> > On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
> >
> >> Yeah, right shifts on signed negative values are implementation-defined.
> >
> > Seriously? Even
Sorry, I missed the comment. Christoph's suggestion is also good to me.
I will modify it as you suggested.
Thanks
On Tue, Oct 8, 2019 at 12:31 AM Paul Walmsley wrote:
>
> On Mon, 7 Oct 2019, Christoph Hellwig wrote:
>
> > On Mon, Oct 07, 2019 at 09:08:23AM -0700, Paul Walmsley wrote:
> > >
On Mon, 7 Oct 2019 at 17:14, Rik van Riel wrote:
>
> On Thu, 2019-09-19 at 09:33 +0200, Vincent Guittot wrote:
> > runnable load has been introduced to take into account the case where
> > blocked load biases the wake up path which may end to select an
> > overloaded
>
On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
>
> The algorithm finds the first non idle core in the system and tries to
> place a task in the idle CPU in the chosen core. To maintain
> cache hotness, work of finding non idle core starts from the prev_cpu,
> which also reduces task ping-pong
On Fri, 4 Oct 2019 at 10:24, Giovanni Gherdovich wrote:
>
> On Thu, 2019-10-03 at 20:31 -0700, Srinivas Pandruvada wrote:
> > On Thu, 2019-10-03 at 20:05 +0200, Rafael J. Wysocki wrote:
> > > On Wednesday, October 2, 2019 2:29:26 PM CEST Giovanni Gherdovich
> > > wrote:
> > > > From: Srinivas
On Tue, Oct 01, 2019 at 06:09:06PM EDT, Rob Herring wrote:
>On Wed, Sep 18, 2019 at 04:06:37PM -0400, vincent.cheng...@renesas.com wrote:
>> From: Vincent Cheng
Hi Rob,
Welcome back. Thank-you for providing feedback.
>>
>> Add device tree binding doc for the IDT ClockMa
On Tue, 1 Oct 2019 at 19:47, Valentin Schneider
wrote:
>
> On 19/09/2019 08:33, Vincent Guittot wrote:
>
> [...]
>
> > @@ -8283,69 +8363,133 @@ static inline void update_sd_lb_stats(struct
> > lb_env *env, struct sd_lb_stats *sd
> > */
> > static inline
On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann wrote:
>
> On 01/10/2019 10:14, Vincent Guittot wrote:
> > On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> > wrote:
> >>
> >> Hi Vincent,
> >>
> >> On 19/09/2019 09:33, Vincent Guittot wrote:
&g
On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann wrote:
>
> On 01/10/2019 10:14, Vincent Guittot wrote:
> > On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> > wrote:
> >>
> >> Hi Vincent,
> >>
> >> On 19/09/2019 09:33, Vincent Guittot wro
On Tue, 1 Oct 2019 at 19:12, Valentin Schneider
wrote:
>
> On 19/09/2019 08:33, Vincent Guittot wrote:
> > clean up load_balance and remove meaningless calculation and fields before
> > adding new algorithm.
> >
> > Signed-off-by: Vincent Guittot
>
> We'll
group_asym_packing
On Tue, 1 Oct 2019 at 10:15, Dietmar Eggemann wrote:
>
> On 19/09/2019 09:33, Vincent Guittot wrote:
>
>
> [...]
>
> > @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env
> > *env,
> > }
> >
On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann wrote:
>
> Hi Vincent,
>
> On 19/09/2019 09:33, Vincent Guittot wrote:
>
> these are just some comments & questions based on a code study. Haven't
> run any tests with it yet.
>
> [...]
>
> > The type of
On Mon, 30 Sep 2019 at 03:13, Rik van Riel wrote:
>
> On Thu, 2019-09-19 at 09:33 +0200, Vincent Guittot wrote:
> >
> > Also the load balance decisions have been consolidated in the 3
> > functions
> > below after removing the few bypasses
On Sat, Sep 28, 2019 at 6:56 AM Christoph Hellwig wrote:
>
> Oh and s/rsicv/riscv/ in the subject, please.
Oh! Thank you for finding this typo.
I will correct it.
returned %d\n", cnt);
>> +return cnt;
>> +} else if (cnt != 2) {
>> +dev_err(>dev,
>> +"i2c_transfer sent only %d of %d messages\n", cnt, 2);
>> +return -EIO;
>> +}
>> +
>>
From: Vincent Cheng
The IDT ClockMatrix (TM) family includes integrated devices that provide
eight PLL channels. Each PLL channel can be independently configured as a
frequency synthesizer, jitter attenuator, digitally controlled
oscillator (DCO), or a digital phase lock loop (DPLL). Typically
From: Vincent Cheng
Add device tree binding doc for the IDT ClockMatrix PTP clock driver.
Co-developed-by: Richard Cochran
Signed-off-by: Richard Cochran
Signed-off-by: Vincent Cheng
---
Changes since v1:
- No changes
---
Documentation/devicetree/bindings/ptp/ptp-idtcm.txt | 15
() to deal with the break exception as the type of break is
BUG_TRAP_TYPE_BUG.
Signed-off-by: Vincent Chen
---
arch/riscv/kernel/traps.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 424eb72d56b1..055a937aca70 100644
ebreak, it may cause the kernel thread to be stuck in the ebreak
instruction.
This patch set will solve the above problems by adjusting the
implementations of the do_trap_break().
Vincent Chen (4):
riscv: avoid kernel hangs when trapped in BUG()
rsicv: avoid sending a SIGTRAP to a user
On RISC-V, when the kernel runs code on behalf of a user thread, and the
kernel executes a WARN() or WARN_ON(), the user thread will be sent
a bogus SIGTRAP. Fix the RISC-V kernel code to not send a SIGTRAP when
a WARN()/WARN_ON() is executed.
Signed-off-by: Vincent Chen
---
arch/riscv/kernel
To make the code more straightforward, replace the switch statement
with an if statement.
Suggested-by: Paul Walmsley
Signed-off-by: Vincent Chen
---
arch/riscv/kernel/traps.c | 23 ---
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/arch/riscv/kernel/traps.c
to the trapped process only when the ebreak is
in userspace.
Signed-off-by: Vincent Chen
---
arch/riscv/kernel/traps.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 82f42a55451e..dd13bc90aeb6 100644
--- a/arch
s32 err;
>> +s32 len;
>> +u8 val;
>> +u8 loaddr;
>> +
>> +pr_info("requesting firmware '%s'\n", FW_FILENAME);
>
>dev_debug()
Thanks, will make the change.
>> +
>> +err = request_firmware(, FW_FILENAME, dev);
>> +
>> +if (err)
>> +return err;
>> +
>> +pr_info("firmware size %zu bytes\n", fw->size);
>
>dev_debug()
>
>Maybe look through all your pr_info and downgrade most of them to
>dev_debug()
Yes, will go through and downgrade to dev_debug() accordingly.
Thanks,
Vincent
On Thu, 19 Sep 2019 at 16:32, Vincent Guittot
wrote:
>
> On Thu, 19 Sep 2019 at 16:23, Qais Yousef wrote:
> >
> > On 09/19/19 14:27, Vincent Guittot wrote:
> > > > > > But for requirement of performance, I think it is better to
> > > > > &g
On Thu, 19 Sep 2019 at 16:23, Qais Yousef wrote:
>
> On 09/19/19 14:27, Vincent Guittot wrote:
> > > > > But for requirement of performance, I think it is better to
> > > > > differentiate between idle CPU and CPU has CFS task.
> > > > >
> &
On Thu, 19 Sep 2019 at 13:22, Jing-Ting Wu wrote:
>
> On Thu, 2019-09-05 at 16:01 +0200, Vincent Guittot wrote:
> > Hi Jing-Ting,
> >
> > On Thu, 5 Sep 2019 at 15:26, Jing-Ting Wu wrote:
> > >
> > > On Fri, 2019-08-30 at 15:55 +0100, Qais Yousef
On Thu, 19 Sep 2019 at 09:20, YT Chang wrote:
>
> When the system is overutilization, the load-balance crossing
s/overutilization/overutilized/
> clusters will be triggered and scheduler will not use energy
> aware scheduling to choose CPUs.
>
> The overutilization means the loading of ANY
Clean up load_balance() and remove meaningless calculations and fields before
adding the new algorithm.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 105 +---
1 file changed, 1 insertion(+), 104 deletions(-)
diff --git a/kernel/sched/fair.c
.
- find_busiest_group() checks if there is an imbalance between local and
busiest group.
- calculate_imbalance() decides what has to be moved.
Finally, the now unused field total_running of struct sd_lb_stats has been
removed.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 585
being conservative and taking into account the sleeping
tasks that might wakeup on the cpu.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7e74836..15ec38c
the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 63 ++---
1 file changed, 16 insertions(+), 47 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
Rename sum_nr_running to sum_h_nr_running because it effectively tracks
cfs->h_nr_running so we can use sum_nr_running to track rq->nr_running
when needed.
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 32
of nr_running in the statistics and use it to detect such
situation.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d33379c..7e74836 100644
--- a/kernel/sched/fair.c
before comparing
runnable load, and it's worth aligning the wake-up path with
load_balance().
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index acca869..39a37ae 100644
When there is only 1 cpu per group, using the idle cpus to evenly spread
tasks doesn't make sense and nr_running is a better metric.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 40
1 file changed, 28 insertions(+), 12 deletions(-)
diff
to delay this version because of this update which is not ready yet
- Better detection of overloaded and fully busy state, especially for cases
when nr_running > nr CPUs.
Vincent Guittot (8):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running to sum_h_nr_running
sched/f
utilization is used to detect a misfit task but the load is then used to
select the task on the CPU which can lead to select a small task with
high weight instead of the task that triggered the misfit migration.
Signed-off-by: Vincent Guittot
Acked-by: Valentin Schneider
---
kernel/sched
find_idlest_group() now reads CPU's load_avg in 2 different ways.
Consolidate the function to read and use load_avg only once and simplify
the algorithm to only look for the group with lowest load_avg.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 52
From: Vincent Cheng
The IDT ClockMatrix (TM) family includes integrated devices that provide
eight PLL channels. Each PLL channel can be independently configured as a
frequency synthesizer, jitter attenuator, digitally controlled
oscillator (DCO), or a digital phase lock loop (DPLL). Typically
From: Vincent Cheng
Add device tree binding doc for the IDT ClockMatrix PTP clock driver.
Signed-off-by: Vincent Cheng
---
Documentation/devicetree/bindings/ptp/ptp-idtcm.txt | 15 +++
1 file changed, 15 insertions(+)
create mode 100644 Documentation/devicetree/bindings/ptp/ptp
On Wed, 18 Sep 2019 at 17:46, Patrick Bellasi wrote:
>
>
> On Wed, Sep 18, 2019 at 16:22:32 +0100, Vincent Guittot wrote...
>
> > On Wed, 18 Sep 2019 at 16:19, Patrick Bellasi
> > wrote:
>
> [...]
>
> >> $> Wakeup path tunings
> >> ===
On Wed, 18 Sep 2019 at 16:19, Patrick Bellasi wrote:
>
>
> On Wed, Sep 18, 2019 at 13:41:04 +0100, Parth Shah wrote...
>
> > Hello everyone,
>
> Hi Parth,
> thanks for staring this discussion.
>
> [ + patrick.bell...@matbug.net ] my new email address, since with
> @arm.com I will not be reachable
+/*
> > > + * APERF/MPERF frequency ratio computation.
> > > + *
> > > + * The scheduler wants to do frequency invariant accounting and
> > > needs a <1
> > > + * ratio to account for the 'current' frequency, corresponding to
> > > + * fr
, it will be taken when interrupts are enabled.
In this case, it may cause a deadlock problem if the rq.lock is locked
again in the timer ISR.
Hence, the handle_exception() can only enable interrupts when the state of
sstatus.SPIE is 1.
This patch is tested on HiFive Unleashed board.
Signed-off-by: Vincent
On Fri, 6 Sep 2019 at 16:13, Valentin Schneider
wrote:
>
> On 06/09/2019 13:45, Parth Shah wrote:>
> > I guess there is some usecase in case of thermal throttling.
> > If a task is heating up the core then in ideal scenarios POWER systems
> > throttle
> > down to rated frequency.
> > In such
s runnable time for CFS task.
>
> The detailed log is shown as following, CFS task(thread1-6580) is preempted
> by RT task(thread0-6674) about 332ms:
332ms is quite long and is probably not an idle load balance but a
busy load balance
> thread1-6580 [003] dnh294.452898: sched_
On Tue, 3 Sep 2019 at 22:27, Rik van Riel wrote:
>
> On Tue, 2019-09-03 at 17:38 +0200, Vincent Guittot wrote:
> > Hi Rik,
> >
> > On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
> > > Refactor enqueue_entity, dequeue_entity, and update_load_avg, in
> > &
Hi Rik,
On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
>
> Refactor enqueue_entity, dequeue_entity, and update_load_avg, in order
> to split out the things we still want to happen at every level in the
> cgroup hierarchy with a flat runqueue from the things we only need to
> happen once.
>
>
Hi Hillf,
Sorry for the late reply.
I have noticed that i didn't answer your question while preparing v3
On Fri, 9 Aug 2019 at 07:21, Hillf Danton wrote:
>
>
> On Thu, 1 Aug 2019 16:40:21 +0200 Vincent Guittot wrote:
> >
> > cfs load_balance only takes care of CFS t
On Fri, 30 Aug 2019 at 17:02, Rik van Riel wrote:
>
> On Fri, 2019-08-30 at 08:41 +0200, Vincent Guittot wrote:
>
> > > When tasks get their timeslice rounded up, that will increase
> > > the total sched period in a similar way the old code did by
> >
Hi Phil,
On Thu, 29 Aug 2019 at 21:23, Phil Auld wrote:
>
> On Thu, Aug 01, 2019 at 04:40:16PM +0200 Vincent Guittot wrote:
> > Several wrong task placements have been raised with the current load
> >
> > --
> > 2.7.4
> >
>
> I keep expecting a v3 so I h
On Thu, 29 Aug 2019 at 18:00, Rik van Riel wrote:
>
> On Thu, 2019-08-29 at 16:02 +0200, Vincent Guittot wrote:
> > On Thu, 29 Aug 2019 at 01:19, Rik van Riel wrote:
> >
> > > What am I overlooking?
> >
> > My point is more for task that runs severa
On Wed, 28 Aug 2019 at 16:19, Valentin Schneider
wrote:
>
> On 26/08/2019 11:11, Vincent Guittot wrote:
> >>> + case group_fully_busy:
> >>> + /*
> >>> + * Select the fully busy group with highest avg_load.
> >>>
On Wed, 28 Aug 2019 at 11:46, Valentin Schneider
wrote:
>
> On 27/08/2019 13:28, Vincent Guittot wrote:
> > On Thu, 15 Aug 2019 at 16:52, Valentin Schneider
> > wrote:
> >>
> >> The CFS load balancer can cause the cpu_stopper to run a function to
> >
On Thu, 29 Aug 2019 at 01:19, Rik van Riel wrote:
>
> On Wed, 2019-08-28 at 19:32 +0200, Vincent Guittot wrote:
> > On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
> > > The idea behind __sched_period makes sense, but the results do not
> > > always.
>
On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
>
> The idea behind __sched_period makes sense, but the results do not always.
>
> When a CPU has one high priority task and a large number of low priority
> tasks, __sched_period will return a value larger than sysctl_sched_latency,
> and the one
On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
>
> The way the time slice length is currently calculated, not only do high
> priority tasks get longer time slices than low priority tasks, but due
> to fixed point math, low priority tasks could end up with a zero length
> time slice. This can
On Wed, 28 Aug 2019 at 17:28, Rik van Riel wrote:
>
> On Wed, 2019-08-28 at 15:53 +0200, Vincent Guittot wrote:
> > On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
> > > Use an explicit "cfs_rq of parent sched_entity" helper in a few
> > > strategic p
On Wed, 28 Aug 2019 at 16:48, Rik van Riel wrote:
>
> On Wed, 2019-08-28 at 15:50 +0200, Vincent Guittot wrote:
> > Hi Rik,
> >
> > On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
> > > The runnable_load magic is used to quickly propagate information
&g
> is on the rq->leaf_cfs_rq_list.
>
> By only removing a cfs_rq from the list once it no longer has children
> on the list, we can avoid walking the sched_entity hierarchy if the bottom
> cfs_rq is on the list, once the runqueues have been flattened.
>
> Signed-off-by: Rik van Riel
On Thu, 22 Aug 2019 at 04:18, Rik van Riel wrote:
>
> Use an explicit "cfs_rq of parent sched_entity" helper in a few
> strategic places, where cfs_rq_of(se) may no longer point at the
The only case is the sched_entity of a task, which will point to the
root cfs_rq, isn't it?
> right runqueue once we
901 - 1000 of 6120 matches