On 09/08/19 11:17, Dietmar Eggemann wrote:
> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
>
> [...]
>
> > +void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> > + dl_server_has_tasks_f has_tasks,
> > + dl_server_pick_f pick)
> > +{
> > +
On 08/08/19 12:31, Peter Zijlstra wrote:
> On Thu, Aug 08, 2019 at 10:46:52AM +0200, Juri Lelli wrote:
> > On 08/08/19 10:11, Dietmar Eggemann wrote:
>
> > > What about the fast path in pick_next_task()?
> > >
> > > diff --git a/kernel/sched/core.c b/kerne
On 08/08/19 11:27, Juri Lelli wrote:
> On 08/08/19 10:57, Dietmar Eggemann wrote:
> > On 8/8/19 10:46 AM, Juri Lelli wrote:
> > > On 08/08/19 10:11, Dietmar Eggemann wrote:
> > >> On 8/8/19 9:56 AM, Peter Zijlstra wrote:
> > >>> On Wed, Aug 07, 201
On 08/08/19 10:57, Dietmar Eggemann wrote:
> On 8/8/19 10:46 AM, Juri Lelli wrote:
> > On 08/08/19 10:11, Dietmar Eggemann wrote:
> >> On 8/8/19 9:56 AM, Peter Zijlstra wrote:
> >>> On Wed, Aug 07, 2019 at 06:31:59PM +0200, Dietmar Eggemann wrote:
> >>>
On 08/08/19 10:11, Dietmar Eggemann wrote:
> On 8/8/19 9:56 AM, Peter Zijlstra wrote:
> > On Wed, Aug 07, 2019 at 06:31:59PM +0200, Dietmar Eggemann wrote:
> >> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
> >>>
> >>>
> >>> Signed-off-by: Peter Zijlstra (Intel)
> >>
> >> [...]
> >>
> >>> @@ -889,6
Hi,
On 26/07/19 16:54, Peter Zijlstra wrote:
[...]
> +void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> + dl_server_has_tasks_f has_tasks,
> + dl_server_pick_f pick)
> +{
> + dl_se->dl_server = 1;
> + dl_se->rq = rq;
> +
Hi Dietmar,
On 07/08/19 18:31, Dietmar Eggemann wrote:
> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
> >
> >
> > Signed-off-by: Peter Zijlstra (Intel)
>
> [...]
>
> > @@ -889,6 +891,8 @@ static void update_curr(struct cfs_rq *c
> > trace_sched_stat_runtime(curtask, delta_exec,
Hi,
On 07/08/19 16:07, Steven Rostedt wrote:
> On Mon, 5 Aug 2019 12:06:46 +0200
> Juri Lelli wrote:
>
> > This only happens if isolcpus are configured at boot.
> >
> > AFAIU, RT is reworking workqueues and 5.x-rt shouldn't suffer from this.
> > As
Hi,
Booting 4.19.59-rt24 with debug options enabled (DEBUG_ATOMIC_SLEEP), I
noticed the following splat (edited for clarity):
--->8---
Linux version 4.19.59-rt24 (...) (...) #2 SMP PREEMPT RT Mon Aug 5 05:23:26
EDT 2019
Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.19.59-rt24 ... skew_tick=1
Commit-ID: 850377a875a481c393ce59111b0c9725005e0eb4
Gitweb: https://git.kernel.org/tip/850377a875a481c393ce59111b0c9725005e0eb4
Author: Juri Lelli
AuthorDate: Wed, 31 Jul 2019 12:37:15 +0200
Committer: Thomas Gleixner
CommitDate: Thu, 1 Aug 2019 20:51:22 +0200
sched/deadline: Ensure
Commit-ID: 4394ba872c36255d25c6bde151b061f04655ebfb
Gitweb: https://git.kernel.org/tip/4394ba872c36255d25c6bde151b061f04655ebfb
Author: Juri Lelli
AuthorDate: Wed, 31 Jul 2019 12:37:15 +0200
Committer: Thomas Gleixner
CommitDate: Thu, 1 Aug 2019 17:43:20 +0200
sched/deadline: Ensure
Commit-ID: b223cc1bb098ebd1077a5390c434db411806d6b8
Gitweb: https://git.kernel.org/tip/b223cc1bb098ebd1077a5390c434db411806d6b8
Author: Juri Lelli
AuthorDate: Wed, 31 Jul 2019 12:37:15 +0200
Committer: Thomas Gleixner
CommitDate: Wed, 31 Jul 2019 13:01:26 +0200
sched/deadline: Ensure
SCHED_DEADLINE inactive timer needs to run in hardirq context (as
dl_task_timer already does).
Make it HRTIMER_MODE_REL_HARD.
Signed-off-by: Juri Lelli
---
Hi,
Both v4.19-rt and v5.2-rt need this.
Mainline "sched: Mark hrtimers to expire in hard interrupt context"
series needs th
On 29/07/19 18:49, Peter Zijlstra wrote:
> On Fri, Jul 26, 2019 at 09:27:55AM +0100, Dietmar Eggemann wrote:
> > Remove BUG_ON() in __enqueue_dl_entity() since there is already one in
> > enqueue_dl_entity().
> >
> > Move the check that the dl_se is not on the dl_rq from
> > __dequeue_dl_entity()
On 29/07/19 15:04, Peter Zijlstra wrote:
> On Mon, Jul 29, 2019 at 01:27:02PM +0200, Juri Lelli wrote:
> > On 29/07/19 13:15, Peter Zijlstra wrote:
> > > On Mon, Jul 29, 2019 at 11:25:19AM +0200, Juri Lelli wrote:
> > > > Hi,
> > > >
>
On 29/07/19 13:15, Peter Zijlstra wrote:
> On Mon, Jul 29, 2019 at 11:25:19AM +0200, Juri Lelli wrote:
> > Hi,
> >
> > On 26/07/19 16:54, Peter Zijlstra wrote:
> > > Because pick_next_task() implies set_curr_task() and some of the
> > > details haven't
Hi,
On 26/07/19 16:54, Peter Zijlstra wrote:
> Because pick_next_task() implies set_curr_task() and some of the
> details haven't mattered too much, some of what _should_ be in
> set_curr_task() ended up in pick_next_task, correct this.
>
> This prepares the way for a pick_next_task() variant that
Hi,
On 26/07/19 16:54, Peter Zijlstra wrote:
>
> Cc: Daniel Bristot de Oliveira
> Cc: Luca Abeni
> Cc: Juri Lelli
> Cc: Dmitry Vyukov
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> include/linux/sched/sysctl.h |3 +++
> kernel/s
Hi,
On 26/07/19 09:37, Valentin Schneider wrote:
> On 26/07/2019 09:27, Dietmar Eggemann wrote:
> > Remove BUG_ON() in __enqueue_dl_entity() since there is already one in
> > enqueue_dl_entity().
> >
> > Move the check that the dl_se is not on the dl_rq from
> > __dequeue_dl_entity() to
Commit-ID: a07db5c0865799ebed1f88be0df50c581fb65029
Gitweb: https://git.kernel.org/tip/a07db5c0865799ebed1f88be0df50c581fb65029
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 08:34:55 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:05 +0200
sched/core: Fix CPU
Commit-ID: 710da3c8ea7dfbd327920afd3831d8c82c42789d
Gitweb: https://git.kernel.org/tip/710da3c8ea7dfbd327920afd3831d8c82c42789d
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 16:00:00 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:04 +0200
sched/core: Prevent race
Commit-ID: 1a763fd7c6335e3122c1cc09576ef6c99ada4267
Gitweb: https://git.kernel.org/tip/1a763fd7c6335e3122c1cc09576ef6c99ada4267
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 15:59:59 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:03 +0200
rcu/tree: Call setschedule
Commit-ID: d74b27d63a8bebe2fe634944e4ebdc7b10db7a39
Gitweb: https://git.kernel.org/tip/d74b27d63a8bebe2fe634944e4ebdc7b10db7a39
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 15:59:58 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:03 +0200
cgroup/cpuset: Change
Commit-ID: 1243dc518c9da467da6635313a2dbb41b8ffc275
Gitweb: https://git.kernel.org/tip/1243dc518c9da467da6635313a2dbb41b8ffc275
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 15:59:57 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:02 +0200
cgroup/cpuset: Convert
Commit-ID: 59d06cea1198d665ba11f7e8c5f45b00ff2e4812
Gitweb: https://git.kernel.org/tip/59d06cea1198d665ba11f7e8c5f45b00ff2e4812
Author: Juri Lelli
AuthorDate: Fri, 19 Jul 2019 15:59:56 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:55:02 +0200
sched/deadline: Fix
Hi,
On 25/07/19 15:56, Ingo Molnar wrote:
>
> * Ingo Molnar wrote:
>
> >
> > * Juri Lelli wrote:
> >
> > > When the topology of root domains is modified by CPUset or CPUhotplug
> > > operations information about the current deadline ba
On 23/07/19 06:11, Tejun Heo wrote:
> On Tue, Jul 23, 2019 at 12:31:31PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> >
> > > Thanks for reporting. The set is based on cgroup/for-next (as of last
> > > week), thou
On 22/07/19 15:21, Dietmar Eggemann wrote:
> On 7/22/19 2:28 PM, Juri Lelli wrote:
> > On 22/07/19 13:07, Dietmar Eggemann wrote:
> >> On 7/19/19 3:59 PM, Juri Lelli wrote:
> >>
> >> [...]
> >>
> >>> @@ -557,6 +558,38 @@ static struct r
On 22/07/19 13:07, Dietmar Eggemann wrote:
> On 7/19/19 3:59 PM, Juri Lelli wrote:
>
> [...]
>
> > @@ -557,6 +558,38 @@ static struct rq *dl_task_offline_migration(struct rq
> > *rq, struct task_struct *p
> > double_lock_balance(rq, later_rq);
On 22/07/19 10:21, Dietmar Eggemann wrote:
> On 7/19/19 3:59 PM, Juri Lelli wrote:
> > From: Mathieu Poirier
>
> [...]
>
> > @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
> > */
> > if
Hi,
On 19/07/19 17:49, Steven Rostedt wrote:
> 4.19.59-rt24-rc1 stable review patch.
> If anyone has any objections, please let me know.
>
> --
>
> From: Sebastian Andrzej Siewior
>
> [ Upstream commit 0532e87d9d44795221aa921ba7024bde689cc894 ]
>
> Add kthread_schedule_work()
is only called by sysrq and, if that gets
triggered, DEADLINE guarantees have already gone out the window
anyway.
Signed-off-by: Juri Lelli
---
v8 -> v9:
- Add comment in changelog regarding normalize_rt_tasks() (Peter)
---
include/linux/cpuset.h | 5 +
kernel/cgroup/cpuset.c |
From: Mathieu Poirier
Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location need to
.
Suggested-by: Peter Zijlstra
Signed-off-by: Juri Lelli
---
kernel/rcu/tree.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 980ca3ca643f..32ea75acba14 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3123,13 +3123,13
) and cpuset_rwsem (to be always acquired after hotplug lock).
Fix paths which currently take the two locks in the wrong order (after
a following patch is applied).
Signed-off-by: Juri Lelli
---
include/linux/cpuset.h | 8
kernel/cgroup/cpuset.c | 22 +-
2 files changed, 21
create a bottleneck for tasks concurrently calling
setscheduler().
Convert cpuset_mutex to be a percpu_rwsem (cpuset_rwsem), so that
setscheduler() will then be able to read lock it and avoid concurrency
issues.
Signed-off-by: Juri Lelli
---
v8 -> v9:
- make cpuset_{can,cancel}_attach g
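The conversion above exists precisely so that concurrent setscheduler() callers only take the cheap, shared read side of the lock instead of serializing on cpuset_mutex. A toy Python model of that read/write asymmetry (this is not the kernel's percpu_rwsem; the RWLock class and the rendezvous test are invented purely to illustrate why a shared read side removes the bottleneck):

```python
import threading

class RWLock:
    """Tiny reader-writer lock: many readers may hold it at once,
    a writer waits until no reader is inside (simplified, writer-starving)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def read_acquire(self):
        with self._cond:
            self._readers += 1

    def read_release(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def write_acquire(self):
        self._cond.acquire()
        while self._readers:
            self._cond.wait()

    def write_release(self):
        self._cond.release()

lock = RWLock()
barrier = threading.Barrier(2, timeout=5)

def reader():
    lock.read_acquire()
    barrier.wait()       # both readers rendezvous *inside* the lock;
    lock.read_release()  # with an exclusive mutex this would deadlock

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=reader)
t1.start(); t2.start(); t1.join(); t2.join()
print("both readers held the lock concurrently")
```

With a plain mutex the two readers could never meet inside the critical section; with a shared read side they can, which is the property the cpuset_mutex -> cpuset_rwsem conversion relies on for the setscheduler() hot path.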
fires and task is migrated (dl_task_offline_migration()).
Signed-off-by: Juri Lelli
---
kernel/sched/deadline.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 4cedcf8d6b03..f0166ab8c6b4 100644
in
CPUsets and adding their current load to the root domain they are
associated with.
Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
include/linux/cgroup.h | 1 +
include/linux/sched.h | 5 +++
include/linux/sched/deadline.h | 8 +
kernel/cgroup/cgroup.c
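The (truncated) changelog above rebuilds root-domain DEADLINE accounting by walking the tasks in each cpuset and re-adding their bandwidth to the root domain they belong to. A minimal Python sketch of that bookkeeping and of the admission test it feeds; all names and numbers are invented for illustration (the kernel keeps this state in struct dl_bw, not a dict):

```python
from fractions import Fraction

def task_bw(runtime_us, period_us):
    """Bandwidth of a DEADLINE task: runtime / period."""
    return Fraction(runtime_us, period_us)

def rebuild_root_domain_bw(tasks_by_domain):
    """After a cpuset/hotplug operation the per-domain totals are stale:
    recompute them from scratch by iterating every task, as the series
    does for the real root domains."""
    return {dom: sum((task_bw(r, p) for (r, p) in tasks), Fraction(0))
            for dom, tasks in tasks_by_domain.items()}

def admit(total_bw, new_runtime, new_period, capacity=Fraction(1)):
    """Admission test: accept a new task only if the summed bandwidth
    still fits the domain's capacity."""
    return total_bw + task_bw(new_runtime, new_period) <= capacity

domains = {"rd0": [(10_000, 100_000), (30_000, 100_000)],  # 0.1 + 0.3
           "rd1": [(50_000, 100_000)]}                     # 0.5
totals = rebuild_root_domain_bw(domains)
print(totals["rd0"])                           # 2/5
print(admit(totals["rd0"], 50_000, 100_000))   # 0.4 + 0.5 <= 1 -> True
print(admit(totals["rd1"], 60_000, 100_000))   # 0.5 + 0.6 >  1 -> False
```

The point of the rebuild is that without it the totals would keep bandwidth of tasks that moved to another root domain, and admission decisions like the ones above would be made against wrong numbers.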
From: Mathieu Poirier
Introduce the function partition_sched_domains_locked() by taking
the mutex locking code out of the original function. That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.
No change of functionality is introduced by this
-domain-accounting-v9
Thanks,
- Juri
[1] https://lkml.org/lkml/2016/2/3/966
Juri Lelli (6):
cpuset: Rebuild root domain deadline accounting information
sched/deadline: Fix bandwidth accounting at all levels after offline
migration
cgroup/cpuset: convert cpuset_mutex to percpu_rwsem
D configurations, since checks related to RT bandwidth
are not performed at all in these cases.
Make moving RT tasks between cpu controller groups viable by removing
special case check for RT (and DEADLINE) tasks.
Signed-off-by: Juri Lelli
Reviewed-by: Michal Koutný
Acked-by: Tejun Heo
---
ist
rmqueue_bulk
<-- spin_lock(&zone->lock) - BUG
Fix this by using {get,put}_cpu_light() in ipcomp_decompress().
Signed-off-by: Juri Lelli
---
Hi,
This has been found on a 4.19.x-rt kernel, but 5.x-rt(s) are affected as
well.
Best,
Juri
---
net/xfrm/xfrm
Hi Clark,
On 16/07/19 17:55, Clark Williams wrote:
> Saw this after applying my thermal lock-to-raw patch and the change in i915
> for lockdep. The
> splat occurred on boot when creating the kdump initramfs. System is an Intel
> NUC i7 with 32GB ram
> and 256GB SSD for rootfs.
>
> The
On 04/07/19 10:49, Juri Lelli wrote:
> Hi,
>
> On 01/07/19 07:51, Tejun Heo wrote:
> > Hello,
> >
> > On Mon, Jul 01, 2019 at 10:27:31AM +0200, Peter Zijlstra wrote:
> > > IIRC TJ figured it wasn't strictly required to fix the lock invertion at
> >
Hi,
On 01/07/19 07:51, Tejun Heo wrote:
> Hello,
>
> On Mon, Jul 01, 2019 at 10:27:31AM +0200, Peter Zijlstra wrote:
> > IIRC TJ figured it wasn't strictly required to fix the lock invertion at
> > that time and they sorted it differently. If I (re)read the thread
> > correctly the other day, he
On 01/07/19 21:13, Peter Zijlstra wrote:
> On Fri, Jun 28, 2019 at 10:06:18AM +0200, Juri Lelli wrote:
> > sched_setscheduler() needs to acquire cpuset_rwsem, but it is currently
> > called from an invalid (atomic) context by rcu_spawn_gp_kthread().
> >
> >
On 01/07/19 21:11, Peter Zijlstra wrote:
> On Fri, Jun 28, 2019 at 10:06:17AM +0200, Juri Lelli wrote:
> > No synchronisation mechanism exists between the cpuset subsystem and
> > calls to function __sched_setscheduler(). As such, it is possible that
> > new root domains are
Hi,
On 28/06/19 15:03, Peter Zijlstra wrote:
> On Fri, Jun 28, 2019 at 10:06:16AM +0200, Juri Lelli wrote:
> > cpuset_rwsem is going to be acquired from sched_setscheduler() with a
> > following patch. There are however paths (e.g., spawn_ksoftirqd) in
> > which sched_sche
Hi,
On 28/06/19 14:45, Peter Zijlstra wrote:
> On Fri, Jun 28, 2019 at 10:06:15AM +0200, Juri Lelli wrote:
> > @@ -2154,7 +2154,7 @@ static int cpuset_can_attach(struct cgroup_taskset
> > *tset)
> > cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));
>
.
Suggested-by: Peter Zijlstra
Signed-off-by: Juri Lelli
---
kernel/rcu/tree.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 980ca3ca643f..32ea75acba14 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3123,13 +3123,13
) and cpuset_rwsem (to be always acquired after hotplug lock).
Fix paths which currently take the two locks in the wrong order (after
a following patch is applied).
Signed-off-by: Juri Lelli
---
include/linux/cpuset.h | 8
kernel/cgroup/cpuset.c | 22 +-
2 files changed, 21
of CPU bandwidth.
Grab the cpuset_rwsem read lock from the core scheduler, so as to prevent
situations such as the one described above from happening.
Signed-off-by: Juri Lelli
---
v7->v8: use a percpu_rwsem read lock to avoid hotpath bottleneck issues
---
include/linux/cpuset.h | 5 +
kernel/cgr
fires and task is migrated (dl_task_offline_migration()).
Signed-off-by: Juri Lelli
---
kernel/sched/deadline.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 4cedcf8d6b03..f0166ab8c6b4 100644
in
CPUsets and adding their current load to the root domain they are
associated with.
Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
include/linux/cgroup.h | 1 +
include/linux/sched.h | 5 +++
include/linux/sched/deadline.h | 8 +
kernel/cgroup/cgroup.c
From: Mathieu Poirier
Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location need to
create a bottleneck for tasks concurrently calling
setscheduler().
Convert cpuset_mutex to be a percpu_rwsem (cpuset_rwsem), so that
setscheduler() will then be able to read lock it and avoid concurrency
issues.
Signed-off-by: Juri Lelli
---
kernel/cgroup/cpuset.c | 68
From: Mathieu Poirier
Introduce the function partition_sched_domains_locked() by taking
the mutex locking code out of the original function. That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.
No change of functionality is introduced by this
/3/966
Juri Lelli (6):
cpuset: Rebuild root domain deadline accounting information
sched/deadline: Fix bandwidth accounting at all levels after offline
migration
cgroup/cpuset: convert cpuset_mutex to percpu_rwsem
cgroup/cpuset: Change cpuset_rwsem and hotplug lock order
sched/core
Hi,
On 19/06/19 11:29, Michal Koutný wrote:
> On Wed, Jun 05, 2019 at 04:20:03PM +0200, Michal Koutný
> wrote:
> > I considered relaxing the check to non-root cgroups only, however, as
> > your example shows, it doesn't prevent reaching the avoided state by
> > other paths. I'm not that
make much sense for
!RT_GROUP_SCHED configurations, since checks related to RT bandwidth
are not performed at all in these cases.
Make moving RT tasks between cpu controller groups viable by removing
special case check for RT (and DEADLINE) tasks.
Signed-off-by: Juri Lelli
---
Hi,
Although I'm
Hi,
On 14/01/19 17:19, Juri Lelli wrote:
> Power Management and Scheduling in the Linux Kernel (OSPM-summit) III edition
> May 20-22, 2019
> Scuola Superiore Sant'Anna
> Pisa, Italy
>
> ---
>
> .:: FOCUS
>
> The III edition of the Power Management and Scheduli
On 08/05/19 14:47, luca abeni wrote:
[...]
> Notice that all this logic is used only to select one of the idle cores
> (instead of picking the first idle core, we select the less powerful
> core on which the task "fits").
>
> So, running_bw does not provide any useful information, in this case;
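Luca's "pick the least powerful idle core on which the task fits" heuristic can be sketched as a simple fitting test. The capacity values and helper names below are made up, and 1024 stands in for the kernel's SCHED_CAPACITY_SCALE; this is only a sketch of the selection logic, not the actual scheduler code:

```python
def fits(task_bw, cpu_capacity, max_capacity=1024):
    """A task 'fits' a CPU if its bandwidth, scaled against the biggest
    CPU in the system, does not exceed this CPU's capacity."""
    return task_bw * max_capacity <= cpu_capacity

def pick_idle_cpu(idle_cpus, task_bw):
    """Among the idle CPUs, pick the least capable one that still fits
    the task, keeping big cores free for heavier work."""
    for cap, cpu in sorted((c, i) for i, c in idle_cpus.items()):
        if fits(task_bw, cap):
            return cpu
    return None  # nothing fits: fall back to global-EDF style placement

idle = {0: 1024, 1: 512, 2: 256}   # cpu -> capacity (big.LITTLE-ish)
print(pick_idle_cpu(idle, 0.20))   # fits the 256-capacity little core -> 2
print(pick_idle_cpu(idle, 0.40))   # needs ~410, first fit is the 512 core -> 1
print(pick_idle_cpu(idle, 0.99))   # only the big core fits -> 0
```

As the quoted message notes, running_bw plays no role here: only the per-task bandwidth and per-CPU capacities decide which idle core is chosen.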
On 08/05/19 11:24, luca abeni wrote:
> On Wed, 8 May 2019 11:08:55 +0200
> Juri Lelli wrote:
>
> > On 06/05/19 06:48, Luca Abeni wrote:
> > > From: luca abeni
> > >
> > > Instead of considering the "static CPU bandwidth" allocated to
>
On 08/05/19 10:14, luca abeni wrote:
> Hi Juri,
>
> On Wed, 8 May 2019 10:01:16 +0200
> Juri Lelli wrote:
>
> > Hi Luca,
> >
> > On 06/05/19 06:48, Luca Abeni wrote:
> > > From: luca abeni
> > >
> > > Currently, the scheduler tri
On 06/05/19 06:48, Luca Abeni wrote:
> From: luca abeni
>
> Instead of considering the "static CPU bandwidth" allocated to
> a SCHED_DEADLINE task (ratio between its maximum runtime and
> reservation period), try to use the remaining runtime and time
> to scheduling deadline.
>
> Signed-off-by:
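The quoted changelog proposes replacing the static bandwidth (maximum runtime over period) with the remaining runtime over the time left to the scheduling deadline. A toy comparison with invented numbers, just to show why the dynamic view can be less pessimistic mid-period:

```python
def static_bw(runtime, period):
    """Whole-reservation view: ratio of maximum runtime to period."""
    return runtime / period

def dynamic_bw(remaining_runtime, deadline, now):
    """What the task still needs *right now*: runtime left over
    time left to the scheduling deadline."""
    left = deadline - now
    return remaining_runtime / left if left > 0 else float("inf")

# Task: 4ms runtime every 10ms; it already ran 3ms and its current
# scheduling deadline is at t=10ms, with now at t=5ms.
print(static_bw(4, 10))        # 0.4  -> pessimistic whole-period view
print(dynamic_bw(1, 10, 5))    # 0.2  -> only 1ms needed in the next 5ms
```

Under the dynamic view the partially-executed task above only needs a 0.2 bandwidth for the rest of its period, so it may fit a smaller core that its static 0.4 bandwidth would have ruled out.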
Hi Luca,
On 06/05/19 06:48, Luca Abeni wrote:
> From: luca abeni
>
> Currently, the SCHED_DEADLINE scheduler uses a global EDF scheduling
> algorithm, migrating tasks to CPU cores without considering the core
> capacity and the task utilization. This works well on homogeneous
> systems
Hi Luca,
On 06/05/19 06:48, Luca Abeni wrote:
> From: luca abeni
>
> Currently, the scheduler tries to find a proper placement for
> SCHED_DEADLINE tasks when they are pushed out of a core or when
> they wake up. Hence, if there is a single SCHED_DEADLINE task
> that never blocks and wakes up,
On 26/03/19 10:34, Juri Lelli wrote:
> Hi,
>
> Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
> turned on) produces warning below.
And I now think this might lead to an actual crash.
I've got what's below while running the xfstest suite [1] on 4.19.31-rt18.
gene
Commit-ID: cb0c04143b6196f4a479ba113706329fc667ee15
Gitweb: https://git.kernel.org/tip/cb0c04143b6196f4a479ba113706329fc667ee15
Author: Juri Lelli
AuthorDate: Wed, 19 Dec 2018 14:34:45 +0100
Committer: Ingo Molnar
CommitDate: Fri, 19 Apr 2019 19:44:15 +0200
sched/topology: Update
Commit-ID: b6fbbf31d15b5072250ec6ed79e415a1160e5621
Gitweb: https://git.kernel.org/tip/b6fbbf31d15b5072250ec6ed79e415a1160e5621
Author: Juri Lelli
AuthorDate: Wed, 19 Dec 2018 14:34:44 +0100
Committer: Ingo Molnar
CommitDate: Fri, 19 Apr 2019 19:44:14 +0200
cgroup/cpuset: Update stale
Hi,
On 05/04/19 14:36, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 10:46:47AM +0200, Juri Lelli wrote:
> > +static inline void cpuset_read_only_lock(unsigned long *flags)
> > +{
> > + local_irq_save(*flags);
> > + preempt_disable();
> >
Hi,
On 05/04/19 14:04, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 10:46:44AM +0200, Juri Lelli wrote:
> > +/*
> > + * Call with hotplug lock held
>
> Is that spelled like:
>
> lockdep_assert_cpus_held();
>
> ?
Indeed, but I had that in previous v
From: Mathieu Poirier
Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location need to
The atomicity of cpuset_common_seq_show() operations is currently
guarded by callback_lock. Since these operations are initiated by
userspace, holding a raw_spin_lock is not wise.
Convert the function to use cpuset_mutex to fix the problem.
Signed-off-by: Juri Lelli
---
kernel/cgroup/cpuset.c | 4
in
CPUsets and adding their current load to the root domain they are
associated with.
Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
v6 -> v7: make dl_add_task_root_domain() use raw_spin_(un)lock() instead
of the _irqsave variants as irqs are already disabled by ta
of CPU bandwidth.
Grab callback_lock from the core scheduler, so as to prevent situations such as
the one described above from happening.
Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
v6->v7: take cpuset_read_only_lock before rq and pi locks, so as not to
introduce an unwan
to pay
for the time being.
Signed-off-by: Juri Lelli
---
v6->v7: Added comment in changelog about callback_lock potential
problems w.r.t. userspace ops. [peterz]
---
kernel/cgroup/cpuset.c | 70 +-
1 file changed, 35 insertions(+), 35 deletions(-)
diff --
fires and task is migrated (dl_task_offline_migration()).
Signed-off-by: Juri Lelli
---
kernel/sched/deadline.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index c8a654b133da..91be79072845 100644
; deals with offline migrations (noticed the problem
while testing)
Set also available at
https://github.com/jlelli/linux.git fixes/deadline/root-domain-accounting-v7
Thanks,
- Juri
[1] https://lkml.org/lkml/2016/2/3/966
Juri Lelli (5):
cgroup/cpuset: make callback_lock raw
sched/core
From: Mathieu Poirier
Introduce the function partition_sched_domains_locked() by taking
the mutex locking code out of the original function. That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.
No change of functionality is introduced by this
Hi,
On 30/03/19 12:09, Borislav Petkov wrote:
> On Sat, Mar 30, 2019 at 07:57:50PM +0900, Tetsuo Handa wrote:
> > Yes. But what such threshold be? 0.1 second? 1 second? 10 seconds?
> > Can we find a threshold where everyone can agree on?
>
> This is what we do all day on lkml: discussing changes
ontending() is called) while the
> 0-lag timer is still active. In this case, the safest thing to
> do is to immediately decrease the running bandwidth of the task,
> without trying to re-arm the 0-lag timer.
>
> Signed-off-by: luca abeni
But I could verify that this fixes the issue I was also able to
reproduce.
Acked-by: Juri Lelli
Thanks!
- Juri
Hi,
Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
turned on) produces warning below.
--->8---
# dd if=/dev/zero of=fsfreezetest count=99
# mkfs -t xfs -q ./fsfreezetest
# mkdir testmount
# mount -t xfs -o loop ./fsfreezetest ./testmount
# for I in `seq 10`; do
Hi,
On 13/03/19 15:49, luca abeni wrote:
> Hi,
>
> (I added Juri in cc)
>
> On Tue, 12 Mar 2019 10:03:12 +0800
> "chengjian (D)" wrote:
> [...]
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index 31c050a0d0ce..d73cb033a06d 100644
> > --- a/kernel/sched/deadline.c
> >
Commit-ID: 0f0b7e1cc7abf8e1a8b301f2868379d611d05ae2
Gitweb: https://git.kernel.org/tip/0f0b7e1cc7abf8e1a8b301f2868379d611d05ae2
Author: Juri Lelli
AuthorDate: Thu, 7 Mar 2019 13:09:13 +0100
Committer: Thomas Gleixner
CommitDate: Fri, 22 Mar 2019 14:14:58 +0100
x86/tsc: Add option
Hi,
On 07/03/19 13:09, Juri Lelli wrote:
> Clocksource watchdog has been found responsible for generating latency
> spikes (in the 10-20 us range) when woken up to check for TSC stability.
>
> Add an option to disable it at boot.
Gentle ping.
Does this make any sense?
Thanks,
- Juri
Clocksource watchdog has been found responsible for generating latency
spikes (in the 10-20 us range) when woken up to check for TSC stability.
Add an option to disable it at boot.
Signed-off-by: Juri Lelli
---
Sending this out as an RFC after yesterday's discussion with Thomas on IRC.
AFAICT
Hi,
On 07/03/19 09:31, Quentin Perret wrote:
> Hi Juri,
>
> On Thursday 07 Mar 2019 at 08:28:56 (+0100), Juri Lelli wrote:
> > There are cases in which this needs to be RW, as recently discussed
> > https://lore.kernel.org/lkml/20181123135807.GA14964@e107155-lin/
>
>
Hi,
On 06/03/19 20:57, Lingutla Chandrasekhar wrote:
> If user updates any cpu's cpu_capacity, then the new value is going to
> be applied to all its online sibling cpus. But this need not to be correct
> always, as sibling cpus (in ARM, same micro architecture cpus) would have
> different
Hello,
A quick one to inform everybody that registrations are now open!
Although the list of topics looks pretty good already, we are still
accepting new ones. So, please don't hesitate to add yours.
Best,
- Juri
On 14/01/19 17:19, Juri Lelli wrote:
> Power Management and Scheduling in the Li
On 20/02/19 16:30, Sebastian Andrzej Siewior wrote:
> On 2019-02-20 08:47:51 [+0100], Juri Lelli wrote:
> > > In this case you prepare the wakeup and then wake the CPU anyway. There
> > > should be no downside to this unless the housekeeping CPU is busy and in
> > &g
On 19/02/19 18:19, Sebastian Andrzej Siewior wrote:
> On 2019-02-14 14:37:14 [+0100], Juri Lelli wrote:
> > Hi,
> Hi,
>
> > Now, I'm sending this and an RFC, as I'm wondering if the first behavior
> > is actually what we want, and it is not odd at all for reasons tha
On 19/02/19 17:06, Sebastian Andrzej Siewior wrote:
> On 2019-02-19 15:58:26 [+0100], Juri Lelli wrote:
> > Hi,
> Hi,
>
> > I've been seeing those messages while running some stress tests (hog
> > tasks pinned to CPUs).
> >
> > Have yet to see them after I
unhandled
> softirqs.
>
> Cc: stable...@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior
I've been seeing those messages while running some stress tests (hog
tasks pinned to CPUs).
I have yet to see them after applying this patch earlier this morning (it
usually took very little time to reproduce).
Tested-by: Juri Lelli
Thanks!
- Juri
per/2:0 [120]
Signed-off-by: Juri Lelli
---
include/linux/hrtimer.h | 2 ++
kernel/time/hrtimer.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 2bdb047c7656..c6d4941c7dd8 100644
--- a/include/linux/hrtimer.h
+++ b/i
In this case this posting might also
function as a question: why do we need things to work as they are today?
Thanks!
- Juri
Juri Lelli (2):
time/hrtimer: Add PINNED_HARD mode for realtime hrtimers
time/hrtimer: Embed hrtimer mode into hrtimer_sleeper
include/linux/hrtimer.h | 4
kernel/ti
hrtimer_sleeper initialization.
Signed-off-by: Juri Lelli
---
include/linux/hrtimer.h | 2 ++
kernel/time/hrtimer.c | 11 ++-
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index c6d4941c7dd8..d5f11ef5330a 100644
--- a/include/linux
On 05/02/19 12:20, Peter Zijlstra wrote:
> On Tue, Feb 05, 2019 at 10:51:43AM +0100, Juri Lelli wrote:
> > On 04/02/19 13:10, Peter Zijlstra wrote:
> > > On Thu, Jan 17, 2019 at 09:47:38AM +0100, Juri Lelli wrote:
> > > > No synchronisation mechanism exist
On 04/02/19 13:10, Peter Zijlstra wrote:
> On Thu, Jan 17, 2019 at 09:47:38AM +0100, Juri Lelli wrote:
> > No synchronisation mechanism exists between the cpuset subsystem and calls
> > to function __sched_setscheduler(). As such, it is possible that new root
> > domains are
On 04/02/19 13:45, Waiman Long wrote:
> On 02/04/2019 07:18 AM, Peter Zijlstra wrote:
> > On Mon, Feb 04, 2019 at 10:02:11AM +0100, Juri Lelli wrote:
> >> On 18/01/19 17:46, Juri Lelli wrote:
> >>> On 18/01/19 08:17, Tejun Heo wrote:
> >>>> On Thu,