On 09/10/18 11:44, Peter Zijlstra wrote:
> On Tue, Oct 09, 2018 at 11:24:26AM +0200, Juri Lelli wrote:
> > The main concerns I have with the current approach are that, being based
> > on mutex.c, it's both
> >
> > - not linked with futexes
> > - not involv
-by: Juri Lelli
---
kernel/sched/core.c | 62 ++--
kernel/sched/fair.c | 4 +++
kernel/sched/sched.h | 30 -
3 files changed, 82 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fe0223121883
be in all sorts of
states when a proxy is found (blocked, executing on a different CPU,
etc.). Details on how to handle the different situations can be found in
the proxy() code comments.
Signed-off-by: Peter Zijlstra (Intel)
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli
---
include
).
Signed-off-by: Juri Lelli
---
kernel/sched/core.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 54003515fd29..0314afe4ba80 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1664,6 +1664,14 @@ static inline
mutex::wait_lock might be nested under rq->lock.
Make it IRQ safe, then.
Signed-off-by: Juri Lelli
---
kernel/locking/mutex.c | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 23312afa7
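The irqsave/irqrestore semantics the patch above relies on can be sketched with a toy userland model. This is a hedged illustration only: `toy_lock_irqsave()`/`toy_unlock_irqrestore()` are hypothetical names mimicking the kernel's `spin_lock_irqsave()`/`spin_unlock_irqrestore()` pair, not real API.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of spin_lock_irqsave()/spin_unlock_irqrestore() semantics:
 * "interrupts" are a flag; acquiring saves and disables them, releasing
 * restores the *saved* state rather than unconditionally re-enabling.
 * Illustrative userland sketch, not the kernel API. */
static bool irqs_enabled = true;
static bool lock_held;

static unsigned long toy_lock_irqsave(void)
{
    unsigned long flags = irqs_enabled;  /* save current IRQ state */
    irqs_enabled = false;                /* local_irq_disable()    */
    lock_held = true;                    /* spin_lock()            */
    return flags;
}

static void toy_unlock_irqrestore(unsigned long flags)
{
    lock_held = false;                   /* spin_unlock()          */
    irqs_enabled = flags;                /* restore, don't enable  */
}
```

Restoring instead of re-enabling is what makes nesting under rq->lock safe: if the outer context had already disabled interrupts, releasing the inner lock does not turn them back on.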
for Wound-Wait
mutexes")]
Signed-off-by: Juri Lelli
---
kernel/locking/mutex.c | 43 +++---
1 file changed, 28 insertions(+), 15 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index df34ce70fcde..f37402cd8496 100644
--- a/ke
The blocked_on pointer might be concurrently modified by schedule() (when
proxy() is called) and by the wakeup path, so changes need to be guarded.
Ensure blocked_lock is always held before updating the blocked_on pointer.
Signed-off-by: Juri Lelli
---
kernel/locking/mutex-debug.c | 1 +
kernel/locking
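The invariant described above, that blocked_on is only touched with blocked_lock held, can be modeled in a few lines of userland C. The struct and accessor names below mirror the patch but are hypothetical; this is a pthread sketch, not kernel code.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Minimal model of the invariant: task->blocked_on is only ever read or
 * written with task->blocked_lock held, so schedule()/proxy() and the
 * wakeup path cannot race on it.  Userland sketch with illustrative
 * names, not the kernel implementation. */
struct mutex_s;                       /* stand-in for struct mutex */

struct task {
    pthread_mutex_t blocked_lock;
    struct mutex_s *blocked_on;       /* mutex this task blocks on */
};

static void task_set_blocked_on(struct task *t, struct mutex_s *m)
{
    pthread_mutex_lock(&t->blocked_lock);   /* guard concurrent update */
    t->blocked_on = m;
    pthread_mutex_unlock(&t->blocked_lock);
}

static struct mutex_s *task_get_blocked_on(struct task *t)
{
    pthread_mutex_lock(&t->blocked_lock);
    struct mutex_s *m = t->blocked_on;
    pthread_mutex_unlock(&t->blocked_lock);
    return m;
}
```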
tex
| | owner
| v
`-- task
This patch only enables the blocked-on relation; the blocked-task relation
will be enabled in a later patch implementing proxy().
Signed-off-by: Peter Zijlstra (Intel)
[minor changes while rebasing]
Signed-off-by: Juri Lelli
---
include/linux/sche
From: Peter Zijlstra
In preparation for nesting mutex::wait_lock under rq::lock, it needs to be a
raw_spinlock_t.
Signed-off-by: Peter Zijlstra
---
include/linux/mutex.h        |  4 ++--
kernel/locking/mutex-debug.c |  4 ++--
kernel/locking/mutex.c       | 22 +++---
3 files
-rt-users=153450086400459=2
4 - https://ieeexplore.ieee.org/document/5562902
5 - http://retis.sssup.it/~lipari/papers/rtlws2013.pdf
6 - https://lore.kernel.org/lkml/20180828135324.21976-1-patrick.bell...@arm.com/
Juri Lelli (3):
locking/mutex: make mutex::wait_lock irq safe
sched: Ensure blocked_on
lems and changelog might be improved.
Other than that, the patch looks good, thanks!
Acked-by: Juri Lelli
On 03/10/18 15:42, Steven Rostedt wrote:
> On Mon, 3 Sep 2018 16:28:00 +0200
> Juri Lelli wrote:
>
>
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 5b43f482fa0f..8dc26005bb1e 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/k
.GA25664@localhost.localdomain/
Best,
- Juri
On 03/09/18 16:27, Juri Lelli wrote:
> callback_lock grants the holder read-only access to cpusets. For fixing
> a synchronization issue between cpusets and scheduler core, it is now
> required to make callback_lock available to core sched
On 25/09/18 14:53, Peter Zijlstra wrote:
> On Mon, Sep 03, 2018 at 04:28:01PM +0200, Juri Lelli wrote:
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index fb7ae691cb82..08128bdf3944 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sche
On 25/09/18 14:32, Peter Zijlstra wrote:
> On Mon, Sep 03, 2018 at 04:28:01PM +0200, Juri Lelli wrote:
> > +/*
> > + * Called with cpuset_mutex held (rebuild_sched_domains())
> > + * Called with hotplug lock held (rebuild_sched_domains_locked())
> > + * Called wi
Hi,
On 03/09/18 16:27, Juri Lelli wrote:
> Hi,
>
> v5 of a series of patches, originally authored by Mathieu, with the intent
> of fixing a long standing issue of SCHED_DEADLINE bandwidth accounting.
> As originally reported by Steve [1], when hotplug and/or (certain)
> cpus
On 06/09/18 16:25, Dietmar Eggemann wrote:
> Hi Juri,
>
> On 08/23/2018 11:54 PM, Juri Lelli wrote:
> > On 23/08/18 18:52, Dietmar Eggemann wrote:
> > > Hi,
> > >
> > > On 08/21/2018 01:54 AM, Miguel de Dios wrote:
> > > > On 08/17/2018 1
On 06/09/18 15:40, Patrick Bellasi wrote:
> On 04-Sep 15:47, Juri Lelli wrote:
[...]
> > Wondering if you want to fold the check below inside the
> >
> > if (user && !capable(CAP_SYS_NICE)) {
> >...
> > }
> >
> > block.
On 06/09/18 14:48, Patrick Bellasi wrote:
> Hi Juri!
>
> On 05-Sep 12:45, Juri Lelli wrote:
> > Hi,
> >
> > On 28/08/18 14:53, Patrick Bellasi wrote:
> >
> > [...]
> >
> > > static inline int __setscheduler_uclamp(struct task_struct *p
On 28/08/18 14:53, Patrick Bellasi wrote:
[...]
> static inline int __setscheduler_uclamp(struct task_struct *p,
> const struct sched_attr *attr)
> {
> - if (attr->sched_util_min > attr->sched_util_max)
> - return -EINVAL;
> - if
Hi,
On 28/08/18 14:53, Patrick Bellasi wrote:
[...]
> Let's introduce a new API to set utilization clamping values for a
> specified task by extending sched_setattr, a syscall which already
> allows to define task specific properties for different scheduling
> classes.
> Specifically, a new
Hi,
On 28/08/18 14:53, Patrick Bellasi wrote:
[...]
> static inline int __setscheduler_uclamp(struct task_struct *p,
> const struct sched_attr *attr)
> {
> - if (attr->sched_util_min > attr->sched_util_max)
> - return -EINVAL;
> - if
nge
> this default behavior, thus allowing non-privileged tasks to change their
> utilization clamp values.
>
> Signed-off-by: Patrick Bellasi
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Rafael J. Wysocki
> Cc: Paul Turner
> Cc: Suren Baghdasaryan
> Cc: Todd Kjos
to a potential oversell
of CPU bandwidth.
Grab callback_lock from the core scheduler, so as to prevent situations such
as the one described above from happening.
Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
v4->v5: grab callback_lock instead of cpuset_mutex, as callback_lock is
eno
atomic context.
Signed-off-by: Juri Lelli
---
kernel/cgroup/cpuset.c | 66 +-
1 file changed, 33 insertions(+), 33 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 266f10cb7222..5b43f482fa0f 100644
--- a/kernel/cgroup
From: Mathieu Poirier
When the topology of root domains is modified by cpuset or CPU hotplug
operations, information about the current deadline bandwidth held in the
root domain is lost.
This patch addresses the issue by recalculating the lost deadline
bandwidth information by circling through the
From: Mathieu Poirier
Calls to task_rq_unlock() are made several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled, but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location needs to
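The "single unlock location" refactoring described above is a common C pattern: every early-exit path jumps to one label that releases the lock. A hedged sketch follows; `do_setscheduler()` and its checks are hypothetical stand-ins, not the real `__sched_setscheduler()`.

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of funnelling all exit paths through one unlock site, so that
 * additional locks can later be released in exactly one place.
 * Illustrative function and error values, not the kernel code. */
static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

static int do_setscheduler(int policy, int prio)
{
    int ret = 0;

    pthread_mutex_lock(&rq_lock);

    if (policy < 0) {               /* invalid policy: bail out   */
        ret = -1;
        goto unlock;
    }
    if (prio < 0 || prio > 99) {    /* invalid priority: bail out */
        ret = -2;
        goto unlock;
    }
    /* ... the actual scheduling-class change would happen here ... */

unlock:
    pthread_mutex_unlock(&rq_lock); /* the one and only unlock site */
    return ret;
}
```

With one release point, taking a second lock alongside rq_lock later only requires touching a single exit path instead of every early return.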
/root-domain-accounting-v5
Thanks,
- Juri
[1] https://lkml.org/lkml/2016/2/3/966
[2] https://lore.kernel.org/lkml/20180614161142.69f18...@gandalf.local.home/
Juri Lelli (1):
cgroup/cpuset: make callback_lock raw
Mathieu Poirier (4):
sched/topology: Adding function
From: Mathieu Poirier
Introduce the function partition_sched_domains_locked() by taking
the mutex locking code out of the original function. That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.
No change of functionality is introduced by this
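The refactoring above follows the usual `_locked` convention: hoist the mutex handling into a thin wrapper so callers that already hold the lock can call the inner variant directly. A minimal sketch, with illustrative names and a counter standing in for the real domain-rebuild work:

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of the _locked split: the wrapper takes the mutex, the inner
 * function assumes it is held.  rebuild_count is a stand-in for the
 * real work of rebuilding scheduler domains. */
static pthread_mutex_t sched_domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int rebuild_count;

static void partition_sched_domains_locked(void)
{
    /* caller must hold sched_domains_mutex */
    rebuild_count++;
}

static void partition_sched_domains(void)
{
    pthread_mutex_lock(&sched_domains_mutex);
    partition_sched_domains_locked();   /* reuse without re-locking */
    pthread_mutex_unlock(&sched_domains_mutex);
}
```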
On 23/08/18 18:52, Dietmar Eggemann wrote:
> Hi,
>
> On 08/21/2018 01:54 AM, Miguel de Dios wrote:
> > On 08/17/2018 11:27 AM, Steve Muckle wrote:
> > > From: John Dias
> > >
> > > When rt_mutex_setprio changes a task's scheduling class to RT,
> > > we're seeing cases where the task's vruntime
On 13/08/18 13:14, Patrick Bellasi wrote:
> On 07-Aug 11:59, Juri Lelli wrote:
> > Hi,
> >
> > Minor comments below.
> >
> > On 06/08/18 17:39, Patrick Bellasi wrote:
> >
> > [...]
> &g
On 13/08/18 11:12, Patrick Bellasi wrote:
> Hi Vincent!
>
> On 09-Aug 18:03, Vincent Guittot wrote:
> > > On 07-Aug 15:26, Juri Lelli wrote:
>
> [...]
>
> > > > > + util_cfs = cpu_util_cfs(rq);
> > > > > + util_rt = cpu_uti
On 09/08/18 16:23, Patrick Bellasi wrote:
> On 09-Aug 11:50, Juri Lelli wrote:
> > On 09/08/18 10:14, Patrick Bellasi wrote:
> > > On 07-Aug 14:35, Juri Lelli wrote:
> > > > On 06/08/18 17:39, Patrick Bellasi wrote:
>
> [...]
>
> > >
On 09/08/18 10:14, Patrick Bellasi wrote:
> On 07-Aug 14:35, Juri Lelli wrote:
> > On 06/08/18 17:39, Patrick Bellasi wrote:
> >
> > [...]
> >
> > > @@ -4218,6 +4245,13 @@ static int __sched_setscheduler(struct task_struct
> > &
Hi,
On 06/08/18 17:39, Patrick Bellasi wrote:
[...]
> @@ -223,13 +224,25 @@ static unsigned long sugov_get_util(struct sugov_cpu
> *sg_cpu)
>* utilization (PELT windows are synchronized) we can directly add them
>* to obtain the CPU's actual utilization.
>*
> - *
On 06/08/18 17:39, Patrick Bellasi wrote:
[...]
> @@ -4218,6 +4245,13 @@ static int __sched_setscheduler(struct task_struct *p,
> return retval;
> }
>
> + /* Configure utilization clamps for the task */
> + if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP) {
>
Hi,
Minor comments below.
On 06/08/18 17:39, Patrick Bellasi wrote:
[...]
> + *
> + * Task Utilization Attributes
> + * ===
> + *
> + * A subset of sched_attr attributes allows to specify the utilization which
> + * should be expected by a task. These attributes allows
On 01/08/18 23:19, Steven Rostedt wrote:
> On Wed, 11 Jul 2018 09:29:48 +0200
> Juri Lelli wrote:
>
> > Mark noticed that syzkaller is able to reliably trigger the following
> >
> > dl_rq->running_bw > dl_rq->this_bw
> > WARNING: CPU:
On 20/07/18 17:36, Daniel Bristot de Oliveira wrote:
> On 07/20/2018 02:53 PM, Juri Lelli wrote:
> > On 20/07/18 14:48, Peter Zijlstra wrote:
> >> On Fri, Jul 20, 2018 at 02:46:15PM +0200, Peter Zijlstra wrote:
> >>> On Fri, Jul 20, 2018 at 11:16:30AM +0200, Daniel Br
On 20/07/18 14:48, Peter Zijlstra wrote:
> On Fri, Jul 20, 2018 at 02:46:15PM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 20, 2018 at 11:16:30AM +0200, Daniel Bristot de Oliveira wrote:
> > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > > index fbfc3f1d368a..8b50eea4b607
double rq_clock_update() call, we set ENQUEUE_NOCLOCK flag to
> activate_task().
I suggested almost the same, but missed the ENQUEUE_NOCLOCK bit (which I
think is required).
> Changes from v1:
> Cosmetic changes in the log, and correct Juri's email (Daniel).
>
> Reported-by
Commit-ID: e117cb52bdb4d376b711bee34af6434c9e314b3b
Gitweb: https://git.kernel.org/tip/e117cb52bdb4d376b711bee34af6434c9e314b3b
Author: Juri Lelli
AuthorDate: Wed, 11 Jul 2018 09:29:48 +0200
Committer: Ingo Molnar
CommitDate: Sun, 15 Jul 2018 23:47:33 +0200
sched/deadline: Fix
tter sees running_bw > this_bw.
Fix it by removing the task's contribution from running_bw if the task is
not queued and is in the non_contending state when switched to a different
class.
Reported-by: Mark Rutland
Signed-off-by: Juri Lelli
---
kernel/sched/deadline.c | 11 ++-
1 file changed, 10 inse
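The accounting invariant behind the warning above is that running_bw never exceeds this_bw. A toy model of the fix, with purely illustrative counters and numbers (the real kernel tracks per-rq bandwidth with inactive timers):

```c
#include <assert.h>

/* Toy model of the running_bw <= this_bw invariant.  A task's bandwidth
 * is added to both counters while it is active; a non_contending task's
 * contribution is still in running_bw, so when such a task leaves
 * SCHED_DEADLINE it must be subtracted from running_bw too, or the
 * warning quoted above fires.  Illustrative only. */
static unsigned long running_bw, this_bw;

static void add_task(unsigned long bw)
{
    this_bw += bw;
    running_bw += bw;
}

static void switch_task_away(unsigned long bw, int non_contending)
{
    this_bw -= bw;
    if (non_contending)
        running_bw -= bw;   /* the missing subtraction the fix adds */
}
```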
Hi Steve,
On 03/07/18 10:54, Steven Rostedt wrote:
> When looking to test SCHED_DEADLINE, I triggered a lockup. The lockup
> appears to be caused by WARN_ON() done inside the scheduling path, and
> I'm guessing it tried to grab the rq lock and caused a deadlock (all I
> would get would be the
On 21/06/18 20:45, Peter Zijlstra wrote:
> On Fri, Jun 08, 2018 at 02:09:47PM +0200, Vincent Guittot wrote:
> > static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
> > {
> > struct rq *rq = cpu_rq(sg_cpu->cpu);
> > + unsigned long util;
> >
> > if
On 19/06/18 11:25, Quentin Perret wrote:
> On Tuesday 19 Jun 2018 at 12:19:01 (+0200), Juri Lelli wrote:
> > On 19/06/18 11:02, Quentin Perret wrote:
> > > On Tuesday 19 Jun 2018 at 11:47:14 (+0200), Juri Lelli wrote:
> > > > On 19/06/18 10:40, Quentin Pe
On 19/06/18 11:02, Quentin Perret wrote:
> On Tuesday 19 Jun 2018 at 11:47:14 (+0200), Juri Lelli wrote:
> > On 19/06/18 10:40, Quentin Perret wrote:
> > > Hi Pavan,
> > >
> > > On Tuesday 19 Jun 2018 at 14:48:41 (+0530), Pavan Kondeti wrote:
> >
> &
On 19/06/18 10:40, Quentin Perret wrote:
> Hi Pavan,
>
> On Tuesday 19 Jun 2018 at 14:48:41 (+0530), Pavan Kondeti wrote:
[...]
> > There seems to be a sysfs interface exposed by this driver to change
> > cpu_scale.
> > Should we worry about it? I don't know what is the usecase for changing
On 15/06/18 09:01, Juri Lelli wrote:
[...]
> I'll try harder to find alternatives, but suggestions are welcome! :-)
I wonder if something like the following might actually work. IIUC the
cpuset.c comment [1], callback_lock is the one to take if one only needs
to query cpusets.
[1] ht
On 14/06/18 16:11, Steven Rostedt wrote:
> On Wed, 13 Jun 2018 14:17:10 +0200
> Juri Lelli wrote:
>
> > +/**
> > + * cpuset_lock - Grab the cpuset_mutex from another subsysytem
> > + */
> > +int cpuset_lock(void)
> >