FYI Tao, I shared this with Valentin on IRC yesterday evening (we're
both in Europe):
https://paste.debian.net/1167885/
I'll be going over it again this morning with a (hopefully) fresh(er) mind.
On 19/10/20 17:32, Tao Zhou wrote:
> Hi Valentin,
>>
>> Side thought: don't we need to NULL p->migration_pending in __sched_fork()?
>>
>
> No need; if a fork happens, the forked task will inherit that pending.
Which is what I'm worried about. I think we can ignore migrate_disable()
for now and just [...]
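
For reference, the change the side thought is aiming at would be a one-liner
in the fork path. The snippet below is only a sketch of that idea, not part of
the posted series; it assumes the p->migration_pending pointer introduced
elsewhere in these patches and uses __sched_fork() as the hook point.

/*
 * Sketch only, not from the posted series: clear any pending affinity
 * request when the child's scheduler state is initialised, so the child
 * never inherits a request that was aimed at its parent.
 */
static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
{
	/* ... existing initialisation ... */
#ifdef CONFIG_SMP
	/*
	 * A pending request belongs to the parent: the waiter's completion
	 * is tied to the parent's migration, and the child was never part
	 * of that set_cpus_allowed_ptr() call.
	 */
	p->migration_pending = NULL;
#endif
}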
Hi,
On 18/10/20 10:46, ouwen wrote:
> On Fri, Oct 16, 2020 at 01:48:17PM +0100, Valentin Schneider wrote:
>> ---
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index a5b6eac07adb..1ebf653c2c2f 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -1859,6 +1859,13 @@ [...]

On 15/10/20 12:05, Peter Zijlstra wrote:
> @@ -1862,15 +1875,27 @@ static int migration_cpu_stop(void *data
>  	 * we're holding p->pi_lock.
>  	 */
>  	if (task_rq(p) == rq) {
> +		if (is_migration_disabled(p))
> +			goto out;
> +
>  		if (task [...]
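
The early "goto out" above keys off is_migration_disabled(). As a reminder of
what that predicate amounts to, here is a reconstructed sketch; it assumes the
per-task migration_disabled nesting count that migrate_disable() and
migrate_enable() maintain in this series, and the exact config guard may
differ in the revision quoted here.

static inline bool is_migration_disabled(struct task_struct *p)
{
#ifdef CONFIG_SMP
	/* Non-zero while the task sits inside a migrate_disable() section. */
	return p->migration_disabled;
#else
	return false;
#endif
}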
On Thu, Oct 15, 2020 at 02:54:53PM +0100, Valentin Schneider wrote:
>
> On 15/10/20 12:05, Peter Zijlstra wrote:
> > +static int affine_move_task(struct rq *rq, struct rq_flags *rf,
> > +			    struct task_struct *p, int dest_cpu, unsigned int flags)
> > +{
> > +	struct set_affinity_pending my_pending = { }, *pending = NULL;

On 15/10/20 12:05, Peter Zijlstra wrote:
> +static int affine_move_task(struct rq *rq, struct rq_flags *rf,
> +			    struct task_struct *p, int dest_cpu, unsigned int flags)
> +{
> + struct set_affinity_pending my_pending = { }, *pending = NULL;
> +	struct migration_ [...]
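
Both quotes above get cut off in the local declarations of affine_move_task().
For orientation, the two types involved look roughly like this; this is a
reconstructed sketch, so the exact field layout in the revision quoted here
may differ.

struct set_affinity_pending;

struct migration_arg {
	struct task_struct		*task;
	int				dest_cpu;
	struct set_affinity_pending	*pending;	/* set when a waiter exists */
};

struct set_affinity_pending {
	refcount_t		refs;		/* waiters + in-flight stopper work */
	struct completion	done;		/* signalled once the task is in the mask */
	struct cpu_stop_work	stop_work;	/* queued on the migration stopper */
	struct migration_arg	arg;		/* the actual migration request */
};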
Concurrent migrate_disable() and set_cpus_allowed_ptr() have interesting
features. We rely on set_cpus_allowed_ptr() to not return until the task
runs inside the provided mask. This expectation is exported to userspace.

This means that any set_cpus_allowed_ptr() caller must wait until
migrate_enable() [...]
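
The guarantee being described, that set_cpus_allowed_ptr() must not return
before the task really runs inside the new mask even if the task is currently
in a migrate_disable() section, boils down to a completion handshake between
the caller and whoever eventually moves the task. The sketch below illustrates
only that handshake, using the set_affinity_pending layout sketched earlier;
the helper names are made up for the illustration, it is not the code from the
series, and the locking around p->migration_pending is elided for brevity.

/* Caller side: entered with task_rq_lock() held (rq lock + p->pi_lock). */
static int wait_for_affinity_change(struct task_struct *p, struct rq *rq,
				    struct rq_flags *rf)
{
	struct set_affinity_pending my_pending = { };

	init_completion(&my_pending.done);
	p->migration_pending = &my_pending;	/* publish the request */

	task_rq_unlock(rq, p, rf);		/* cannot sleep with the locks held */

	/*
	 * Block until the mover (the migration stopper, or the task itself
	 * in migrate_enable()) reports that the task now runs inside the
	 * new mask.  This is the userspace-visible guarantee.
	 */
	wait_for_completion(&my_pending.done);
	return 0;
}

/* Mover side: runs once the task is actually on an allowed CPU. */
static void complete_affinity_change(struct task_struct *p)
{
	struct set_affinity_pending *pending = p->migration_pending;

	if (!pending)
		return;

	p->migration_pending = NULL;
	complete_all(&pending->done);	/* wake the set_cpus_allowed_ptr() caller */
}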