On Tue, Nov 21, 2017 at 09:33:52AM -0500, joe.ko...@concurrent-rt.com wrote:
> On Mon, Nov 20, 2017 at 11:57:51PM -0500, Steven Rostedt wrote:
> > On Mon, 20 Nov 2017 23:02:07 -0500
> > Steven Rostedt <rost...@goodmis.org> wrote:
> > 
> > 
> > > Ideally, I would like to stay close to what upstream -rt does. Would
> > > you be able to backport the 4.11-rt patch?
> > > 
> > > I'm currently working on releasing 4.9-rt and 4.4-rt with the latest
> > > backports. I could easily add this one too.
> > 
> > Speaking of which. I just backported this patch to 4.4-rt. Is this what
> > you are talking about?
> 
> Yes it is.
> Thanks for finding that!
> Joe

I spoke too fast.  You will need a variant of my one-liner fix
when you backport the 4.11.12-rt16 patch:

    rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch

to 4.9-rt and 4.4-rt.  The fix of interest is the introduction of

    p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);

to migrate_enable_update_cpus_allowed().
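To make the effect concrete, here is a toy userspace sketch (not the
kernel code -- the struct and helper names only loosely mirror the
kernel's p->cpus_mask and p->nr_cpus_allowed) showing why the count has
to be recomputed from the mask once the saved affinity is reapplied:

    /* Illustrative only: models the stale-count problem in userspace. */
    #include <stdio.h>

    struct task {
            unsigned long cpus_mask;   /* stand-in for p->cpus_mask       */
            int nr_cpus_allowed;       /* stand-in for p->nr_cpus_allowed */
    };

    static int cpumask_weight(unsigned long mask)
    {
            return __builtin_popcountl(mask);
    }

    /* Affinity change arriving in a migrate_disable() section: only the
     * mask is recorded, the count is left alone. */
    static void set_affinity_migrate_disabled(struct task *p, unsigned long mask)
    {
            p->cpus_mask = mask;
    }

    /* Stand-in for migrate_enable_update_cpus_allowed() with the fix. */
    static void migrate_enable_update_cpus_allowed(struct task *p)
    {
            /* Without this line the stale count would survive. */
            p->nr_cpus_allowed = cpumask_weight(p->cpus_mask);
    }

    int main(void)
    {
            struct task p = { .cpus_mask = 0xf, .nr_cpus_allowed = 4 };

            set_affinity_migrate_disabled(&p, 0x1);  /* now one CPU only */
            migrate_enable_update_cpus_allowed(&p);
            printf("nr_cpus_allowed = %d\n", p.nr_cpus_allowed); /* 1 */
            return 0;
    }

Built with gcc this prints 1; dropping the cpumask_weight() line leaves
the stale 4 in place, which is the symptom the one-liner cures in the
kernel.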

Regards,
Joe

> 
> > From 1dc89be37874bfc7bb4a0ea7c45492d7db39f62b Mon Sep 17 00:00:00 2001
> > From: Sebastian Andrzej Siewior <bige...@linutronix.de>
> > Date: Mon, 19 Jun 2017 09:55:47 +0200
> > Subject: [PATCH] sched/migrate disable: handle updated task-mask mg-dis
> >  section
> > 
> > If a task's cpumask changes while the task is in a migrate_disable()
> > section, then we don't react to it after migrate_enable(). It matters,
> > however, if the current CPU is no longer part of the cpumask. We also
> > miss the ->set_cpus_allowed() callback.
> > This patch fixes it by setting task->migrate_disable_update once we hit
> > this "delayed" hook.
> > This bug was introduced while fixing an unrelated issue in
> > migrate_disable() in v4.4-rt3 (update_migrate_disable() got removed
> > during that).
> > 
> > Cc: stable...@vger.kernel.org
> > Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
> > Signed-off-by: Steven Rostedt (VMware) <rost...@goodmis.org>
> > ---
> >  include/linux/sched.h |    1 
> >  kernel/sched/core.c   |   59 ++++++++++++++++++++++++++++++++++++++++++++------
> >  2 files changed, 54 insertions(+), 6 deletions(-)
> > 
> > Index: stable-rt.git/include/linux/sched.h
> > ===================================================================
> > --- stable-rt.git.orig/include/linux/sched.h        2017-11-20 23:43:24.214077537 -0500
> > +++ stable-rt.git/include/linux/sched.h     2017-11-20 23:43:24.154079278 -0500
> > @@ -1438,6 +1438,7 @@ struct task_struct {
> >     unsigned int policy;
> >  #ifdef CONFIG_PREEMPT_RT_FULL
> >     int migrate_disable;
> > +   int migrate_disable_update;
> >  # ifdef CONFIG_SCHED_DEBUG
> >     int migrate_disable_atomic;
> >  # endif
> > Index: stable-rt.git/kernel/sched/core.c
> > ===================================================================
> > --- stable-rt.git.orig/kernel/sched/core.c  2017-11-20 23:43:24.214077537 -0500
> > +++ stable-rt.git/kernel/sched/core.c       2017-11-20 23:56:05.071687323 -0500
> > @@ -1212,18 +1212,14 @@ void set_cpus_allowed_common(struct task
> >     p->nr_cpus_allowed = cpumask_weight(new_mask);
> >  }
> >  
> > -void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
> > +static void __do_set_cpus_allowed_tail(struct task_struct *p,
> > +                                  const struct cpumask *new_mask)
> >  {
> >     struct rq *rq = task_rq(p);
> >     bool queued, running;
> >  
> >     lockdep_assert_held(&p->pi_lock);
> >  
> > -   if (__migrate_disabled(p)) {
> > -           cpumask_copy(&p->cpus_allowed, new_mask);
> > -           return;
> > -   }
> > -
> >     queued = task_on_rq_queued(p);
> >     running = task_current(rq, p);
> >  
> > @@ -1246,6 +1242,20 @@ void do_set_cpus_allowed(struct task_str
> >             enqueue_task(rq, p, ENQUEUE_RESTORE);
> >  }
> >  
> > +void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
> > +{
> > +   if (__migrate_disabled(p)) {
> > +           lockdep_assert_held(&p->pi_lock);
> > +
> > +           cpumask_copy(&p->cpus_allowed, new_mask);
> > +#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
> > +           p->migrate_disable_update = 1;
> > +#endif
> > +           return;
> > +   }
> > +   __do_set_cpus_allowed_tail(p, new_mask);
> > +}
> > +
> >  static DEFINE_PER_CPU(struct cpumask, sched_cpumasks);
> >  static DEFINE_MUTEX(sched_down_mutex);
> >  static cpumask_t sched_down_cpumask;
> > @@ -3231,6 +3241,43 @@ void migrate_enable(void)
> >      */
> >     p->migrate_disable = 0;
> >  
> > +   if (p->migrate_disable_update) {
> > +           unsigned long flags;
> > +           struct rq *rq;
> > +
> > +           rq = task_rq_lock(p, &flags);
> > +           update_rq_clock(rq);
> > +
> > +           __do_set_cpus_allowed_tail(p, &p->cpus_allowed);
> > +           task_rq_unlock(rq, p, &flags);
> > +
> > +           p->migrate_disable_update = 0;
> > +
> > +           WARN_ON(smp_processor_id() != task_cpu(p));
> > +           if (!cpumask_test_cpu(task_cpu(p), &p->cpus_allowed)) {
> > +                   const struct cpumask *cpu_valid_mask = cpu_active_mask;
> > +                   struct migration_arg arg;
> > +                   unsigned int dest_cpu;
> > +
> > +                   if (p->flags & PF_KTHREAD) {
> > +                           /*
> > +                            * Kernel threads are allowed on online && !active CPUs
> > +                            */
> > +                           cpu_valid_mask = cpu_online_mask;
> > +                   }
> > +                   dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_allowed);
> > +                   arg.task = p;
> > +                   arg.dest_cpu = dest_cpu;
> > +
> > +                   unpin_current_cpu();
> > +                   preempt_lazy_enable();
> > +                   preempt_enable();
> > +                   stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
> > +                   tlb_migrate_finish(p->mm);
> > +                   return;
> > +           }
> > +   }
> > +
> >     unpin_current_cpu();
> >     preempt_enable();
> >     preempt_lazy_enable();
