On 2020-09-18 10:22:32 [+0200], pet...@infradead.org wrote:
> > > One reason for not allowing migrate_disable() to sleep was: FPU code.
> > >
> > > Could it be it does something like:
> > >
> > > preempt_disable();
> > > spin_lock();
> > >
> > > spin_unlock();
> > > preempt_enable();
>
On Fri, Sep 18, 2020 at 09:00:03AM +0200, Thomas Gleixner wrote:
> >> +void migrate_disable(void)
> >> +{
> >> + unsigned long flags;
> >> +
> >> + if (!current->migration_ctrl.disable_cnt) {
> >> + raw_spin_lock_irqsave(&current->pi_lock, flags);
> >> +
On Thu, Sep 17, 2020 at 06:30:01PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 17:54:10 [+0200], pet...@infradead.org wrote:
> > I'm not sure what the problem with FPU was, I was throwing alternatives
> > at tglx to see what would stick, in part to (re)discover the design
> > constraints of this thing.
On Thu, Sep 17 2020 at 16:24, peterz wrote:
> On Thu, Sep 17, 2020 at 11:42:11AM +0200, Thomas Gleixner wrote:
>
>> +static inline void update_nr_migratory(struct task_struct *p, long delta)
>> +{
>> +	if (p->nr_cpus_allowed > 1 && p->sched_class->update_migratory)
>> +		p->sched_class->update_migratory(p, delta);
>> +}
On Thu, Sep 17, 2020 at 05:13:41PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 16:49:37 [+0200], pet...@infradead.org wrote:
> > I'm aware of the duct-tape :-) But I was under the impression that we
> > didn't want the duct-tape, and that there was lots of issues with the
> > FPU code, or was that another issue?
On 2020-09-17 17:54:10 [+0200], pet...@infradead.org wrote:
> I'm not sure what the problem with FPU was, I was throwing alternatives
> at tglx to see what would stick, in part to (re)discover the design
> constraints of this thing.
Was this recent or distant in the timeline?
> One reason for not allowing migrate_disable() to sleep was: FPU code.
On 2020-09-17 16:49:37 [+0200], pet...@infradead.org wrote:
> I'm aware of the duct-tape :-) But I was under the impression that we
> didn't want the duct-tape, and that there was lots of issues with the
> FPU code, or was that another issue?
Of course it would be better not to need the duct-tape.
On Thu, Sep 17, 2020 at 04:38:50PM +0200, Sebastian Siewior wrote:
> On 2020-09-17 16:24:38 [+0200], pet...@infradead.org wrote:
> > And if I'm not mistaken, the above migrate_enable() *does* require being
> > able to schedule, and our favourite piece of futex:
> >	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
> >	spin_unlock(q.lock_ptr);
On Thu, Sep 17, 2020 at 11:42:11AM +0200, Thomas Gleixner wrote:
> +static inline void update_nr_migratory(struct task_struct *p, long delta)
> +{
> + if (p->nr_cpus_allowed > 1 && p->sched_class->update_migratory)
> + p->sched_class->update_migratory(p, delta);
> +}
Right, so as
On 2020-09-17 16:24:38 [+0200], pet...@infradead.org wrote:
> And if I'm not mistaken, the above migrate_enable() *does* require being
> able to schedule, and our favourite piece of futex:
>
> raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
> spin_unlock(q.lock_ptr);
>
> is broken.
On RT enabled kernels most of the code including spin/rw lock held sections
are preemptible, which also makes the tasks migrateable. That violates the
per CPU constraints. RT needs therefore a mechanism to control migration
independent of preemption.
Add a migrate_disable/enable() mechanism which controls migration independent of preemption.