Hello,

On Fri, Aug 07, 2015 at 05:29:56PM +0200, Peter Zijlstra wrote:
> Even if we were to strictly order those stores you could have (note
> there is no matching barrier, as there is only the one load, so ordering
> cannot help):
>
> 	__kthread_bind()
> 					<SYSCALL>
> 					  sched_setaffinity()
> 					    if (p->flags & PF_NO_SETAFFINITY) /* false-not-taken */
> 	  p->flags |= PF_NO_SETAFFINITY;
> 	  smp_wmb();
> 	  do_set_cpus_allowed();
> 					    set_cpus_allowed_ptr()
>
> > I think the code was better before. Can't we just revert workqueue.c
> > part?
>
> I agree that the new argument isn't pretty, but I cannot see how
> workqueues would not be affected by this.
So, the problem there is that __kthread_bind() doesn't grab the same
lock that the syscall side grabs, but workqueue used
set_cpus_allowed_ptr(), which goes through the rq locking. So, as long
as the check on the syscall side is moved inside the rq lock, it should
be fine.

Thanks.

-- 
tejun