On Sat, Jan 16, 2021 at 07:46:47PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 17, 2021 at 12:14:34AM +0800, Lai Jiangshan wrote:
>
> > BP:                      AP:     worker:
> > cpus_write_lock()
> >   bringup_cpu()                  work_item_func()
> >     bringup_wait_for_ap
On Sun, Jan 17, 2021 at 12:14:34AM +0800, Lai Jiangshan wrote:
> BP:                      AP:     worker:
> cpus_write_lock()
>   bringup_cpu()                  work_item_func()
>     bringup_wait_for_ap            get_online_cpus()
>
On Sat, Jan 16, 2021 at 01:45:23PM +0100, Peter Zijlstra wrote:
> On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:
> > I feel nervous to use kthread_park() here and kthread_parkme() in
> > the worker thread. And adding kthread_should_park() to the fast path
> > also daunts me.
>
> Is
On Sat, Jan 16, 2021 at 10:45:04PM +0800, Lai Jiangshan wrote:
> On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra wrote:
> > It is also the exact sequence normal per-cpu threads (smpboot) use to
> > preserve affinity.
>
> Other per-cpu threads normally do short-lived work. wq's work can be
>
On Sat, Jan 16, 2021 at 11:16 PM Peter Zijlstra wrote:
>
> On Sat, Jan 16, 2021 at 10:45:04PM +0800, Lai Jiangshan wrote:
> > On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra wrote:
> > > It is also the exact sequence normal per-cpu threads (smpboot) use to
> > > preserve affinity.
> >
> > Other
On Sat, Jan 16, 2021 at 8:45 PM Peter Zijlstra wrote:
>
> On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:
> > On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra
> > wrote:
> >
> > >
> > > -void kthread_set_per_cpu(struct task_struct *k, bool set)
> > > +void
On Sat, Jan 16, 2021 at 02:27:09PM +0800, Lai Jiangshan wrote:
> On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra wrote:
>
> >
> > -void kthread_set_per_cpu(struct task_struct *k, bool set)
> > +void kthread_set_per_cpu(struct task_struct *k, int cpu)
> > {
> > struct kthread *kthread =
On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra wrote:
>
> -void kthread_set_per_cpu(struct task_struct *k, bool set)
> +void kthread_set_per_cpu(struct task_struct *k, int cpu)
> {
> struct kthread *kthread = to_kthread(k);
> if (!kthread)
> return;
>
> -
On Thu, Jan 14, 2021 at 01:21:26PM +0000, Valentin Schneider wrote:
> On 14/01/21 14:12, Peter Zijlstra wrote:
> > - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> > - pool->attrs->cpumask) < 0);
> > + kthread_park(worker->task);
On 14/01/21 14:12, Peter Zijlstra wrote:
> On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
>> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
>> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>> > * of all workers first and then clear UNBOUND. As
On Wed, Jan 13, 2021 at 09:28:13PM +0800, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
> > @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
> > * of all workers first and then clear UNBOUND. As we're called
> > * from CPU_ONLINE,
On Wed, Jan 13, 2021 at 06:43:57PM +0000, Valentin Schneider wrote:
> On 13/01/21 09:52, Paul E. McKenney wrote:
> > On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
> >> You might be right; at this point we would still have BALANCE_PUSH set,
> >> so something like the below
On 13/01/21 09:52, Paul E. McKenney wrote:
> On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
>> You might be right; at this point we would still have BALANCE_PUSH set,
>> so something like the below could happen
>>
>> rebind_workers()
>> set_cpus_allowed_ptr()
>>
On Wed, Jan 13, 2021 at 02:16:10PM +0000, Valentin Schneider wrote:
> On 13/01/21 21:28, Lai Jiangshan wrote:
> > On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra
> > wrote:
> >> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
> >> * of all workers first and then clear
On 13/01/21 21:28, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
>> @@ -4972,9 +4977,11 @@ static void rebind_workers(struct worker
>> * of all workers first and then clear UNBOUND. As we're called
>> * from CPU_ONLINE, the following shouldn't
On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
>
> Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
>
> Workqueues have unfortunate semantics in that per-cpu workers are not
> default flushed and parked during hotplug, however a subset does
> manual flush on hotplug and hard
On Wed, Jan 13, 2021 at 12:36:55AM +0800, Lai Jiangshan wrote:
> On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
> >
> > Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
> >
> > Workqueues have unfortunate semantics in that per-cpu workers are not
> > default flushed and parked
On 12/01/21 15:43, Peter Zijlstra wrote:
> @@ -4919,8 +4922,10 @@ static void unbind_workers(int cpu)
>
> raw_spin_unlock_irq(&pool->lock);
>
> - for_each_pool_worker(worker, pool)
> + for_each_pool_worker(worker, pool) {
> +
On Tue, Jan 12, 2021 at 10:51 PM Peter Zijlstra wrote:
>
> Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
>
> Workqueues have unfortunate semantics in that per-cpu workers are not
> default flushed and parked during hotplug, however a subset does
> manual flush on hotplug and hard
Mark the per-cpu workqueue workers as KTHREAD_IS_PER_CPU.
Workqueues have unfortunate semantics in that per-cpu workers are not
default flushed and parked during hotplug, however a subset does
manual flush on hotplug and hard relies on them for correctness.
Therefore play silly games..