On Fri, Jan 15, 2021 at 9:05 PM Peter Zijlstra wrote:
>
> On Fri, Jan 15, 2021 at 10:11:51AM +0100, Peter Zijlstra wrote:
> > On Tue, Jan 12, 2021 at 03:53:24PM -0800, Paul E. McKenney wrote:
> > > An SRCU-P run on the new series reproduced the warning below. Repeat-by:
> > >
> > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 10
> > > --configs "112*SRCU-P" --bootargs "rcupdate.rcu_cpu_stall_suppress_at_boot=1
On Fri, Jan 15, 2021 at 10:11:51AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 12, 2021 at 03:53:24PM -0800, Paul E. McKenney wrote:
> > An SRCU-P run on the new series reproduced the warning below. Repeat-by:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 10
> > --configs "112*SRCU-P" --bootargs "rcupdate.rcu_cpu_stall_suppress_at_boot=1
On Tue, Jan 12, 2021 at 03:53:24PM -0800, Paul E. McKenney wrote:
> An SRCU-P run on the new series reproduced the warning below. Repeat-by:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 10
> --configs "112*SRCU-P" --bootargs "rcupdate.rcu_cpu_stall_suppress_at_boot=1
>
On 2021/1/13 19:10, Peter Zijlstra wrote:
> On Tue, Jan 12, 2021 at 11:38:12PM +0800, Lai Jiangshan wrote:
> > But the hard problem is "how to suppress the warning of
> > online&!active in __set_cpus_allowed_ptr()" for late spawned
> > unbound workers during hotplug.
> I cannot see create_worker() go bad like that.
On Wed, Jan 13, 2021 at 7:11 PM Peter Zijlstra wrote:
>
> On Tue, Jan 12, 2021 at 11:38:12PM +0800, Lai Jiangshan wrote:
>
> > But the hard problem is "how to suppress the warning of
> > online&!active in __set_cpus_allowed_ptr()" for late spawned
> > unbound workers during hotplug.
>
> I cannot see create_worker() go bad like that.
On Tue, Jan 12, 2021 at 11:38:12PM +0800, Lai Jiangshan wrote:
> But the hard problem is "how to suppress the warning of
> online&!active in __set_cpus_allowed_ptr()" for late spawned
> unbound workers during hotplug.
I cannot see create_worker() go bad like that.
The thing is, it uses:
kthre
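The warning in question lives in the scheduler's affinity-change path. As a
paraphrased, simplified sketch of the v5.10-era logic in
__set_cpus_allowed_ptr() (kernel/sched/core.c), not the exact mainline code:

	const struct cpumask *cpu_valid_mask = cpu_active_mask;

	if (p->flags & PF_KTHREAD) {
		/* Kernel threads are allowed on online && !active CPUs. */
		cpu_valid_mask = cpu_online_mask;
	}

	if (p->flags & PF_KTHREAD) {
		/*
		 * A kthread that ends up affine to online && !active CPUs
		 * is expected to be a strict per-CPU thread.
		 */
		WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) &&
			!cpumask_intersects(new_mask, cpu_active_mask) &&
			p->nr_cpus_allowed != 1);
	}

An unbound worker spawned late during hotplug can satisfy exactly that triple
condition (mask intersects online, not active, nr_cpus_allowed != 1), which is
the warning Lai is asking how to suppress.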
On Tue, Jan 12, 2021 at 09:14:11AM -0800, Paul E. McKenney wrote:
> On Mon, Jan 11, 2021 at 01:50:52PM -0800, Paul E. McKenney wrote:
> > On Mon, Jan 11, 2021 at 10:09:07AM -0800, Paul E. McKenney wrote:
> > > On Mon, Jan 11, 2021 at 06:16:39PM +0100, Peter Zijlstra wrote:
> > > >
> > > > While thinking more about this, I'm thinking a big part of the problem
> > > > is that we're not distinguishing between genuine per-cpu kthreads and
> > > > kthreads that just happen to be per-cpu.
On 12/01/21 12:33, Lai Jiangshan wrote:
>> I thought only pcpu pools would get the POOL_DISASSOCIATED flag on
>> offline, but it seems unbound pools also get it at init time. Did I get
>> that right?
>
> You are right.
>
> The POOL_DISASSOCIATED flag indicates whether the pool is concurrency
> managed.
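As an illustration of how that flag feeds into concurrency management, a
simplified sketch of worker_attach_to_pool() from v5.10-era
kernel/workqueue.c (error handling and locking commentary trimmed):

	static void worker_attach_to_pool(struct worker *worker,
					  struct worker_pool *pool)
	{
		mutex_lock(&wq_pool_attach_mutex);

		/* Bind the new worker to the pool's cpumask. */
		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

		/*
		 * wq_pool_attach_mutex keeps POOL_DISASSOCIATED stable here;
		 * workers of a disassociated pool run unbound and are not
		 * concurrency managed.
		 */
		if (pool->flags & POOL_DISASSOCIATED)
			worker->flags |= WORKER_UNBOUND;

		list_add_tail(&worker->node, &pool->workers);
		worker->pool = pool;

		mutex_unlock(&wq_pool_attach_mutex);
	}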
On Mon, Jan 11, 2021 at 01:50:52PM -0800, Paul E. McKenney wrote:
> On Mon, Jan 11, 2021 at 10:09:07AM -0800, Paul E. McKenney wrote:
> > On Mon, Jan 11, 2021 at 06:16:39PM +0100, Peter Zijlstra wrote:
> > >
> > > While thinking more about this, I'm thinking a big part of the problem
> > > is that we're not distinguishing between genuine per-cpu kthreads and
> > > kthreads that just happen to be per-cpu.
On Tue, Jan 12, 2021 at 07:57:26AM -0700, Jens Axboe wrote:
> On 1/11/21 12:21 PM, Valentin Schneider wrote:
> > On 11/01/21 18:16, Peter Zijlstra wrote:
> >> Sadly it appears like io_uring() uses kthread_create_on_cpu() without
> >> then having any hotplug crud on, so that needs additional frobbing.
On Tue, Jan 12, 2021 at 10:53 PM Peter Zijlstra wrote:
>
> On Tue, Jan 12, 2021 at 12:33:03PM +0800, Lai Jiangshan wrote:
> > > Well yes, but afaict the workqueue stuff hasn't been settled yet, and
> > > the rcutorture patch Paul did was just plain racy and who knows what
> > > other daft kthread users are out there. That and we're at -rc3.
On 1/11/21 12:21 PM, Valentin Schneider wrote:
> On 11/01/21 18:16, Peter Zijlstra wrote:
>> Sadly it appears like io_uring() uses kthread_create_on_cpu() without
>> then having any hotplug crud on, so that needs additional frobbing.
>>
>
> I noticed that as well some time ago, and I believed then (still do) this
> usage is broken.
On Tue, Jan 12, 2021 at 12:33:03PM +0800, Lai Jiangshan wrote:
> > Well yes, but afaict the workqueue stuff hasn't been settled yet, and
> > the rcutorture patch Paul did was just plain racy and who knows what
> > other daft kthread users are out there. That and we're at -rc3.
>
> I just sent the V4 patchset for the workqueue. Please take a look.
> Well yes, but afaict the workqueue stuff hasn't been settled yet, and
> the rcutorture patch Paul did was just plain racy and who knows what
> other daft kthread users are out there. That and we're at -rc3.
I just sent the V4 patchset for the workqueue. Please take a look.
> @@ -1861,6 +1861,8 @@
On 11/01/21 21:23, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 07:21:06PM +0000, Valentin Schneider wrote:
>> I'm less fond of the workqueue pcpu flag toggling, but it gets us what
>> we want: allow those threads to run on !active CPUs during online, but
>> move them away before !online during offline.
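That toggling maps onto the existing workqueue hotplug hooks; a heavily
simplified sketch of the v5.10-era flow in kernel/workqueue.c:

	int workqueue_online_cpu(unsigned int cpu)
	{
		struct worker_pool *pool;

		/* Re-associate the CPU's pools and rebind their workers. */
		for_each_cpu_worker_pool(pool, cpu)
			rebind_workers(pool);	/* clears POOL_DISASSOCIATED */
		return 0;
	}

	int workqueue_offline_cpu(unsigned int cpu)
	{
		/* Disassociate the pools; their workers may run anywhere. */
		unbind_workers(cpu);		/* sets POOL_DISASSOCIATED */
		return 0;
	}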
On Mon, Jan 11, 2021 at 10:09:07AM -0800, Paul E. McKenney wrote:
> On Mon, Jan 11, 2021 at 06:16:39PM +0100, Peter Zijlstra wrote:
> >
> > While thinking more about this, I'm thinking a big part of the problem
> > is that we're not distinguishing between genuine per-cpu kthreads and
> > kthreads that just happen to be per-cpu.
On Mon, Jan 11, 2021 at 07:21:06PM +0000, Valentin Schneider wrote:
> On 11/01/21 18:16, Peter Zijlstra wrote:
> > Sadly it appears like io_uring() uses kthread_create_on_cpu() without
> > then having any hotplug crud on, so that needs additional frobbing.
> >
>
> I noticed that as well some time ago, and I believed then (still do) this
> usage is broken.
On 11/01/21 18:16, Peter Zijlstra wrote:
> Sadly it appears like io_uring() uses kthread_create_on_cpu() without
> then having any hotplug crud on, so that needs additional frobbing.
>
I noticed that as well some time ago, and I believed then (still do) this
usage is broken. I don't think usage of
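For reference, the missing "hotplug crud" would look roughly like this;
my_thread, my_online and my_offline are hypothetical names for illustration,
while kthread_park(), kthread_unpark() and cpuhp_setup_state() are real
kernel APIs:

	/* Hypothetical per-cpu kthread, parked/unparked across hotplug. */
	static DEFINE_PER_CPU(struct task_struct *, my_thread);

	static int my_online(unsigned int cpu)
	{
		kthread_unpark(per_cpu(my_thread, cpu));
		return 0;
	}

	static int my_offline(unsigned int cpu)
	{
		/* Park before the CPU goes down, so the thread never runs unbound. */
		kthread_park(per_cpu(my_thread, cpu));
		return 0;
	}

	/* At init time, pair the threads with hotplug callbacks: */
	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
			  my_online, my_offline);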
On Mon, Jan 11, 2021 at 06:16:39PM +0100, Peter Zijlstra wrote:
>
> While thinking more about this, I'm thinking a big part of the problem
> > is that we're not distinguishing between genuine per-cpu kthreads and
> kthreads that just happen to be per-cpu.
>
> Genuine per-cpu kthreads are kthread_bind() and have PF_NO_SETAFFINITY,
> but sadly a lot of non-per-cpu kthreads
While thinking more about this, I'm thinking a big part of the problem
is that we're not distinguishing between genuine per-cpu kthreads and
kthreads that just happen to be per-cpu.
Genuine per-cpu kthreads are kthread_bind() and have PF_NO_SETAFFINITY,
but sadly a lot of non-per-cpu kthreads
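That distinction eventually took the shape of a small predicate; paraphrasing
the is_per_cpu_kthread() helper in kernel/sched/sched.h of that era:

	/* A genuine per-cpu kthread: a kernel thread pinned to exactly one CPU. */
	static inline bool is_per_cpu_kthread(struct task_struct *p)
	{
		if (!(p->flags & PF_KTHREAD))
			return false;

		if (p->nr_cpus_allowed != 1)
			return false;

		return true;
	}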
On Mon, Jan 11, 2021 at 12:01:03PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 11:07:34AM +0100, Thomas Gleixner wrote:
> > On Fri, Jan 08 2021 at 12:46, Peter Zijlstra wrote:
> > > On Sat, Dec 26, 2020 at 10:51:08AM +0800, Lai Jiangshan wrote:
> > >> From: Lai Jiangshan
> > >>
> > >>
On Mon, Jan 11, 2021 at 11:07:34AM +0100, Thomas Gleixner wrote:
> On Fri, Jan 08 2021 at 12:46, Peter Zijlstra wrote:
> > On Sat, Dec 26, 2020 at 10:51:08AM +0800, Lai Jiangshan wrote:
> >> From: Lai Jiangshan
> >>
> >> 06249738a41a ("workqueue: Manually break affinity on hotplug")
> >> said that the scheduler will not force-break affinity for us.
On Fri, Jan 08 2021 at 12:46, Peter Zijlstra wrote:
> On Sat, Dec 26, 2020 at 10:51:08AM +0800, Lai Jiangshan wrote:
>> From: Lai Jiangshan
>>
>> 06249738a41a ("workqueue: Manually break affinity on hotplug")
>> said that the scheduler will not force-break affinity for us.
>
> So I've been looking at this the past day or so, and the more I look,
> the more I thi
On Sat, Dec 26, 2020 at 10:51:08AM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan
>
> 06249738a41a ("workqueue: Manually break affinity on hotplug")
> said that the scheduler will not force-break affinity for us.
So I've been looking at this the past day or so, and the more I look,
the more I thi
From: Lai Jiangshan
06249738a41a ("workqueue: Manually break affinity on hotplug")
said that the scheduler will not force-break affinity for us.
But workqueue highly depends on the old behavior. Many parts of the code
rely on it; 06249738a41a ("workqueue: Manually break affinity on hotplug")
is n
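For context, the manual break that 06249738a41a added boils down to a loop
along these lines in unbind_workers() (a sketch; the surrounding locking is
omitted):

	/*
	 * Once the pool is disassociated, let its workers run anywhere:
	 * the scheduler no longer rehomes them when their CPU goes away.
	 */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  cpu_possible_mask) < 0);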