On Wed, Aug 19, 2020 at 03:33:20PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-08-19 15:15:07 [+0200], pet...@infradead.org wrote:
> If you want to optimize further, we could move PF_IO_WORKER to a lower
> bit. x86 can test for both via
> (gcc-10)
> | testl $536870944, 44(%rbp)
On 2020-08-19 15:15:07 [+0200], pet...@infradead.org wrote:
> > - if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
> > + if (tsk->flags & PF_WQ_WORKER) {
> > preempt_disable();
> > - if (tsk->flags & PF_WQ_WORKER)
> > - wq_worker_sleeping(tsk);
> > -
On Wed, Aug 19, 2020 at 02:37:58PM +0200, Sebastian Andrzej Siewior wrote:
> I don't see a significant reason why this lock should become a
> raw_spinlock_t, therefore I suggest moving it after the
> tsk_is_pi_blocked() check.
> Any feedback on this vs raw_spinlock_t?
>
> Signed-off-by: Sebastia
During a context switch the scheduler invokes wq_worker_sleeping() with
disabled preemption. Disabling preemption is needed because it protects
access to `worker->sleeping'. As an optimisation it avoids invoking
schedule() within the schedule path as part of possible wake up (thus
preempt_enable_no_resched() is used).