On 08/12, Jens Axboe wrote:
>
> On 8/12/20 8:54 AM, Oleg Nesterov wrote:
> >
> > --- x/kernel/signal.c
> > +++ x/kernel/signal.c
> > @@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
> >
> >  relock:
> >     spin_lock_irq(&sighand->siglock);
> > -   current->jobctl &= ~JOBCTL_TASK_WORK;
> > +   smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
> >     if (unlikely(current->task_works)) {
> >             spin_unlock_irq(&sighand->siglock);
> >             task_work_run();
> >
>
> I think this should work when paired with the READ_ONCE() on the
> task_work_add() side.

It pairs with the full barrier implied by cmpxchg() before the READ_ONCE()
in task_work_add(). So we roughly have

        task_work_add:                          get_signal:

        STORE(task->task_works, new_work);      STORE(task->jobctl);
        mb();                                   mb();
        LOAD(task->jobctl);                     LOAD(task->task_works);

and we can rely on the STORE-MB-LOAD pattern: at least one of the two sides
must observe the other's store.
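
For reference, the TWA_SIGNAL side looks roughly like this (a sketch only,
the exact code in task_work_add() may differ in detail):

        int task_work_add(struct task_struct *task, struct callback_head *work,
                          int notify)
        {
                struct callback_head *head;
                unsigned long flags;

                do {
                        head = READ_ONCE(task->task_works);
                        if (unlikely(head == &work_exited))
                                return -ESRCH;
                        work->next = head;
                } while (cmpxchg(&task->task_works, head, work) != head);
                /*
                 * cmpxchg() above implies a full barrier: the STORE of
                 * ->task_works is ordered before the LOAD of ->jobctl below.
                 */

                if (notify == TWA_SIGNAL) {
                        /*
                         * Take ->siglock only if JOBCTL_TASK_WORK is not
                         * already set; this READ_ONCE() pairs with the
                         * smp_store_mb() in get_signal().
                         */
                        if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
                            lock_task_sighand(task, &flags)) {
                                task->jobctl |= JOBCTL_TASK_WORK;
                                signal_wake_up(task, 0);
                                unlock_task_sighand(task, &flags);
                        }
                }
                return 0;
        }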

> I haven't managed to reproduce badness with the
> existing one that doesn't have the smp_store_mb() here, so can't verify
> much beyond that...

Yes, the race is very unlikely, and the problem is minor: the target task
can miss the new work added via TWA_SIGNAL and return from get_signal()
without TIF_SIGPENDING.
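
To spell out the bad case: without the barrier, the STORE to ->jobctl and
the subsequent LOAD of ->task_works in get_signal() can be reordered by the
CPU, so roughly both sides can miss each other's store:

        task_work_add:                          get_signal:

                                                LOAD(task->task_works); /* empty */
        STORE(task->task_works, new_work);
        mb();
        LOAD(task->jobctl);                     /* TASK_WORK still set */
                                                STORE(task->jobctl);    /* clear */

task_work_add() sees JOBCTL_TASK_WORK still set and skips signal_wake_up(),
while get_signal() has already seen an empty ->task_works list, so the task
can return to user mode with the work queued but TIF_SIGPENDING not set.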

> Are you going to send this out as a complete patch?

Jens, could you please send the patch? I am on vacation and travelling.
Feel free to add my ACK.

Oleg.
