On Thu, Apr 26, 2007 at 08:44:36PM +0400, Oleg Nesterov wrote:
> On 04/26, Jarek Poplawski wrote:
> >
> > On Wed, Apr 25, 2007 at 04:47:14PM +0400, Oleg Nesterov wrote:
> > ...
> > > > > > +           spin_lock_irq(&cwq->lock);
> > > > > > +           /* CPU_DEAD in progress may change cwq */
> > > > > > +           if (likely(cwq == get_wq_data(work))) {
> > > > > > +                   list_del_init(&work->entry);
> > > > > > +           __set_bit(WORK_STRUCT_PENDING, work_data_bits(work));
> > > > > > +           retry = try_to_del_timer_sync(&dwork->timer) < 0;
> > > > > > +           }
> > > > > > +           spin_unlock_irq(&cwq->lock);
> > > > > > +   } while (unlikely(retry));
> > > 
> > > >  1. If delayed_work_timer_fn of this work has fired and is waiting
> > > >  on the above spin_lock then, after the above spin_unlock, the work
> > > >  will be queued.
> > > 
> > > No, in that case try_to_del_timer_sync() returns -1.
> > 
> > Yes. But I think it's safe only after moving work_clear_pending
> > in run_workqueue under the lock; otherwise this flag could probably
> > be cleared after the above unlock.
> 
> It doesn't matter in this particular case because we are going to retry
> anyway. But yes, this patch moves work_clear_pending() under the lock,
> because otherwise it could be cleared by run_workqueue() if this work is
> about to be executed but was already deleted from the list.
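
(An aside, just to spell out the -1 case for anyone following the thread:
below is a sketch of the try_to_del_timer_sync() contract as I read it in
the 2.6.21-era sources. The helper is hypothetical, only for illustration;
dwork and cwq are as in the quoted patch.)

	/*
	 * Hypothetical helper illustrating the try_to_del_timer_sync()
	 * return values:
	 *   1  - timer was pending and is now deactivated
	 *   0  - timer was not pending (never armed, or already expired)
	 *  -1  - delayed_work_timer_fn() is running right now on another
	 *        CPU, possibly spinning on cwq->lock; once the canceller
	 *        drops that lock the work will be queued, hence the retry.
	 */
	static int timer_really_gone(struct delayed_work *dwork)
	{
		return try_to_del_timer_sync(&dwork->timer) >= 0;
	}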

...and what you describe above seems to be the same thing I meant.
I only wanted to get agreement (now purely for historical reasons) that
the lock on _PENDING could matter in run_workqueue.
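
To make that concrete, here is a minimal sketch (not the actual patch;
loosely following the 2.6.21-era code, so take the details with a grain
of salt) of run_workqueue() clearing _PENDING while still holding
cwq->lock:

	static void run_workqueue(struct cpu_workqueue_struct *cwq)
	{
		spin_lock_irq(&cwq->lock);
		while (!list_empty(&cwq->worklist)) {
			struct work_struct *work =
				list_entry(cwq->worklist.next,
					   struct work_struct, entry);
			work_func_t f = work->func;

			list_del_init(cwq->worklist.next);
			/*
			 * Clear _PENDING before dropping the lock: a
			 * canceller that just re-set the bit under this
			 * same lock can then never see it wiped out
			 * behind its back.
			 */
			work_clear_pending(work);
			spin_unlock_irq(&cwq->lock);

			f(work);

			spin_lock_irq(&cwq->lock);
		}
		spin_unlock_irq(&cwq->lock);
	}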

Jarek P.