Hello,

On Fri, Feb 01, 2013 at 02:41:31AM +0800, Lai Jiangshan wrote:
> +static struct worker_pool *lock_pool_executing_work(struct work_struct *work,
> +						     struct worker **worker)
> +{
> +	unsigned long pool_id = offq_work_pool_id(work);
> +	struct worker_pool *pool;
> +	struct worker *exec;
> +
> +	if (pool_id == WORK_OFFQ_POOL_NONE)
> +		return NULL;
> +
> +	pool = worker_pool_by_id(pool_id);
> +	if (!pool)
> +		return NULL;
> +
> +	spin_lock(&pool->lock);
> +	exec = find_worker_executing_work(pool, work);
> +	if (exec) {
> +		BUG_ON(pool != exec->pool);
> +		*worker = exec;
> +		return pool;
> +	}
> +	spin_unlock(&pool->lock);
> +
> +	return NULL;
> +}
So, if a work item is being queued on the same CPU and it isn't currently being executed, the caller will lock the pool, look up the busy hash, unlock, and then lock the same pool again to queue it? If this is something improved by a later patch, please explain so. There gotta be a better way to do this, right?
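Something like the below is what I'm getting at - a rough, untested sketch on top of the helpers this patch adds, which returns with the pool lock held whether or not the work item is running, so that the caller can queue directly in the common same-pool case without dropping and retaking the lock:

static struct worker_pool *lock_pool_for_work(struct work_struct *work,
					      struct worker **worker)
{
	unsigned long pool_id = offq_work_pool_id(work);
	struct worker_pool *pool;

	*worker = NULL;

	if (pool_id == WORK_OFFQ_POOL_NONE)
		return NULL;

	pool = worker_pool_by_id(pool_id);
	if (!pool)
		return NULL;

	/*
	 * Keep pool->lock held on return either way.  If @work is
	 * running, *worker tells the caller which worker; if not, the
	 * caller already holds the lock it needs to queue @work here.
	 */
	spin_lock(&pool->lock);
	*worker = find_worker_executing_work(pool, work);
	return pool;
}

If the target pool ends up being different from the last pool, we'd still have to drop and retake, but the same-pool-and-not-executing case would take the lock exactly once.

Thanks.

-- 
tejun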