On Wed, 25 Jul 2007 14:54:21 -0600 [EMAIL PROTECTED] wrote:

> +void edac_mc_reset_delay_period(int value)
>  {
> -     /* cancel the current workq request */
> -     edac_mc_workq_teardown(mci);
> +     struct mem_ctl_info *mci;
> +     struct list_head *item;
> +
> +     mutex_lock(&mem_ctls_mutex);
> +
> +     /* scan the list and turn off all workq timers, doing so under lock
> +      */
> +     list_for_each(item, &mc_devices) {
> +             mci = list_entry(item, struct mem_ctl_info, link);
> +
> +             if (mci->op_state == OP_RUNNING_POLL)
> +                     cancel_delayed_work(&mci->work);
> +     }
> +
> +     mutex_unlock(&mem_ctls_mutex);

cancel_delayed_work() on its own looks a bit racy.  The work could
presently be running on another CPU.

So generally we'll run flush_workqueue() or cancel_work_sync() after the
cancel_delayed_work() to make sure that it has really gone away.

Beware, however, that you're holding a lock here.  If any of the work
functions which can be queued at mci->work also take mem_ctls_mutex, then it
is deadlocky to run flush_workqueue() or cancel_work_sync() while holding
that lock.
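
A minimal sketch of one way to arrange this: cancel under the lock, then
drop the lock before waiting for any already-running work to finish.  This
assumes the per-mci work is queued on a dedicated workqueue (called
edac_workqueue here as an assumption); the exact names are illustrative,
not a definitive implementation:

	void edac_mc_reset_delay_period(int value)
	{
		struct mem_ctl_info *mci;
		struct list_head *item;

		mutex_lock(&mem_ctls_mutex);

		list_for_each(item, &mc_devices) {
			mci = list_entry(item, struct mem_ctl_info, link);

			if (mci->op_state == OP_RUNNING_POLL)
				/* dequeue, but the handler may still be
				 * executing on another CPU */
				cancel_delayed_work(&mci->work);
		}

		mutex_unlock(&mem_ctls_mutex);

		/* mem_ctls_mutex is dropped, so a work function which takes
		 * it can still make progress; now wait for any in-flight
		 * handlers to complete.  "edac_workqueue" is an assumed
		 * name for the queue this work runs on. */
		flush_workqueue(edac_workqueue);
	}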
