On Fri, Jan 05, 2007 at 05:07:17PM +0300, Oleg Nesterov wrote:
> How about block_cpu_down() ?
Maybe .. not sure.
If we do introduce such a function, we may need to convert the several
preempt_disable() calls that are already there (with the intent of blocking
cpu_down) to block_cpu_down() ..
> These cpu-hotplu
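A minimal sketch of what such a wrapper pair could look like. The
block_cpu_down()/unblock_cpu_down() names come from the discussion above; the
bodies are an assumption based on the existing "preempt_disable() blocks
cpu_down" idiom, not a posted patch:

static inline void block_cpu_down(void)
{
	/*
	 * cpu_down() completes via stop_machine(), which needs every CPU to
	 * schedule, so a preemption-disabled section holds it off.  This
	 * does nothing against cpu_up().
	 */
	preempt_disable();
}

static inline void unblock_cpu_down(void)
{
	preempt_enable();
}

Converting the existing preempt_disable() users would then be a mechanical
rename that documents the intent.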
On Fri, Jan 05, 2007 at 03:42:46PM +0300, Oleg Nesterov wrote:
> preempt_disable() can't prevent cpu_up, but flush_workqueue() doesn't care
> _unless_ cpu_down also happened meantime (and hence a fresh CPU may have
> pending work_structs which were moved from a dead CPU).
Yes, that was what I had
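A sketch of how that window opens, assuming a flush_workqueue() built around
the preempt_disable() idea (an illustration only, not the posted patch;
queue_barrier_on_cpu() and wait_for_queued_barriers() are hypothetical
placeholders, not existing kernel API):

/* Hypothetical helpers: queue a barrier work on one CPU's queue, and wait
 * for every barrier queued above to run. */
static void queue_barrier_on_cpu(struct workqueue_struct *wq, int cpu);
static void wait_for_queued_barriers(struct workqueue_struct *wq);

static void flush_workqueue_sketch(struct workqueue_struct *wq)
{
	int cpu;

	preempt_disable();		/* cpu_down() cannot complete here ... */
	for_each_online_cpu(cpu)
		queue_barrier_on_cpu(wq, cpu);
	preempt_enable();		/* ... but cpu_up() was never blocked */

	/*
	 * We are about to sleep.  cpu_up() alone is harmless: a new CPU
	 * starts with an empty queue.  But if a CPU also went down, its
	 * pending work_structs get moved to another (possibly fresh) CPU,
	 * and nothing above queued a barrier behind them.
	 */
	wait_for_queued_barriers(wq);
}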
On 01/05, Srivatsa Vaddagiri wrote:
>
> On Thu, Jan 04, 2007 at 10:31:07AM -0800, Andrew Morton wrote:
> > But before we do much more of this we should have a wrapper. Umm
> >
> > static inline void block_cpu_hotplug(void)
> > {
> > 	preempt_disable();
> > }
>
> Nack.
>
> This will only block cpu down, not cpu_up and hence is a misnomer.
On 01/05, Srivatsa Vaddagiri wrote:
>
> On Thu, Jan 04, 2007 at 09:18:50AM -0800, Andrew Morton wrote:
> > This?
>
> This can still lead to the problem spotted by Oleg here:
>
> http://lkml.org/lkml/2006/12/30/37
>
> and you would need a similar patch he posted there.
preempt_disable() can't prevent cpu_up, but flush_workqueue() doesn't care
_unless_ cpu_down also happened meantime (and hence a fresh CPU may have
pending work_structs which were moved from a dead CPU).
On Thu, Jan 04, 2007 at 10:31:07AM -0800, Andrew Morton wrote:
> But before we do much more of this we should have a wrapper. Umm
>
> static inline void block_cpu_hotplug(void)
> {
> 	preempt_disable();
> }
Nack.
This will only block cpu down, not cpu_up and hence is a misnomer. I would be
On Thu, Jan 04, 2007 at 09:18:50AM -0800, Andrew Morton wrote:
> This?
This can still lead to the problem spotted by Oleg here:
http://lkml.org/lkml/2006/12/30/37
and you would need a similar patch he posted there.
> void fastcall flush_workqueue(struct workqueue_struct *wq)
> {
> -
On Thu, 4 Jan 2007 21:09:01 +0300
Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> > --- a/kernel/workqueue.c~flush_workqueue-use-preempt_disable-to-hold-off-cpu-hotplug
> > +++ a/kernel/workqueue.c
> > @@ -419,18 +419,22 @@ static void flush_cpu_workqueue(struct c
> > * Probably keven
On 01/04, Andrew Morton wrote:
>
> On Thu, 4 Jan 2007 17:29:36 +0300
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > > In brief:
> > >
> > > keventd thread                  hotplug thread
> > > --------------                  --------------
> > >
> > > run_workqueue()
On Thu, 4 Jan 2007 17:29:36 +0300
Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> > In brief:
> >
> > keventd thread                  hotplug thread
> > --------------                  --------------
> >
> > run_workqueue()
> >    |
> >    work_fn()
> >
On Thu, Jan 04, 2007 at 07:31:39PM +0300, Oleg Nesterov wrote:
> > AFAIK this deadlock originated from Andrew's patch here:
> >
> > http://lkml.org/lkml/2006/12/7/231
>
> I don't think so. The core problem is not that we are doing unlock/sleep/lock
> with this patch. The thing is: work->func()
On 01/04, Srivatsa Vaddagiri wrote:
>
> On Thu, Jan 04, 2007 at 05:29:36PM +0300, Oleg Nesterov wrote:
> > Thanks, I need to think about this.
> >
> > However I am not sure I fully understand the problem.
> >
> > First, this deadlock was not introduced by recent changes (including "single
> > threaded flush_workqueue() takes workqueue_mutex too"),
On Thu, Jan 04, 2007 at 05:29:36PM +0300, Oleg Nesterov wrote:
> Thanks, I need to think about this.
>
> However I am not sure I fully understand the problem.
>
> First, this deadlock was not introduced by recent changes (including "single
> threaded flush_workqueue() takes workqueue_mutex too"),
On 01/04, Srivatsa Vaddagiri wrote:
>
> On Mon, Dec 18, 2006 at 01:34:16AM +0300, Oleg Nesterov wrote:
> >  void fastcall flush_workqueue(struct workqueue_struct *wq)
> >  {
> > -	might_sleep();
> > -
> > +	mutex_lock(&workqueue_mutex);
> >  	if (is_single_threaded(wq)) {
> >  		/* Always use first cpu's area. */
On 01/04, Srivatsa Vaddagiri wrote:
>
> On Tue, Dec 19, 2006 at 03:43:19AM +0300, Oleg Nesterov wrote:
> > > Taking workqueue_mutex() unconditionally in flush_workqueue() means
> > > that we'll deadlock if a single-threaded workqueue callback handler calls
> > > flush_workqueue().
> >
> > Well. Bu
On Mon, Dec 18, 2006 at 01:34:16AM +0300, Oleg Nesterov wrote:
>  void fastcall flush_workqueue(struct workqueue_struct *wq)
>  {
> -	might_sleep();
> -
> +	mutex_lock(&workqueue_mutex);
>  	if (is_single_threaded(wq)) {
>  		/* Always use first cpu's area. */
> -
On Tue, Dec 19, 2006 at 03:43:19AM +0300, Oleg Nesterov wrote:
> > Taking workqueue_mutex() unconditionally in flush_workqueue() means
> > that we'll deadlock if a single-threaded workqueue callback handler calls
> > flush_workqueue().
>
> Well. But flush_workqueue() drops workqueue_mutex before g
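To make the scenario concrete, here is the pattern Andrew is describing, a
handler on a single-threaded workqueue flushing its own queue, written as
hypothetical driver code (my_wq and my_work_fn are made-up names):

/* Assumed to have been created with create_singlethread_workqueue("my_wq"). */
static struct workqueue_struct *my_wq;

static void my_work_fn(struct work_struct *work)
{
	/*
	 * This runs in my_wq's only thread.  A flush that simply waited for
	 * the queue to drain would be waiting for this very handler; the
	 * flush path has to recognize the self-flush case (queue thread ==
	 * current) and run the remaining works by hand, which is the special
	 * case the flush_cpu_workqueue() hunk quoted earlier touches.
	 */
	flush_workqueue(my_wq);
}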
On Tue, 19 Dec 2006 03:43:19 +0300
Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> On 12/18, Andrew Morton wrote:
> >
> > On Mon, 18 Dec 2006 01:34:16 +0300
> > Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> >
> > > NOTE: I removed 'int cpu' parameter, flush_workqueue() locks/unlocks
> > > workqueue_mutex
On 12/18, Andrew Morton wrote:
>
> On Mon, 18 Dec 2006 01:34:16 +0300
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > NOTE: I removed 'int cpu' parameter, flush_workqueue() locks/unlocks
> > workqueue_mutex unconditionally. It may be restored, but I think it
> > doesn't make much sense, we take t
On Mon, 18 Dec 2006 01:34:16 +0300
Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> Remove ->remove_sequence, ->insert_sequence, and ->work_done from
> struct cpu_workqueue_struct. To implement flush_workqueue() we can
> queue a barrier work on each CPU and wait for its completion.
Seems sensible. I
On Mon, 18 Dec 2006, Oleg Nesterov wrote:
>
> Remove ->remove_sequence, ->insert_sequence, and ->work_done from
> struct cpu_workqueue_struct. To implement flush_workqueue() we can
> queue a barrier work on each CPU and wait for its completion.
Looks fine to me. It's after -rc1 so I won't appl
Remove ->remove_sequence, ->insert_sequence, and ->work_done from
struct cpu_workqueue_struct. To implement flush_workqueue() we can
queue a barrier work on each CPU and wait for its completion.
We don't need to worry about a CPU going down while we are sleeping
on the completion. take_over_work() will move this work to another CPU,
and the handler will wake us up eventually.
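A compressed sketch of the barrier idea described above, assuming the
2.6.20-style work_struct API; queue_barrier() below is a stand-in for whatever
internal helper actually puts the barrier on a given CPU's list, not a real
function:

struct wq_barrier {
	struct work_struct	work;
	struct completion	done;
};

static void wq_barrier_func(struct work_struct *work)
{
	struct wq_barrier *barr = container_of(work, struct wq_barrier, work);

	complete(&barr->done);
}

/* Hypothetical helper: insert @work on @cwq's list and wake its thread. */
static void queue_barrier(struct cpu_workqueue_struct *cwq,
			  struct work_struct *work);

static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
{
	if (cwq->thread == current) {
		/* Flushing our own queue: run the pending works by hand. */
		run_workqueue(cwq);
	} else {
		struct wq_barrier barr;

		INIT_WORK(&barr.work, wq_barrier_func);
		init_completion(&barr.done);

		queue_barrier(cwq, &barr.work);
		wait_for_completion(&barr.done);
	}
}

flush_workqueue() then just walks the per-CPU queues and calls this for each;
if a CPU dies while we sleep, take_over_work() migrates the barrier along with
the rest of the dead CPU's queue, so the completion still fires.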