Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-20 Thread Rafael J. Wysocki
On Sunday, 20 May 2007 23:06, Oleg Nesterov wrote:
> On 05/20, Rafael J. Wysocki wrote:
> >
> > On Sunday, 20 May 2007 21:54, Oleg Nesterov wrote:
> > > 
> > > I am a bit afraid of too many yes/no options for the freezer, a couple
> > > of naive questions.
> > > 
> > > 1. Can't we make all wqs freezable? I still can't see the reason to have
> > >    both freezable and not freezable wqs.
> > 
> > The reason might be the same as for having freezable and nonfreezable kernel
> > threads in general.  For example, there are some kernel threads that we need
> > for saving the image and I don't see why there shouldn't be any such
> > workqueues.
> 
> OK, I see.
> 
> > > 2. Why do we need CPU_TASKS_FROZEN? Can't we change cpu-hotplug to always
> > >    freeze tasks right now, without any additional changes?
> > 
> > In principle, we can, but for this purpose we'd have to modify all NOFREEZE
> > tasks.
> 
> Why?

Ah, sorry, I didn't understand the question correctly.

> > That wouldn't fly, I'm afraid.
> > 
> > >    Any subsystem should handle correctly the case when _cpu_down() (say)
> > >    is called with tasks_frozen == 1 anyway. So, why can't we simplify
> > >    things and do
> > > 
> > >	_cpu_down(int tasks_frozen)
> > > 
> > >	if (!tasks_frozen)
> > >		freeze_processes();
> > >	...
> > > 
> > >    right now?

Yes, we can do this, I think.
 
> > But we call _cpu_down() after device_suspend(), so many tasks are already
> > frozen at this point.  We'd only need to freeze those that are not frozen
> > and in _cpu_up() we'd have to thaw them.
> 
> Not sure I understand. When we call _cpu_down() after device_suspend(), we
> check tasks_frozen == 1, and do not call freeze_processes(). If the task
> could be frozen, it is already frozen.
> 
> When _cpu_down() sees tasks_frozen = 0, it does freeze_processes() itself,
> and thaw_tasks() on return.
> 
> IOW, we never send (say) CPU_DEAD, always CPU_DEAD_FROZEN.

Yes, that seems reasonable.

This means that every user of freezable kernel threads who installs a CPU
hotplug notifier will have to assume that its kernel threads are frozen when
the notifier is called.

Greetings,
Rafael
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-20 Thread Oleg Nesterov
On 05/20, Rafael J. Wysocki wrote:
>
> On Sunday, 20 May 2007 21:54, Oleg Nesterov wrote:
> > 
> > I am a bit afraid of too many yes/no options for the freezer, a couple
> > of naive questions.
> > 
> > 1. Can't we make all wqs freezable? I still can't see the reason to have
> >    both freezable and not freezable wqs.
> 
> The reason might be the same as for having freezable and nonfreezable kernel
> threads in general.  For example, there are some kernel threads that we need
> for saving the image and I don't see why there shouldn't be any such
> workqueues.

OK, I see.

> > 2. Why do we need CPU_TASKS_FROZEN? Can't we change cpu-hotplug to always
> >    freeze tasks right now, without any additional changes?
> 
> In principle, we can, but for this purpose we'd have to modify all NOFREEZE
> tasks.

Why?

> That wouldn't fly, I'm afraid.
> 
> >    Any subsystem should handle correctly the case when _cpu_down() (say)
> >    is called with tasks_frozen == 1 anyway. So, why can't we simplify
> >    things and do
> > 
> >	_cpu_down(int tasks_frozen)
> > 
> >	if (!tasks_frozen)
> >		freeze_processes();
> >	...
> > 
> >    right now?
> 
> But we call _cpu_down() after device_suspend(), so many tasks are already
> frozen at this point.  We'd only need to freeze those that are not frozen and
> in _cpu_up() we'd have to thaw them.

Not sure I understand. When we call _cpu_down() after device_suspend(), we
check tasks_frozen == 1, and do not call freeze_processes(). If the task
could be frozen, it is already frozen.

When _cpu_down() sees tasks_frozen = 0, it does freeze_processes() itself,
and thaw_tasks() on return.

IOW, we never send (say) CPU_DEAD, always CPU_DEAD_FROZEN.

Wouldn't fly?

Oleg.



Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-20 Thread Rafael J. Wysocki
On Sunday, 20 May 2007 21:54, Oleg Nesterov wrote:
> On 05/15, Rafael J. Wysocki wrote:
> >
> > On Monday, 14 May 2007 23:48, Oleg Nesterov wrote:
> > > 
> > > So, in the long term, should we change this only user, or we think we
> > > better fix freezeable wqs again?
> > 
> > Long term, I'd like to have freezable workqueues, so that people don't
> > have to use "raw" kernel threads only because they need some
> > synchronization with hibernation/suspend.  Plus some cases in which
> > workqueues are used by fs-related code make me worry.
> 
> OK, so we should fix them. It would be great to also fix the last known
> problem as well (work->func() vs hotplug callback deadlocks).
> 
> I am a bit afraid of too many yes/no options for the freezer, a couple
> of naive questions.
> 
> 1. Can't we make all wqs freezable? I still can't see the reason to have
>    both freezable and not freezable wqs.

The reason might be the same as for having freezable and nonfreezable kernel
threads in general.  For example, there are some kernel threads that we need
for saving the image and I don't see why there shouldn't be any such
workqueues.
 
> 2. Why do we need CPU_TASKS_FROZEN? Can't we change cpu-hotplug to always
>    freeze tasks right now, without any additional changes?

In principle, we can, but for this purpose we'd have to modify all NOFREEZE
tasks.  That wouldn't fly, I'm afraid.

>    Any subsystem should handle correctly the case when _cpu_down() (say)
>    is called with tasks_frozen == 1 anyway. So, why can't we simplify
>    things and do
> 
>	_cpu_down(int tasks_frozen)
> 
>	if (!tasks_frozen)
>		freeze_processes();
>	...
> 
>    right now?

But we call _cpu_down() after device_suspend(), so many tasks are already
frozen at this point.  We'd only need to freeze those that are not frozen and
in _cpu_up() we'd have to thaw them.

> > [*] The problem is, though, that freezable workqueues have some potential
> > to fail the freezer.  Namely, suppose task A calls flush_workqueue() on a
> > freezable workqueue, finds some work items in there, inserts the barrier
> > and waits for completion (TASK_UNINTERRUPTIBLE).  In the meantime,
> > TIF_FREEZE is set on the worker thread, which is then woken up and goes
> > to the refrigerator.  Thus if A is not NOFREEZE, the freezing of tasks
> > will fail (A must be a kernel thread for this to happen, but still).
> > Worse yet, if A is NOFREEZE, it will be blocked until the worker thread
> > is woken up.
> 
> Yes, this is yet another dependency which freezer can't handle. Probably it is
> better to ignore this problem for now.
> 
> > To avoid this, I think, we may need to redesign the freezer, so that
> > freezable worker threads are frozen after all of the other kernel threads.
> 
> I doubt we can find a very clean way to do this. Besides, what if work->func()
> does flush_workqueue(another_wq) ? How can we decide which wq to freeze first?

We can't.

I think it would be a mistake to even try to remove all limitations from the
freezer.  Any other synchronization mechanisms have some limitations as well.

The code that uses these mechanisms is usually expected to use them in a sane
way and I don't see why we shouldn't expect the freezer users to do the
same. ;-)
 
> > Additionally, we'd need to make a rule that NOFREEZE kernel threads must
> > not call flush_workqueue() or cancel_work_sync() on freezable workqueues.
> 
> cancel_work_sync() is OK, it can be used safely even if workqueue is frozen.
> flush_workqueue() and destroy_workqueue() are not.

Yes, you're right.

Greetings,
Rafael


Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-20 Thread Oleg Nesterov
On 05/15, Rafael J. Wysocki wrote:
>
> On Monday, 14 May 2007 23:48, Oleg Nesterov wrote:
> > 
> > So, in the long term, should we change this only user, or we think we
> > better fix freezeable wqs again?
> 
> Long term, I'd like to have freezable workqueues, so that people don't have
> to use "raw" kernel threads only because they need some synchronization
> with hibernation/suspend.  Plus some cases in which workqueues are used by
> fs-related code make me worry.

OK, so we should fix them. It would be great to also fix the last known problem
as well (work->func() vs hotplug callback deadlocks).

I am a bit afraid of too many yes/no options for the freezer, a couple of naive
questions.

1. Can't we make all wqs freezable? I still can't see the reason to have both
   freezable and not freezable wqs.

2. Why do we need CPU_TASKS_FROZEN? Can't we change cpu-hotplug to always
   freeze tasks right now, without any additional changes?

   Any subsystem should handle correctly the case when _cpu_down() (say)
   is called with tasks_frozen == 1 anyway. So, why can't we simplify
   things and do

	_cpu_down(int tasks_frozen)

	if (!tasks_frozen)
		freeze_processes();
	...

   right now?

> [*] The problem is, though, that freezable workqueues have some potential
> to fail the freezer.  Namely, suppose task A calls flush_workqueue() on a
> freezable workqueue, finds some work items in there, inserts the barrier
> and waits for completion (TASK_UNINTERRUPTIBLE).  In the meantime,
> TIF_FREEZE is set on the worker thread, which is then woken up and goes to
> the refrigerator.  Thus if A is not NOFREEZE, the freezing of tasks will
> fail (A must be a kernel thread for this to happen, but still).  Worse
> yet, if A is NOFREEZE, it will be blocked until the worker thread is
> woken up.

Yes, this is yet another dependency which freezer can't handle. Probably it is
better to ignore this problem for now.

> To avoid this, I think, we may need to redesign the freezer, so that freezable
> worker threads are frozen after all of the other kernel threads.

I doubt we can find a very clean way to do this. Besides, what if work->func()
does flush_workqueue(another_wq) ? How can we decide which wq to freeze first?

> Additionally, we'd need to make a rule that NOFREEZE kernel threads must
> not call flush_workqueue() or cancel_work_sync() on freezable workqueues.

cancel_work_sync() is OK, it can be used safely even if workqueue is frozen.
flush_workqueue() and destroy_workqueue() are not.

Oleg.



Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-15 Thread Rafael J. Wysocki
On Monday, 14 May 2007 23:48, Oleg Nesterov wrote:
> On 05/14, Rafael J. Wysocki wrote:
> >
> > On Monday, 14 May 2007 18:55, Oleg Nesterov wrote:
> > > 
> > > Rafael, I am afraid we are making too much noise, and this may confuse
> > > Alex and Andrew.
> > > 
> > > First, we should decide how to fix the bug we have in 2.6.22. I prefer
> > > a simple "make freezeable workqueues singlethread" I sent. It was acked
> > > by Alex, it is simple, and it is also good because tifm doesn't need
> > > multithreaded wq anyway.
> > 
> > Yes, I've already agreed with that.
> 
> Ah, OK, I misunderstood your message as if you propose this fix for 2.6.22.

Never mind. :-)

> > >   - Do we need freezeable workqueues ?
> > 
> > Well, we have at least one case in which they appear to be useful.
> 
> So, in the long term, should we change this only user, or we think we
> better fix freezeable wqs again?

Long term, I'd like to have freezable workqueues, so that people don't have
to use "raw" kernel threads only because they need some synchronization with
hibernation/suspend.  Plus some cases in which workqueues are used by
fs-related code make me worry.

OTOH, I have some concerns with that (please see [*] below).
 
> > > WORK2->func() completes.
> > > 
> > > freezer comes. cwq->thread notices TIF_FREEZE and goes to refrigerator
> > > before executing that barrier.
> 
> This is not possible. cwq->thread _must_ notice the barrier before it goes to
> refrigerator.
> 
> So, given that we have cpu_populated_map we can re-introduce take_over_work()
> along with migrate_sequence and thus we can fix freezeable multithreaded wqs.

[*] The problem is, though, that freezable workqueues have some potential to
fail the freezer.  Namely, suppose task A calls flush_workqueue() on a
freezable workqueue, finds some work items in there, inserts the barrier and
waits for completion (TASK_UNINTERRUPTIBLE).  In the meantime, TIF_FREEZE is
set on the worker thread, which is then woken up and goes to the
refrigerator.  Thus if A is not NOFREEZE, the freezing of tasks will fail (A
must be a kernel thread for this to happen, but still).  Worse yet, if A is
NOFREEZE, it will be blocked until the worker thread is woken up.

To avoid this, I think, we may need to redesign the freezer, so that freezable
worker threads are frozen after all of the other kernel threads.  Additionally,
we'd need to make a rule that NOFREEZE kernel threads must not call
flush_workqueue() or cancel_work_sync() on freezable workqueues.

> > > If we have take_over_work() we should use it for any workqueue,
> > > freezeable or not. Otherwise this is just a mess, imho.
> 
> Still, this is imho true. So we'd better do some other changes to be 
> consistent.

Agreed.

Greetings,
Rafael


Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-15 Thread Rafael J. Wysocki
On Tuesday, 15 May 2007 02:56, Alex Dubov wrote:
> > 
> > > > - Do we need freezeable workqueues ?
> > > 
> > > Well, we have at least one case in which they appear to be useful.
> > 
> 
> I need freezeable wq exactly for the fact that they are synchronized with
> suspend/resume. My workitem may do device_register/unregister and it can
> (and will be) scheduled from irq handler during resume. As far as I
> understand, before freezeable wqs, kthreads were the only way to achieve
> this behavior,

That's correct.

> which is less convenient.

Thanks for the explanation. 

Greetings,
Rafael


Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-14 Thread Alex Dubov
> 
> > >   - Do we need freezeable workqueues ?
> > 
> > Well, we have at least one case in which they appear to be useful.
> 

I need freezeable wq exactly for the fact that they are synchronized with
suspend/resume. My workitem may do device_register/unregister and it can
(and will be) scheduled from irq handler during resume. As far as I
understand, before freezeable wqs, kthreads were the only way to achieve
this behavior, which is less convenient.




 



Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-14 Thread Oleg Nesterov
On 05/14, Rafael J. Wysocki wrote:
>
> On Monday, 14 May 2007 18:55, Oleg Nesterov wrote:
> > 
> > Rafael, I am afraid we are making too much noise, and this may confuse
> > Alex and Andrew.
> > 
> > First, we should decide how to fix the bug we have in 2.6.22. I prefer a
> > simple "make freezeable workqueues singlethread" I sent. It was acked by
> > Alex, it is simple, and it is also good because tifm doesn't need
> > multithreaded wq anyway.
> 
> Yes, I've already agreed with that.

Ah, OK, I misunderstood your message as if you propose this fix for 2.6.22.

> > - Do we need freezeable workqueues ?
> 
> Well, we have at least one case in which they appear to be useful.

So, in the long term, should we change this only user, or do we think we'd
better fix freezeable wqs again?

> > WORK2->func() completes.
> > 
> > freezer comes. cwq->thread notices TIF_FREEZE and goes to refrigerator 
> > before
> > executing that barrier.

This is not possible. cwq->thread _must_ notice the barrier before it goes to
refrigerator.

So, given that we have cpu_populated_map we can re-introduce take_over_work()
along with migrate_sequence and thus we can fix freezeable multithreaded wqs.

> > If we have take_over_work() we should use it for any workqueue,
> > freezeable or not. Otherwise this is just a mess, imho.

Still, this is imho true. So we'd better do some other changes to be consistent.

---
> Yeah, I need to learn more.

No. I should have read the patches more carefully before complaining.

> Sorry for the confusion I made.

Rafael, it is me who has to apologize.
---

Oleg.



Re: Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-14 Thread Rafael J. Wysocki
On Monday, 14 May 2007 18:55, Oleg Nesterov wrote:
> Long. Please read.
> 
> On 05/14, Rafael J. Wysocki wrote:
> >
> > I think I have solved this particular problem without any locking:
> 
> Rafael, I am afraid we are making too much noise, and this may confuse Alex
> and Andrew.
> 
> First, we should decide how to fix the bug we have in 2.6.22. I prefer a 
> simple
> "make freezeable workqueues singlethread" I sent. It was acked by Alex, it is
> simple, and it is also good because tifm doesn't need multithreaded wq anyway.

Yes, I've already agreed with that.

> I'll comment the patch you sent below, but for a start
> 
> ---
> Guys, let's have a plan !!!
> 
> I do not understand what's going on. With or without recent changes in 
> workqueue.c
> multithreaded freezeable workqueues were broken by other changes in suspend.
> 
> It was decided we don't need them, they should go away. We even had a patch
> which removed freezeable workqueues (multithreaded or not) completely. But it
> conflicted with other changes in -mm, so it was deleted, and then forgotten.
> 
> I never understood why we need freezeable workqueues. Are they needed to
> synchronize with suspend? In that case, shouldn't we have some form of
> notification which allows the driver to cancel the work which should not
> run at suspend time?

To make the long story short, we see some suspend-related problems that may
be attributed to workqueues used by XFS, but not directly, AFAICS.  That's why
we tried to introduce the freezability of workqueues, but I think that wasn't a
good idea.

> OK, so you think we should re-introduce them.

I just think we *might* introduce them, if there are some users.  Obviously we
have one user right now, but he only needs a singlethread workqueue, so your
small patch is the right thing to do for 2.6.22.

> What about incoming CPU-hotplug changes? The goal was - again! - remove the
> "singlethread" parameter, make them all freezeable, and freeze all processes
> to handle CPU_DEAD. In that case we don't have any problems, we can
> re-introduce take_over_work() without migrate_sequence this patch adds.
> 
> So.
>   - Do we need freezeable workqueues ?

Well, we have at least one case in which they appear to be useful.

>   - Do we need multithreaded freezeable workqueues ?

Not right now, if ever.

>   - Do we need them for 2.6.22 ?

Certainly not.

>   - Should we wait for CPU-hotplug changes, or we should
> re-introduce them right now ?

We don't need to reintroduce freezable multithreaded workqueues right now.

> > --- linux-2.6.22-rc1.orig/kernel/workqueue.c
> > +++ linux-2.6.22-rc1/kernel/workqueue.c
> > @@ -62,6 +62,9 @@ struct workqueue_struct {
> > const char *name;
> > int singlethread;
> > int freezeable; /* Freeze threads during suspend */
> > +   atomic_t work_sw;   /* Used to indicate that some work has been
> > +* moved from one CPU to another
> > +*/
> >  };
> 
> "work_sw" should not be atomic, and since the race is _very_ unlikely it
> could be global. We had such a thing, it was called "migrate_sequence" and
> it was removed.
> 
> It didn't work, but it _can_ work now because we have cpu_populated_map.
> However, this needs more thinking, because it breaks cancel_work_sync()
> in a very subtle way.
> 
> > +static void take_over_work(struct workqueue_struct *wq, unsigned int cpu)
> > +{
> > +   struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
> > +   struct list_head list;
> > +   struct work_struct *work;
> > +
> > +	spin_lock_irq(&cwq->lock);
> > +	list_replace_init(&cwq->worklist, &list);
> > +
> > +	if (!list_empty(&list)) {
> > +		/*
> > +		 * Tell flush_workqueue() that it should repeat the loop over
> > +		 * CPUs
> > +		 */
> > +		atomic_inc(&wq->work_sw);
> > +		while (!list_empty(&list)) {
> > +			printk("Taking work for %s\n", wq->name);
> > +			work = list_entry(list.next, struct work_struct, entry);
> > +			list_del(&work->entry);
> > +			__queue_work(per_cpu_ptr(wq->cpu_wq,
> > +						smp_processor_id()), work);
> > +		}
> > +	}
> > +	spin_unlock_irq(&cwq->lock);
> > +}
> > +
> 
> Suppose that we have some pending WORK1 on CPU 0 which needs some LOCK.
> 
> We take this LOCK and call cancel_work_sync(WORK2) which is running on CPU 1.
> cancel_work_sync() inserts a barrier after WORK2 and waits for completion.
> 
> WORK2->func() completes.
> 
> freezer comes. cwq->thread notices TIF_FREEZE and goes to refrigerator before
> executing that barrier.
> 
> CPU_DEAD_FROZEN(cpu == 1) moves that barrier to CPU 0.
> 
> thaw_processes().
> 
> DEADLOCK.
> 
> We hold the LOCK and sleep waiting for the completion of that barrier.
> But there is WORK1 on queue which runs first, and needs this LOCK to
> complete.

Yeah, I need to learn more.

Freezeable workqueues [Was: 2.6.22-rc1: Broken suspend on SMP with tifm]

2007-05-14 Thread Oleg Nesterov
Long. Please read.

On 05/14, Rafael J. Wysocki wrote:
>
> I think I have solved this particular problem without any locking:

Rafael, I am afraid we are making too much noise, and this may confuse Alex
and Andrew.

First, we should decide how to fix the bug we have in 2.6.22. I prefer a simple
"make freezeable workqueues singlethread" I sent. It was acked by Alex, it is
simple, and it is also good because tifm doesn't need multithreaded wq anyway.

I'll comment the patch you sent below, but for a start

---
Guys, let's have a plan !!!

I do not understand what's going on. With or without recent changes in
workqueue.c, multithreaded freezeable workqueues were broken by other
changes in suspend.

It was decided we don't need them, they should go away. We even had a patch
which removed freezeable workqueues (multithreaded or not) completely. But it
conflicted with other changes in -mm, so it was deleted, and then forgotten.

I never understood why we need freezeable workqueues. Are they needed to
synchronize with suspend? In that case, shouldn't we have some form of
notification which allows the driver to cancel the work which should not
run at suspend time?


OK, so you think we should re-introduce them. What about incoming CPU-hotplug
changes? The goal was - again! - remove the "singlethread" parameter, make
them all freezeable, and freeze all processes to handle CPU_DEAD. In that
case we don't have any problems, we can re-introduce take_over_work() without
migrate_sequence this patch adds.

So.
- Do we need freezeable workqueues ?

- Do we need multithreaded freezeable workqueues ?

- Do we need them for 2.6.22 ?

- Should we wait for CPU-hotplug changes, or we should
  re-introduce them right now ?


> --- linux-2.6.22-rc1.orig/kernel/workqueue.c
> +++ linux-2.6.22-rc1/kernel/workqueue.c
> @@ -62,6 +62,9 @@ struct workqueue_struct {
>   const char *name;
>   int singlethread;
>   int freezeable; /* Freeze threads during suspend */
> + atomic_t work_sw;   /* Used to indicate that some work has been
> +  * moved from one CPU to another
> +  */
>  };

"work_sw" should not be atomic, and since the race is _very_ unlikely it
could be global. We had such a thing, it was called "migrate_sequence" and
it was removed.

It didn't work, but it _can_ work now because we have cpu_populated_map.
However, this needs more thinking, because it breaks cancel_work_sync()
in a very subtle way.

> +static void take_over_work(struct workqueue_struct *wq, unsigned int cpu)
> +{
> +	struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
> +	struct list_head list;
> +	struct work_struct *work;
> +
> +	spin_lock_irq(&cwq->lock);
> +	list_replace_init(&cwq->worklist, &list);
> +
> +	if (!list_empty(&list)) {
> +		/*
> +		 * Tell flush_workqueue() that it should repeat the loop over
> +		 * CPUs
> +		 */
> +		atomic_inc(&wq->work_sw);
> +		while (!list_empty(&list)) {
> +			printk("Taking work for %s\n", wq->name);
> +			work = list_entry(list.next, struct work_struct, entry);
> +			list_del(&work->entry);
> +			__queue_work(per_cpu_ptr(wq->cpu_wq,
> +						smp_processor_id()), work);
> +		}
> +	}
> +	spin_unlock_irq(&cwq->lock);
> +}
> +

Suppose that we have some pending WORK1 on CPU 0 which needs some LOCK.

We take this LOCK and call cancel_work_sync(WORK2) which is running on CPU 1.
cancel_work_sync() inserts a barrier after WORK2 and waits for completion.

WORK2->func() completes.

freezer comes. cwq->thread notices TIF_FREEZE and goes to refrigerator before
executing that barrier.

CPU_DEAD_FROZEN(cpu == 1) moves that barrier to CPU 0.

thaw_processes().

DEADLOCK.

We hold the LOCK and sleep waiting for the completion of that barrier.
But there is WORK1 on queue which runs first, and needs this LOCK to
complete.

> @@ -819,20 +860,31 @@ static int __devinit workqueue_cpu_callb
>
> +	case CPU_DEAD_FROZEN:
> +		if (wq->freezeable) {
> +			take_over_work(wq, cpu);
> +			thaw_process(cwq->thread);
> +		}
> +		cleanup_workqueue_thread(cwq, cpu);
> +		break;

If we have take_over_work() we should use it for any workqueue,
freezeable or not. Otherwise this is just a mess, imho.

Rafael, this is tricky. Probably we can fix this, but this needs more
changes. I can _try_ to do this, but not now (unless we think we need
freezeable workqueues for 2.6.22).

I have other cleanups for workqueues, but I'd prefer to do nothing
except bugfixes right now. A lot of non-reviewed intrusive changes
were merged.
