On Sun, Sep 09, 2012 at 01:40:19AM +0800, Lai Jiangshan wrote:
> On Sun, Sep 9, 2012 at 1:32 AM, Tejun Heo wrote:
> > On Sat, Sep 08, 2012 at 10:29:50AM -0700, Tejun Heo wrote:
> >> > hotplug code can't iterate manager. not rebind_work() nor UNBOUND for
> >> > manager.
> >>
> >> Ah, right. It isn't either on idle or busy list. Maybe have
> >> pool->manager pointer?
On Sat, Sep 08, 2012 at 10:29:50AM -0700, Tejun Heo wrote:
> > hotplug code can't iterate manager. not rebind_work() nor UNBOUND for
> > manager.
>
> Ah, right. It isn't either on idle or busy list. Maybe have
> pool->manager pointer?
Ooh, this is what you did with the new patchset, right?
Hello, Lai.
On Sun, Sep 09, 2012 at 01:18:25AM +0800, Lai Jiangshan wrote:
> > +	/*
> > +	 * CPU hotplug could have scheduled rebind_work while we're
> > +	 * waiting for manager_mutex.  Rebind before doing anything
> > +	 * else.  This has to be
On Sat, Sep 8, 2012 at 7:41 AM, Tejun Heo wrote:
> I think this should do it. Can you spot any hole with the following
> patch?
>
> Thanks.
>
> Index: work/kernel/workqueue.c
> ===================================================================
> --- work.orig/kernel/workqueue.c
> +++ work/kernel/workqueue.c
> @@ -66,6 +66,7 @@ enum {
> 	/* pool flags */
On Fri, Sep 07, 2012 at 04:05:56PM -0700, Tejun Heo wrote:
> I got it down to the following but it creates a problem where CPU
> hotplug queues a work item on worker->scheduled before the execution
> loops starts. :(
Oops, wrong patch. This is the right one.
Index: work/kernel/workqueue.c
I got it down to the following but it creates a problem where CPU
hotplug queues a work item on worker->scheduled before the execution
loops starts. :(
Need to think more about it.
kernel/workqueue.c | 63 -
1 file changed, 29 insertions(+),
On Fri, Sep 07, 2012 at 01:22:49PM -0700, Tejun Heo wrote:
> So, how about something like the following?
>
> * Make manage_workers() called outside gcwq->lock (or drop gcwq->lock
> after checking MANAGING). worker_thread() can jump back to woke_up:
> instead.
>
> * Distinguish synchronization among
Hello again, Lai.
On Fri, Sep 07, 2012 at 12:29:39PM -0700, Tejun Heo wrote:
> > Since we introduce manage_mutex(), any place should be allowed to grab it
> > when its context allows. So it is not hotplug code's responsibility of this
> > bug.
> >
> > manage_workers() just use mutex_trylock()
Hello,
On Fri, Sep 07, 2012 at 11:10:34AM +0800, Lai Jiangshan wrote:
> > This patch fixes the bug by releasing manager_mutexes before letting
> > the rebound idle workers go. This ensures that by the time idle
> > workers check whether management is necessary, CPU_ONLINE already has
> > released the
Hello, Lai.
On Fri, Sep 07, 2012 at 09:53:25AM +0800, Lai Jiangshan wrote:
> > This patch fixes the bug by releasing manager_mutexes before letting
> > the rebound idle workers go. This ensures that by the time idle
> > workers check whether management is necessary, CPU_ONLINE already has
> > released
On 09/07/2012 04:08 AM, Tejun Heo wrote:
> From 985aafbf530834a9ab16348300adc7cbf35aab76 Mon Sep 17 00:00:00 2001
> From: Tejun Heo
> Date: Thu, 6 Sep 2012 12:50:41 -0700
>
> To simplify both normal and CPU hotplug paths, while CPU hotplug is in
> progress, manager_mutex is held to prevent one of the workers from
> becoming a manager and creating
From 985aafbf530834a9ab16348300adc7cbf35aab76 Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Thu, 6 Sep 2012 12:50:41 -0700
To simplify both normal and CPU hotplug paths, while CPU hotplug is in
progress, manager_mutex is held to prevent one of the workers from
becoming a manager and creating