retrying. Since retrying is expected to be a rare case, the trade-off
is acceptable.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 49 ++---
1 files changed, 18 insertions(+), 31 deletions(-)
diff --git a/kernel/workqueue.c b
Substitute pwq for get_work_pwq(work) since it has already been calculated.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c820057..7fc3df3 100644
--- a/kernel
unbound workqueue cpumask
Lai Jiangshan (1):
workqueue: Allow changing attributions of ordered workqueues
kernel/workqueue.c | 1674
1 file changed, 900 insertions(+), 774 deletions(-)
On 06/23/2014 09:37 PM, Andrey Ryabinin wrote:
I'm working on address sanitizer project for kernel. Recently we started
experiments with stack instrumentation, to detect out-of-bounds
read/write bugs on stack.
Just after booting I've hit out-of-bounds read on stack in idr_for_each
(and in
idr_layer ***paa = pa[0];
+ struct idr_layer ***paa = pa[1];
I don't reject your patch; I have reviewed it.
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
The reason I'm still muttering here is that I wish for a simpler solution
to the problem. And:
1) your patch also makes use
On 06/20/2014 03:44 AM, Tejun Heo wrote:
On Tue, Jun 03, 2014 at 03:33:28PM +0800, Lai Jiangshan wrote:
When POOL_DISASSOCIATED is cleared, the running worker's local CPU should
be the same as pool->cpu without any exception even during cpu-hotplug.
This fix changes (proposition_A
On 06/18/2014 11:32 PM, Tejun Heo wrote:
On Wed, Jun 18, 2014 at 11:37:35AM +0800, Lai Jiangshan wrote:
@@ -97,7 +98,10 @@ static inline void percpu_ref_kill(struct percpu_ref
*ref)
static inline bool __pcpu_ref_alive(struct percpu_ref *ref,
unsigned __percpu
(). We
only need a data dependency barrier, and smp_load_acquire() is stronger and
heavier on some archs. Spotted by Lai Jiangshan.
Signed-off-by: Tejun Heo t...@kernel.org
Cc: Kent Overstreet k...@daterainc.com
Cc: Christoph Lameter c...@linux-foundation.org
Cc: Lai Jiangshan la
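A minimal userspace sketch of the ordering discussed above, using C11 atomics; the struct and names are illustrative stand-ins, not the percpu_ref internals. A producer initializes data and then publishes a pointer with a release store; the consumer loads the pointer with an acquire load and dereferences it. For such dependent loads a plain data dependency barrier would suffice on most architectures, while an acquire load (the analogue of smp_load_acquire()) is stronger and sometimes heavier.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hedged sketch: illustrative payload, not a real kernel structure. */
struct payload { int value; };

static _Atomic(struct payload *) published = NULL;

static void producer(struct payload *p)
{
    p->value = 42;
    /* release store pairs with the consumer's acquire load below */
    atomic_store_explicit(&published, p, memory_order_release);
}

static int consumer(void)
{
    struct payload *p =
        atomic_load_explicit(&published, memory_order_acquire);
    /* the dereference depends on the loaded pointer, which is why a
       data dependency barrier alone would also have been enough */
    return p ? p->value : -1;
}
```

Single-threaded, this just demonstrates the pairing; the ordering only matters when producer and consumer run concurrently.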
and footprint for the process_one_work()
if worker_set_flags() is really inlined.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 27 ---
1 files changed, 8 insertions(+), 19 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cd75689
!WORKER_UNBOUND since nr_running <= 1 except in only one case.
It will introduce a useless/redundant wake-up when cpu_intensive, but this
case is rare and the next patch will also remove this redundant wake-up.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |9 ++---
1 files
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 74
1 files changed, 17 insertions(+), 57 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 40b1f00..bf837b0 100644
--- a/kernel/workqueue.c
+++ b/kernel
to access the pool in the same way as the regular workers,
put_unbound_pool() will wait for it to detach and then free the pool.
So we move worker_detach_from_pool() down to make it coincide with
the regular workers.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 14
Currently get_unbound_pool() considers all node IDs probable for
the pool, and it traverses all nodes to find the node ID for the pool.
This traversal is unnecessary: the probable node ID can be obtained
from the first CPU, so we need to check only one possible node ID.
Signed-off-by: Lai Jiangshan la
. This comment is also removed in this patch
since the whole link_pwq() is protected by wq->mutex.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 10 +-
1 files changed, 1 insertions(+), 9 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
They are the same and nr_node_ids is provided by the memory subsystem.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 11 +++
1 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4e9f722..7207393 100644
After the locking was moved up to the caller of get_unbound_pool(),
the out_unlock label no longer performs any unlock operation and its name
became misleading, so we just remove this label, and the only usage site's
goto out_unlock is substituted with return pool.
Signed-off-by: Lai Jiangshan la
On 07/22/2014 11:30 PM, Tejun Heo wrote:
On Tue, Jul 22, 2014 at 01:04:02PM +0800, Lai Jiangshan wrote:
+node = cpu_to_node(cpumask_first(pool->attrs->cpumask));
Minor but maybe cpumask_any() is a better fit here?
It is OK, the results are the same. But I still think cpumask_first
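A small userspace sketch of why cpumask_first() and cpumask_any() agree here. A cpumask is modeled as a single unsigned long bitmap (assuming at most a word's worth of CPUs), and cpu_to_node() is a made-up lookup, not the real topology API. The pool's node is taken from the first CPU in its cpumask, as in the quoted line.

```c
#include <assert.h>

#define NR_CPUS 8

/* find the lowest set bit, i.e. the first CPU in the mask */
static int cpumask_first(unsigned long mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (mask & (1UL << cpu))
            return cpu;
    return NR_CPUS; /* empty mask */
}

/* illustrative topology (assumption): CPUs 0-3 on node 0, 4-7 on node 1 */
static int cpu_to_node(int cpu)
{
    return cpu / 4;
}

/* the pattern from the patch: derive the pool's node from one CPU */
static int pool_node(unsigned long cpumask)
{
    return cpu_to_node(cpumask_first(cpumask));
}
```

Since every CPU in an unbound pool's cpumask is supposed to map to the same candidate node, picking any member gives the same answer; cpumask_first() is just the deterministic choice.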
Ping.
Thanks,
Lai
On 06/26/2014 07:27 PM, Lai Jiangshan wrote:
On 06/20/2014 03:44 AM, Tejun Heo wrote:
On Tue, Jun 03, 2014 at 03:33:28PM +0800, Lai Jiangshan wrote:
When POOL_DISASSOCIATED is cleared, the running worker's local CPU should
be the same as pool->cpu without any exception even
On 07/07/2014 01:21 AM, Yasuaki Ishimatsu wrote:
When hot-adding and onlining CPU, kernel panic occurs, showing following
call trace.
BUG: unable to handle kernel paging request at 1d08
IP: [8114acfd] __alloc_pages_nodemask+0x9d/0xb10
PGD 0
Oops: [#1] SMP
...
On 07/07/2014 08:33 AM, Yasuaki Ishimatsu wrote:
(2014/07/07 9:19), Lai Jiangshan wrote:
On 07/07/2014 01:21 AM, Yasuaki Ishimatsu wrote:
When hot-adding and onlining CPU, kernel panic occurs, showing following
call trace.
BUG: unable to handle kernel paging request at 1d08
)
and patch 4 (documentation: Add pointer to percpu-ref for RCU and refcount)
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org
.
+ */
A pair of parentheses is missing here: rcu_read_unlock()
after that, please add:
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
It reminds me that I should keep up my effort to solve the deadlock
problem where rcu_read_unlock() overlaps with scheduler locks.
local_irq_save
On 07/08/2014 06:38 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
The current approach to RCU priority boosting uses an rt_mutex strictly
for its priority-boosting side effects. The rt_mutex_init_proxy_locked()
function is used by the booster to initialize
Besides patch 3, please also add my review-by for these patches:
patch 1, 2, 5, 7, 12, 13, 15, 16, 17
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
Thanks,
Lai
On 07/08/2014 06:37 AM, Paul E. McKenney wrote:
Hello!
This series provides miscellaneous fixes:
1.Document deadlock
if we only do the 2).
2) remove the del_timer_sync()s in maybe_create_worker()
Fix the problem. It is not enough if we only do the 1):
need_to_create_worker() would probably be true at the time
del_timer_sync() is called, so the timer would still be repeating.
Signed-off-by: Lai
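A single-threaded sketch of the problem described above; the names mimic the kernel timer API but everything here is a simplified stand-in. A self-rearming timer cannot be reliably stopped by del_timer_sync() alone: if the rearm condition (need_to_create_worker() in the patch) is still true when the callback runs, the timer is armed again, so the condition must be made false first.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged stand-in for a self-rearming kernel timer. */
struct self_arming_timer {
    bool pending;          /* timer is armed */
    bool rearm_condition;  /* e.g. need_to_create_worker() */
    int fired;
};

/* timer callback: re-arms itself while the condition holds */
static void timer_fn(struct self_arming_timer *t)
{
    t->pending = false;
    t->fired++;
    if (t->rearm_condition)
        t->pending = true; /* mod_timer(): timer re-arms itself */
}

/* del_timer_sync() analogue: dequeues only the current instance */
static void del_timer_sync(struct self_arming_timer *t)
{
    t->pending = false;
}
```

The demo below shows that deleting the timer does not stop it for good while the condition still holds; only making the condition false does.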
On 07/11/2014 11:03 PM, Tejun Heo wrote:
On Fri, Jul 11, 2014 at 12:01:03AM +0800, Lai Jiangshan wrote:
@@ -1887,17 +1887,11 @@ static void pool_mayday_timeout(unsigned long __pool)
* spin_lock_irq(pool->lock) which may be released and regrabbed
* multiple times. Does GFP_KERNEL
Hi, TJ,
I dropped the patch1 patch2 of the V1, only the patch3 is kept and
re-based. The new patch depends on the patch of last night:
workqueue: remove the del_timer_sync()s in maybe_create_worker().
Thanks,
Lai
Lai Jiangshan (1):
workqueue: unfold start_worker() into create_worker
the manager is the slow path.
And because of this new locking behavior, the newly created worker
may grab the lock earlier than the manager and go to process
work items. In this case, the need_to_create_worker() may be
true as expected and the manager goes to restart without complaint.
Signed-off-by: Lai
is not working.
Sorry for the incorrect V1 due to a general excuse: World-Cup-2014.
Another patch, [PATCH 1/1 V2] workqueue: unfold start_worker() into
create_worker(),
depends on this patch. That patch still coincides with this patch even
though this patch is updated.
Thanks,
Lai
Lai Jiangshan (1
it when needed. In V2, we stop the
timer only when the manager is not working.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 +++
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 338d418..9c86a64 100644
On 07/14/2014 11:33 PM, Thomas Gleixner wrote:
On Mon, 14 Jul 2014, Tejun Heo wrote:
Hello,
On Mon, Jul 14, 2014 at 04:13:21PM +0800, Lai Jiangshan wrote:
It is said in the document that the timer which is being
deleted by del_timer_sync() should not be restarted:
Synchronization rules
On 07/15/2014 05:42 AM, Thomas Gleixner wrote:
On Tue, 15 Jul 2014, Lai Jiangshan wrote:
On 07/14/2014 11:33 PM, Thomas Gleixner wrote:
On Mon, 14 Jul 2014, Tejun Heo wrote:
Hello,
On Mon, Jul 14, 2014 at 04:13:21PM +0800, Lai Jiangshan wrote:
It is said in the document that the timer
to the work being dequeued and re-queued. So we also disallow
try_to_grab_pending() from grabbing the pending state of the work in this
condition by introducing a KEEP_QUEUED flag.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 29 -
1 files changed, 24
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e83c42a..a4a5364 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1614,11 +1614,11 @@ static void
On 07/29/2014 02:55 AM, Tejun Heo wrote:
Hello, Lai.
On Sat, Jul 26, 2014 at 11:04:50AM +0800, Lai Jiangshan wrote:
There are some problems with the managers:
1) The last idle worker prefers managing to processing.
It would be better if the processing of work items came first
(creater_work, mayday_timer) at first and then
stops idle workers and idle_timer.
(1/2 patch is the 1/3 patch of the v1, so it is not resent.)
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 238 ++--
1 files changed, 63
On 07/29/2014 11:04 PM, Tejun Heo wrote:
Hello,
On Tue, Jul 29, 2014 at 05:16:07PM +0800, Lai Jiangshan wrote:
First of all, the patch is too big. This is a rather pervasive
change. Please split it up if at all possible.
+/* Start the mayday timer and the creater when needed
If I understand the semantics of cpu_stat_off correctly, please read.
cpu_stat_off = the set of CPUs such that: the cpu is online && vmstat_work is off
I think some code forgets to guarantee that each cpu in cpu_stat_off is online.
Thanks,
Lai
On 07/10/2014 10:04 PM, Christoph Lameter wrote:
+
+/*
+
On 07/11/2014 11:17 PM, Christoph Lameter wrote:
On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
Converted what? We still need to keep a cpumask around that tells us which
processor have vmstat running and which do not.
Converted to cpumask_var_t.
I mean we spent dozens emails on that...
On 07/29/2014 11:39 PM, Christoph Lameter wrote:
On Tue, 29 Jul 2014, Tejun Heo wrote:
Hmmm, well, then it's something else. Either a bug in workqueue or in
the caller. Given the track record, the latter is more likely.
e.g. it looks kinda suspicious that the work func is cleared after
On 07/30/2014 11:23 AM, Tejun Heo wrote:
Hello, Lai.
On Wed, Jul 30, 2014 at 08:32:51AM +0800, Lai Jiangshan wrote:
Why? Just sleep and retry? What's the point of requeueing?
Accepted your comments except this one which may need to discuss
for an additional round. Requeueing passes
On 07/29/2014 06:56 AM, Paul E. McKenney wrote:
+ /*
+ * Each pass through the following loop scans the list
+ * of holdout tasks, removing any that are no longer
+ * holdouts. When the list is empty, we are done.
+ */
+
On 07/15/2014 11:58 PM, Tejun Heo wrote:
Hello, Lai.
On Tue, Jul 15, 2014 at 05:30:10PM +0800, Lai Jiangshan wrote:
Thread1 expects that, after flush_delayed_work() returns, the known pending
work is guaranteed finished. But if Thread2 is scheduled a little later than
Thread1, the known
On 07/15/2014 05:42 AM, Thomas Gleixner wrote:
On Tue, 15 Jul 2014, Lai Jiangshan wrote:
On 07/14/2014 11:33 PM, Thomas Gleixner wrote:
On Mon, 14 Jul 2014, Tejun Heo wrote:
Hello,
On Mon, Jul 14, 2014 at 04:13:21PM +0800, Lai Jiangshan wrote:
It is said in the document that the timer
On 07/14/2014 12:05 PM, Lai Jiangshan wrote:
Simply unfold the code of start_worker() into create_worker() and
remove the original start_worker() and create_and_start_worker().
The only trade-off is the introduced overhead that the pool->lock
is released and re-grabbed after the newly created worker
We don't need to wake up a regular worker when nr_running==1,
so need_more_worker() is sufficient here.
And need_more_worker() gives us better readability, since the name
keep_working() implies the rescuer should keep working now, but
the rescuer is actually leaving.
Signed-off-by: Lai Jiangshan
to access the pool in the same way as the regular workers,
put_unbound_pool() will wait for it to detach and then free the pool.
So we move worker_detach_from_pool() down to make it coincide with
the regular workers.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
This old logic is introduced
worker_set_flags() doesn't necessarily wake the next worker, so the @wakeup
argument can be removed; the caller can use the following combination instead
when needed:
worker_set_flags();
if (need_more_worker(pool))
wake_up_worker(pool);
Signed-off-by: Lai Jiangshan la
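The decomposed pattern above can be sketched in userspace; the pool fields and helpers here are simplified stand-ins for the kernel structures, modeling only the accounting effect. The point is that the wakeup decision moves out of worker_set_flags() and into the caller, guarded by need_more_worker().

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged stand-in for a worker pool; not the real kernel structure. */
struct pool {
    int nr_running;       /* workers currently running work items */
    bool worklist_empty;  /* no pending work items */
    int wakeups;          /* count of wake_up_worker() calls (demo only) */
};

/* need_more_worker(): pending work but nobody running */
static bool need_more_worker(struct pool *p)
{
    return !p->worklist_empty && p->nr_running == 0;
}

static void wake_up_worker(struct pool *p)
{
    p->wakeups++;
}

/* caller-side combination that replaces the old @wakeup argument */
static void set_flags_and_maybe_wake(struct pool *p)
{
    /* worker_set_flags() would clear nr_running accounting here,
       e.g. when setting WORKER_CPU_INTENSIVE; we model only that */
    p->nr_running = 0;
    if (need_more_worker(p))
        wake_up_worker(p);
}
```

Splitting the check out lets callers that know no wakeup is needed skip it entirely, which is what makes the @wakeup parameter removable.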
!WORKER_UNBOUND since nr_running <= 1 except in only one case.
It will introduce a useless/redundant wake-up when cpu_intensive, but this
case is rare and the next patch will also remove this redundant wake-up.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 ++-
1 files
On 07/16/2014 09:29 PM, Pranith Kumar wrote:
On 07/16/2014 08:47 AM, Paul E. McKenney wrote:
On Tue, Jul 15, 2014 at 06:57:59PM -0400, Pranith Kumar wrote:
On 07/15/2014 06:53 PM, j...@joshtriplett.org wrote:
On Tue, Jul 15, 2014 at 06:31:48PM -0400, Pranith Kumar wrote:
This commit removes
On 07/17/2014 09:01 AM, Pranith Kumar wrote:
On 07/16/2014 08:55 PM, Lai Jiangshan wrote:
On 07/16/2014 09:29 PM, Pranith Kumar wrote:
On 07/16/2014 08:47 AM, Paul E. McKenney wrote:
On Tue, Jul 15, 2014 at 06:57:59PM -0400, Pranith Kumar wrote:
On 07/15/2014 06:53 PM, j...@joshtriplett.org
Hi,
I'm curious what happens with alloc_pages_node(memoryless_node).
If the memory is allocated from the most preferable node for the
@memoryless_node,
why do we need to bother with cpu_to_mem() at the caller site?
If not, why does the memory allocation subsystem refuse to find a
On 07/30/2014 10:45 PM, Christoph Lameter wrote:
On Wed, 30 Jul 2014, Lai Jiangshan wrote:
I think the bug is here: it re-queues the per_cpu(vmstat_work, cpu) whose CPU is
offline
(after vmstat_cpuup_callback(CPU_DOWN_PREPARE)). And cpu_stat_off is
accessed without
a proper lock.
Ok. I
smpboot.h doesn't need this declaration, remove it.
CC: Thomas Gleixner t...@linutronix.de
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/smpboot.h |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
Weisbecker fweis...@gmail.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
mm/swap.c | 11 ---
1 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 9e8e347..bb524ca 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -833,27 +833,24 @@ static
, it doesn't need
get_online_cpus() which is removed in the patch.
CC: Thomas Gleixner t...@linutronix.de
Cc: Rusty Russell ru...@rustcorp.com.au
Cc: Peter Zijlstra pet...@infradead.org
Cc: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
CC: sta...@kernel.org
Signed-off-by: Lai Jiangshan la
On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the
On 07/30/2014 09:56 PM, Fengguang Wu wrote:
Hi Christoph,
FYI, this commit seems to convert some kernel boot hang bug into
different BUG messages.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git
for-3.17-consistent-ops
commit 9b0c63851edaf54e909475fe2a0946f57810e98a
Author:
On 07/30/2014 10:55 PM, Christoph Lameter wrote:
On Wed, 30 Jul 2014, Fengguang Wu wrote:
FYI, this commit seems to convert some kernel boot hang bug into
different BUG messages.
Hmmm. Still a bit confused as to why these messages occur.. Does this
patch do any good?
The vmstat bug can't
On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
+ /*
+* There were callbacks, so we need to wait for an
+* RCU-tasks grace period. Start off by scanning
+* the task list for tasks that are not already
+* voluntarily blocked. Mark
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the
On 08/01/2014 12:09 AM, Chris Metcalf wrote:
On 7/31/2014 7:51 AM, Michal Hocko wrote:
On Thu 31-07-14 11:30:19, Lai Jiangshan wrote:
It is suggested that cpumask_var_t and alloc_cpumask_var() should be used
instead of struct cpumask. But I don't want to add this complexity nor
leave
On 08/04/2014 06:05 AM, Paul E. McKenney wrote:
On Sun, Aug 03, 2014 at 03:33:18PM +0200, Oleg Nesterov wrote:
On 08/02, Paul E. McKenney wrote:
On Sat, Aug 02, 2014 at 04:56:16PM +0200, Oleg Nesterov wrote:
On 07/31, Paul E. McKenney wrote:
+ rcu_read_lock();
+
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
+ rcu_read_lock();
+ for_each_process_thread(g, t) {
+ if (t != current && ACCESS_ONCE(t->on_rq) &&
+ !is_idle_task(t)) {
+ get_task_struct(t);
+
On 08/02/2014 05:54 AM, David Rientjes wrote:
On Thu, 31 Jul 2014, Lai Jiangshan wrote:
If the smpboot_register_percpu_thread() is called after
smpboot_create_threads()
but before __cpu_up(), the smpboot thread of the online-ing CPU is not
created,
and it results in a bug. So we use
On 08/04/2014 03:46 PM, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 09:28:45AM +0800, Lai Jiangshan wrote:
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
+ rcu_read_lock();
+ for_each_process_thread(g, t) {
+ if (t != current && ACCESS_ONCE(t->on_rq
that kworker_creater_thread is
created earlier than all early kworkers. Although the early kworkers do not
depend on kworker_creater_thread, this initialization order makes
the pid of kworker_creater_thread smaller than the kworkers', which
seems smoother.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel
manager is implemented inside a
worker; using a dedicated creater will make things more flexible.
So we offload the worker management out from kworker into a single
dedicated creater kthread. This is done in patch 2; patch 1 is
preparation and patch 3 is a cleanup patch.
Lai Jiangshan (3
with it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 37 +
1 files changed, 13 insertions(+), 24 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ce8e3fc..e1ab4f9 100644
--- a/kernel/workqueue.c
+++ b/kernel
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |7 ++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 370f947..1d44d8d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1708,8 +1708,13 @@ static struct
the running state of the kthread_worker
and calls cancel_kthread_work() to cancel the possible requeued work.
Both cancel_kthread_work_sync() and cancel_kthread_work() share the
code of flush_kthread_work(), which also makes the implementation simpler.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
The wait_queue_head_t done was totally unused since flush_kthread_work()
had been re-implemented, so we remove it, including the initialization
code. Some LOCKDEP code also depends on this wait_queue_head, so the
LOCKDEP code is also cleaned up.
Signed-off-by: Lai Jiangshan la
If the worker task is not idle, it may sleep on some condition at the request
of the work. Our unfriendly wakeup in insert_kthread_work() may confuse
the worker.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/kthread.c |2 +-
1 files changed, 1 insertions(+), 1 deletions
On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 04:50:44AM -0700, Paul E. McKenney wrote:
OK, I will bite...
What kinds of tasks are on a runqueue, but neither -on_cpu nor
PREEMPT_ACTIVE?
Userspace tasks,
coding alignment calculation
Lai Jiangshan (2):
workqueue: clear POOL_DISASSOCIATED in rebind_workers()
workqueue: stronger test in process_one_work()
These two are also requested to be pulled on workqueue tree.
Tejun Heo (18):
percpu: disallow archs from overriding
I don't think this one needs nested sleeps.
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index cc423a3..1ca5888 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -233,15 +233,16 @@ static ssize_t inotify_read(struct file
On 08/06/2014 05:55 AM, Paul E. McKenney wrote:
On Tue, Aug 05, 2014 at 08:47:55AM +0800, Lai Jiangshan wrote:
On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 04:50:44AM -0700, Paul E. McKenney wrote:
OK, I
On 08/06/2014 05:55 AM, Paul E. McKenney wrote:
On Tue, Aug 05, 2014 at 08:47:55AM +0800, Lai Jiangshan wrote:
On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
On Mon, Aug 04, 2014 at 04:50:44AM -0700, Paul E. McKenney wrote:
OK, I
+struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma;
+	unsigned int seq;
+
+	WARN_ON_ONCE(!srcu_read_lock_held(&vma_srcu));
+
+	do {
+		seq = read_seqbegin(&mm->mm_seq);
+		vma =
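The read_seqbegin()/retry pattern in the quoted find_vma_srcu() can be sketched in userspace; the types and helpers below are simplified stand-ins for the kernel seqlock API, with no real memory barriers. A writer makes the count odd while updating; a reader retries until it observed a stable, even count across the whole read.

```c
#include <assert.h>

/* Hedged stand-in for a kernel seqcount. */
struct seqcount {
    unsigned seq; /* odd while a writer is in progress */
};

static unsigned read_seqbegin(const struct seqcount *s)
{
    unsigned v;
    while ((v = s->seq) & 1) /* writer in progress: wait */
        ;
    return v;                /* real code adds an acquire barrier */
}

static int read_seqretry(const struct seqcount *s, unsigned start)
{
    return s->seq != start;  /* real code adds a read barrier first */
}

/* reader: retry the lookup until the snapshot is consistent */
static int read_stable(const struct seqcount *s, const int *data)
{
    unsigned seq;
    int v;
    do {
        seq = read_seqbegin(s);
        v = *data;           /* read the protected value */
    } while (read_seqretry(s, seq));
    return v;
}
```

In find_vma_srcu() the protected read is the VMA lookup itself; if the tree changed underneath (sequence moved on), the lookup is simply redone.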
On 10/22/2014 01:56 AM, Peter Zijlstra wrote:
On Tue, Oct 21, 2014 at 08:09:48PM +0300, Kirill A. Shutemov wrote:
It would be interesting to see if the patchset affects non-condended case.
Like a one-threaded workload.
It does, and not in a good way, I'll have to look at that... :/
Maybe it
On 10/23/2014 07:03 PM, Peter Zijlstra wrote:
On Thu, Oct 23, 2014 at 06:14:45PM +0800, Lai Jiangshan wrote:
+struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long
addr)
+{
+ struct vm_area_struct *vma;
+ unsigned int seq;
+
+ WARN_ON_ONCE(!srcu_read_lock_held
tags in your patch:
Reported-by: Sasha Levin sasha.le...@oracle.com
Reported-by: Jason J. Herne jjhe...@linux.vnet.ibm.com
Tested-by: Jason J. Herne jjhe...@linux.vnet.ibm.com
Acked-by: Lai Jiangshan la...@cn.fujitsu.com
Thanks,
Lai
On 06/06/2014 09:36 PM, Peter Zijlstra wrote:
On Thu, Jun 05
-hotplug callbacks and wq_calc_node_cpumask()
can use it instead of cpumask_of_node(node). Thus wq_calc_node_cpumask()
becomes much simpler and @cpu_going_down is gone.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 42 --
1 files
Hi, TJ
These patches are for unbound workqueue management (hotplug).
This patchset simplifies unbound workqueue management during hotplug.
This is also a preparation patchset for later unbound workqueue management
patches.
Thanks,
Lai.
Lai Jiangshan (3):
workqueue: add
-allocation and installation are changed to be protected by
wq_pool_mutex. Now there is no reason for get_online_cpus() to exist;
remove it!
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 15 ++-
1 files changed, 2 insertions(+), 13 deletions(-)
diff --git
for this reason, and it will be removed
in a later patch.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a217f0..9bc3a87 100644
--- a/kernel/workqueue.c
ping
On 10/08/2014 11:53 AM, Lai Jiangshan wrote:
Hi, TJ
These patches are for unbound workqueue management (hotplug).
This patchset simplifies unbound workqueue management during hotplug.
This is also a preparation patchset for later unbound workqueue management
patches.
Thanks,
Lai
On 10/29/2014 10:38 PM, Tejun Heo wrote:
On Wed, Oct 29, 2014 at 05:26:34PM +0800, pang.xun...@zte.com.cn wrote:
The memset in ida_init() already handles idr, so there's some
redundancy in the following idr_init().
This patch removes the memset, and clears ida->free_bitmap instead.
It seems incomplete if the pool_ids file doesn't include the default
pwq's pool. Add it and the result:
# cat pool_ids
0:9 1:10
default:8
rcu_read_lock_sched() is also changed to mutex_lock(wq->mutex)
for accessing the default pwq.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel
it would be better to remove it IMO.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/prio_heap.h | 58 -
lib/Makefile |2 +-
lib/prio_heap.c | 70 -
3 files changed, 1
On 11/18/2014 07:55 PM, Tejun Heo wrote:
Hello,
On Tue, Nov 18, 2014 at 05:19:18PM +0800, Lai Jiangshan wrote:
Is it too ugly?
What is it? The whole thing? percpu preloading? I'm just gonna
continue assuming that you're talking about preloading. If you think
it's ugly, please go
On 09/03/2014 11:15 PM, Peter Zijlstra wrote:
On Mon, Sep 01, 2014 at 11:04:23AM +0800, Lai Jiangshan wrote:
Hi, Peter
Could you make a patch for it, please? Jason J. Herne's test showed we
addressed the bug. But the fix is not in the kernel yet. Some new, highly
related reports have come up
,
in this case, the just-added augment code does nothing before the cancel,
since the @this node is already in the subtree in this case.
CC: Michel Lespinasse wal...@google.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/block/drbd/drbd_interval.c |4
1 files changed, 4
The comment is copied from Documentation/rbtree.txt, but this comment
is so important that it should also be in the code.
CC: Andrew Morton a...@linux-foundation.org
CC: Michel Lespinasse wal...@google.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/rbtree_augmented.h
The original code are the same as RB_DECLARE_CALLBACKS().
CC: Michel Lespinasse wal...@google.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
drivers/block/drbd/drbd_interval.c | 36 ++--
1 files changed, 2 insertions(+), 34 deletions(-)
diff --git
On 09/23/2014 10:38 PM, Tejun Heo wrote:
On Mon, Sep 22, 2014 at 04:04:37PM +0800, Lai Jiangshan wrote:
It seems incomplete if the pool_ids file doesn't include the default
pwq's pool. Add it and the result:
# cat pool_ids
0:9 1:10
default:8
Hmmm? default pwq is used only
Commit 48a7639ce80c (rcu: Make callers awaken grace-period kthread)
removed the irq_work_queue(), so TREE_RCU doesn't need
irq_work any more.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
init/Kconfig |2 --
kernel/rcu/tree.h |1 -
2 files changed, 0 insertions(+), 3
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Lai Jiangshan la...@cn.fujitsu.com
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8749f43f3f05..fc0236992655 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -759,39 +759,71 @@ void
: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Cc: Gu, Zheng guz.f...@cn.fujitsu.com
Cc: tangchen tangc...@cn.fujitsu.com
Cc: Hiroyuki KAMEZAWA kamezawa.hir...@jp.fujitsu.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 53