On 10/29/2014 10:38 PM, Tejun Heo wrote:
> On Wed, Oct 29, 2014 at 05:26:34PM +0800, pang.xun...@zte.com.cn wrote:
>> The memset in ida_init() already handles idr, so there's some
>> redundancy in the following idr_init().
>>
>> This patch removes the memset, and clears ida->free_bitmap instead.
>>
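The redundancy is easier to see in a tiny userspace model. The struct layouts and field names below are illustrative stand-ins, not the real kernel definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model only: an ida embeds an idr plus one extra member. */
struct idr { void *top; int layers; };
struct ida { struct idr idr; void *free_bitmap; };

static void idr_init(struct idr *idp)
{
	memset(idp, 0, sizeof(*idp));
}

/* Before: memset() zeroes the whole ida, then idr_init() zeroes the
 * embedded idr a second time. */
static void ida_init_old(struct ida *ida)
{
	memset(ida, 0, sizeof(*ida));
	idr_init(&ida->idr);		/* redundant re-zeroing */
}

/* After: drop the memset; idr_init() covers the embedded idr and the
 * one remaining member is cleared explicitly. */
static void ida_init_new(struct ida *ida)
{
	idr_init(&ida->idr);
	ida->free_bitmap = NULL;
}
```

Either way every member ends up zeroed; the patched version just avoids initializing the embedded idr twice.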
in the cpu-hotplug callbacks and wq_calc_node_cpumask()
can use it instead of cpumask_of_node(node). Thus wq_calc_node_cpumask()
becomes much simpler and @cpu_going_down is gone.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 42 --
1 files c
Hi, TJ
These patches are for unbound workqueue management (hotplug).
This patchset simplifies unbound workqueue management during hotplug.
It is also a preparation patchset for later unbound workqueue management
patches.
Thanks,
Lai.
Lai Jiangshan (3):
workqueue: add
-allocation and installation are changed to be protected by
wq_pool_mutex. There is now no reason for get_online_cpus() to exist;
remove it!
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 15 ++-
1 files changed, 2 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b
for this reason, and it will be removed
in a later patch.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a217f0..9bc3a87 100644
--- a/kernel/workqueue.c
+++ b/kernel
On 10/23/2014 07:03 PM, Peter Zijlstra wrote:
> On Thu, Oct 23, 2014 at 06:14:45PM +0800, Lai Jiangshan wrote:
>>
>>>
>>> +struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long
>>> addr)
>>> +{
>>> +
ping
On 10/08/2014 11:53 AM, Lai Jiangshan wrote:
> Hi, TJ
>
> These patches are for unbound workqueue management (hotplug).
>
> This patchset simplifies the unbound workqueue management during hotplug.
> This is also a preparation patchset for later unbound workqueue ma
worker" to "need_to_create_worker" ?
>>> Then it will stop as soon as there is an idle worker thread.
>>> That is the condition that keeps maybe_create_worker() looping.
>>> ??
>>
>> Yeah, that'd be a better condition and can work out. C
On Fri, Jul 10, 2015 at 3:09 AM, Chris Mason wrote:
>
> We've started experimenting with these to cut overheads in a few
> critical places, and while we don't have numbers yet I really hope it
> won't take too long.
>
> I think the topic is really interesting and we'll be able to get numbers
> fr
On Mon, Jul 13, 2015 at 5:57 PM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 12:26:21PM -0500, Christoph Lameter wrote:
>> On Thu, 9 Jul 2015, Chris Mason wrote:
>>
>> > I think the topic is really interesting and we'll be able to get numbers
>> > from production workloads to help justify and
On Mon, Jul 13, 2015 at 5:57 PM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 12:26:21PM -0500, Christoph Lameter wrote:
>> On Thu, 9 Jul 2015, Chris Mason wrote:
>>
>> > I think the topic is really interesting and we'll be able to get numbers
>> > from production workloads to help justify and
On Thu, Aug 13, 2015 at 12:03 AM, Paul E. McKenney
wrote:
> On Wed, Aug 12, 2015 at 04:27:34PM +0200, Frederic Weisbecker wrote:
>> On Tue, Aug 11, 2015 at 08:42:58PM +0200, Luis R. Rodriguez wrote:
>> > On Tue, Aug 11, 2015 at 10:49:36AM -0700, Andy Lutomirski wrote:
>> > > This is a bit late, bu
Hi, TJ
I think we need to add might_sleep() at the top of __cancel_work_timer().
The might_sleep() in start_flush_work() doesn't cover all the
paths of __cancel_work_timer().
And it can help to narrow the area of this bug.
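The intent can be modeled in userspace. This is a sketch of the debugging idea only, not the kernel implementation; the `_model` names are invented here:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model: the real might_sleep() warns when called from
 * atomic context, even on invocations that would not actually sleep. */
static bool in_atomic_ctx;
static int splat_count;

static void might_sleep_model(void)
{
	if (in_atomic_ctx)
		splat_count++;	/* the kernel would print a backtrace here */
}

static void start_flush_work_model(void)
{
	might_sleep_model();
	/* ... flushing ... */
}

/* Annotating the entry point catches atomic-context callers on every
 * path, including early returns that never reach start_flush_work(). */
static void cancel_work_timer_model(bool early_return)
{
	might_sleep_model();
	if (early_return)
		return;		/* this path skips the check below */
	start_flush_work_model();
}
```

With the annotation only inside start_flush_work_model(), an atomic-context caller that takes the early-return path would go undetected; putting it at the top covers every path.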
Hi Sedat Dilek
[ 24.705704] irq event stamp: 19968
[ 24.705706] h
On 11/18/2014 07:55 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Nov 18, 2014 at 05:19:18PM +0800, Lai Jiangshan wrote:
>> Is it too ugly?
>
> What is "it"? The whole thing? percpu preloading? I'm just gonna
> continue assuming that you're talking
can change rcu_bh_qs() and rcu_idle/irq_enter/exit() to static inline functions
to reduce the binary size after these two patches are accepted.
Thanks,
Lai
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Lai Jiangshan (2):
record rcu_bh quiescent state in RCU_SOFTIRQ
tiny_rcu: resched
directly in call_rcu_bh().
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tiny.c | 38 +-
1 files changed, 17 insertions(+), 21 deletions(-)
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 805b6d5..f8e19ac 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu
()                                   rcu_sched_qs()
  QS, and GP and advance cb            QS, and GP and advance cb
  wake up the ksoftirqd                wake up the ksoftirqd
  set resched
  resched to ksoftirqd (or other)      resched to ksoftirqd (or other)
These two code paths are almost the same.
Signed-off-by: Lai Jiangshan
>
> Signed-off-by: Paul E. McKenney
>
Reviewed-by: Lai Jiangshan
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8749f43f3f05..fc0236992655 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -759,39 +759,71 @@ void rcu_irq_enter(void)
&g
On 12/02/2014 03:14 AM, Paul E. McKenney wrote:
> On Sun, Nov 30, 2014 at 11:02:43PM -0200, Dâniel Fraga wrote:
>> On Sun, 30 Nov 2014 16:21:19 -0800
>> Linus Torvalds wrote:
>>
>>> Maybe you'll have to turn off RCU_CPU_STALL_VERBOSE first.
>>>
>>> Although I think you should be able to just edit
On 12/03/2014 12:58 AM, Dâniel Fraga wrote:
> On Tue, 2 Dec 2014 16:40:37 +0800
> Lai Jiangshan wrote:
>
>> It is needed at least for testing.
>>
>> CONFIG_TREE_PREEMPT_RCU=y with CONFIG_PREEMPT=n is needed for testing too.
>>
>> Please enable them (
On 12/04/2014 02:02 AM, Tejun Heo wrote:
> So, something like the following. Only compile tested. I'll test it
> and post proper patches w/ due credits.
>
> Thanks.
>
> Index: work/kernel/workqueue.c
> ===
> --- work.orig/kernel/wo
On 11/18/2014 12:27 PM, NeilBrown wrote:
>
> When there is serious memory pressure, all workers in a pool could be
> blocked, and a new thread cannot be created because it requires memory
> allocation.
>
> In this situation a WQ_MEM_RECLAIM workqueue will wake up the
> rescuer thread to do some w
*/
complete(&rnp->boost_completion);
}
Just revert the patch to avoid it.
Cc: Thomas Gleixner
Cc: Steven Rostedt
Cc: Peter Zijlstra
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tree.h|5 -
kernel/rcu/tree_plugin.h |8 +---
2 files changed, 1 insertions(+), 1
On 11/14/2014 06:09 AM, Tejun Heo wrote:
> Implement set of pointers. Pointers can be added, deleted and
> iterated. It's currently implemented as a thin rbtree wrapper making
> addition and removal O(log N). A drawback is that iteration isn't RCU
> safe, which is okay for now. This will be use
Hi, TJ
The patch 4/5/6 does reduce cpu and temporary-memory usage sometimes.
But it is in a slow path, where small optimizations are commonly unwelcome.
Do I need to refactor the patches? I'm in doubt about the necessity.
Thanks,
Lai
ping
On 05/12/2015 08:32 PM, Lai Jiangshan wrote:
> Hi,
>
> This is the V2 version of the V1 patchset. But it is just the updated
> version of the patch1&2 of the V1 patchset.
>
> [1/5 V1] is split into [1/7 V2] and [2/7 V2].
> [2/5 V1] is split into [3,4,5,6,7/7 V2
On 05/18/2015 09:26 AM, Tejun Heo wrote:
> On Mon, May 18, 2015 at 08:39:21AM +0800, Lai Jiangshan wrote:
>> ping
>
> Does this reflect the comments from the previous review cycle?
>
This is the V2 version of the V1 patchset. But it is just the updated
version of the
Currently, modifications to attrs via sysfs are not fully synchronized.
So this patch separates out and refactors the locking and
ensures attrs changes are properly synchronized.
Changed from v1: just split the patch.
Cc: Tejun Heo
Lai Jiangshan (2):
workqueue: separate out and refactor the
().
The apply_wqattrs_[un]lock() helpers will also be used in a later patch for
ensuring attrs changes are properly synchronized.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 82 --
1 file changed, 49 insertions(+), 33 deletions(-)
diff --git a
results in Process B's operation being totally reverted
without any notification, which is buggy behavior. So this patch
moves wq_sysfs_prep_attrs() under the protection of wq_pool_mutex
to ensure attrs changes are properly synchronized.
Signed-off-by: Lai Jiangshan
---
kernel/workqu
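The race being fixed is a classic lost update. A sequentialized userspace model (the struct and its two fields are illustrative, not the real attrs layout) shows why the copy and the install must sit in one critical section:

```c
#include <assert.h>

/* Illustrative model of attrs as two independently-settable knobs. */
struct attrs { int nice; int cpumask; };

static struct attrs current_attrs;

/* Unsynchronized pattern: each writer snapshots, edits its copy, and
 * installs the whole copy.  With the interleaving A-copy, B-copy,
 * A-install, B-install, B's install silently reverts A's change. */
static void racy_interleaving(void)
{
	struct attrs a = current_attrs;	/* Process A: copy */
	struct attrs b = current_attrs;	/* Process B: copy */

	a.nice = -5;
	current_attrs = a;		/* A: install */
	b.cpumask = 0x3;
	current_attrs = b;		/* B: install -- loses A's nice */
}

/* With copy+install under one mutex, each writer sees the other's
 * result; modeled here by snapshotting inside the "locked" region. */
static void locked_sequence(void)
{
	struct attrs a = current_attrs;	/* A: lock, copy */
	a.nice = -5;
	current_attrs = a;		/* A: install, unlock */

	struct attrs b = current_attrs;	/* B: lock, copy (sees A's nice) */
	b.cpumask = 0x3;
	current_attrs = b;		/* B: install, unlock */
}
```

Moving wq_sysfs_prep_attrs() under wq_pool_mutex corresponds to extending the "locked" region back to cover the copy step.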
Patches 1 & 2 are simple cleanups reflecting recent changes.
Patch 3 just moves code.
Cc: Tejun Heo
Lai Jiangshan (3):
workqueue: remove the declaration of copy_workqueue_attrs()
workqueue: remove the lock from wq_sysfs_prep_attrs()
workqueue: move flush_scheduled_work() to workque
Reading to wq->unbound_attrs requires protection of either wq_pool_mutex
or wq->mutex, and wq_sysfs_prep_attrs() is called with wq_pool_mutex held,
so we don't need to grab wq->mutex here.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 4 ++--
1 file changed, 2 insertions(+
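The locking rule here follows a common kernel pattern: a field written only with both locks held may safely be read with either one. A minimal model, with flags standing in for lockdep's held-lock checks:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the rule: writers take BOTH wq_pool_mutex and wq->mutex,
 * so a reader holding EITHER one cannot race with an update.  The
 * booleans below stand in for lockdep_is_held(). */
static bool pool_mutex_held;
static bool wq_mutex_held;
static int unbound_attrs;	/* stands in for wq->unbound_attrs */

static void write_attrs(int v)
{
	assert(pool_mutex_held && wq_mutex_held);	/* writer: both */
	unbound_attrs = v;
}

static int read_attrs(void)
{
	assert(pool_mutex_held || wq_mutex_held);	/* reader: either */
	return unbound_attrs;
}
```

Since wq_sysfs_prep_attrs() runs with wq_pool_mutex held, the read side is already satisfied and taking wq->mutex as well adds nothing.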
flush_scheduled_work() is just a simple call to flush_work().
Signed-off-by: Lai Jiangshan
---
include/linux/workqueue.h | 30 +-
kernel/workqueue.c| 30 --
2 files changed, 29 insertions(+), 31 deletions(-)
diff --git a/include
This pre-declaration has been unneeded since the earlier refactoring in
commit 6ba94429c8e7 ("workqueue: Reorder sysfs code").
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ee5bf95..a04a9cd3 10
Revised as TJ suggested.
Thanks,
Lai
Cc: Tejun Heo
Lai Jiangshan (7):
workqueue: wq_pool_mutex protects the attrs-installation
workqueue: simplify wq_update_unbound_numa()
workqueue: introduce get_pwq_unlocked()
workqueue: reuse the current per-node pwq when its attrs unchanged
tmp_attrs is just a temporary attrs structure; we can use
wq_update_unbound_numa_attrs_buf for it, as wq_update_unbound_numa() does.
This change also avoids frequent alloc/free of tmp_attrs when
the low-level cpumask is being updated.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 15
this change, "ctx->dfl_pwq->refcnt++" could be dangerous
when ctx->dfl_pwq is being reused, so we use get_pwq_unlocked() instead.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/workqueue.c b
If the cpumask is changed, it is possible that only some of the per-node
pwqs are affected. This can happen when the user changes the cpumask of
a workqueue or the low-level cpumask.
So we try to reuse the current per-node pwq when its attrs are unchanged.
Signed-off-by: Lai Jiangshan
---
kernel
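The reuse decision can be sketched as follows; the names, the single-word cpumask, and the refcount handling are illustrative, not the kernel code:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of a pool_workqueue keyed by its attrs (reduced
 * here to a single cpumask word). */
struct pwq { int refcnt; unsigned cpumask; };

static struct pwq *get_pwq(struct pwq *pwq)
{
	pwq->refcnt++;
	return pwq;
}

static struct pwq *alloc_pwq(unsigned cpumask)
{
	struct pwq *pwq = malloc(sizeof(*pwq));
	if (!pwq)
		abort();
	pwq->refcnt = 1;
	pwq->cpumask = cpumask;
	return pwq;
}

/* When recomputing a node's pwq, reuse the current one if the
 * resulting attrs are unchanged; only affected nodes get a newly
 * allocated pwq. */
static struct pwq *node_pwq_update(struct pwq *cur, unsigned new_mask)
{
	if (cur && cur->cpumask == new_mask)
		return get_pwq(cur);	/* unchanged: bump ref, reuse */
	return alloc_pwq(new_mask);	/* changed: replace */
}
```

Reuse trades an allocation for a reference bump, which is where the extra get_pwq()/put_pwq() and lock traffic mentioned below come from.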
reused. Compared to the old behavior,
wq_update_unbound_numa() introduces 3 pairs of lock()/unlock()
operations and some overhead when the pwq is unchanged. Although
cpu-hotplug is a cold path, the unchanged-pwq case is likely
in the cpu-hotplug path.
Signed-off-by: Lai
tex now,
so we don't need such a comment.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 20 +---
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f02b8ad..c8b9de0 100644
--- a/kernel/workqueue.c
+++
than comments.
It is also a preparation patch for the next several patches, which read
wq->unbound_attrs, wq->numa_pwq_tbl[] and wq->dfl_pwq with
only wq_pool_mutex held.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 27 ---
1 file changed, 20 insertions(+),
ned-off-by: Lai Jiangshan
---
kernel/workqueue.c | 33 ++---
1 file changed, 22 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c8b9de0..0fa352d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1067,6 +1067,24 @@ s
On 05/12/2015 09:22 PM, Tejun Heo wrote:
> Hello, Lai.
>
> On Tue, May 12, 2015 at 10:15:28AM +0800, Lai Jiangshan wrote:
>>> I'm not sure about this. Yeah, sure, it's a bit more lines of code
>>> but at the same time this'd allow us to make the public
under wq_pool_mutex to ensure attrs changes are sequential.
This patch is also a preparation patch for the next patch, which changes
the API of apply_workqueue_attrs().
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 110 +
1 file changed, 69 inser
otected attrs-installation code is in apply_workqueue_attrs(),
so this patch touches less code than comments.
It is also a preparation patch for merging the similar code in
apply_workqueue_attrs() and wq_update_unbound_numa().
Signed-off-by: Lai Jiangshan
--
with the protection
of apply_wqattrs_lock();
This patch is also a preparation patch for the next patch, which
removes no_numa from struct workqueue_attrs and therefore
requires apply_workqueue_attrs() to take an argument for numa affinity.
Signed-off-by: Lai Jiangshan
---
include/linux/workqueu
This patchset has several cleanups to apply_workqueue_attrs(),
including enlarging the region protected by wq_pool_mutex,
merging similar code, changing the API ...
Patch 3 is not just cleanup; it changes behavior and
ensures attrs changes are sequential.
Thanks,
Lai
Cc: Tejun Heo
Lai
d code. The numa affinity is stored in
wq->numa instead.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 44 ++--
1 file changed, 14 insertions(+), 30 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b08df98..42721a2 100644
().
Changed behavior:
1) Always try to reuse the old pwq.
   (In the old code, apply_workqueue_attrs() doesn't reuse the old pwq.)
2) Any reuse of an old pwq introduces get_pwq()/put_pwq() and
   corresponding lock overhead.
Signed-off-by: Lai Jiangshan
---
k
On 05/11/2015 10:31 PM, Tejun Heo wrote:
> Hello, Lai.
Hello, TJ
>
>> * @node: the target NUMA node
>> - * @cpu_going_down: if >= 0, the CPU to consider as offline
>> - * @cpumask: outarg, the resulting cpumask
>> + * @cpu_off: if >= 0, the CPU to consider as offline
>
> @cpu_off sounds like
On 05/11/2015 10:59 PM, Tejun Heo wrote:
> On Mon, May 11, 2015 at 05:35:51PM +0800, Lai Jiangshan wrote:
>> workqueue_attrs is an internal-like structure and is exposed with
>> apply_workqueue_attrs() whose user has to investigate the structure
>> before use.
>>
>&
On 05/11/2015 10:55 PM, Tejun Heo wrote:
> Hey,
>
> Prolly a better subject is "ensure attrs changes are properly
> synchronized"
>
> On Mon, May 11, 2015 at 05:35:50PM +0800, Lai Jiangshan wrote:
>> Current modification to
queue will be reposted after this patchset is accepted.
Thanks,
Lai
Frederic Weisbecker (2):
workqueue: Reorder sysfs code
workqueue: Create low-level unbound workqueues cpumask
Lai Jiangshan (2):
workqueue: split apply_workqueue_attrs() into 3 stages
workqueue: Allow modifying low level unbou
s().
Let's move that block further down in the file, right above
alloc_workqueue_key(), which references it.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
Sig
user doesn't overlap with the low level cpumask. In this case, we can't
apply the empty cpumask to the default pwq, so we use the user-set cpumask
directly.
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kuma
n is also moved into wq_pool_mutex.
This is needed to avoid further splitting.
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Cc: Frederic Weisbecker
Signed-off-by: Lai Jiangshan
---
kern
: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 29 +++--
1 file changed, 27 insertions(+), 2
On 03/13/2015 01:42 AM, Christoph Lameter wrote:
> On Thu, 12 Mar 2015, Lai Jiangshan wrote:
>
>> The per-nodes' pwqs are mandatorily controlled by the low level cpumask,
>> while
>> the default pwq ignores the low level cpumask when (and ONLY when) the
>>
On 03/12/2015 01:00 PM, Lai Jiangshan wrote:
> Allow to modify the low-level unbound workqueues cpumask through
> sysfs. This is performed by traversing the entire workqueue list
> and calling wq_unbound_install_ctx_prepare() on the unbound workqueues
> with the low level mask pas
ed workqueue will be reposted after this patchset is accepted.
Changed from V4:
Added the workqueue_unbounds_cpumask_set() kernel API and minimally
restructured patch 4.
Thanks,
Lai
Frederic Weisbecker (2):
workqueue: Reorder sysfs code
workqueue: Create low-level unbound workqueues cpumask
Lai Jiangs
alue for the runtime; the
system manager or another subsystem that has sufficient information should
set it when needed.
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Cc: Frederic Weisbecker
Origina
s().
Let's move that block further down in the file, right above
alloc_workqueue_key(), which references it.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
Sig
n is also moved into wq_pool_mutex.
This is needed to avoid further splitting.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Cc: Frederic Weisbecker
Signed-off-by: Lai Ji
: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 29 +++--
1 file changed, 27 insertions(+), 2
On 03/14/2015 07:49 AM, Kevin Hilman wrote:
> Lai Jiangshan writes:
>
>> From: Frederic Weisbecker
>>
>> Create a cpumask that limits the affinity of all unbound workqueues.
>> This cpumask is controlled through a file at the root of the workqueue
>> sysfs
On 12/15/2014 07:14 PM, Kamezawa Hiroyuki wrote:
> Unbound wq pool's node attribute is calculated at its allocation.
> But it's now calculated based on possible cpu<->node information
> which can be wrong after cpu hotplug/unplug.
>
> If wrong pool->node is set, following allocation error will hap
On 12/15/2014 07:16 PM, Kamezawa Hiroyuki wrote:
> The percpu workqueue pools are persistent and never freed.
> But cpu<->node relationship can be changed by cpu hotplug and pool->node
> can point to an offlined node.
>
> If pool->node points to an offlined node,
> following allocation failure c
On 12/15/2014 07:18 PM, Kamezawa Hiroyuki wrote:
> Workqueue keeps cpu<->node relationship including all possible cpus.
> The original information was made at boot but it may change when
> a new node is added.
>
> Update the information if a new node is ready, using the node-hotplug callback.
>
> Sig
On 12/16/2014 03:32 PM, Kamezawa Hiroyuki wrote:
> (2014/12/16 14:30), Lai Jiangshan wrote:
>> On 12/15/2014 07:14 PM, Kamezawa Hiroyuki wrote:
>>> Unbound wq pool's node attribute is calculated at its allocation.
>>> But it's now calculated based on possible
On 12/17/2014 12:45 AM, Kamezawa Hiroyuki wrote:
> With node online/offline, cpu<->node relationship is established.
> Workqueue uses info which was established at boot time but
> may be changed by node hotplugging.
>
> Once pool->node points to a stale node, following allocation failure
> hap
On 12/12/2014 06:19 PM, Lai Jiangshan wrote:
> Yasuaki Ishimatsu hit an allocation failure bug when the numa mapping
> between CPU and node is changed. This was the last scene:
> SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
> cache: kmalloc-192, object size: 192, buff
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 53 ++-
1 files changed, 39 insertions(+), 14 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4c88b
c: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 16 +++-
1 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 29a96c3..9e35a79 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1
et is untested. It is sent out for early review.
Thanks,
Lai.
Reported-by: Yasuaki Ishimatsu
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Lai Jiangshan (5):
workqueue: fix memory leak in wq_numa_init()
workqueue: update wq_numa_poss
update the affinity in this case.
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 15 +++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/kernel/workqueue.c b/ke
wq_numa_init() will quit directly on some bonkers cases without freeing the
memory. Add the missing cleanup code.
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c |3 +++
1 files
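The fix is the usual goto-unwind pattern for early exits. A hedged userspace sketch, with invented names and two stand-in allocations, assuming the real fix follows this shape:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: every early-exit path releases what was already allocated
 * instead of quitting directly and leaking it. */
static int numa_init_model(int bonkers_topology)
{
	int *tbl = malloc(16 * sizeof(*tbl));
	int *mask = NULL;

	if (!tbl)
		return -1;		/* nothing else allocated yet */

	mask = malloc(16 * sizeof(*mask));
	if (!mask)
		goto out_free;

	if (bonkers_topology)		/* previously: a bare return == leak */
		goto out_free;

	/* ... normal initialization continues; ownership handed off ... */
	free(mask);
	free(tbl);
	return 0;

out_free:
	free(mask);			/* free(NULL) is a no-op */
	free(tbl);
	return -1;
}
```

The single labeled exit keeps the failure paths from having to remember individually what has been allocated so far.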
new pool->node of new pools is correct,
and existing wqs' affinity is fixed up by wq_update_unbound_numa()
after wq_update_numa_mapping().
Reported-by: Yasuaki Ishimatsu
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by:
On 12/13/2014 01:25 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:53PM +0800, Lai Jiangshan wrote:
>> Yasuaki Ishimatsu hit a bug when the numa mapping between CPU and node
>> is changed. And the previous patch fixed up wq_numa_possible_cpumask.
>> (See more information
On 12/13/2014 01:27 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:54PM +0800, Lai Jiangshan wrote:
>> We fixed the major cases when the numa mapping is changed.
>>
>> We still have the assumption that when the node<->cpu mapping is changed
>> the original
smp_send_reschedule+0x5d/0x60
> [ 890.156187] [] resched_curr+0xa8/0xd0
> [ 890.156187] [] check_preempt_curr+0x80/0xa0
> [ 890.156187] [] attach_task+0x48/0x50
> [ 890.156187] [] active_load_balance_cpu_stop+0x105/0x250
> [ 890.156187] [] ? set_next_entity+0x80/0x80
> [ 89
On 12/13/2014 01:18 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:52PM +0800, Lai Jiangshan wrote:
> ...
>> +static void wq_update_numa_mapping(int cpu)
>> +{
>> +int node, orig_node = NUMA_NO_NODE, new_node = cpu_to_node(cpu);
>> +
>> +
On 12/14/2014 12:35 AM, Kamezawa Hiroyuki wrote:
> remove node aware unbound pools if node goes offline.
>
> scan unbound workqueue and remove numa affine pool when
> a node goes offline.
>
> Signed-off-by: KAMEZAWA Hiroyuki
> ---
> kernel/workqueue.c | 29 +
> 1 fil
On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
> Although workqueue detects relationship between cpu<->node at boot,
> it is finally determined in cpu_up().
> This patch tries to update pool->node using online status of cpus.
>
> 1. When a node goes down, clear per-cpu pool's node attr.
> 2. Whe
On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
> (2014/12/15 11:12), Lai Jiangshan wrote:
>> On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
>>> Although workqueue detects relationship between cpu<->node at boot,
>>> it is finally determined in cpu_up().
&g
On 12/15/2014 10:55 AM, Kamezawa Hiroyuki wrote:
> (2014/12/15 11:48), Lai Jiangshan wrote:
>> On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
>>> (2014/12/15 11:12), Lai Jiangshan wrote:
>>>> On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
>>>>> Al
On 12/15/2014 10:55 AM, Kamezawa Hiroyuki wrote:
> (2014/12/15 11:48), Lai Jiangshan wrote:
>> On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
>>> (2014/12/15 11:12), Lai Jiangshan wrote:
>>>> On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
>>>>> Al
On 12/15/2014 12:04 PM, Kamezawa Hiroyuki wrote:
> (2014/12/15 12:34), Lai Jiangshan wrote:
>> On 12/15/2014 10:55 AM, Kamezawa Hiroyuki wrote:
>>> (2014/12/15 11:48), Lai Jiangshan wrote:
>>>> On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
>>>>
On 12/13/2014 01:12 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:51PM +0800, Lai Jiangshan wrote:
>> wq_numa_init() will quit directly on some bonkers cases without freeing the
>> memory. Add the missing cleanup code.
>>
>> Cc: Tejun Heo
>> Cc: Yas
Update my email address.
The old la...@cn.fujitsu.com address will stop working after Jul 10 2015.
Signed-off-by: Lai Jiangshan
Signed-off-by: Lai Jiangshan
---
MAINTAINERS | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 0e6b091..c98eea3 100644
--- a
I am one of the dedicated reviewers of workqueue.c. Now I add myself
to the MAINTAINERS entry with the updated email address.
The old la...@cn.fujitsu.com address will be retired soon.
Signed-off-by: Lai Jiangshan
Signed-off-by: Lai Jiangshan
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff
And documenting it as "/* MD: rescue worker */" might be better
than the current "/* I: rescue worker */", although ->rescuer can
be accessed without wq_mayday_lock in some code.
Reviewed-by: Lai Jiangshan
On Thu, Sep 19, 2019 at 9:43 AM Tejun Heo wrote:
>
> B
Currently PREEMPT_RCU and TREE_RCU are "contrary" configs,
in that they can't both be on. But PREEMPT_RCU is actually a kind
of TREE_RCU in the implementation. It seems appropriate
to make PREEMPT_RCU a decorative option of TREE_RCU.
Signed-off-by: Lai Jiangshan
Sig
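The idea could be sketched as a Kconfig fragment. The option bodies and defaults below are illustrative only, not the exact mainline change:

```kconfig
# Illustrative sketch: TREE_RCU is the one base implementation and
# PREEMPT_RCU becomes a modifier of it, rather than the two being
# mutually exclusive top-level choices.
config TREE_RCU
	bool
	default y if !TINY_RCU

config PREEMPT_RCU
	bool
	default y if PREEMPT
	depends on TREE_RCU
```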
On Tue, Oct 15, 2019 at 9:46 AM Paul E. McKenney wrote:
>
> On Mon, Oct 14, 2019 at 02:48:32PM -0400, Joel Fernandes wrote:
> > On Sun, Oct 13, 2019 at 12:59:57PM +, Lai Jiangshan wrote:
> > > Currently PREEMPT_RCU and TREE_RCU are "contrary" configs
> &
On 2019/10/15 10:00 AM, Paul E. McKenney wrote:
On Tue, Oct 15, 2019 at 09:50:21AM +0800, Lai Jiangshan wrote:
On Tue, Oct 15, 2019 at 9:46 AM Paul E. McKenney wrote:
On Mon, Oct 14, 2019 at 02:48:32PM -0400, Joel Fernandes wrote:
On Sun, Oct 13, 2019 at 12:59:57PM +, Lai Jiangshan
Currently PREEMPT_RCU and TREE_RCU are "contrary" configs,
in that they can't both be on. But PREEMPT_RCU is actually a kind
of TREE_RCU in the implementation. It seems appropriate
to make PREEMPT_RCU a decorative option of TREE_RCU.
Signed-off-by: Lai Jiangshan
Sig
On 2019/10/15 10:45 AM, Lai Jiangshan wrote:
On 2019/10/15 10:00 AM, Paul E. McKenney wrote:
On Tue, Oct 15, 2019 at 09:50:21AM +0800, Lai Jiangshan wrote:
On Tue, Oct 15, 2019 at 9:46 AM Paul E. McKenney
wrote:
On Mon, Oct 14, 2019 at 02:48:32PM -0400, Joel Fernandes wrote:
On Sun
All are minimal independent cleanups, except that patch 3 depends
on patch 2.
Lai Jiangshan (7):
rcu: fix incorrect conditional compilation
rcu: fix tracepoint string when RCU CPU kthread runs
rcu: trace_rcu_utilization() paired
rcu: remove the declaration of call_rcu() in tree.h
rcu
"rcu_wait" is incorrect here; use "rcu_run" instead.
Signed-off-by: Lai Jiangshan
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tree.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 278798e58698..c351fc280945
ngly used, which are always non-defined,
which makes "!defined(TINY_RCU)" always true, which means
the code block is always included. The included code block
doesn't cause any compilation error so far when CONFIG_TINY_RCU.
It is also the reason this change is not needed for stabl
The notations include "Start" and "End"; it is better
when they are paired.
Signed-off-by: Lai Jiangshan
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.