et of "add online_movable" is here:
https://lkml.org/lkml/2012/7/4/145
The new V2 discards the MIGRATE_HOTREMOVE approach and uses a more
straightforward implementation (only 1 patch).
Lai Jiangshan (21):
page_alloc.c: don't subtract unrelated memmap from zone's present pages
In one word, we need an N_MEMORY. We just introduce it as an alias to
N_HIGH_MEMORY and fix all improper usages of N_HIGH_MEMORY in later patches.
Signed-off-by: Lai Jiangshan
Acked-by: Christoph Lameter
Acked-by: Hillf Danton
---
include/linux/nodemask.h | 1 +
1 files changed, 1 insertions(+)
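For illustration, a minimal sketch of the alias (the surrounding enum values
follow mainline include/linux/nodemask.h of that era):

enum node_states {
        N_POSSIBLE,             /* The node could become online at some point */
        N_ONLINE,               /* The node is online */
        N_NORMAL_MEMORY,        /* The node has regular memory */
#ifdef CONFIG_HIGHMEM
        N_HIGH_MEMORY,          /* The node has regular or high memory */
#else
        N_HIGH_MEMORY = N_NORMAL_MEMORY,
#endif
        N_MEMORY = N_HIGH_MEMORY,       /* the new alias: nodes with any memory */
        N_CPU,                  /* The node has one or more cpus */
        NR_NODE_STATES
};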
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Hillf Danton
---
mm/vmscan.c | 4
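As a concrete sketch of the conversion these patches perform (do_something()
is a placeholder, not from the patches), a loop that must visit every node
with memory changes from N_HIGH_MEMORY to N_MEMORY:

        int nid;

        /* before: works, but encodes the wrong intent */
        for_each_node_state(nid, N_HIGH_MEMORY)
                do_something(nid);

        /* after: explicitly "every node with any memory" */
        for_each_node_state(nid, N_MEMORY)
                do_something(nid);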
update nodemasks management for N_MEMORY
Signed-off-by: Lai Jiangshan
---
Documentation/memory-hotplug.txt | 5 +++-
include/linux/memory.h | 1 +
mm/memory_hotplug.c | 49 +
3 files changed, 48 insertions(+), 7 deletions(-)
From: Yasuaki Ishimatsu
memblock.current_limit is set directly even though memblock_set_current_limit()
is provided for this purpose. So fix it to use the helper.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
---
arch/x86/kernel/setup.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a
"memblock_find_in_range_node()"
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
---
mm/memblock.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 663b805..ce7fcb6 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
Signed-off-by: Lai Jiangshan
---
include/linux/memblock.h | 1 +
mm/memblock.c | 5 -
mm/page_alloc.c | 6 +-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 19dc455..f2977ae 100644
--- a
ories, and helps for THP.
Current constraint: only a memory block which is adjacent to ZONE_MOVABLE
can be onlined from ZONE_NORMAL to ZONE_MOVABLE.
For the opposite onlining behavior, we also introduce "online_kernel" to change
a memory block of ZONE_MOVABLE to ZONE_KERNEL when onlined.
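A sketch of the resulting sysfs usage (memoryXXX is a placeholder for a
concrete memory block):

% echo online_movable > /sys/devices/system/memory/memoryXXX/state
% echo online_kernel > /sys/devices/system/memory/memoryXXX/state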
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
---
init/main.c | 2 +-
1 files changed, 1
Signed-off-by: Lai Jiangshan
---
arch/x86/mm/init_64.c | 4 +++-
mm/page_alloc.c | 40 ++--
2 files changed, 25 insertions(+), 19 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..005f00c 100644
--- a/arch/x86/mm/init_64
2 of them).
Signed-off-by: Lai Jiangshan
---
Documentation/kernel-parameters.txt | 9 +
mm/page_alloc.c | 29 -
2 files changed, 37 insertions(+), 1 deletions(-)
diff --git a/Documentation/kernel-parameters.txt
b/Documentation/kernel-para
All the preparation is done; we can actually introduce N_MEMORY.
Also add CONFIG_MOVABLE_NODE so we can use it for a movable-dedicated node.
Signed-off-by: Lai Jiangshan
---
drivers/base/node.c | 6 ++
include/linux/nodemask.h | 4
mm/Kconfig | 8
mm/page_alloc.c
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
---
arch/x86/mm/numa.c | 8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..a86e315 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -223,9 +223,13 @@ static void _
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Christoph Lameter
---
mm/vmstat.c | 4
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Christoph Lameter
---
mm/migrate.c | 2
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Hillf Danton
---
mm/oom_kill.c | 2
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
---
mm/memcontrol.c | 18 +-
mm
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
---
kernel/kthread.c | 2 +-
1 files changed, 1
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Hillf Danton
---
drivers/base/node.c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Hillf Danton
---
fs/proc/kcore.c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
---
mm/mempolicy.c | 12 ++--
1 files
SLUB only focuses on the nodes which have normal memory, so ignore
hot-adding and hot-removing on other nodes.
Signed-off-by: Lai Jiangshan
---
mm/slub.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8c691fa..f8b137a 100644
---
So a new, proper approach is needed to do this, and the new approach
should also handle other long-lived unreclaimable memory.
The current approach of blindly subtracting the memmap size from the zone's
present pages is wrong; remove it.
Signed-off-by: Lai Jiangshan
---
mm/page_alloc.c | 20 +---
1 files c
N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.
The code here needs to handle the nodes which have memory; we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan
Acked-by: Hillf Danton
---
Documentation/cgroups
Make it more readable and easier to add new states.
Signed-off-by: Lai Jiangshan
---
drivers/base/node.c | 20 ++--
1 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index af1a177..5d7731e 100644
--- a/drivers/base
Currently memory_hotplug only manages node_states[N_HIGH_MEMORY];
it forgets to manage node_states[N_NORMAL_MEMORY]. Fix it.
Signed-off-by: Lai Jiangshan
---
Documentation/memory-hotplug.txt | 5 ++-
include/linux/memory.h | 1 +
mm/memory_hotplug.c | 94
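A minimal sketch of the missing management (the placement in the online path
is an assumption; the real patch is larger, per the diffstat above):

        /* keep N_NORMAL_MEMORY in sync as well, not just N_HIGH_MEMORY */
        node_set_state(zone_to_nid(zone), N_HIGH_MEMORY);
        if (!is_highmem(zone))
                node_set_state(zone_to_nid(zone), N_NORMAL_MEMORY);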
instruction to avoid other CPUs seeing wrong flags.
Patches 6-7: small fixes.
Lai Jiangshan (7):
wait on manager_mutex instead of rebind_hold
simple clear WORKER_REBIND
explicit way to wait for idle workers to finish
single pass rebind
ensure the wq_worker_sleeping() see the right flags
init 0
static
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 19 +++
1 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f6e4394..96485c0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1394,16 +1394,12 @@ static
wrongly do local wake up.
So we use one write/modify instruction explicitly instead.
This bug will not occur on idle workers, because they have another
WORKER_NOT_RUNNING flag.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 7 +--
1 files changed, 5 insertions(+), 2 deletions(-)
workers().
The sleeping mutex_lock(&worker->pool->manager_mutex) must be placed at the
top of busy_worker_rebind_fn(), because this busy worker thread can sleep
before WORKER_REBIND is cleared, but can't sleep after
WORKER_REBIND is cleared.
It adds a small overhead to the unli
rebind_workers() has finished up, idle_worker_rebind() can
return.
This fix has an advantage: WORKER_REBIND is not used for wait_event(),
so we can clear it in idle_worker_rebind() (next patch).
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 13 +++--
1 files changed, 3 insertio
rebind_workers() is protected by the cpu_hotplug lock,
so struct idle_rebind is also protected by it.
So we can use a compile-time-allocated idle_rebind instead
of allocating it on the stack; it makes the code cleaner.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 28
Access to idle_rebind.cnt is always protected by gcwq->lock;
there is no need to init it to 1.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ed23c9a..9f38a65 100644
WORKER_REBIND is not used for any other purpose;
idle_worker_rebind() can clear it directly.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 13 ++---
1 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5872c31..90ada8f
-up is correct
instead.
Patches 2-6 do simple cleanups.
Lai Jiangshan (6):
workqueue: async idle rebinding
workqueue: new day don't need WORKER_REBIND for busy rebinding
workqueue: remove WORKER_REBIND
workqueue: rename manager_mutex to assoc_mutex
workqueue: use __cpuinit instead of __de
/workqueue.o.hotcpu_notifier
   text    data     bss     dec     hex filename
  18513    2387    1221   22121    5669 kernel/workqueue.o.cpu_notifier
  18082    2355    1221   21658    549a kernel/workqueue.o.hotcpu_notifier
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 2 +-
1 files changed, 1
le or @idle_list, and make them aware of the exile operation
(only too_many_workers() changes as a result).
rebind_workers() becomes single-pass and doesn't release gcwq->lock in between.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 126 +++-
We use list_del_init(&worker->entry) when rebinding (exiling) idle workers
or destroying idle workers.
So we can use list_empty(&worker->entry) to know whether the worker
has been exiled or killed.
WORKER_REBIND is not needed any more; remove it to reduce the number of
worker states.
Signed-off-by: Lai Jiangshan
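A sketch of the technique (worker->entry and gcwq->lock as in workqueue.c of
that era):

        /* exile or destroy: unlink and re-initialize the list node */
        list_del_init(&worker->entry);

        /* elsewhere, under gcwq->lock: an empty entry now means
         * "exiled or being killed"; no WORKER_REBIND flag is needed */
        if (list_empty(&worker->entry))
                return;

list_del_init() leaves the entry pointing at itself, which is exactly what
list_empty() tests.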
We can't know what is being protected from the name
manager_mutex, or we may be misled by the name.
Actually, it protects the CPU association of the gcwq;
renaming it to assoc_mutex would be better.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 40 --
It makes less sense to use __devinit (the memory will be discarded
after boot when !HOTPLUG).
It is more accurate to use __cpuinit (the memory will be discarded
after boot when !HOTPLUG_CPU).
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 4 ++--
1 files changed, 2 insertions(+), 2
>lock
when it is doing rebind in rebind_workers(), so we don't need two flags;
one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 8 +---
1 files changed, 1 insertions(+), 7 deletions(-)
diff --git a/kernel/
;pool in try_to_grab_pending().
Signed-off-by: Lai Jiangshan
---
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7b91332..1a268b2 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1101,11 +1101,26 @@ static int try_to_grab_pending(struct work_struct
*work, bool is_dwork,
We must clear this WORKER_REBIND before busy_worker_rebind_fn() returns,
otherwise the worker may wrongly go on to call idle_worker_rebind(), which
may access the invalid ->idle_rebind and sleep forever on ->rebind_hold.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c
xt)
3) add a lot of comments.
4) clear WORKER_REBIND unconditionally in idle_worker_rebind()
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 146 +---
1 files changed, 47 insertions(+), 99 deletions(-)
diff --git a/kernel/workque
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 35 ++-
1 files changed, 14 insertions(+), 21 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 6b643df..4696441 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -73,11
>lock
when it is doing rebind in rebind_workers(), so we don't need two flags;
one is enough. Remove WORKER_REBIND from busy rebinding.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 19 ++-
1 files changed, 2 insertions(+), 17 deletions(-)
diff -
It makes less sense to use __devinit (the memory will be discarded
after boot when !HOTPLUG).
It is more accurate to use __cpuinit (the memory will be discarded
after boot when !HOTPLUG_CPU).
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 4 ++--
1 files changed, 2 insertions(+), 2
idle_list to ensure the local-wake-up is correct
instead.
Patches 3-7 do simple cleanups.
Patches 2-7 are ready for for-next. I have other development and cleanup for
workqueue;
should I wait for this patchset to be merged, or send them at the same time?
Lai Jiangshan (7):
workqueue: clear WORKER_REBIND
We can't know what is being protected from the name
manager_mutex, or we may be misled by the name.
Actually, it protects the CPU association of the gcwq;
renaming it to assoc_mutex would be better.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 40 --
/workqueue.o.hotcpu_notifier
   text    data     bss     dec     hex filename
  18513    2387    1221   22121    5669 kernel/workqueue.o.cpu_notifier
  18082    2355    1221   21658    549a kernel/workqueue.o.hotcpu_notifier
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 2 +-
1 files changed, 1
The argument @delayed is always false at all call sites,
so we simply remove it.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 21 -
1 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 834aa62..d15d383 100644
cwq is frozen.
Fix it by moving the work to cwq->pool before deleting it
in try_to_grab_pending(); thus the tagalong is left in
cwq->pool just as when grabbing a non-delayed work.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 26 +++---
1 files changed, 23 insertio
sibility.
If it is workqueue's responsibility, the patch needs to go to -stable.
If it is the user's responsibility, it is a nice cleanup and can go to for-next.
I prefer that it be workqueue's responsibility.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 2 +-
1 files changed, 1 inser
Using a helper instead of open-coding makes thaw_workqueues() clearer.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 26 +-
1 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cd05cf3..d0ca063 100644
is out, not just for SRCU,
> but also for RCU-bh. Also document the fact that SRCU readers are
> respected on CPUs executing in user mode, idle CPUs, and even on
> offline CPUs.
>
> Signed-off-by: Paul E. McKenney
Good. (Sorry, I'm late.)
Reviewed-by: Lai Jiangshan
On 09/17/2012 11:46 PM, Lai Jiangshan wrote:
> Patch1 fixes a newly found possible bug.
>
> Patch2 uses an async algorithm to replace the synchronous algorithm for
> rebinding idle workers.
>
> The synchronous algorithm requires 3 handshakes; it introduces much
> complexity.
>
On 09/19/2012 01:05 AM, Tejun Heo wrote:
> On Tue, Sep 18, 2012 at 04:36:53PM +0800, Lai Jiangshan wrote:
>> The whole workqueue.c keeps activate-order equals to queue_work()-order
>> in any given cwq except workqueue_set_max_active().
>>
>> If this order is not kept
On 09/19/2012 01:08 AM, Tejun Heo wrote:
> On Tue, Sep 18, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
>> On Tue, Sep 18, 2012 at 04:36:53PM +0800, Lai Jiangshan wrote:
>>> The whole workqueue.c keeps activate-order equals to queue_work()-order
>>> in any given cwq exce
>work, 0);
> }
> spin_unlock_irqrestore(&sp->queue_lock, flags);
> }
> @@ -631,7 +631,7 @@ static void srcu_reschedule(struct srcu_struct *sp)
> }
>
> if (pending)
> - queue_delayed_work(system_nrt_wq, &sp->work, SRCU_INTERVAL);
> +
Hi, tj
Thank you for adding this one.
Would you defer "workqueue: rename cpu_workqueue to pool_workqueue" a
little? I don't want to rebase my almost-ready work again (not a good
reason... but please...)
I will answer your other emails soon and send the patches.
Thanks,
Lai
On 14/02/13 11:2
6-8: ensure modification of worker->pool is done with the pool lock held
Patch 14: remove the hashtable totally
The other patches are preparation or cleanup.
Lai Jiangshan (15):
workqueue: add lock_work_pool()
workqueue: allow more work_pool id space
workqueue: rename worker->id to worker->id_in_p
The color bits are not used when a work item is off-queue, so we reuse them
for pool IDs; thus we will have more pool IDs.
Signed-off-by: Lai Jiangshan
---
include/linux/workqueue.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index
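A sketch of the data layout this gives (the constant name here is
hypothetical; mainline uses a similar WORK_OFFQ_* scheme in
include/linux/workqueue.h):

        /*
         * work->data while queued:    [ cwq pointer | color | flags ]
         * work->data while off-queue: [ pool ID             | flags ]
         *
         * Reusing the color bits lets the off-queue ID field start right
         * above the flag bits, so the ID gains the color bits' width.
         */
        id = atomic_long_read(&work->data) >> OFFQ_ID_SHIFT;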
ool,
It will still look up the worker.
But this lookup is needed in later patches.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 170 ---
1 files changed, 93 insertions(+), 77 deletions(-)
diff --git a/kernel/workqueue.
We will use worker->id as the global worker ID.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 20 +++-
kernel/workqueue_internal.h | 2 +-
2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f90d
worker_maybe_bind_and_lock() uses both @task and @current at the same time,
but they are the same (worker_maybe_bind_and_lock() can only be called by the
current worker task).
We make it use @current only.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 7 +++
1 files changed, 3
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cdd5523..ab5c61a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -544,9 +544,11 @@ static inline unsigned
check)
get_work_cwq() requires that the caller hold the pool lock and
that the work be *owned* by the pool.
Or we could provide even looser semantics,
but we don't need loose semantics in any case currently; KISS.
Signed-off-by: Lai Jiangshan
---
include/linux/workqueue.h | 3
pool->busy_list is touched every time the worker processes a work item.
If this code is moved out, we reduce these touches.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 6 --
1 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
in
Since we no longer use the hashtable, we can use a list to implement
for_each_busy_worker().
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 27 ++-
kernel/workqueue_internal.h | 9 +++--
2 files changed, 13 insertions(+), 23 deletions(-)
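A sketch of the list-based iterator (assuming the pool->busy_list and
worker->entry linkage introduced by the surrounding patches):

#define for_each_busy_worker(worker, pool)                              \
        list_for_each_entry((worker), &(pool)->busy_list, entry)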
Use already-known "cwq" instead of get_work_cwq(work) in try_to_grab_pending()
and cwq_activate_first_delayed().
It avoids unneeded calls to get_work_cwq(), which becomes not so lightweight
in later patches.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 11 +--
1 files
is de-associated from the pool.
It is done with pool->lock held in either case above.
Thus we have this semantic:
if pool->lock is held and worker->pool == pool, we can determine that
the worker is associated with the pool now.
Signed-off-by: Lai Jiangshan
---
ker
Allow us to use delayed_flags separately in different paths in later patches.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7ac6824..cdd5523 100644
--- a/kernel/workqueue.c
ument. We choose the latter one.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 18 +-
1 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b47d1af..b987195 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@
Add a new worker->id which is allocated from worker_idr. This
will be used to record the last running worker in work->data.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 28
kernel/workqueue_internal.h | 1 +
2 files changed, 29 insertions
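A sketch of the allocation (locking elided; idr_alloc() is the then-new IDR
API, and worker_idr is the patch's global IDR):

        int id;

        /* allocate the smallest free ID and bind it to this worker */
        id = idr_alloc(&worker_idr, worker, 0, 0, GFP_KERNEL);
        if (id < 0)
                return id;
        worker->id = id;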
std worker_pool, this patch solves it.
This patch slows down the very-slow-path destroy_worker(); if it is required,
we will move the synchronize_sched() out.
Signed-off-by: Lai Jiangshan
---
include/linux/workqueue.h | 20 +++---
kernel/workqueue.c | 140 ++---
When a work is dequeued via try_to_grab_pending(), its pool ID is recorded
in work->data, but this recording is useless when the work is not running.
In this patch, we only record the pool ID when the work is running.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 11 +--
1 fi
On 19/02/13 00:12, Lai Jiangshan wrote:
Core patches are patch 1, patch 9, and patch 13.
Patch 1: enhance locking
Patch 9: record worker ID in work->data instead of pool ID;
look up the worker via worker ID if off-queue
Patch 13: also look up the worker via worker ID if running && queued,
use DEFINE_STATIC_SRCU() to simplify rcutorture.c
Signed-off-by: Lai Jiangshan
---
kernel/rcutorture.c | 41 ++---
1 files changed, 6 insertions(+), 35 deletions(-)
diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c
index 25b1503..7939edf 100644
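The conversion is essentially (a sketch):

        /* before: separate declaration plus runtime initialization */
        static struct srcu_struct srcu_ctl;
        ...
        init_srcu_struct(&srcu_ctl);

        /* after: fully initialized at build time */
        DEFINE_STATIC_SRCU(srcu_ctl);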
I changed a lot in SRCU; add my name here so that anyone can blame/contact
me when needed.
Signed-off-by: Lai Jiangshan
---
include/linux/srcu.h | 2 ++
kernel/srcu.c | 2 ++
2 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/include/linux/srcu.h b/include/linux
ide a single *.c.
Signed-off-by: Lai Jiangshan
---
include/linux/srcu.h | 30 ++
1 files changed, 30 insertions(+), 0 deletions(-)
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 5cce128..f986df1 100644
--- a/include/linux/srcu.h
+++ b/include/linux/s
process_srcu() will be used in DEFINE_SRCU() (only).
Although it is exported, it is still internal to srcu.h.
Signed-off-by: Lai Jiangshan
---
include/linux/srcu.h | 2 ++
kernel/srcu.c | 6 ++
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux
These patches add a simple DEFINE_SRCU() which defines and initializes
the srcu struct at build time, and allows us to use SRCU in very early
boot.
Lai Jiangshan (4):
srcu: add my name
srcu: export process_srcu()
srcu: add DEFINE_SRCU
rcutorture: use DEFINE_STATIC_SRCU()
include/linux/srcu.h
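The shape of the macro is roughly (simplified sketch; the full per-CPU
initializer is in the patch):

#define __DEFINE_SRCU(name, is_static)                                  \
        static DEFINE_PER_CPU(struct srcu_struct_array, name##_srcu_array);\
        is_static struct srcu_struct name = __SRCU_STRUCT_INIT(name)

#define DEFINE_SRCU(name)               __DEFINE_SRCU(name, /* not static */)
#define DEFINE_STATIC_SRCU(name)        __DEFINE_SRCU(name, static)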
On 10/17/2012 10:23 AM, Linus Torvalds wrote:
> [ Architecture people, note the potential new SMP barrier! ]
>
> On Tue, Oct 16, 2012 at 4:30 PM, Mikulas Patocka wrote:
>> + /*
>> +* The lock is considered unlocked when p->locked is set to false.
>> +* Use barrier prevent re
fine
> + with this option on since they don't online memory as movable.
> +
> + Say Y here if you want to hotplug a whole node.
> + Say N here if you want kernel to use memory on all nodes evenly.
Thank you for adding the help text which should have been done
Wu
CC: Kay Sievers
CC: Greg Kroah-Hartman
CC: Xishi Qiu
CC: Mel Gorman
CC: linux-...@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux...@kvack.org
Lai Jiangshan (3):
memory_hotplug: fix missing nodemask management
slub, hotplug: ignore unrelated node's hot-adding an
suitable here.
Signed-off-by: Lai Jiangshan
---
mm/slub.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 2fdd96f..2d78639 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3577,7 +3577,7 @@ static void slab_mem_offline_callback(void *arg)
or in booting.
Since zone_start_pfn is not modified by init_currently_empty_zone(),
grow_zone_span() needs to check zone_start_pfn before updating it.
CC: Mel Gorman
Signed-off-by: Lai Jiangshan
Reported-by: Yasuaki ISIMATU
Tested-by: Wen Congyang
---
mm/memory_hotplug.c | 2 +-
mm
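A sketch of the kind of check described (the exact condition is an
assumption):

        /* don't trust zone_start_pfn on an empty (uninitialized) zone */
        if (!zone->spanned_pages || start_pfn < zone->zone_start_pfn)
                zone->zone_start_pfn = start_pfn;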
] and node_states[N_NORMAL_MEMORY]
are changed while hotplugging.
Also add @status_change_nid_normal to struct memory_notify, so that
the memory hotplug callbacks know whether node_states[N_NORMAL_MEMORY]
is changed.
Signed-off-by: Lai Jiangshan
---
Documentation/memory-hotplug.txt | 5
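The resulting notifier payload looks roughly like this (sketch; per
include/linux/memory.h of that era):

struct memory_notify {
        unsigned long start_pfn;
        unsigned long nr_pages;
        int status_change_nid_normal;   /* the new field */
        int status_change_nid;
};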
On 09/27/2012 10:32 PM, Ni zhan Chen wrote:
> On 09/27/2012 02:47 PM, Lai Jiangshan wrote:
>> Currently memory_hotplug only manages the node_states[N_HIGH_MEMORY],
>> it forgets to manage node_states[N_NORMAL_MEMORY]. It causes
>> node_states[N_NORMAL_MEMORY] become
Hi, KOSAKI
On 09/28/2012 06:30 AM, KOSAKI Motohiro wrote:
> (9/27/12 2:47 AM), Lai Jiangshan wrote:
>> The __add_zone() may call the sleepable init_currently_empty_zone()
>> to init wait_table,
>
> This doesn't explain why sleepable is critical important. I think sle
til I find something bad in SLAB.
Thanks,
Lai
On 09/28/2012 06:35 AM, Christoph wrote:
> While you are at it: Could you move the code into slab_common.c so that there
> is only one version to maintain?
>
> On Sep 27, 2012, at 17:04, KOSAKI Motohiro wrote:
>
>> (9/27/12 2:4
Hi, Chen,
On 09/27/2012 09:19 PM, Ni zhan Chen wrote:
> On 09/27/2012 02:47 PM, Lai Jiangshan wrote:
>> The __add_zone() may call the sleepable init_currently_empty_zone()
>> to init wait_table,
>>
>> But this function also modifies the zone_start_pfn without any lock.
Add CC: Tejun Heo, Peter Zijlstra.
Hi, Tejun
This is a bug whose root cause is the same as
https://bugzilla.kernel.org/show_bug.cgi?id=47301.
Acked-by: Lai Jiangshan
thanks,
Lai
On 09/27/2012 05:19 PM, Tang Chen wrote:
> 1. cmci_rediscover() is only called by the CPU_POST_DEAD event hand
Hi, Tejun
On 09/27/2012 02:38 AM, Tejun Heo wrote:
> On Thu, Sep 27, 2012 at 01:20:42AM +0800, Lai Jiangshan wrote:
>> works in system_long_wq will be running for a long time.
>> add WQ_CPU_INTENSIVE to system_long_wq to avoid these kinds of works occupying
>> the running workers which d
On 09/27/2012 02:28 AM, Tejun Heo wrote:
> On Thu, Sep 27, 2012 at 01:20:35AM +0800, Lai Jiangshan wrote:
>> is_chained_work() is too complicated. We can simply find out
>> whether the current task is a worker by PF_WQ_WORKER or wq->rescuer.
>>
>> Signed-off-by: La
On 09/27/2012 02:36 AM, Tejun Heo wrote:
> On Thu, Sep 27, 2012 at 01:20:38AM +0800, Lai Jiangshan wrote:
>> All newly created workers will enter idle soon,
>> WORKER_STARTED is not used any more; remove it.
>
> Please merge this with the previous patch.
>
OK, I will d
On 09/27/2012 02:24 AM, Tejun Heo wrote:
> On Thu, Sep 27, 2012 at 01:20:34AM +0800, Lai Jiangshan wrote:
>> There is no reason to use WORKER_PREP, remove it from rescuer.
>>
>> And there is no reason to set it so early in alloc_worker(),
>> move "worker->f
On 09/27/2012 02:07 AM, Tejun Heo wrote:
> On Thu, Sep 27, 2012 at 01:20:32AM +0800, Lai Jiangshan wrote:
>> rescuer thread must be a worker which is WORKER_NOT_RUNNING:
>> If it is *not* WORKER_NOT_RUNNING, it will increase the nr_running
>> and it disables the n
On 09/27/2012 02:34 AM, Tejun Heo wrote:
> (cc'ing Ray Jui)
>
> On Thu, Sep 27, 2012 at 01:20:36AM +0800, Lai Jiangshan wrote:
>> rescuer is NOT_RUNNING, so it makes no sense for it to wake up other workers;
>> if there are available normal workers, they are al
On Tue, Oct 24, 2017 at 9:18 AM, Li Bin wrote:
> When queue_work() is used in irq handler, there is a potential
> case that trigger NULL pointer dereference.
>
> worker_thread()
> |-spin_lock_irq()
> |-process_one_work()
> |-