: change cache_is_vipt() to cache_is_vipt_nonaliasing() so that it is
self-documenting
Acked-by: Nicolas Pitre n...@linaro.org
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
Hello Nicolas.
I maintained your 'Acked-by' while updating this patch to v2.
Please let me know if there is a problem.
On Thu, Mar 07, 2013 at 07:35:51PM +0900, JoonSoo Kim wrote:
2013/3/7 Nicolas Pitre nicolas.pi...@linaro.org:
On Thu, 7 Mar 2013, Joonsoo Kim wrote:
Hello, Nicolas.
On Tue, Mar 05, 2013 at 05:36:12PM +0800, Nicolas Pitre wrote:
On Mon, 4 Mar 2013, Joonsoo Kim wrote:
With SMP
Hello, Pekka.
Could you pick up 1/3, 3/3?
These are already acked by Christoph.
2/3 has the same effect as Glauber's 'slub: correctly bootstrap boot caches',
so it should be skipped.
Thanks.
On Mon, Jan 21, 2013 at 05:01:25PM +0900, Joonsoo Kim wrote:
There is a subtle bug when calculating a number
On Mon, Feb 25, 2013 at 01:56:59PM +0900, Joonsoo Kim wrote:
On Thu, Feb 14, 2013 at 02:48:33PM +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But, there are some missing parts for this feature to work properly.
This patchset correct
Remove unused argument and make function static,
because there is no user outside of nobootmem.c
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index cdc3bab..5f0b0e1 100644
--- a/include/linux/bootmem.h
+++ b/include/linux
max_low_pfn reflects the number of _pages_ in the system,
not the maximum PFN. You can easily find that fact in init_bootmem().
So fix it.
Additionally, if 'start_pfn == end_pfn', we don't need to go further,
so change the range check.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm
Currently, we do memset() before reserving the area.
This may not cause any problem, but it is somewhat weird.
So change the execution order.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/nobootmem.c b/mm/nobootmem.c
index 589c673..f11ec1c 100644
--- a/mm/nobootmem.c
+++ b/mm
On Tue, Mar 19, 2013 at 02:16:00PM +0900, Joonsoo Kim wrote:
Remove unused argument and make function static,
because there is no user outside of nobootmem.c
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index cdc3bab
On Mon, Mar 18, 2013 at 10:53:04PM -0700, Yinghai Lu wrote:
On Mon, Mar 18, 2013 at 10:16 PM, Joonsoo Kim iamjoonsoo@lge.com wrote:
Currently, we do memset() before reserving the area.
This may not cause any problem, but it is somewhat weird.
So change the execution order.
Signed-off
On Mon, Mar 18, 2013 at 10:51:43PM -0700, Yinghai Lu wrote:
On Mon, Mar 18, 2013 at 10:16 PM, Joonsoo Kim iamjoonsoo@lge.com wrote:
Remove unused argument and make function static,
because there is no user outside of nobootmem.c
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
On Mon, Mar 18, 2013 at 10:47:41PM -0700, Yinghai Lu wrote:
On Mon, Mar 18, 2013 at 10:15 PM, Joonsoo Kim iamjoonsoo@lge.com wrote:
max_low_pfn reflects the number of _pages_ in the system,
not the maximum PFN. You can easily find that fact in init_bootmem().
So fix it.
I'm confused
On Tue, Mar 19, 2013 at 03:25:22PM +0900, Joonsoo Kim wrote:
On Mon, Mar 18, 2013 at 10:47:41PM -0700, Yinghai Lu wrote:
On Mon, Mar 18, 2013 at 10:15 PM, Joonsoo Kim iamjoonsoo@lge.com
wrote:
max_low_pfn reflects the number of _pages_ in the system,
not the maximum PFN. You can
On Tue, Mar 19, 2013 at 12:35:45AM -0700, Yinghai Lu wrote:
Can you check why sparc does not need to change the interface when converting
to memblock to replace bootmem?
Sure.
According to my understanding of the sparc32 code (arch/sparc/mm/init_32.c),
they already use max_low_pfn as the maximum PFN
Hello, Peter.
On Tue, Mar 19, 2013 at 03:02:21PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
After commit 88b8dac0, dst-cpu can be changed in load_balance(),
then we can't know cpu_idle_type of dst-cpu when load_balance()
returns positive. So, add
On Tue, Mar 19, 2013 at 03:20:57PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group,
regardless of idle type. When we do NEWLY_IDLE balancing, we should not
consider it, because
On Tue, Mar 19, 2013 at 03:30:15PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
Some validation for task moving is performed in move_tasks() and
move_one_task(). We can move this code to can_migrate_task(),
which already exists for this purpose
On Tue, Mar 19, 2013 at 04:01:01PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
This name doesn't convey any specific meaning.
So rename it to imply its purpose.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/core.c
On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But, in that, there is no code to prevent re-selecting dst-cpu.
So, the same dst-cpu can be selected
On Tue, Mar 19, 2013 at 04:21:23PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group.
So, now, when we redo in load_balance(), we should reset some fields of
lb_env to ensure
2013/3/20 Peter Zijlstra pet...@infradead.org:
On Wed, 2013-03-20 at 16:33 +0900, Joonsoo Kim wrote:
Right, so I'm not so taken with this one. The whole load stuff really
is a balance heuristic that's part of move_tasks(), move_one_task()
really doesn't care about that.
So why did you
2013/3/20 Peter Zijlstra pet...@infradead.org:
On Wed, 2013-03-20 at 16:43 +0900, Joonsoo Kim wrote:
On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
Commit 88b8dac0 makes load_balance() consider other cpus in its group
2013/3/19 Tejun Heo t...@kernel.org:
On Wed, Mar 13, 2013 at 07:57:18PM -0700, Tejun Heo wrote:
and available in the following git branch.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git review-finer-locking
Applied to wq/for-3.10.
Hello, Tejun.
I know I am late, but, please give
2013/3/20 Tejun Heo t...@kernel.org:
Unbound workqueues are going to be NUMA-affine. Add wq_numa_tbl_len
and wq_numa_possible_cpumask[] in preparation. The former is the
highest NUMA node ID + 1 and the latter is masks of possibles CPUs for
each NUMA node.
It is better to move this code to
.
This patchset is based on v3.9-rc4.
Thanks.
Joonsoo Kim (6):
ARM, TCM: initialize TCM in paging_init(), instead of setup_arch()
ARM, crashkernel: use ___alloc_bootmem_node_nopanic() for reserving
memory
ARM, crashkernel: correct total_mem size in reserve_crashkernel()
ARM, mm: don't do
arm_bootmem_init() initializes a bitmap for bootmem and
it is not needed for CONFIG_NO_BOOTMEM.
So skip it when CONFIG_NO_BOOTMEM is set.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad722f1..049414a 100644
--- a/arch/arm/mm/init.c
+++ b
-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index b3990a3..99ffe87 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -674,15 +674,20 @@ static void __init reserve_crashkernel(void)
{
unsigned long long
.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 3f6cbb2..b3990a3 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -56,7 +56,6 @@
#include <asm/virt.h>
#include "atags.h"
-#include "tcm.h"
#if defined
There are some platforms which have highmem, so this equation
doesn't represent the total_mem size properly.
In addition, max_low_pfn's meaning differs on other architectures and
it is scheduled to be changed, so remove the code related to max_low_pfn.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
lowmem pfn,
so this patch may not harm anything.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 049414a..873f4ca 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -423,12 +423,10 @@ void __init bootmem_init(void
, it actually gives us a PAGE_SIZE area.
nobootmem manages memory in byte units, so there is no waste.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 13b7394..8b73417 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -58,6 +58,7 @@ config ARM
to evaluate load value in can_migrate_task()
[5/6]: rename load_balance_tmpmask to load_balance_mask
[6/6]: don't use one more cpumask; use env's cpus to prevent re-select
Joonsoo Kim (6):
sched: change position of resched_cpu() in load_balance()
sched: explicitly cpu_idle_type checking
cur_ld_moved is reset if env.flags hits LBF_NEED_BREAK.
So, there is a possibility that we miss doing resched_cpu().
Correct it by moving resched_cpu()
before the LBF_NEED_BREAK check.
Acked-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
...@linux.vnet.ibm.com
Acked-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9d693d0..3f8c4f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5007,8 +5007,17 @@ static int load_balance(int
After commit 88b8dac0, dst-cpu can be changed in load_balance(),
then we can't know the cpu_idle_type of dst-cpu when load_balance()
returns positive. So, add explicit cpu_idle_type checking.
Cc: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git
to re-select dst_cpu via
env's cpus, so now env's cpus is a candidate not only for src_cpus
but also for dst_cpus.
Cc: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3f09f4..6f238d2 100644
This name doesn't convey any specific meaning.
So rename it to imply its purpose.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624..07b4178 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6865,7 +6865,7
.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f8c4f2..d3c6011 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3874,10 +3874,14 @@ int can_migrate_task(struct task_struct *p, struct
lb_env *env)
int tsk_cache_hot
On Mon, Mar 25, 2013 at 02:32:35PM -0400, Steven Rostedt wrote:
On Mon, 2013-03-25 at 18:27 +, Christoph Lameter wrote:
On Mon, 25 Mar 2013, Steven Rostedt wrote:
If this makes it more deterministic, and lower worse case latencies,
then it's definitely worth the price.
Yes
On Tue, Mar 26, 2013 at 11:30:32PM -0400, Steven Rostedt wrote:
On Wed, 2013-03-27 at 11:59 +0900, Joonsoo Kim wrote:
How about using spin_try_lock() in unfreeze_partials() and
using spin_lock_contented() in get_partial_node() to reduce latency?
IMHO, this doesn't make code more
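The trylock idea being discussed here can be sketched in plain userspace C. Note this is only an illustration of the shape of the approach, not the slub code: `try_unfreeze` and `node_lock` are made-up names, and an atomic flag stands in for the kernel spinlock.

```c
#include <stdatomic.h>
#include <assert.h>

/* Illustrative only: an atomic flag stands in for the node's list lock.
 * The point is the trylock shape: if the lock is contended, give up
 * immediately instead of spinning, trading completeness for bounded
 * latency. */
static atomic_flag node_lock = ATOMIC_FLAG_INIT;

static int try_unfreeze(void)
{
    if (atomic_flag_test_and_set(&node_lock))
        return 0;               /* contended: skip this round, retry later */
    /* ... move partial slabs back to the per-node list here ... */
    atomic_flag_clear(&node_lock);
    return 1;                   /* lock was free: work done */
}
```

The caller treats a 0 return as "nothing happened this time" and relies on a later invocation to make progress, which is why this trades determinism of completion for lower worst-case latency.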
Remove one division operation in find_busiest_queue().
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6f238d2..1d8774f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4911,7 +4911,7 @@ static struct rq
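The general trick for removing a division in a comparison like this is cross-multiplication: two non-negative ratios can be compared without dividing, provided the products cannot overflow. A hedged sketch with illustrative names (not the actual find_busiest_queue() variables):

```c
#include <assert.h>

/* Compare load_a/cap_a > load_b/cap_b without dividing:
 * for positive capacities this is equivalent to
 * load_a * cap_b > load_b * cap_a. */
static int ratio_greater(unsigned long long load_a, unsigned long long cap_a,
                         unsigned long long load_b, unsigned long long cap_b)
{
    return load_a * cap_b > load_b * cap_a;
}
```

Besides saving a divide, this form avoids the truncation error of integer division when the two ratios are close.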
-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 204a9a9..e232421 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -631,23 +631,20 @@ static u64 __sched_period(unsigned long nr_running)
*/
static u64 sched_slice(struct cfs_rq
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95ec757..204a9a9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4175,36 +4175,6 @@ static unsigned long task_h_load(struct task_struct *p)
/** Helpers for find_busiest_group
with nice -20 is sysctl_sched_min_granularity * 10 * (88761 / 97977),
that is, approximately, sysctl_sched_min_granularity * 9. This aspect
can be much larger if there are more tasks with nice 0.
So we should limit this possible weird situation.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff
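As a quick sanity check on the arithmetic quoted above: taking the constants at face value, 10 * (88761 / 97977) does come out as 9 in integer arithmetic.

```c
#include <assert.h>

/* Verify the factor claimed above: 10 * 88761 / 97977.
 * 887610 / 97977 = 9 remainder 5817, so the integer factor is 9. */
static unsigned long long slice_factor(void)
{
    return (10ULL * 88761ULL) / 97977ULL;
}
```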
   text    data     bss     dec     hex filename
  34243    1136     116   35495    8aa7 kernel/sched/fair.o
In addition, rename @balance to @should_balance in order to represent
its purpose more clearly.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
sched_slice()
Feel free to give a comment for this patchset.
It's based on v3.9-rc4 and on top of my previous patchset. But, perhaps,
it may not really depend on my previous patchset. :)
https://lkml.org/lkml/2013/3/26/28
[PATCH v2 0/6] correct load_balance()
Thanks.
Joonsoo Kim (5):
sched: remove one
From: Joonsoo Kim js1...@gmail.com
Although our intention is to unexport the internal structure entirely,
there is one exception for kexec. kexec dumps the address of vmlist
and makedumpfile uses this information.
We are about to remove vmlist, so another way to retrieve information
of vmalloc
.
Changes from v1
5/8: skip areas for lazy_free
6/8: skip areas for lazy_free
7/8: export vmap_area_list for kexec, instead of vmlist
Joonsoo Kim (8):
mm, vmalloc: change iterating a vmlist to find_vm_area()
mm, vmalloc: move get_vmalloc_info() to vmalloc.c
mm, vmalloc: protect va->vm
From: Joonsoo Kim js1...@gmail.com
This patch is a preparation step for removing vmlist entirely.
For this purpose, we change code that iterates vmlist to iterate
vmap_area_list instead. It is a somewhat trivial change, but one thing
should be noted.
vmlist lacks information about some
From: Joonsoo Kim js1...@gmail.com
Now, when we hold a vmap_area_lock, va->vm can't be discarded. So we can
safely access va->vm when iterating a vmap_area_list while holding
vmap_area_lock. With this property, change the vmlist-iterating code in
vread/vwrite() to iterate vmap_area_list
From: Joonsoo Kim js1...@gmail.com
Inserting and removing an entry in vmlist takes linear time, so
it is inefficient. The following patches will try to remove vmlist entirely.
This patch is a preparation step for that.
For removing vmlist, code that iterates vmlist should be changed to iterate
From: Joonsoo Kim js1...@gmail.com
Now, there is no need to maintain vmlist after initializing vmalloc.
So remove the related code and data structure.
Signed-off-by: Joonsoo Kim js1...@gmail.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7e63984
From: Joonsoo Kim js1...@gmail.com
This patch is a preparation step for removing vmlist entirely.
For this purpose, we change code that iterates vmlist to iterate
vmap_area_list instead. It is a somewhat trivial change, but one thing
should be noted.
Using vmap_area_list in vmallocinfo
From: Joonsoo Kim js1...@gmail.com
The purpose of iterating a vmlist is to find the vm area with a specific
virtual address. find_vm_area() is provided for this purpose
and is more efficient, because it uses an rbtree.
So change it.
Cc: Chris Metcalf cmetc...@tilera.com
Cc: Guan Xuetao g
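The efficiency argument here is that an address lookup over an ordered index is O(log n), while walking vmlist is O(n). A minimal sketch of the idea, using a sorted array with binary search as a stand-in for the kernel's rbtree (struct and field names are illustrative, not the kernel's):

```c
#include <stddef.h>

struct area { unsigned long addr; unsigned long size; };

/* Find the area containing addr in O(log n), assuming the array is
 * sorted by start address and areas do not overlap. */
static struct area *find_area(struct area *sorted, size_t n, unsigned long addr)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (addr < sorted[mid].addr)
            hi = mid;                         /* addr is before this area  */
        else if (addr >= sorted[mid].addr + sorted[mid].size)
            lo = mid + 1;                     /* addr is after this area   */
        else
            return &sorted[mid];              /* addr falls inside it      */
    }
    return NULL;    /* no area covers addr */
}
```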
From: Joonsoo Kim js1...@gmail.com
Now get_vmalloc_info() is in fs/proc/mmu.c. There is no reason
that this code must be here; its implementation needs vmlist_lock
and it iterates vmlist, which may be an internal data structure for vmalloc.
It is preferable that vmlist_lock and vmlist are only
Hello, Hugh.
On Thu, Mar 07, 2013 at 06:01:26PM -0800, Hugh Dickins wrote:
On Fri, 8 Mar 2013, Joonsoo Kim wrote:
On Thu, Mar 07, 2013 at 10:54:15AM -0800, Hugh Dickins wrote:
On Thu, 7 Mar 2013, Joonsoo Kim wrote:
When we found that the flag has a bit of PAGE_FLAGS_CHECK_AT_PREP
.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c7e3759..b7711be 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page,
unsigned long
With SMP and kmap_high_get() enabled, users of kmap_atomic() are
sequentially ordered, because kmap_high_get() uses the global kmap_lock().
This is not a welcome situation, so turn off this optimization for SMP.
Cc: Nicolas Pitre n...@linaro.org
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff
Hello, Nicolas.
On Tue, Mar 05, 2013 at 05:36:12PM +0800, Nicolas Pitre wrote:
On Mon, 4 Mar 2013, Joonsoo Kim wrote:
With SMP and enabling kmap_high_get(), it makes users of kmap_atomic()
sequential ordered, because kmap_high_get() use global kmap_lock().
It is not welcome situation, so
When we find that the flags have a bit of PAGE_FLAGS_CHECK_AT_PREP set,
we reset the flags. If we always reset the flags, we can remove one
branch operation. So remove the check.
Cc: Hugh Dickins hu...@google.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
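The branch removal can be illustrated in isolation: clearing the check bits unconditionally produces exactly the same flags value as testing first, so the test buys nothing. The mask value below is illustrative, not the real PAGE_FLAGS_CHECK_AT_PREP:

```c
#include <assert.h>

/* Illustrative mask, not the kernel's PAGE_FLAGS_CHECK_AT_PREP. */
#define CHECK_AT_PREP 0x0fUL

/* Before: test, then clear (one extra branch). */
static unsigned long prep_branchy(unsigned long flags)
{
    if (flags & CHECK_AT_PREP)
        flags &= ~CHECK_AT_PREP;
    return flags;
}

/* After: always clear; the result is identical for every input. */
static unsigned long prep_branchless(unsigned long flags)
{
    return flags & ~CHECK_AT_PREP;
}
```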
2013/3/7 Nicolas Pitre nicolas.pi...@linaro.org:
On Thu, 7 Mar 2013, Joonsoo Kim wrote:
Hello, Nicolas.
On Tue, Mar 05, 2013 at 05:36:12PM +0800, Nicolas Pitre wrote:
On Mon, 4 Mar 2013, Joonsoo Kim wrote:
With SMP and enabling kmap_high_get(), it makes users of kmap_atomic
Hello, Hugh.
On Thu, Mar 07, 2013 at 10:54:15AM -0800, Hugh Dickins wrote:
On Thu, 7 Mar 2013, Joonsoo Kim wrote:
When we found that the flag has a bit of PAGE_FLAGS_CHECK_AT_PREP,
we reset the flag. If we always reset the flag, we can reduce one
branch operation. So remove it.
Cc
Hello, Russell.
On Thu, Mar 07, 2013 at 01:26:23PM +, Russell King - ARM Linux wrote:
On Mon, Mar 04, 2013 at 01:50:09PM +0900, Joonsoo Kim wrote:
In kmap_atomic(), kmap_high_get() is invoked to check for an already
mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
we
ccr...@android.com
CC: Arve Hjønnevåg a...@android.com
CC: Dima Zavin d...@android.com
CC: Robert Love rl...@google.com
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index 634b9ae..2fde9df 100644
--- a/drivers/staging
Hello, Dan.
2012/12/2 Dan Carpenter dan.carpen...@oracle.com:
On Sat, Dec 01, 2012 at 02:45:57AM +0900, Joonsoo Kim wrote:
@@ -614,21 +616,35 @@ static int ashmem_pin_unpin(struct ashmem_area *asma,
unsigned long cmd,
pgstart = pin.offset / PAGE_SIZE;
pgend = pgstart + (pin.len
2012/12/3 Dan Carpenter dan.carpen...@oracle.com:
On Mon, Dec 03, 2012 at 09:09:59AM +0900, JoonSoo Kim wrote:
Hello, Dan.
2012/12/2 Dan Carpenter dan.carpen...@oracle.com:
On Sat, Dec 01, 2012 at 02:45:57AM +0900, Joonsoo Kim wrote:
@@ -614,21 +616,35 @@ static int ashmem_pin_unpin(struct
cscope O=. SRCARCH=arm SUBARCH=xxx
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 79fdafb..a400c88 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -48,13 +48,14 @@ find_arch_sources()
for i in $archincludedir; do
prune
the kernel.
Signed-off-by: Joonsoo Kim js1...@gmail.com
diff --git a/scripts/tags.sh b/scripts/tags.sh
index a400c88..ef9668c 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -96,6 +96,29 @@ all_sources()
find_other_sources '*.[chS]'
}
+all_compiled_sources()
+{
+ for i
cscope O=. SRCARCH=arm SUBARCH=xxx
Signed-off-by: Joonsoo Kim js1...@gmail.com
---
v2: change bash specific '[[]]' to 'case in' statement.
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 79fdafb..38483f4 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -48,13 +48,14 @@ find_arch_sources
the kernel.
Signed-off-by: Joonsoo Kim js1...@gmail.com
---
v2: change bash specific '[[]]' to 'case in' statement.
use COMPILED_SOURCE env var, instead of abusing SUBARCH
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 38483f4..9c02921 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
2012/12/4 Michal Marek mma...@suse.cz:
On 3.12.2012 17:22, Joonsoo Kim wrote:
We are usually interested in compiled files only,
because they are strongly related to an individual's work.
The current tags.sh can't select compiled files, so add support for it.
We can use this functionality like below.
make
Hello, Russell.
On Wed, Feb 06, 2013 at 04:33:55PM +, Russell King - ARM Linux wrote:
On Wed, Feb 06, 2013 at 10:33:53AM +0100, Linus Walleij wrote:
On Wed, Feb 6, 2013 at 6:21 AM, Joonsoo Kim iamjoonsoo@lge.com wrote:
If we want to load epoch_cyc and epoch_ns atomically,
we
2013/2/9 Nicolas Pitre nicolas.pi...@linaro.org:
On Fri, 8 Feb 2013, Russell King - ARM Linux wrote:
On Fri, Feb 08, 2013 at 03:51:25PM +0900, Joonsoo Kim wrote:
I try to put it into patch tracker, but I fail to put it.
I use following command.
git send-email --to patc
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre n...@linaro.org
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..904c15e 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,70 @@
#include <asm/mach/pci.h>
#include "mm.h"
+
+LIST_HEAD(static_vmlist);
+
+static struct static_vm
according to [2/3] change
Rebased on v3.8-rc5
v2-v3:
coverletter: refer a link related to this work
[2/3]: drop @flags of find_static_vm_vaddr
Rebased on v3.8-rc4
v1-v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim (3):
ARM: vmregion: remove vmregion code
.
With it, we don't need to iterate all mapped areas. Instead, we just
iterate the static mapped areas. This helps reduce the overhead of finding
a matched area. And the architecture dependency on the vmalloc layer is
removed, which will help the maintainability of the vmalloc layer.
Signed-off-by: Joonsoo Kim
Hello, Nicolas.
On Mon, Feb 04, 2013 at 11:44:16PM -0500, Nicolas Pitre wrote:
On Tue, 5 Feb 2013, Joonsoo Kim wrote:
A static mapped area is ARM-specific, so it is better not to use
generic vmalloc data structure, that is, vmlist and vmlist_lock
for managing static mapped area
Hello, Rob.
On Tue, Feb 05, 2013 at 01:12:51PM -0600, Rob Herring wrote:
On 02/05/2013 12:13 PM, Nicolas Pitre wrote:
On Tue, 5 Feb 2013, Rob Herring wrote:
On 02/04/2013 10:44 PM, Nicolas Pitre wrote:
On Tue, 5 Feb 2013, Joonsoo Kim wrote:
A static mapped area is ARM-specific, so
Hello, Santosh.
On Tue, Feb 05, 2013 at 02:22:39PM +0530, Santosh Shilimkar wrote:
On Tuesday 05 February 2013 06:01 AM, Joonsoo Kim wrote:
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre n...@linaro.org
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff
Hello, Santosh.
On Tue, Feb 05, 2013 at 02:32:06PM +0530, Santosh Shilimkar wrote:
On Tuesday 05 February 2013 06:01 AM, Joonsoo Kim wrote:
In current implementation, we used ARM-specific flag, that is,
VM_ARM_STATIC_MAPPING, for distinguishing ARM specific static mapped area.
The purpose
On Wed, Feb 06, 2013 at 11:07:07AM +0900, Joonsoo Kim wrote:
Hello, Rob.
On Tue, Feb 05, 2013 at 01:12:51PM -0600, Rob Herring wrote:
On 02/05/2013 12:13 PM, Nicolas Pitre wrote:
On Tue, 5 Feb 2013, Rob Herring wrote:
On 02/04/2013 10:44 PM, Nicolas Pitre wrote:
On Tue, 5 Feb
in the example case are not.
So, change the updating sequence to correct this problem.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/kernel/sched_clock.c b/arch/arm/kernel/sched_clock.c
index fc6692e..bd6f56b 100644
--- a/arch/arm/kernel/sched_clock.c
+++ b/arch/arm
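The update ordering discussed in this thread follows the epoch_cyc_copy scheme: the writer touches epoch_cyc_copy first, so a concurrent reader can detect an in-progress update and retry. A single-threaded sketch of the shape (the real code in arch/arm/kernel/sched_clock.c differs in detail and needs memory barriers at the marked points):

```c
/* Writer/reader pair sketch. In the kernel these run concurrently and
 * need barriers where marked; here we only show the ordering. */
static unsigned long long epoch_cyc, epoch_ns, epoch_cyc_copy;

static void update_epoch(unsigned long long cyc, unsigned long long ns)
{
    epoch_cyc_copy = cyc;   /* step 1: signal "update in progress"     */
    /* smp_wmb() here */
    epoch_ns = ns;          /* step 2: publish the new nanosecond base */
    epoch_cyc = cyc;        /* step 3: cyc == cyc_copy again: done     */
}

static unsigned long long read_epoch_ns(void)
{
    unsigned long long cyc, ns;

    do {
        cyc = epoch_cyc;
        ns = epoch_ns;
        /* smp_rmb() here */
    } while (cyc != epoch_cyc_copy);    /* retry if an update raced us */

    return ns;
}
```

If the writer updated epoch_cyc first instead, a reader could observe a new epoch_cyc paired with a stale epoch_ns without any way to detect the mismatch, which is the problem the reordering fixes.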
v1-v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim (3):
ARM: vmregion: remove vmregion code entirely
ARM: ioremap: introduce an infrastructure for static mapped area
ARM: mm: use static_vm for managing static mapped areas
arch/arm/mm/Makefile |2 +-
arch
-by: Nicolas Pitre n...@linaro.org
Tested-by: Santosh Shilimkar santosh.shilim...@ti.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..904c15e 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,70
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre n...@linaro.org
Tested-by: Santosh Shilimkar santosh.shilim...@ti.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm
...@linaro.org
Acked-by: Rob Herring rob.herr...@calxeda.com
Tested-by: Santosh Shilimkar santosh.shilim...@ti.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 904c15e..04d9006 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm
Hello, Linus.
2013/2/6 Linus Walleij linus.wall...@linaro.org:
On Wed, Feb 6, 2013 at 6:21 AM, Joonsoo Kim iamjoonsoo@lge.com wrote:
If we want to load epoch_cyc and epoch_ns atomically,
we should update epoch_cyc_copy first of all.
This notifies the reader that an update is in progress.
If you
Hello, Nicolas.
On Tue, Jan 29, 2013 at 07:05:32PM -0500, Nicolas Pitre wrote:
On Thu, 24 Jan 2013, Joonsoo Kim wrote:
From: Joonsoo Kim js1...@gmail.com
In current implementation, we used ARM-specific flag, that is,
VM_ARM_STATIC_MAPPING, for distinguishing ARM specific static mapped
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre n...@linaro.org
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7
-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..ceb34ae 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,78 @@
#include <asm/mach/pci.h>
#include "mm.h"
+
+LIST_HEAD(static_vmlist);
+static DEFINE_RWLOCK
Modify static_vm's flags bits
[3/3]: Rework according to [2/3] change
Rebased on v3.8-rc5
v2-v3:
coverletter: refer a link related to this work
[2/3]: drop @flags of find_static_vm_vaddr
Rebased on v3.8-rc4
v1-v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim
.
With it, we don't need to iterate all mapped areas. Instead, we just
iterate the static mapped areas. This helps reduce the overhead of finding
a matched area. And the architecture dependency on the vmalloc layer is
removed, which will help the maintainability of the vmalloc layer.
Signed-off-by: Joonsoo Kim
2013/2/1 Nicolas Pitre nicolas.pi...@linaro.org:
On Thu, 31 Jan 2013, Joonsoo Kim wrote:
A static mapped area is ARM-specific, so it is better not to use
generic vmalloc data structure, that is, vmlist and vmlist_lock
for managing static mapped area. And it causes some needless overhead
Hello, Nicolas.
2013/2/1 Nicolas Pitre nicolas.pi...@linaro.org:
On Thu, 31 Jan 2013, Joonsoo Kim wrote:
In current implementation, we used ARM-specific flag, that is,
VM_ARM_STATIC_MAPPING, for distinguishing ARM specific static mapped area.
The purpose of static mapped area is to re-use
Hello, Bjorn.
On Thu, Jan 24, 2013 at 10:45:13AM -0700, Bjorn Helgaas wrote:
On Fri, Dec 28, 2012 at 6:50 AM, Joonsoo Kim js1...@gmail.com wrote:
During the early boot phase, the PCI bus subsystem is not yet initialized.
If a panic occurs in the early boot phase and panic_timeout is set,
the code flow
Greg for driver core]
On Fri, Jan 25, 2013 at 10:13:03AM +0900, Joonsoo Kim wrote:
Hello, Bjorn.
On Thu, Jan 24, 2013 at 10:45:13AM -0700, Bjorn Helgaas wrote:
On Fri, Dec 28, 2012 at 6:50 AM, Joonsoo Kim js1...@gmail.com
wrote:
During early boot phase, PCI bus
On Thu, Jan 24, 2013 at 10:32:32PM -0500, CAI Qian wrote:
- Original Message -
From: Greg Kroah-Hartman gre...@linuxfoundation.org
To: Joonsoo Kim iamjoonsoo@lge.com
Cc: Paul Hargrove phhargr...@lbl.gov, Pekka Enberg
penb...@kernel.org, linux-kernel@vger.kernel.org
On Mon, Jan 28, 2013 at 01:04:24PM -0500, Nicolas Pitre wrote:
On Mon, 28 Jan 2013, Will Deacon wrote:
Hello,
On Thu, Jan 24, 2013 at 01:28:51AM +, Joonsoo Kim wrote:
In current implementation, we used ARM-specific flag, that is,
VM_ARM_STATIC_MAPPING, for distinguishing ARM
Hello, Minchan.
On Thu, Jan 17, 2013 at 08:59:22AM +0900, Minchan Kim wrote:
Hi Joonsoo,
On Wed, Jan 16, 2013 at 05:08:55PM +0900, Joonsoo Kim wrote:
If an object is on a page boundary, zs_map_object() copies the content
of the object to a pre-allocated page and returns the virtual address of
IMHO
().
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
diff --git a/mm/slub.c b/mm/slub.c
index 7204c74..8b95364 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,10 +3614,15 @@ static int slab_memory_callback(struct notifier_block
*self,
static struct kmem_cache * __init bootstrap(struct kmem_cache