Hi, everyone.
I have a question regarding the select system call's code. In
do_select(), after checking each fd in the set, do_select() calls
cond_resched(). In my view, that call is there to reduce
system freeze time during busy polling. But before the call,
when entering into
), 0x2000-0x2800, used
by the old kernel, still cannot be retrieved by the crash kernel because
they lie in the same section.
This patch makes the crash dump kernel use the strict (and slow) version of
pfn_valid(), which makes the crash kernel recognize memory correctly.
Signed-off-by: Wang Nan wangn...@huawei.com
Add ke...@lists.infradead.org to cc list.
On 2014/5/15 15:14, Wang Nan wrote:
This patch makes crash dump kernel use arch pfn_valid defined in
arch/arm/mm/init.c instead of the one in include/linux/mmzone.h.
The goal of this patch is to remove some limitation when kexec loading
crash kernel
HAVE_ARCH_PFN_VALID for CRASH_DUMP, which makes the crash dump
kernel use the strict version of pfn_valid().
Signed-off-by: Wang Nan wangn...@huawei.com
---
This is the third time I have posted this patch. The previous postings can be
retrieved from:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-May/256498
ioremap to make sure the destination of every memcpy() is
uncacheable memory, including copying of the target kernel and trampoline.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: sta...@vger.kernel.org # 3.4+
Cc: Eric Biederman ebied...@xmission.com
Cc: Russell King rmk+ker...@arm.linux.org.uk
Cc: Andrew
it, the crashdump kernel must be
carefully configured to boot.
Wang Nan (3):
ARM: Permit ioremap() to map reserved pages
ARM: kexec: copying code to ioremapped area
ARM: allow kernel to be loaded in middle of phymem
arch/arm/kernel/machine_kexec.c | 18 --
arch/arm/mm/init.c
.
This feature will be used for arm kexec support to ensure copied data goes into
RAM even without cache flushing, because we found that flush_cache_xxx can't
reliably flush code to memory.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: sta...@vger.kernel.org # 3.4+
Cc: Eric Biederman ebied...@xmission.com
Cc
. Without it, the kernel command line, atags and devicetree must be
adjusted carefully, which is sometimes impossible.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: sta...@vger.kernel.org # 3.4+
Cc: Eric Biederman ebied...@xmission.com
Cc: Russell King rmk+ker...@arm.linux.org.uk
Cc: Andrew Morton a...@linux
On 2014/1/22 19:42, Russell King - ARM Linux wrote:
On Wed, Jan 22, 2014 at 07:25:14PM +0800, Wang Nan wrote:
This patch relaxes the restriction set by commit 309caa9cc, which
prohibits ioremap() on all kernel-managed pages.
Other architectures, such as x86 and (some specific platforms
On 2014/1/22 20:56, Vaibhav Bedia wrote:
On Wed, Jan 22, 2014 at 6:25 AM, Wang Nan wangn...@huawei.com
mailto:wangn...@huawei.com wrote:
ARM's kdump is actually corrupted (at least for omap4460), mainly because
of a cache problem: flush_icache_range can't reliably ensure the copied
On 2014/1/22 21:27, Russell King - ARM Linux wrote:
On Wed, Jan 22, 2014 at 07:25:15PM +0800, Wang Nan wrote:
ARM's kdump is actually corrupted (at least for omap4460), mainly because of a
cache problem: flush_icache_range can't reliably ensure the copied data
correctly goes into RAM.
Quite
From: Wang Guoli andy.wanggu...@huawei.com
If jffs2_new_inode() succeeds, it returns with f->sem held, and
the caller is responsible for releasing the lock. If it fails,
it still returns with the lock held, but the caller won't release
the lock, which will lead to deadlock.
Fix it by releasing
-off-by: Wang Nan wangn...@huawei.com
Cc: Sasha Levin sasha.le...@oracle.com
Cc: Arnaldo Carvalho de Melo a...@redhat.com
Cc: Jiri Olsa jo...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Namhyung Kim namhy...@kernel.org
Cc: Geng Hui hui.g...@huawei.com
---
tools/lib/lockdep/Makefile| 4
) {
...
}
...
Therefore, would you please consider the following patch, which removes
the assumption that a bank must be fully contained in one section?
===
From b2c4bb5807c755d92274e11bb00cc548fea62242 Mon Sep 17 00:00:00 2001
From: Wang Nan wangn...@huawei.com
Date: Thu, 2 Jan 2014 13:20
HAVE_ARCH_PFN_VALID for CRASH_DUMP, makes crash dump
kernel to use strict version of pfn_valid().
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Li Zefan lize...@huawei.com
---
This is the fourth time I have posted this patch. The previous discussions can be
retrieved from:
http://lists.infradead.org
further coding.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Zhen zhangz...@huawei.com
---
include/linux/mmzone.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6cbd1b6..559e659 100644
--- a/include/linux/mmzone.h
+++ b/include
This patch introduces zone_for_memory() to arch_add_memory() on powerpc
to ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch
.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch/tile/mm/init.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/tile/mm/init.c b/arch/tile/mm/init.c
index bfb3127..22ac6c1 100644
This patch introduces zone_for_memory() to arch_add_memory() on sh to
ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch/sh/mm
This patch introduces zone_for_memory() to arch_add_memory() on x86_64
to ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch
and zoneinfo result in patch 0 as a response to
Zhang Yanfei.
- Fix a problem in tile to add memory into ZONE_HIGHMEM by default.
Wang Nan (7):
memory-hotplug: add zone_for_memory() for selecting zone for new
memory
memory-hotplug: x86_64: suitable memory should go to ZONE_MOVABLE
than movable, it should be added to
ZONE_MOVABLE instead of the default zone.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
include/linux/memory_hotplug.h | 1 +
mm/memory_hotplug.c| 28
This patch introduces zone_for_memory() to arch_add_memory() on x86_32
to ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch
This patch introduces zone_for_memory() to arch_add_memory() on ia64 to
ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Zhang Yanfei zhangyan...@cn.fujitsu.com
Cc: Dave Hansen dave.han...@intel.com
---
arch/ia64
, Wang Nan wrote:
This patch introduces zone_for_memory() to arch_add_memory() on tile to
ensure that new, higher memory is added to ZONE_MOVABLE if the movable
zone has already been set up.
This patch also fixes a problem: on tile, new memory should be added to
ZONE_HIGHMEM by default, not MAX_NR_ZONES-1, which
If we are going to reset the hash, we don't need to duplicate the old hash
and then remove every entry right after allocation.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Ingo Molnar mi...@redhat.com
---
kernel/trace/ftrace.c | 8 +---
1 file changed, 5
If we are going to reset the hash, we don't need to duplicate the old hash
and then remove every entry right after allocation.
Change from v1:
- The if statement is swapped to make the condition positive.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Ingo Molnar mi
This patch adds a --series option to git-quiltimport to allow users to
select the name of the series file. This option is an analog of quilt's
QUILT_SERIES environment variable.
Signed-off-by: Wang Nan wangn...@huawei.com
---
Documentation/git-quiltimport.txt | 5 +
git-quiltimport.sh
-off-by: Wang Nan wangn...@huawei.com
Cc: Sasha Levin sasha.le...@oracle.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Geng Hui hui.g...@huawei.com
---
tools/lib/lockdep/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools
-off-by: Wang Nan wangn...@huawei.com
Cc: Arnaldo Carvalho de Melo a...@redhat.com
Cc: Jiri Olsa jo...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Namhyung Kim namhy...@kernel.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Geng Hui hui.g...@huawei.com
-off-by: Wang Nan wangn...@huawei.com
Acked-by: Sasha Levin sasha.le...@oracle.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Geng Hui hui.g...@huawei.com
---
tools/lib/lockdep/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
-off-by: Wang Nan wangn...@huawei.com
Acked-by: Jiri Olsa jo...@redhat.com
Cc: Arnaldo Carvalho de Melo a...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Namhyung Kim namhy...@kernel.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Geng Hui hui.g
This patch frees the optinsn slot on a range-check error to prevent memory
leaks. Before this patch, the cache entry in kprobe_insn_cache wasn't
freed if kprobe optimization failed due to a range-check failure.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/x86/kernel/kprobes/opt.c | 4 +++-
1 file
On 2014/7/29 9:43, Masami Hiramatsu wrote:
(2014/07/28 21:20), Wang Nan wrote:
This patch frees the optinsn slot on a range-check error to prevent memory
leaks. Before this patch, the cache entry in kprobe_insn_cache wasn't
freed if kprobe optimization failed due to a range-check failure.
Signed-off
Hi Steve,
What's your opinion on my v2 patch ( https://lkml.org/lkml/2014/7/14/839 )?
I have swapped the if conditions following your suggestion.
On 2014/7/14 12:10, Wang Nan wrote:
If we are going to reset the hash, we don't need to duplicate the old hash
and remove every entry right after allocation
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for powerpc.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/powerpc/mm/mem.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for sh.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/sh/mm/init.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for x86_64.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/x86/mm/init_64.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for x86_32.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/x86/mm/init_32.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index
solve the problem by checking ZONE_MOVABLE when
choosing a zone for new memory. If the new memory is inside or higher than
ZONE_MOVABLE, make it go there instead.
Wang Nan (5):
memory-hotplug: x86_64: suitable memory should go to ZONE_MOVABLE
memory-hotplug: x86_32: suitable memory should go
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for ia64.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/ia64/mm/init.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 25c3502
On 2014/7/18 15:56, Wang Nan wrote:
This patch adds new memory to ZONE_MOVABLE if the movable zone is set up
and lower than the newly added memory, for x86_32.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/x86/mm/init_32.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/x86
On 2014/7/18 17:16, Zhang Yanfei wrote:
Hello,
On 07/18/2014 03:55 PM, Wang Nan wrote:
This series of patches fixes a problem with adding memory in a bad manner.
For example: on an x86_64 machine booted with mem=400M and with 2GiB of
memory installed, the following commands cause the problem:
# echo
' is not
true.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: David A. Long dave.l...@linaro.org
Cc: Russell King li...@arm.linux.org.uk
Cc: Jon Medhurst t...@linaro.org
Cc: Taras Kondratiuk taras.kondrat...@linaro.org
Cc: Ben Dooks ben.do...@codethink.co.uk
---
arch/arm/kernel/probes.c | 3 +--
1 file
On 2014/7/29 9:43, Masami Hiramatsu wrote:
(2014/07/28 21:20), Wang Nan wrote:
This patch frees the optinsn slot on a range-check error to prevent memory
leaks. Before this patch, the cache entry in kprobe_insn_cache wasn't
freed if kprobe optimization failed due to a range-check failure.
Signed-off
.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Russell King li...@arm.linux.org.uk
Cc: David A. Long dave.l...@linaro.org
Cc: Jon Medhurst t...@linaro.org
Cc: Taras Kondratiuk taras.kondrat...@linaro.org
Cc: Ben Dooks ben.do
Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe(). The original kprobe can bring more
information to the optimizer.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Russell King li...@arm.linux.org.uk
Cc: David A. Long dave.l...@linaro.org
Cc: Jon Medhurst t
register
in ldr Rt, [Rn, Rm] against sp.
For the stm instruction, it checks the sp register in the
instruction-specific decoder.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Russell King li...@arm.linux.org.uk
Cc: David A. Long dave.l...@linaro.org
Cc: Jon Medhurst t...@linaro.org
Cc: Taras Kondratiuk
it unoptimizable.
Wang Nan (3):
ARM: probes: check stack operation when decoding
kprobes: copy ainsn after alloc aggr kprobe
kprobes: arm: enable OPTPROBES for ARM 32
arch/arm/Kconfig | 1 +
arch/arm/include/asm/kprobes.h | 28 +
arch/arm/include/asm/probes.h| 1 +
arch
On 2014/8/12 9:38, Masami Hiramatsu wrote:
(2014/08/11 22:48), Will Deacon wrote:
Hello,
On Sat, Aug 09, 2014 at 03:12:19AM +0100, Wang Nan wrote:
This patch introduce kprobeopt for ARM 32.
Limitations:
- Currently, only kernels compiled with the ARM ISA are supported.
- Offset between probe
with
'.long 0' to avoid confusion: a reader may regard 'nop' as an
instruction, but it is in fact a value.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst (Tixy) t...@linaro.org
Cc: Russell King - ARM Linux li...@arm.linux.org.uk
Cc
Hi Masami and everyone,
When checking my code I found a problem: if we replace a stack-operation
instruction, it is possible that the emulated execution of such an
instruction destroys the stack used by kprobeopt:
+
+asm (
+ .global optprobe_template_entry\n
+
after optprobe_template_end and
re-execute them, this patch calls singlestep to emulate/simulate the insn
directly. A further patch can optimize this behavior.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Masami Hiramatsu masami.hiramatsu...@hitachi.com
---
arch/arm/Kconfig | 1
On 2014/7/29 19:36, Masami Hiramatsu wrote:
Hi Wang,
(2014/07/29 10:55), Wang Nan wrote:
On 2014/7/29 9:43, Masami Hiramatsu wrote:
(2014/07/28 21:20), Wang Nan wrote:
This patch frees the optinsn slot on a range-check error to prevent memory
leaks. Before this patch, the cache entry
Thank you for your comments. I'm waiting for your test results and preparing
the next version. Some responses below.
On 2014/8/6 12:44, Masami Hiramatsu wrote:
(2014/08/05 16:28), Wang Nan wrote:
This patch introduce kprobeopt for ARM 32.
Thank you for the great work! This looks fine to me
On 2014/8/6 21:36, Jon Medhurst (Tixy) wrote:
On Wed, 2014-08-06 at 13:44 +0900, Masami Hiramatsu wrote:
(2014/08/05 16:28), Wang Nan wrote
[...]
+asm (
+ .global optprobe_template_entry\n
+ optprobe_template_entry:\n
+#ifndef CONFIG_THUMB
On 2014/8/7 14:59, Masami Hiramatsu wrote:
(2014/08/06 15:24), Wang Nan wrote:
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+ unsigned long flags;
+
+ regs->ARM_pc = (unsigned long)op->kp.addr;
+ regs->ARM_ORIG_r0 = ~0UL
returns true if address is well aligned;
- Improve optimized_callback: using opt_pre_handler();
- Bugfix: correct range checking code and improve comments;
- Fix commit message.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst
:
arch_check_optimized_kprobe(), can_optimize();
- Add missing flush_icache_range() in arch_prepare_optimized_kprobe();
- Remove unneeded 'return;'.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst (Tixy) t...@linaro.org
Cc: Russell King - ARM Linux
On 2014/8/15 23:23, Masami Hiramatsu wrote:
(2014/08/12 13:56), Wang Nan wrote:
+/* Caller must ensure addr & 3 == 0 */
+static int can_optimize(unsigned long paddr)
+{
+return 1;
+}
As we have talked about on another thread, we'd better filter out all
stack-pushing instructions here, since
Hi Ingo,
Could you please collect this patch which fixes a perf problem?
Thanks.
On 2014/10/22 15:00, Namhyung Kim wrote:
On Thu, 16 Oct 2014 11:08:29 +0800, Wang Nan wrote:
When 'perf record' writes headers, it calls write_xxx in
tools/perf/util/header.c and checks the return value. It rolls
:
- Doesn't pass @h and @evlist to __write_cpudesc;
- Coding style fix.
v2 -> v3:
- Rebase:
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git perf/core
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Namhyung Kim namhy...@kernel.org
Cc: Arnaldo Carvalho de Melo
by Namhyung Kim:
- Doesn't pass @h and @evlist to __write_cpudesc;
- Coding style fix.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Namhyung Kim namhy...@kernel.org
So now this will work with older kernels and new ones? Cool, thanks for
working on it, but:
[acme@ssdandy
This patch prohibits probing instructions whose stack
requirements cannot be determined statically. Some test cases
were found to no longer work after the modification; this patch also
removes them.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/kernel/kprobes-test-arm.c | 16
space required to be
protected. However, this bug has existed since 2007, and gcc for ARM
doesn't actually generate such code.
Wang Nan (4):
ARM: kprobes: separates load and store actions
ARM: kprobes: introduces checker
ARM: kprobes: collects stack consumption for store instructions
ARM
This patch separates actions for load and store. Following patches will
check store instructions for more information.
The coverage test complains about missing register test coverage after this
separation. This patch introduces one test case for it.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm
This patch introduces a 'checker' to the decoding phase and calls checkers
during instruction decoding. This allows further analysis of specific
instructions.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/kernel/kprobes.c | 2 +-
arch/arm/kernel/kprobes.h | 3 ++-
arch/arm/kernel
This patch uses the previously introduced checker on store instructions and
records stack consumption information in arch_probes_insn. With such
information, kprobe opt can decide how much stack needs to be
protected.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/include/asm/probes.h | 1
On 2014/10/24 17:02, Jon Medhurst (Tixy) wrote:
On Fri, 2014-10-24 at 09:52 +0900, Masami Hiramatsu wrote:
(2014/10/22 20:31), Wang Nan wrote:
The previous 5 versions of the ARM OPTPROBES patches were unable to deal with
stack-storing instructions correctly. The v5 patches disallow optimizing
every
Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe(). The original kprobe can bring more
information to the optimizer.
v1 -> v2:
- Bugfix: copy p->addr when alloc_aggr_kprobe.
Signed-off-by: Wang Nan wangn...@huawei.com
---
kernel/kprobes.c | 6 ++
1 file
mechanism into two separate series.
Wang Nan (2):
kprobes: copy ainsn after alloc aggr kprobe
ARM: kprobes: enable OPTPROBES for ARM 32
arch/arm/Kconfig | 1 +
arch/arm/include/asm/kprobes.h| 26
arch/arm/kernel/Makefile | 3 +-
arch/arm/kernel/kprobes-opt
- v6:
- Dynamically reserve stack according to instruction.
- Rename: kprobes-opt.c -> kprobes-opt-arm.c.
- Set op->optinsn.insn after all work is done.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst (Tixy) t
information becomes 'model name' field.
This patch simply corrects it.
Signed-off-by: Wang Nan wangn...@huawei.com
---
tools/perf/perf-sys.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/perf-sys.h b/tools/perf/perf-sys.h
index 937e432..4293970 100644
--- a/tools
#
#
Signed-off-by: Wang Nan wangn...@huawei.com
---
tools/perf/util/header.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index ce0de00..39b80ac 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util
After kernel 3.7 (commit b4b8f770eb10a1bccaf8aa0ec1956e2dd7ed1e0a),
/proc/cpuinfo replaced 'Processor' with 'model name'. This patch makes
CPUINFO_PROC an array and provides two choices for ARM, making it
compatible with different kernel versions.
Signed-off-by: Wang Nan wangn...@huawei.com
---
tools
On 2014/10/15 23:13, Arnaldo Carvalho de Melo wrote:
Em Wed, Oct 15, 2014 at 11:28:53AM +0800, Wang Nan escreveu:
Commit fbe96f29 (perf tools: Make perf.data more self-descriptive)
reads '/proc/cpuinfo' to form the cpu descriptor. For ARM, it looks for the
'Processor' field. That was correct when the patch
On 2014/10/16 22:55, Arnaldo Carvalho de Melo wrote:
Em Thu, Oct 16, 2014 at 11:21:13AM +0800, Wang Nan escreveu:
On 2014/10/15 23:13, Arnaldo Carvalho de Melo wrote:
Em Wed, Oct 15, 2014 at 11:28:53AM +0800, Wang Nan escreveu:
Commit fbe96f29 (perf tools: Make perf.data more self-descriptive
'Processor' with 'model name'. This patch makes
CPUINFO_PROC an array and provides two choices for ARM, making it
compatible with different kernel versions.
v1 -> v2: minor changes as suggested by Namhyung Kim:
- Doesn't pass @h and @evlist to __write_cpudesc;
- Coding style fix.
Signed-off-by: Wang Nan
:
- Doesn't pass @h and @evlist to __write_cpudesc;
- Coding style fix.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Namhyung Kim namhy...@kernel.org
---
tools/perf/perf.h| 24
tools/perf/util/header.c | 27 +--
2 files changed, 33
On 2014/10/22 14:44, Namhyung Kim wrote:
Hi Wang,
On Thu, 16 Oct 2014 11:08:43 +0800, Wang Nan wrote:
After kernel 3.7 (commit b4b8f770eb10a1bccaf8aa0ec1956e2dd7ed1e0a),
/proc/cpuinfo replaced 'Processor' with 'model name'. This patch makes
CPUINFO_PROC an array and provides two choices
On 2014/10/22 15:00, Namhyung Kim wrote:
On Thu, 16 Oct 2014 11:08:29 +0800, Wang Nan wrote:
When 'perf record' writes headers, it calls write_xxx in
tools/perf/util/header.c and checks the return value. It rolls back all
work only when the return value is negative.
This patch ensures
On 2014/10/22 15:39, Wang Nan wrote:
euler inclusion
target: kernel 3.10
category: bugfix
DTS: DTS2014101306477
Bugzilla: 623
directory: upstreamed
archive: https://lkml.org/lkml/2014/10/15/611
Sorry, the header is for our internal use only
This patch separates actions for load and store. Following patches will
check store instructions for more information.
The coverage test complains about missing register test coverage after this
separation. This patch introduces one test case for it.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm
Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe(). The original kprobe can bring more
information to the optimizer.
v1 -> v2:
- Bugfix: copy p->addr when alloc_aggr_kprobe.
Signed-off-by: Wang Nan wangn...@huawei.com
---
kernel/kprobes.c | 6 ++
1 file
This patch is generated simply using:
$ sed -i s/union decode_action/struct decode_action/g `grep decode_action *
-rl`
Which allows further expansion of decode_action.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/kernel/kprobes-arm.c | 2 +-
arch/arm/kernel/kprobes-thumb.c | 4
This patch uses the previously introduced checker on store instructions and
records stack consumption information in arch_probes_insn. With such
information, kprobe opt can decide how much stack needs to be
protected.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/include/asm/probes.h | 1
This patch prohibits probing instructions whose stack
requirements cannot be determined statically. Some test cases
were found to no longer work after the modification; this patch also
removes them.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/kernel/kprobes-test-arm.c | 16
- v6:
- Dynamically reserve stack according to instruction.
- Rename: kprobes-opt.c -> kprobes-opt-arm.c.
- Set op->optinsn.insn after all work is done.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst (Tixy) t
This patch introduces a 'checker' field to decode_action and calls
checkers during instruction decoding. This allows further analysis
of specific instructions.
Signed-off-by: Wang Nan wangn...@huawei.com
---
arch/arm/kernel/probes.c | 10 ++
arch/arm/kernel/probes.h | 10 --
2
to stack information collected by checker.
2. In patch 7/7, stack protection code is now generated according to
the instruction being optimized.
3. In patch 7/7, kprobes-opt.c is renamed to kprobes-opt-arm.c because
it only deals with the ARM case.
4. Bug in v5 is fixed.
Wang Nan (7):
ARM
On 2014/8/28 17:39, Masami Hiramatsu wrote:
(2014/08/27 22:02), Wang Nan wrote:
Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe(). The original kprobe can bring more
information to the optimizer.
Signed-off-by: Wang Nan wangn...@huawei.com
Cc: Russell King li
On 2014/8/29 16:47, Jon Medhurst (Tixy) wrote:
On Thu, 2014-08-28 at 11:24 +0100, Will Deacon wrote:
On Thu, Aug 28, 2014 at 11:20:21AM +0100, Russell King - ARM Linux wrote:
On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
(2014/08/27 22:02), Wang Nan wrote:
This patch
Hi Ingo and Masami,
I am still unable to find this bugfix in the mainline code. Is there any problem?
Thank you!
On 2014/8/27 21:37, Masami Hiramatsu wrote:
Hi Ingo,
Could you pull this for a bugfix of a memory leak?
(2014/08/27 21:15), Wang Nan wrote:
On 2014/7/29 9:43, Masami Hiramatsu
On 2014/11/18 19:38, Masami Hiramatsu wrote:
Hi Wang,
(2014/11/18 15:32), Wang Nan wrote:
Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe(). The original kprobe can bring more
information to the optimizer.
As I've asked you on the previous series, I prefer
passed to arch_prepare_optimized_kprobe()
to avoid copying ainsn.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Jon Medhurst (Tixy) t...@linaro.org
Cc: Russell King - ARM Linux li...@arm.linux.org.uk
Cc: Will Deacon will.dea...@arm.com
consumption, which can be found at:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-November/303525.html
Masami Hiramatsu (1):
kprobes: Pass the original kprobe for preparing optimized kprobe
Wang Nan (1):
ARM: kprobes: enable OPTPROBES for ARM 32
arch/arm/Kconfig
From: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Pass the original kprobe when preparing the arch-dependent part of an
optimized kprobe, since some architectures (e.g. ARM32) require the
information in the original kprobe.
Signed-off-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
Cc: Wang Nan
On 2014/11/18 14:32, Wang Nan wrote:
This patch introduce kprobeopt for ARM 32.
Limitations:
- Currently, only kernels compiled with the ARM ISA are supported.
- Offset between the probe point and the optinsn slot must not be larger than
32MiB. Masami Hiramatsu suggests replacing 2 words, which will make
://lkml.org/lkml/2014/8/27/255
https://lkml.org/lkml/2014/8/12/12
https://lkml.org/lkml/2014/8/8/992
https://lkml.org/lkml/2014/8/8/5
https://lkml.org/lkml/2014/8/5/63
Apart from fixing an error found by Tixy, the main changes in this series
are small code cleanups and commit-message cleanups.
Wang
This patch prohibits probing instructions whose stack
requirements cannot be determined statically. Some test cases
were found to no longer work after the modification; this patch also
removes them.
Signed-off-by: Wang Nan wangn...@huawei.com
---
v1 -> v2:
- Use MAX_STACK_SIZE macro