Re: [RFC 2/2] powerpc/kprobes: Move kprobes over to patch_instruction

2017-05-30 Thread Naveen N. Rao
On 2017/05/17 11:40AM, Balbir Singh wrote: > On Tue, 2017-05-16 at 19:05 +0530, Naveen N. Rao wrote: > > On 2017/05/16 01:49PM, Balbir Singh wrote: > > > arch_arm/disarm_probe use direct assignment for copying > > > instructions, replace them with patch_instructio

Re: [PATCH] perf: libdw support for powerpc

2017-05-18 Thread Naveen N. Rao
Paolo Bonzini wrote: The ARM and x86 architectures already use libdw, and it is useful to have as much common code for the unwinder as possible. Porting PPC to libdw only needs an architecture-specific hook to move the register state from perf to libdw. Thanks. Ravi has had a similar patch

Re: [RFC 0/2] Consolidate patch_instruction

2017-05-16 Thread Naveen N. Rao
On 2017/05/16 10:56AM, Anshuman Khandual wrote: > On 05/16/2017 09:19 AM, Balbir Singh wrote: > > patch_instruction is enhanced in this RFC to support > > patching via a different virtual address (text_poke_area). > > Why writing instruction directly into the address is not > sufficient and need

Re: [RFC 2/2] powerpc/kprobes: Move kprobes over to patch_instruction

2017-05-16 Thread Naveen N. Rao
On 2017/05/16 01:49PM, Balbir Singh wrote: > arch_arm/disarm_probe use direct assignment for copying > instructions, replace them with patch_instruction Thanks for doing this! We will also have to convert optprobes and ftrace to use patch_instruction, but that can be done once the basic

[PATCH] powerpc/kprobes: Fix handling of instruction emulation on probe re-entry

2017-05-15 Thread Naveen N. Rao
e-enabling preemption if the instruction emulation was successful. Fix those issues. Fixes: 22d8b3dec214c ("powerpc/kprobes: Emulate instructions on kprobe handler re-entry") Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- Michael, Sorry for letting this slip thr

[PATCH 2/2] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return()

2017-05-15 Thread Naveen N. Rao
Fix a circa 2005 FIXME by implementing a check to ensure that we actually got into the jprobe break handler() due to the trap in jprobe_return(). Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 20 +--- 1 file chan

[PATCH 1/2] powerpc/jprobes: Save and restore the parameter save area

2017-05-15 Thread Naveen N. Rao
frame header. We introduce STACK_FRAME_PARM_SAVE to encode the offset of the parameter save area from the stack frame pointer. Remove the similarly named PARAMETER_SAVE_AREA_OFFSET in ptrace.c as those are currently not used anywhere. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.

Re: [PATCH v2] powerpc/kprobes: refactor kprobe_lookup_name for safer string operations

2017-05-04 Thread 'Naveen N. Rao'
On 2017/05/04 12:45PM, David Laight wrote: > From: Naveen N. Rao [mailto:naveen.n@linux.vnet.ibm.com] > > Sent: 04 May 2017 11:25 > > Use safer string manipulation functions when dealing with a > > user-provided string in kprobe_lookup_name(). > > > > Rep

[PATCH v2] powerpc/kprobes: refactor kprobe_lookup_name for safer string operations

2017-05-04 Thread Naveen N. Rao
Use safer string manipulation functions when dealing with a user-provided string in kprobe_lookup_name(). Reported-by: David Laight <david.lai...@aculab.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- Changed to ignore return value of 0 from strscpy(),

[PATCH v3 2/3] powerpc/kprobes: un-blacklist system_call() from kprobes

2017-05-04 Thread Naveen N. Rao
and mtmsr instructions (checked for in arch_prepare_kprobe). Suggested-by: Michael Ellerman <m...@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- Michael, I have named the new label system_call_exit so as to follow the existing labels

Re: [PATCH v2 2/3] powerpc/kprobes: un-blacklist system_call() from kprobes

2017-05-04 Thread Naveen N. Rao
On 2017/05/04 04:03PM, Michael Ellerman wrote: > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: > > > On 2017/04/27 08:19PM, Michael Ellerman wrote: > >> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: > >> &

Re: [PATCH v2 0/3] powerpc: build out kprobes blacklist

2017-05-03 Thread Naveen N. Rao
On 2017/04/27 02:06PM, Naveen N. Rao wrote: > v2 changes: > - Patches 3 and 4 from the previous series have been merged. > - Updated to no longer blacklist functions involved with stolen time > accounting. > > v1: > https://www.mail-archive.com/linuxppc-dev@lists.ozla

[PATCH 8/8] powerpc/xmon: Disable function_graph tracing while in xmon

2017-05-03 Thread Naveen N. Rao
HAVE_FUNCTION_GRAPH_FP_TEST reveals another area (apart from jprobes) that conflicts with the function_graph tracer: xmon. This is due to the use of longjmp() in various places in xmon. To address this, pause function_graph tracing while in xmon. Signed-off-by: Naveen N. Rao <navee

[PATCH 6/8] powerpc/ftrace: Add support for HAVE_FUNCTION_GRAPH_FP_TEST for -mprofile-kernel

2017-05-03 Thread Naveen N. Rao
This is very handy to catch potential crashes due to unexpected interactions of function_graph tracer with weird things like jprobes. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/asm-prototypes.h | 3 ++- arch/powerpc/include/asm/ft

[PATCH 5/8] powerpc/ftrace: Eliminate duplicate stack setup for ftrace_graph_caller()

2017-05-03 Thread Naveen N. Rao
for saving the original NIP and r15 for storing the possibly modified NIP. r15 is later used to determine if the function has been livepatched. 3. To re-use the same stack frame setup/teardown code, we have ftrace_graph_caller() save the modified LR in pt_regs. Signed-off-by: Naveen N. Rao <navee

[PATCH 7/8] powerpc/livepatch: Clarify location of mcount call site

2017-05-03 Thread Naveen N. Rao
the first _20_ bytes of a function. However, ftrace_location_range() does an inclusive search and hence passing (addr + 16) is still accurate. Clarify the same by updating comments around this. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/livepatch
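
For reference, a minimal sketch of the lookup this comment update is about, using the ftrace_location_range() API named above (the wrapper name below is a placeholder, not taken from the patch):

#include <linux/ftrace.h>

/*
 * Illustrative only: ftrace_location_range() searches inclusively, so the
 * range [addr, addr + 16] still covers an mcount call site that may sit up
 * to 20 bytes into the function.
 */
static unsigned long example_mcount_site(unsigned long func_addr)
{
	return ftrace_location_range(func_addr, func_addr + 16);
}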

[PATCH 3/8] powerpc/ftrace: Remove redundant saving of LR in ftrace[_graph]_caller

2017-05-03 Thread Naveen N. Rao
remove the redundant saving of LR in ftrace_graph_caller() for similar reasons. It is sufficient to ensure LR and r0 point to the new return address. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 4 1 file chan

[PATCH 4/8] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

2017-05-03 Thread Naveen N. Rao
r. So, if NIP == R12, we know we came here due to jprobes and we just branch to the new IP. Otherwise, we continue with livepatch processing as usual. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 10 ++ 1 file

[PATCH 2/8] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS

2017-05-03 Thread Naveen N. Rao
. Also, use SAVE_10GPRS() to simplify the code. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofil

[PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling

2017-05-03 Thread Naveen N. Rao
obe_return(), which never returns back to the hook, but instead to the original jprobe'd function. The solution is to momentarily pause function_graph tracing before invoking the jprobe hook and re-enable it when returning back to the original jprobe'd function. Signed-off-by: Naveen N. Rao
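
A minimal sketch of the approach described above, using the kernel's graph-tracer pause helpers (the wrapper names are placeholders, not from the patch):

#include <linux/sched.h>
#include <linux/ftrace.h>

/* Illustrative only: pause graph tracing for the current task before
 * branching to the jprobe entry hook ... */
static void example_before_jprobe_hook(void)
{
	pause_graph_tracing();
}

/* ... and unpause it once jprobe_return() brings control back to the
 * original jprobe'd function. */
static void example_after_jprobe_return(void)
{
	unpause_graph_tracing();
}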

[PATCH 0/8] powerpc: Various fixes and enhancements for kprobes and ftrace

2017-05-03 Thread Naveen N. Rao
will be coding up and sending across in a day or two. This series has been run through ftrace selftests. - Naveen Naveen N. Rao (8): powerpc/kprobes: Pause function_graph tracing during jprobes handling powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS powerpc

Re: [PATCH 1/8] powerpc/kprobes: Pause function_graph tracing during jprobes handling

2017-05-03 Thread Naveen N. Rao
[Copying linuxppc-dev list which I missed cc'ing initially] On 2017/05/03 03:58PM, Steven Rostedt wrote: > On Wed, 3 May 2017 23:43:41 +0530 > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: > > > This fixes a crash when function_grap

Re: [PATCH v2 2/3] powerpc/kprobes: un-blacklist system_call() from kprobes

2017-04-27 Thread Naveen N. Rao
On 2017/04/27 08:19PM, Michael Ellerman wrote: > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: > > > It is actually safe to probe system_call() in entry_64.S, but only till > > .Lsyscall_exit. To allow this, convert .Lsyscall_exit to a

[PATCH v2 2/3] powerpc/kprobes: un-blacklist system_call() from kprobes

2017-04-27 Thread Naveen N. Rao
It is actually safe to probe system_call() in entry_64.S, but only till .Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local symbol __system_call() and blacklist that symbol, rather than system_call(). Reviewed-by: Masami Hiramatsu <mhira...@kernel.org> Signed-off-by: Na

[PATCH v2 3/3] powerpc/kprobes: blacklist functions invoked on a trap

2017-04-27 Thread Naveen N. Rao
Blacklist all functions involved while handling a trap. We: - convert some of the labels into private labels, - remove the duplicate 'restore' label, and - blacklist most functions involved while handling a trap. Reviewed-by: Masami Hiramatsu <mhira...@kernel.org> Signed-off-by: Naveen

[PATCH v2 1/3] powerpc/kprobes: cleanup system_call_common and blacklist it from kprobes

2017-04-27 Thread Naveen N. Rao
ed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 25 + 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 9b541d22595a..380361c0bb6a 10064

[PATCH v2 0/3] powerpc: build out kprobes blacklist

2017-04-27 Thread Naveen N. Rao
into private -- these are labels that I felt are not necessary to read stack traces. If any of those are important to have, please let me know. - Naveen Naveen N. Rao (3): powerpc/kprobes: cleanup system_call_common and blacklist it from kprobes powerpc/kprobes: un-blacklist system_call() from

Re: [PATCH 0/4] powerpc: build out kprobes blacklist

2017-04-27 Thread Naveen N. Rao
On 2017/04/27 11:24AM, Masami Hiramatsu wrote: > Hello Naveen, > > On Tue, 25 Apr 2017 22:04:05 +0530 > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: > > > This is the second in the series of patches to build out an appropriate > > kprobes bl

Re: [PATCH] kallsyms: optimize kallsyms_lookup_name() for a few cases

2017-04-26 Thread Naveen N. Rao
Michael Ellerman wrote: > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: >> diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c >> index 6a3b249a2ae1..d134b060564f 100644 >> --- a/kernel/kallsyms.c >> +++ b/kernel/kallsyms.c >> @@ -20

Re: [PATCH] powerpc/kprobes: refactor kprobe_lookup_name for safer string operations

2017-04-26 Thread Naveen N. Rao
Excerpts from Masami Hiramatsu's message of April 26, 2017 10:11: On Tue, 25 Apr 2017 21:37:11 +0530 "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: Use safer string manipulation functions when dealing with a user-provided string in kprobe_lookup_name(). Reported-

RE: [PATCH] kallsyms: optimize kallsyms_lookup_name() for a few cases

2017-04-25 Thread Naveen N. Rao
Excerpts from David Laight's message of April 25, 2017 22:06: From: Naveen N. Rao Sent: 25 April 2017 17:18 1. Fail early for invalid/zero length symbols. 2. Detect names of the form <module:name> and skip checking for kernel symbols in that case. Signed-off-by: Naveen N. Rao <navee

[PATCH 4/4] powerpc/kprobes: blacklist functions involved when returning from exception

2017-04-25 Thread Naveen N. Rao
Blacklist all functions involved when we return from a trap. We: - convert some of the labels into private labels, - remove the duplicate 'restore' label, and - blacklist most functions involved during returning from a trap. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> ---

[PATCH 3/4] powerpc/kprobes: blacklist functions invoked on a trap

2017-04-25 Thread Naveen N. Rao
Blacklist all functions invoked when we get a trap, through to the time we invoke the kprobe handler. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 1 + arch/powerpc/kernel/exceptions-64s.S | 1 + arch/powerpc/kernel/

[PATCH 2/4] powerpc/kprobes: un-blacklist system_call() from kprobes

2017-04-25 Thread Naveen N. Rao
It is actually safe to probe system_call() in entry_64.S, but only till .Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local symbol __system_call() and blacklist that symbol, rather than system_call(). Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> ---

[PATCH 1/4] powerpc/kprobes: cleanup system_call_common and blacklist it from kprobes

2017-04-25 Thread Naveen N. Rao
Convert some of the labels into private labels and blacklist system_call_common() and system_call() from kprobes. We can't take a trap at parts of these functions as either MSR_RI is unset or the kernel stack pointer is not yet setup. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.

[PATCH 0/4] powerpc: build out kprobes blacklist

2017-04-25 Thread Naveen N. Rao
once I expand my tests. I have converted many labels into private -- these are labels that I felt are not necessary to read stack traces. If any of those are important to have, please let me know. - Naveen Naveen N. Rao (4): powerpc/kprobes: cleanup system_call_common and blacklist it from

[PATCH] kallsyms: optimize kallsyms_lookup_name() for a few cases

2017-04-25 Thread Naveen N. Rao
1. Fail early for invalid/zero length symbols. 2. Detect names of the form <module:name> and skip checking for kernel symbols in that case. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- Masami, Michael, I have added two very simple checks here, which I felt is good to have, rathe
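
A hedged sketch of what those two checks could look like (the helper name is a placeholder; the real function then continues with the usual kallsyms table walk):

#include <linux/kallsyms.h>
#include <linux/module.h>
#include <linux/string.h>

static unsigned long example_lookup_name(const char *name)
{
	/* 1. Fail early for a zero-length name. */
	if (!*name)
		return 0;

	/* 2. A "module:symbol" style name cannot be a vmlinux symbol, so go
	 *    straight to the module lookup. */
	if (strnchr(name, MODULE_NAME_LEN, ':'))
		return module_kallsyms_lookup_name(name);

	/* ... otherwise fall through to the kernel symbol table search ... */
	return 0;
}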

[PATCH] powerpc/kprobes: refactor kprobe_lookup_name for safer string operations

2017-04-25 Thread Naveen N. Rao
Use safer string manipulation functions when dealing with a user-provided string in kprobe_lookup_name(). Reported-by: David Laight <david.lai...@aculab.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kp

[REBASED PATCH v4 1/2] powerpc: split ftrace bits into a separate file

2017-04-25 Thread Naveen N. Rao
t;m...@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/Makefile | 9 +- arch/powerpc/kernel/entry_32.S| 107 --- arch/powerpc/kernel/entry_64.S| 378 - arch/powerpc/

[REBASED PATCH v4 2/2] powerpc: ftrace_64: split further based on -mprofile-kernel

2017-04-25 Thread Naveen N. Rao
Split ftrace_64.S further retaining the core ftrace 64-bit aspects in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into separate files based on -mprofile-kernel. The livepatch routines are all now contained within the mprofile file. Signed-off-by: Naveen N. Rao <navee

Re: [PATCH v4 3/7] kprobes: validate the symbol name provided during probe registration

2017-04-23 Thread Naveen N. Rao
Excerpts from Michael Ellerman's message of April 22, 2017 11:25: "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: When a kprobe is being registered, we use the symbol_name field to lookup the address where the probe should be placed. Since this is a user-provided fie

Re: [PATCH v4 4/7] powerpc/kprobes: Use safer string functions in kprobe_lookup_name()

2017-04-23 Thread Naveen N. Rao
:33 AM, Naveen N. Rao wrote: Convert usage of strchr()/strncpy()/strncat() to strnchr()/memcpy()/strlcat() for simpler and safer string manipulation. diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index 97b5eed1f76d..c73fb6e3b43f 100644 --- a/arch/powerpc/kernel

Re: [PATCH v3 3/7] kprobes: validate the symbol name length

2017-04-23 Thread Naveen N. Rao
Excerpts from Masami Hiramatsu's message of April 21, 2017 19:12: On Wed, 19 Apr 2017 16:38:22 + "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07: > On Wed, 19 Apr 2017 18:21:02 +0530 > &qu

Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32

2017-04-21 Thread Naveen N. Rao
Excerpts from Christophe Leroy's message of April 21, 2017 18:32: This patch implements CONFIG_DEBUG_RODATA on PPC32. As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings in order to allow page protection setup at the level of each page. As BAT/LTLB mappings are deactivated,

Re: [PATCH v4 3/7] kprobes: validate the symbol name provided during probe registration

2017-04-21 Thread Naveen N. Rao
Excerpts from Paul Clarke's message of April 21, 2017 18:41: a nit or two, below... On 04/21/2017 07:32 AM, Naveen N. Rao wrote: diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 6a128f3a7ed1..ff9b1ac72a38 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1383,6 +1383,34 @@ bool

[PATCH v4 4/7] powerpc/kprobes: Use safer string functions in kprobe_lookup_name()

2017-04-21 Thread Naveen N. Rao
Convert usage of strchr()/strncpy()/strncat() to strnchr()/memcpy()/strlcat() for simpler and safer string manipulation. Reported-by: David Laight <david.lai...@aculab.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- Changes: Additionally convert the strch
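
A minimal sketch of the pattern this conversion aims for (buffer and parameter names are illustrative, not lifted from the patch):

#include <linux/string.h>

/*
 * Illustrative only: copy a prefix whose length has already been validated
 * with memcpy(), then append the NUL-terminated remainder with strlcat(),
 * which never overruns and always terminates the destination. Assumes
 * prefix_len < dst_size.
 */
static void example_build_dot_name(char *dst, size_t dst_size,
				   const char *prefix, size_t prefix_len,
				   const char *sym)
{
	memcpy(dst, prefix, prefix_len);
	dst[prefix_len] = '\0';
	strlcat(dst, sym, dst_size);
}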

[PATCH v4 3/7] kprobes: validate the symbol name provided during probe registration

2017-04-21 Thread Naveen N. Rao
When a kprobe is being registered, we use the symbol_name field to lookup the address where the probe should be placed. Since this is a user-provided field, let's ensure that the length of the string is within expected limits. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.
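
A minimal sketch of such a limit check, assuming a "module:symbol" form is the longest legitimate input (the helper name and exact bound are assumptions, not the patch itself):

#include <linux/kallsyms.h>
#include <linux/module.h>
#include <linux/string.h>

/* Illustrative only: reject user-supplied names longer than any valid
 * "module:symbol" string before attempting the lookup. */
static bool example_symbol_name_ok(const char *symbol_name)
{
	size_t limit = MODULE_NAME_LEN + KSYM_NAME_LEN;

	return strnlen(symbol_name, limit) < limit;
}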

Re: [PATCH v3 3/7] kprobes: validate the symbol name length

2017-04-20 Thread Naveen N. Rao
Excerpts from Michael Ellerman's message of April 20, 2017 11:38: "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 6a128f3a7ed1..bb86681c8a10 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1382,6

Re: [PATCH 1/2] powerpc: kprobes: blacklist exception handlers

2017-04-20 Thread Naveen N. Rao
Excerpts from Michael Ellerman's message of April 20, 2017 12:03: "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index 71286dfd76a0..59159337a097 100644 --- a/arch/powerpc/kernel/kprobes.c ++

Re: [PATCH v3 6/7] powerpc: kprobes: emulate instructions on kprobe handler re-entry

2017-04-19 Thread Naveen N. Rao
, I followed this since I felt that Michael Ellerman prefers to keep functional changes separate from refactoring. I'm fine with either approach. Michael? Thanks! - Naveen Thank you, On Wed, 19 Apr 2017 18:21:05 +0530 "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wr

Re: [PATCH v3 3/7] kprobes: validate the symbol name length

2017-04-19 Thread Naveen N. Rao
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07: On Wed, 19 Apr 2017 18:21:02 +0530 "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: When a kprobe is being registered, we use the symbol_name field to lookup the address where the probe should

[PATCH 2/2] powerpc: kprobes: blacklist exception common handlers

2017-04-19 Thread Naveen N. Rao
Blacklist all the exception common/OOL handlers as the kernel stack is not yet setup, which means we can't take a trap at this point. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/head-64.h | 1 + 1 file changed, 1 insertion(+) diff --git

[PATCH 1/2] powerpc: kprobes: blacklist exception handlers

2017-04-19 Thread Naveen N. Rao
Introduce __head_end to mark end of the early fixed sections and use the same to blacklist all exception handlers from kprobes. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/sections.h | 1 + arch/powerpc/kernel/kprobes.c | 9 +

[PATCH 0/2] powerpc: kprobes: blacklist exception vectors

2017-04-19 Thread Naveen N. Rao
, I'm posting these right away. I'd especially appreciate a review of the first patch and feedback on whether it does the right thing with/without relocation. My tests didn't reveal any issues. Thanks, Naveen Naveen N. Rao (2): powerpc: kprobes: blacklist exception handlers powerpc: kprobes

[PATCH v4 6/6] powerpc: kprobes: prefer ftrace when probing function entry

2017-04-19 Thread Naveen N. Rao
k kretprobe_trampoline+0x0[OPTIMIZED] and after patch: # cat ../kprobes/list c00d074c k _do_fork+0xc[DISABLED][FTRACE] c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED] Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/

[PATCH v4 5/6] powerpc: introduce a new helper to obtain function entry points

2017-04-19 Thread Naveen N. Rao
kprobe_lookup_name() is specific to the kprobe subsystem and may not always return the function entry point (in a subsequent patch for KPROBES_ON_FTRACE). For looking up function entry points, introduce a separate helper and use the same in optprobes.c Signed-off-by: Naveen N. Rao <navee

[PATCH v4 2/6] powerpc: ftrace: restore LR from pt_regs

2017-04-19 Thread Naveen N. Rao
. Live patch and function graph continue to work fine with this change. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 13 +++-- 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/p

[PATCH v4 3/6] kprobes: Skip preparing optprobe if the probe is ftrace-based

2017-04-19 Thread Naveen N. Rao
From: Masami Hiramatsu <mhira...@kernel.org> Skip preparing optprobe if the probe is ftrace-based, since anyway, it must not be optimized (or already optimized by ftrace). Tested-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> Signed-off-by: Masami Hiramatsu <mhira...@kernel.o

[PATCH v4 0/6] powerpc: add support for KPROBES_ON_FTRACE

2017-04-19 Thread Naveen N. Rao
as we crash on powerpc without that patch. - Naveen Masami Hiramatsu (1): kprobes: Skip preparing optprobe if the probe is ftrace-based Naveen N. Rao (5): powerpc: ftrace: minor cleanup powerpc: ftrace: restore LR from pt_regs powerpc: kprobes: add support for KPROBES_ON_FTRACE powerpc

[PATCH v4 4/6] powerpc: kprobes: add support for KPROBES_ON_FTRACE

2017-04-19 Thread Naveen N. Rao
on the x86 code by Masami. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- .../debug/kprobes-on-ftrace/arch-support.txt | 2 +- arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/kprobes.h | 10 ++ arch/powerpc/

[PATCH v4 1/6] powerpc: ftrace: minor cleanup

2017-04-19 Thread Naveen N. Rao
livepatch_handler() nor ftrace_graph_caller() return back here. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entr

[PATCH v3 5/7] powerpc: kprobes: factor out code to emulate instruction into a helper

2017-04-19 Thread Naveen N. Rao
No functional changes. Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 52 ++- 1 file changed, 31 insertions(+), 21 deletions(-)

[PATCH v3 7/7] powerpc: kprobes: remove duplicate saving of msr

2017-04-19 Thread Naveen N. Rao
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove the redundant save. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/powerpc/kernel/kprobes.c b/arch/power

[PATCH v3 4/7] powerpc: kprobes: use safer string functions in kprobe_lookup_name()

2017-04-19 Thread Naveen N. Rao
Convert usage of strncpy()/strncat() to memcpy()/strlcat() for simpler and safer string manipulation. Reported-by: David Laight <david.lai...@aculab.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 11 +-- 1 fil

[PATCH v3 6/7] powerpc: kprobes: emulate instructions on kprobe handler re-entry

2017-04-19 Thread Naveen N. Rao
On kprobe handler re-entry, try to emulate the instruction rather than single stepping always. Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 8 1 fil
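
A minimal sketch of the idea, built on powerpc's emulate_step() (the wrapper name is a placeholder; return-value handling in the real handler is more involved):

#include <linux/kprobes.h>
#include <asm/sstep.h>

/*
 * Illustrative only: ask emulate_step() to execute the probed instruction
 * in software; a positive return means regs have been advanced past it and
 * no single-step trap is required.
 */
static int example_try_emulate(struct kprobe *p, struct pt_regs *regs)
{
	return emulate_step(regs, *p->ainsn.insn) > 0;
}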

[PATCH v3 1/7] kprobes: convert kprobe_lookup_name() to a function

2017-04-19 Thread Naveen N. Rao
; Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/kprobes.h | 53 -- arch/powerpc/kernel/kprobes.c | 58 ++ arch/powerpc/kernel/optprobes.c| 4 +-- include/linux/kprobes.h

[PATCH v3 3/7] kprobes: validate the symbol name length

2017-04-19 Thread Naveen N. Rao
When a kprobe is being registered, we use the symbol_name field to lookup the address where the probe should be placed. Since this is a user-provided field, let's ensure that the length of the string is within expected limits. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.

[PATCH v3 2/7] powerpc: kprobes: fix handling of function offsets on ABIv2

2017-04-19 Thread Naveen N. Rao
ine+0x0[OPTIMIZED] Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 4 ++-- arch/powerpc/kernel/optprobes.c | 4 ++-- include/linux/kprobes.h | 2 +- kernel/kp

[PATCH v3 0/7] powerpc: a few kprobe fixes and refactoring

2017-04-19 Thread Naveen N. Rao
ress review comments from David Laight. - Naveen Naveen N. Rao (7): kprobes: convert kprobe_lookup_name() to a function powerpc: kprobes: fix handling of function offsets on ABIv2 kprobes: validate the symbol name length powerpc: kprobes: use safer string functions in kprobe_lookup_name(

Re: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function

2017-04-19 Thread 'Naveen N. Rao'
On 2017/04/19 08:48AM, David Laight wrote: > From: Naveen N. Rao > > Sent: 19 April 2017 09:09 > > To: David Laight; Michael Ellerman > > Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami > > Hiramatsu; Ingo Molnar > > Subject: RE

RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function

2017-04-19 Thread Naveen N. Rao
Excerpts from David Laight's message of April 18, 2017 18:22: From: Naveen N. Rao Sent: 12 April 2017 11:58 ... +kprobe_opcode_t *kprobe_lookup_name(const char *name) +{ ... + char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN]; + const char *modsym; + bool dot_appended

Re: [PATCH] powerpc/configs: Enable function trace by default

2017-04-13 Thread Naveen N. Rao
RACE=y +CONFIG_FUNCTION_TRACER=y +CONFIG_FUNCTION_GRAPH_TRACER=y CONFIG_SCHED_TRACER=y +CONFIG_FTRACE_SYSCALLS=y Any reason to not enable this for ppc64 and pseries defconfigs? Apart from that, for this patch: Acked-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> - Naveen CONFIG_BLK_DE

Re: [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper

2017-04-13 Thread Naveen N. Rao
Excerpts from Masami Hiramatsu's message of April 13, 2017 10:04: On Wed, 12 Apr 2017 16:28:27 +0530 "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: This helper will be used in a subsequent patch to emulate instructions on re-entering the kprobe handler. No f

Re: [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry

2017-04-12 Thread Naveen N. Rao
On 2017/04/13 01:37PM, Masami Hiramatsu wrote: > On Wed, 12 Apr 2017 16:28:28 +0530 > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: > > > On kprobe handler re-entry, try to emulate the instruction rather than > > single stepping always. &g

Re: [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper

2017-04-12 Thread Naveen N. Rao
On 2017/04/13 01:34PM, Masami Hiramatsu wrote: > On Wed, 12 Apr 2017 16:28:27 +0530 > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: > > > This helper will be used in a subsequent patch to emulate instructions > > on re-entering the

Re: [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points

2017-04-12 Thread Naveen N. Rao
On 2017/04/13 01:32PM, Masami Hiramatsu wrote: > On Wed, 12 Apr 2017 16:28:26 +0530 > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote: > > > kprobe_lookup_name() is specific to the kprobe subsystem and may not > > always return the functi

Re: [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring

2017-04-12 Thread Naveen N. Rao
On 2017/04/13 12:02PM, Masami Hiramatsu wrote: > Hi Naveen, Hi Masami, > > BTW, I saw you sent 3 different series, are there any > conflict each other? or can we pick those independently? Yes, all these three patch series are based off powerpc/next and they do depend on each other, as they

Re: [PATCH 1/2] powerpc: string: implement optimized memset variants

2017-04-12 Thread Naveen N. Rao
Excerpts from PrasannaKumar Muralidharan's message of April 5, 2017 11:21: On 30 March 2017 at 12:46, Naveen N. Rao <naveen.n@linux.vnet.ibm.com> wrote: Also, with a simple module to memset64() a 1GB vmalloc'ed buffer, here are the results: generic:0.245315533 seconds time e

[PATCH v2] powerpc: kprobes: convert __kprobes to NOKPROBE_SYMBOL()

2017-04-12 Thread Naveen N. Rao
/perf$ sudo cat /sys/kernel/debug/kprobes/list c05f3b48 k read_mem+0x8[DISABLED] Acked-by: Masami Hiramatsu <mhira...@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- v2: - rebased on top of powerpc/next along with related kprobes patch
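
The conversion pattern, shown on a hypothetical function (NOKPROBE_SYMBOL() records the symbol in the kprobe blacklist without moving the function out of regular .text, unlike the old __kprobes section attribute):

#include <linux/kprobes.h>

/* Before: static int __kprobes example_handler(struct pt_regs *regs) */
static int example_handler(struct pt_regs *regs)
{
	return 0;
}
NOKPROBE_SYMBOL(example_handler);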

[PATCH v4 0/2] powerpc: split ftrace bits into a separate

2017-04-12 Thread Naveen N. Rao
v3: https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg114669.html For v4, this has been rebased on top of powerpc/next as well as the KPROBES_ON_FTRACE series. No other changes. - Naveen Naveen N. Rao (2): powerpc: split ftrace bits into a separate file powerpc: ftrace_64: split

[PATCH v4 1/2] powerpc: split ftrace bits into a separate file

2017-04-12 Thread Naveen N. Rao
t;m...@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/Makefile | 9 +- arch/powerpc/kernel/entry_32.S| 107 --- arch/powerpc/kernel/entry_64.S| 379 - arch/powerpc/

[PATCH v4 2/2] powerpc: ftrace_64: split further based on -mprofile-kernel

2017-04-12 Thread Naveen N. Rao
Split ftrace_64.S further retaining the core ftrace 64-bit aspects in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into separate files based on -mprofile-kernel. The livepatch routines are all now contained within the mprofile file. Signed-off-by: Naveen N. Rao <navee

[PATCH v3 4/5] powerpc: kprobes: add support for KPROBES_ON_FTRACE

2017-04-12 Thread Naveen N. Rao
on the x86 code by Masami. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- .../debug/kprobes-on-ftrace/arch-support.txt | 2 +- arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/kprobes.h | 10 ++ arch/powerpc/

[PATCH v3 3/5] kprobes: Skip preparing optprobe if the probe is ftrace-based

2017-04-12 Thread Naveen N. Rao
From: Masami Hiramatsu <mhira...@kernel.org> Skip preparing optprobe if the probe is ftrace-based, since anyway, it must not be optimized (or already optimized by ftrace). Tested-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> Signed-off-by: Masami Hiramatsu <mhira...@kernel.

[PATCH v3 2/5] powerpc: ftrace: restore LR from pt_regs

2017-04-12 Thread Naveen N. Rao
. Live patch and function graph continue to work fine with this change. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 13 +++-- 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/p

[PATCH v3 5/5] powerpc: kprobes: prefer ftrace when probing function entry

2017-04-12 Thread Naveen N. Rao
k kretprobe_trampoline+0x0[OPTIMIZED] and after patch: # cat ../kprobes/list c00d074c k _do_fork+0xc[DISABLED][FTRACE] c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED] Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/

[PATCH v3 0/5] powerpc: add support for KPROBES_ON_FTRACE

2017-04-12 Thread Naveen N. Rao
without that patch. - Naveen Masami Hiramatsu (1): kprobes: Skip preparing optprobe if the probe is ftrace-based Naveen N. Rao (4): powerpc: ftrace: minor cleanup powerpc: ftrace: restore LR from pt_regs powerpc: kprobes: add support for KPROBES_ON_FTRACE powerpc: kprobes: prefer ftrace

[PATCH v3 1/5] powerpc: ftrace: minor cleanup

2017-04-12 Thread Naveen N. Rao
livepatch_handler() nor ftrace_graph_caller() return back here. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/entry_64.S | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entr

[PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points

2017-04-12 Thread Naveen N. Rao
kprobe_lookup_name() is specific to the kprobe subsystem and may not always return the function entry point (in a subsequent patch for KPROBES_ON_FTRACE). For looking up function entry points, introduce a separate helper and use the same in optprobes.c Signed-off-by: Naveen N. Rao <navee

[PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper

2017-04-12 Thread Naveen N. Rao
This helper will be used in a subsequent patch to emulate instructions on re-entering the kprobe handler. No functional change. Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kp

[PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry

2017-04-12 Thread Naveen N. Rao
On kprobe handler re-entry, try to emulate the instruction rather than single stepping always. As a related change, remove the duplicate saving of msr as that is already done in set_current_kprobe() Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen

[PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function

2017-04-12 Thread Naveen N. Rao
The macro is now pretty long and ugly on powerpc. In the light of further changes needed here, convert it to a __weak variant to be over-ridden with a nicer looking function. Suggested-by: Masami Hiramatsu <mhira...@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n@linux.vne

[PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2

2017-04-12 Thread Naveen N. Rao
ine+0x0[OPTIMIZED] Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/kernel/kprobes.c | 4 ++-- arch/powerpc/kernel/optprobes.c | 4 ++-- include/linux/kprobes.h | 2 +- kernel/kp

[PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring

2017-04-12 Thread Naveen N. Rao
v1: https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1334843.html For v2, this series has been re-ordered and rebased on top of powerpc/next so as to make it easier to resolve conflicts with -tip. No other changes. - Naveen Naveen N. Rao (5): kprobes: convert kprobe_lookup_name

Re: [PATCH] ppc64/kprobe: Fix oops when kprobed on 'stdu' instruction

2017-04-10 Thread Naveen N. Rao
lso update the above comment to refer to 'stdu'? Apart from that, for this patch: Reviewed-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> - Naveen - lwz r5,GPR1(r1) + ld r5,GPR1(r1) std r8,0(r5) /* Clear _TIF_EMULATE_STACK_STORE flag */ -- 1.9.3

Re: [PATCH 1/2] powerpc: string: implement optimized memset variants

2017-03-30 Thread Naveen N. Rao
On 2017/03/29 10:36PM, Michael Ellerman wrote: > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: > > I also tested zram today with the command shared by Wilcox: > > > > without patch: 1.493782568 seconds time elapsed( +- 0.08% ) > &g

Re: [PATCH 1/2] powerpc: string: implement optimized memset variants

2017-03-28 Thread Naveen N. Rao
On 2017/03/28 11:44AM, Michael Ellerman wrote: > "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes: > > > diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S > > index 85fa9869aec5..ec531de6 100644 > > --- a/arch/powerpc/lib/mem_

[PATCH 2/2] powerpc: bpf: use memset32() to pre-fill traps in BPF page(s)

2017-03-27 Thread Naveen N. Rao
Use the newly introduced memset32() to pre-fill BPF page(s) with trap instructions. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/net/bpf_jit_comp64.c | 6 +- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp6
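
A minimal sketch of the change's shape (image/size names are placeholders; memset32() itself is introduced in patch 1/2 of this series):

#include <linux/string.h>
#include <asm/ppc-opcode.h>

/* Illustrative only: fill the whole JIT image with the powerpc trap
 * instruction in a single call instead of looping over each word. */
static void example_fill_image_with_traps(u32 *image, unsigned int size)
{
	memset32(image, PPC_INST_TRAP, size / sizeof(u32));
}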

[PATCH 1/2] powerpc: string: implement optimized memset variants

2017-03-27 Thread Naveen N. Rao
Based on Matthew Wilcox's patches for other architectures. Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com> --- arch/powerpc/include/asm/string.h | 24 arch/powerpc/lib/mem_64.S | 19 ++- 2 files changed, 42 insertions(+), 1 de

Re: Optimised memset64/memset32 for powerpc

2017-03-27 Thread Naveen N. Rao
is obviously non-critical, but given that we have 64K pages on powerpc64, it does help to speed up the BPF JIT. - Naveen Naveen N. Rao (2): powerpc: string: implement optimized memset variants powerpc: bpf: use memset32() to pre-fill traps in BPF page(s) arch/powerpc/include/asm/string.h | 24
