On 2017/05/17 11:40AM, Balbir Singh wrote:
> On Tue, 2017-05-16 at 19:05 +0530, Naveen N. Rao wrote:
> > On 2017/05/16 01:49PM, Balbir Singh wrote:
> > > arch_arm/disarm_probe use direct assignment for copying
instructions, replace them with patch_instruction
Paolo Bonzini wrote:
The ARM and x86 architectures already use libdw, and it is useful to
have as much common code for the unwinder as possible. Porting PPC
to libdw only needs an architecture-specific hook to move the register
state from perf to libdw.
Thanks. Ravi has had a similar patch
On 2017/05/16 10:56AM, Anshuman Khandual wrote:
> On 05/16/2017 09:19 AM, Balbir Singh wrote:
> > patch_instruction is enhanced in this RFC to support
> > patching via a different virtual address (text_poke_area).
>
> Why writing instruction directly into the address is not
> sufficient and need
On 2017/05/16 01:49PM, Balbir Singh wrote:
> arch_arm/disarm_probe use direct assignment for copying
> instructions, replace them with patch_instruction
Thanks for doing this!
We will also have to convert optprobes and ftrace to use
patch_instruction, but that can be done once the basic
re-enabling preemption if the instruction emulation was successful. Fix
those issues.
Fixes: 22d8b3dec214c ("powerpc/kprobes: Emulate instructions on kprobe
handler re-entry")
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Michael,
Sorry for letting this slip thr
Fix a circa 2005 FIXME by implementing a check to ensure that we
actually got into the jprobe break_handler() due to the trap in
jprobe_return().
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 20 +---
1 file chan
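The check described above can be sketched in a few lines of user-space C. This is a hypothetical stand-in, not the kernel code: `is_jprobe_break()` and `struct jstate` are invented names, and the idea is only that the break handler should unwind jprobe state when the trapping NIP matches the trap instruction inside jprobe_return(), and pass the break on otherwise.

```c
#include <stdbool.h>

/* Hypothetical sketch: only treat the break as the jprobe exit trap
 * when it came from the known trap address inside jprobe_return();
 * any other break belongs to someone else. */
struct jstate {
	unsigned long return_trap_addr; /* address of the trap in jprobe_return() */
};

bool is_jprobe_break(const struct jstate *js, unsigned long trap_nip)
{
	return trap_nip == js->return_trap_addr;
}
```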
frame header.
We introduce STACK_FRAME_PARM_SAVE to encode the offset of the parameter
save area from the stack frame pointer. Remove the similarly named
PARAMETER_SAVE_AREA_OFFSET in ptrace.c as those are currently not used
anywhere.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.
On 2017/05/04 12:45PM, David Laight wrote:
> From: Naveen N. Rao [mailto:naveen.n@linux.vnet.ibm.com]
> > Sent: 04 May 2017 11:25
> > Use safer string manipulation functions when dealing with a
> > user-provided string in kprobe_lookup_name().
> >
> > Rep
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight <david.lai...@aculab.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Changed to ignore return value of 0 from strscpy(),
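The contract that makes strscpy() the safer choice here can be sketched in user space. `safe_copy()` below is a stand-in for the kernel's strscpy(), not the real implementation: it returns the number of characters copied, or an E2BIG-style error on truncation, and always NUL-terminates, unlike strncpy() which can silently leave the destination unterminated.

```c
#include <stddef.h>
#include <string.h>

#define E2BIG_ERR (-7) /* stand-in for the kernel's -E2BIG */

/* Copy src into dst (size bytes). Returns the number of characters
 * copied (excluding the NUL), or E2BIG_ERR if src did not fit.
 * dst is always NUL-terminated when size > 0. */
long safe_copy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return E2BIG_ERR;

	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return E2BIG_ERR;
	}

	memcpy(dst, src, len + 1);
	return (long)len;
}
```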
and mtmsr instructions (checked for in arch_prepare_kprobe).
Suggested-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Michael,
I have named the new label system_call_exit so as to follow the
existing labels
On 2017/05/04 04:03PM, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
>
> > On 2017/04/27 08:19PM, Michael Ellerman wrote:
> >> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
> >>
&
On 2017/04/27 02:06PM, Naveen N. Rao wrote:
> v2 changes:
> - Patches 3 and 4 from the previous series have been merged.
> - Updated to no longer blacklist functions involved with stolen time
> accounting.
>
> v1:
> https://www.mail-archive.com/linuxppc-dev@lists.ozla
HAVE_FUNCTION_GRAPH_FP_TEST reveals another area (apart from jprobes)
that conflicts with the function_graph tracer: xmon. This is due to the
use of longjmp() in various places in xmon.
To address this, pause function_graph tracing while in xmon.
Signed-off-by: Naveen N. Rao <navee
This is very handy to catch potential crashes due to unexpected
interactions of function_graph tracer with weird things like
jprobes.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++-
arch/powerpc/include/asm/ft
for saving the original NIP and r15 for storing the
possibly modified NIP. r15 is later used to determine if the function
has been livepatched.
3. To re-use the same stack frame setup/teardown code, we have
ftrace_graph_caller() save the modified LR in pt_regs.
Signed-off-by: Naveen N. Rao <navee
the first _20_ bytes of
a function.
However, ftrace_location_range() does an inclusive search and hence
passing (addr + 16) is still accurate.
Clarify the same by updating comments around this.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/livepatch
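The inclusive-range point above can be illustrated with a small sketch. `location_range()` is a hypothetical stand-in for ftrace_location_range(), which searches the recorded mcount sites with both endpoints inclusive; with 4-byte powerpc instructions, passing (addr, addr + 16) therefore spans five instruction slots, i.e. the first 20 bytes of a function.

```c
#include <stddef.h>

/* Hypothetical stand-in for ftrace_location_range(): return the first
 * recorded address in [start, end] (both endpoints inclusive), or 0
 * if no recorded site falls inside the range. */
unsigned long location_range(const unsigned long *recs, size_t n,
			     unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < n; i++)
		if (recs[i] >= start && recs[i] <= end)
			return recs[i];
	return 0;
}
```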
remove the redundant saving of LR in
ftrace_graph_caller() for similar reasons. It is sufficient to ensure
LR and r0 point to the new return address.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 4
1 file chan
r.
So, if NIP == R12, we know we came here due to jprobes and we just
branch to the new IP. Otherwise, we continue with livepatch processing
as usual.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 10 ++
1 file
. Also, use SAVE_10GPRS() to simplify the code.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofil
jprobe_return(), which never returns back to the hook, but instead to
the original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and re-enable it
when returning back to the original jprobe'd function.
Signed-off-by: Naveen N. Rao
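The pause/re-enable dance above can be sketched with a counter, as a rough analogue of the kernel's pause_graph_tracing()/unpause_graph_tracing() per-task counter. All names here are invented for illustration: entries are only traced while the counter is zero, so the jprobe hook's unusual return path cannot corrupt the return-address stack.

```c
/* Sketch: a pause counter gates graph-entry tracing around the
 * jprobe hook, mirroring the kernel's per-task pause counter. */
static int graph_pause_count;
static int traced_entries;

void pause_graph(void)   { graph_pause_count++; }
void unpause_graph(void) { graph_pause_count--; }

void trace_graph_entry(void)
{
	if (graph_pause_count == 0)
		traced_entries++;
}

/* A hook that would normally be traced. */
void sample_hook(void) { trace_graph_entry(); }

int run_jprobe_hook(void (*hook)(void))
{
	pause_graph();   /* before invoking the jprobe hook */
	hook();
	unpause_graph(); /* on return to the original jprobe'd function */
	return traced_entries;
}
```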
will be coding up and sending
across in a day or two.
This series has been run through ftrace selftests.
- Naveen
Naveen N. Rao (8):
powerpc/kprobes: Pause function_graph tracing during jprobes handling
powerpc/ftrace: Pass the correct stack pointer for
DYNAMIC_FTRACE_WITH_REGS
powerpc
[Copying linuxppc-dev list which I missed cc'ing initially]
On 2017/05/03 03:58PM, Steven Rostedt wrote:
> On Wed, 3 May 2017 23:43:41 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > This fixes a crash when function_grap
On 2017/04/27 08:19PM, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
>
> > It is actually safe to probe system_call() in entry_64.S, but only till
> > .Lsyscall_exit. To allow this, convert .Lsyscall_exit to a
It is actually safe to probe system_call() in entry_64.S, but only till
.Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local
symbol __system_call() and blacklist that symbol, rather than
system_call().
Reviewed-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Na
Blacklist all functions involved while handling a trap. We:
- convert some of the labels into private labels,
- remove the duplicate 'restore' label, and
- blacklist most functions involved while handling a trap.
Reviewed-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Naveen
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 9b541d22595a..380361c0bb6a 10064
I have converted many labels into private -- these are labels that I
felt are not necessary to read stack traces. If any of those are
important to have, please let me know.
- Naveen
Naveen N. Rao (3):
powerpc/kprobes: cleanup system_call_common and blacklist it from
kprobes
powerpc/kprobes: un-blacklist system_call() from
On 2017/04/27 11:24AM, Masami Hiramatsu wrote:
> Hello Naveen,
>
> On Tue, 25 Apr 2017 22:04:05 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > This is the second in the series of patches to build out an appropriate
> > kprobes bl
Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
>> diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
>> index 6a3b249a2ae1..d134b060564f 100644
>> --- a/kernel/kallsyms.c
>> +++ b/kernel/kallsyms.c
>> @@ -20
Excerpts from Masami Hiramatsu's message of April 26, 2017 10:11:
On Tue, 25 Apr 2017 21:37:11 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-
Excerpts from David Laight's message of April 25, 2017 22:06:
From: Naveen N. Rao
Sent: 25 April 2017 17:18
1. Fail early for invalid/zero length symbols.
2. Detect names of the form <module:name> and skip checking for kernel
symbols in that case.
Signed-off-by: Naveen N. Rao <navee
Blacklist all functions involved when we return from a trap. We:
- convert some of the labels into private labels,
- remove the duplicate 'restore' label, and
- blacklist most functions involved during returning from a trap.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Blacklist all functions invoked when we get a trap, through to the time
we invoke the kprobe handler.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 1 +
arch/powerpc/kernel/exceptions-64s.S | 1 +
arch/powerpc/kernel/
It is actually safe to probe system_call() in entry_64.S, but only till
.Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local
symbol __system_call() and blacklist that symbol, rather than
system_call().
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Convert some of the labels into private labels and blacklist
system_call_common() and system_call() from kprobes. We can't take a
trap at parts of these functions as either MSR_RI is unset or the
kernel stack pointer is not yet setup.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.
once I expand my tests.
I have converted many labels into private -- these are labels that I
felt are not necessary to read stack traces. If any of those are
important to have, please let me know.
- Naveen
Naveen N. Rao (4):
powerpc/kprobes: cleanup system_call_common and blacklist it from
1. Fail early for invalid/zero length symbols.
2. Detect names of the form <module:name> and skip checking for kernel
symbols in that case.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Masami, Michael,
I have added two very simple checks here, which I felt is good to have,
rathe
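The two checks can be sketched as one user-space helper. This is an illustration only, assuming the elided "names of the form" refers to the module:symbol notation kprobes accepts; `validate_probe_name()` and `SYM_NAME_MAX` are invented names (the kernel uses KSYM_NAME_LEN).

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SYM_NAME_MAX 128 /* stand-in for the kernel's KSYM_NAME_LEN */

/* Sketch: reject empty or over-long names early, and flag names
 * containing ':' (module:symbol form) so the caller can skip the
 * kernel symbol-table check -- module symbols only resolve at
 * module load time. */
int validate_probe_name(const char *name, bool *skip_kernel_check)
{
	size_t len = name ? strnlen(name, SYM_NAME_MAX + 1) : 0;

	if (len == 0 || len > SYM_NAME_MAX)
		return -1; /* fail early */

	*skip_kernel_check = memchr(name, ':', len) != NULL;
	return 0;
}
```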
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight <david.lai...@aculab.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kp
<m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/Makefile | 9 +-
arch/powerpc/kernel/entry_32.S| 107 ---
arch/powerpc/kernel/entry_64.S| 378 -
arch/powerpc/
Split ftrace_64.S further retaining the core ftrace 64-bit aspects
in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into
separate files based on -mprofile-kernel. The livepatch routines are all
now contained within the mprofile file.
Signed-off-by: Naveen N. Rao <navee
Excerpts from Michael Ellerman's message of April 22, 2017 11:25:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided fie
:33 AM, Naveen N. Rao wrote:
Convert usage of strchr()/strncpy()/strncat() to
strnchr()/memcpy()/strlcat() for simpler and safer string manipulation.
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 97b5eed1f76d..c73fb6e3b43f 100644
--- a/arch/powerpc/kernel
Excerpts from Masami Hiramatsu's message of April 21, 2017 19:12:
On Wed, 19 Apr 2017 16:38:22 +
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07:
> On Wed, 19 Apr 2017 18:21:02 +0530
> &qu
Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
This patch implements CONFIG_DEBUG_RODATA on PPC32.
As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.
As BAT/LTLB mappings are deactivated,
Excerpts from Paul Clarke's message of April 21, 2017 18:41:
a nit or two, below...
On 04/21/2017 07:32 AM, Naveen N. Rao wrote:
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6a128f3a7ed1..ff9b1ac72a38 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1383,6 +1383,34 @@ bool
Convert usage of strchr()/strncpy()/strncat() to
strnchr()/memcpy()/strlcat() for simpler and safer string manipulation.
Reported-by: David Laight <david.lai...@aculab.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Changes: Additionally convert the strch
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the length of the string is
within expected limits.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.
Excerpts from Michael Ellerman's message of April 20, 2017 11:38:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6a128f3a7ed1..bb86681c8a10 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1382,6
Excerpts from Michael Ellerman's message of April 20, 2017 12:03:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 71286dfd76a0..59159337a097 100644
--- a/arch/powerpc/kernel/kprobes.c
++
, I followed this since I
felt that Michael Ellerman prefers to keep functional changes separate
from refactoring. I'm fine with either approach.
Michael?
Thanks!
- Naveen
Thank you,
On Wed, 19 Apr 2017 18:21:05 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wr
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07:
On Wed, 19 Apr 2017 18:21:02 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should
Blacklist all the exception common/OOL handlers as the kernel stack is
not yet setup, which means we can't take a trap at this point.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/head-64.h | 1 +
1 file changed, 1 insertion(+)
diff --git
Introduce __head_end to mark end of the early fixed sections and use the
same to blacklist all exception handlers from kprobes.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/sections.h | 1 +
arch/powerpc/kernel/kprobes.c | 9 +
, I'm posting these right away.
I'd especially appreciate a review of the first patch and feedback on
whether it does the right thing with/without relocation. My tests
didn't reveal any issues.
Thanks,
Naveen
Naveen N. Rao (2):
powerpc: kprobes: blacklist exception handlers
powerpc: kprobes
k kretprobe_trampoline+0x0[OPTIMIZED]
and after patch:
# cat ../kprobes/list
c00d074c k _do_fork+0xc[DISABLED][FTRACE]
c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED]
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/
kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use the same in optprobes.c
Signed-off-by: Naveen N. Rao <navee
.
Live patch and function graph continue to work fine with this change.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/p
From: Masami Hiramatsu <mhira...@kernel.org>
Skip preparing optprobe if the probe is ftrace-based, since anyway, it
must not be optimized (or already optimized by ftrace).
Tested-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
Signed-off-by: Masami Hiramatsu <mhira...@kernel.o
as we crash on powerpc without that patch.
- Naveen
Masami Hiramatsu (1):
kprobes: Skip preparing optprobe if the probe is ftrace-based
Naveen N. Rao (5):
powerpc: ftrace: minor cleanup
powerpc: ftrace: restore LR from pt_regs
powerpc: kprobes: add support for KPROBES_ON_FTRACE
powerpc
on the x86 code by Masami.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/
livepatch_handler()
nor ftrace_graph_caller() return back here.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entr
No functional changes.
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 52 ++-
1 file changed, 31 insertions(+), 21 deletions(-)
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove
the redundant save.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/power
Convert usage of strncpy()/strncat() to memcpy()/strlcat() for simpler
and safer string manipulation.
Reported-by: David Laight <david.lai...@aculab.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 11 +--
1 fil
On kprobe handler re-entry, try to emulate the instruction rather than
single stepping always.
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 8
1 fil
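The control flow of the re-entry change can be sketched with stubs. `emulate_insn()` is a hypothetical stand-in for the kernel's emulate_step(), which returns a positive value when it fully emulated the instruction; only when emulation is not possible does the handler fall back to hardware single-stepping. In this sketch only the powerpc nop encoding (0x60000000) is treated as emulatable.

```c
#include <stdbool.h>

struct regs { unsigned long nip; };

static bool single_stepped;

/* Stand-in for emulate_step(): pretend only "nop" is emulatable. */
int emulate_insn(struct regs *regs, unsigned int insn)
{
	if (insn == 0x60000000) {
		regs->nip += 4; /* advance past the emulated instruction */
		return 1;
	}
	return 0;
}

void kprobe_reenter(struct regs *regs, unsigned int insn)
{
	if (emulate_insn(regs, insn) > 0)
		return;            /* emulated: no single-step round trip */
	single_stepped = true; /* fall back to single-stepping */
}
```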
;
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/kprobes.h | 53 --
arch/powerpc/kernel/kprobes.c | 58 ++
arch/powerpc/kernel/optprobes.c| 4 +--
include/linux/kprobes.h
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the length of the string is
within expected limits.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.
kretprobe_trampoline+0x0[OPTIMIZED]
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 4 ++--
arch/powerpc/kernel/optprobes.c | 4 ++--
include/linux/kprobes.h | 2 +-
kernel/kp
address review comments
from David Laight.
- Naveen
Naveen N. Rao (7):
kprobes: convert kprobe_lookup_name() to a function
powerpc: kprobes: fix handling of function offsets on ABIv2
kprobes: validate the symbol name length
powerpc: kprobes: use safer string functions in kprobe_lookup_name(
On 2017/04/19 08:48AM, David Laight wrote:
> From: Naveen N. Rao
> > Sent: 19 April 2017 09:09
> > To: David Laight; Michael Ellerman
> > Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami
> > Hiramatsu; Ingo Molnar
> > Subject: RE
Excerpts from David Laight's message of April 18, 2017 18:22:
From: Naveen N. Rao
Sent: 12 April 2017 11:58
...
+kprobe_opcode_t *kprobe_lookup_name(const char *name)
+{
...
+ char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
+ const char *modsym;
+ bool dot_appended
RACE=y
+CONFIG_FUNCTION_TRACER=y
+CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_SCHED_TRACER=y
+CONFIG_FTRACE_SYSCALLS=y
Any reason to not enable this for ppc64 and pseries defconfigs?
Apart from that, for this patch:
Acked-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
- Naveen
CONFIG_BLK_DE
Excerpts from Masami Hiramatsu's message of April 13, 2017 10:04:
On Wed, 12 Apr 2017 16:28:27 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No f
On 2017/04/13 01:37PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:28 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > On kprobe handler re-entry, try to emulate the instruction rather than
> > single stepping always.
>
On 2017/04/13 01:34PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:27 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > This helper will be used in a subsequent patch to emulate instructions
> > on re-entering the
On 2017/04/13 01:32PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:26 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > kprobe_lookup_name() is specific to the kprobe subsystem and may not
> > always return the functi
On 2017/04/13 12:02PM, Masami Hiramatsu wrote:
> Hi Naveen,
Hi Masami,
>
> BTW, I saw you sent 3 different series, are there any
> conflict each other? or can we pick those independently?
Yes, all these three patch series are based off powerpc/next and they do
depend on each other, as they
Excerpts from PrasannaKumar Muralidharan's message of April 5, 2017 11:21:
On 30 March 2017 at 12:46, Naveen N. Rao
<naveen.n@linux.vnet.ibm.com> wrote:
Also, with a simple module to memset64() a 1GB vmalloc'ed buffer, here
are the results:
generic:0.245315533 seconds time e
/perf$ sudo cat /sys/kernel/debug/kprobes/list
c05f3b48 k read_mem+0x8[DISABLED]
Acked-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
v2:
- rebased on top of powerpc/next along with related kprobes patch
v3:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg114669.html
For v4, this has been rebased on top of powerpc/next as well as the
KPROBES_ON_FTRACE series. No other changes.
- Naveen
Naveen N. Rao (2):
powerpc: split ftrace bits into a separate file
powerpc: ftrace_64: split
<m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/Makefile | 9 +-
arch/powerpc/kernel/entry_32.S| 107 ---
arch/powerpc/kernel/entry_64.S| 379 -
arch/powerpc/
Split ftrace_64.S further retaining the core ftrace 64-bit aspects
in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into
separate files based on -mprofile-kernel. The livepatch routines are all
now contained within the mprofile file.
Signed-off-by: Naveen N. Rao <navee
on the x86 code by Masami.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/
From: Masami Hiramatsu <mhira...@kernel.org>
Skip preparing optprobe if the probe is ftrace-based, since anyway, it
must not be optimized (or already optimized by ftrace).
Tested-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
Signed-off-by: Masami Hiramatsu <mhira...@kernel.
.
Live patch and function graph continue to work fine with this change.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/p
k kretprobe_trampoline+0x0[OPTIMIZED]
and after patch:
# cat ../kprobes/list
c00d074c k _do_fork+0xc[DISABLED][FTRACE]
c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED]
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/
without that patch.
- Naveen
Masami Hiramatsu (1):
kprobes: Skip preparing optprobe if the probe is ftrace-based
Naveen N. Rao (4):
powerpc: ftrace: minor cleanup
powerpc: ftrace: restore LR from pt_regs
powerpc: kprobes: add support for KPROBES_ON_FTRACE
powerpc: kprobes: prefer ftrace
livepatch_handler()
nor ftrace_graph_caller() return back here.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/entry_64.S | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entr
kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use the same in optprobes.c
Signed-off-by: Naveen N. Rao <navee
This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No functional change.
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kp
On kprobe handler re-entry, try to emulate the instruction rather than
single stepping always.
As a related change, remove the duplicate saving of msr as that is
already done in set_current_kprobe()
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen
The macro is now pretty long and ugly on powerpc. In the light of
further changes needed here, convert it to a __weak variant to be
overridden with a nicer-looking function.
Suggested-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vne
kretprobe_trampoline+0x0[OPTIMIZED]
Acked-by: Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 4 ++--
arch/powerpc/kernel/optprobes.c | 4 ++--
include/linux/kprobes.h | 2 +-
kernel/kp
v1:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1334843.html
For v2, this series has been re-ordered and rebased on top of
powerpc/next so as to make it easier to resolve conflicts with -tip. No
other changes.
- Naveen
Naveen N. Rao (5):
kprobes: convert kprobe_lookup_name
Also update the above comment to refer to 'stdu'?
Apart from that, for this patch:
Reviewed-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
- Naveen
- lwz r5,GPR1(r1)
+ ld r5,GPR1(r1)
std r8,0(r5)
/* Clear _TIF_EMULATE_STACK_STORE flag */
--
1.9.3
On 2017/03/29 10:36PM, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
> > I also tested zram today with the command shared by Wilcox:
> >
> > without patch: 1.493782568 seconds time elapsed( +- 0.08% )
> &g
On 2017/03/28 11:44AM, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
>
> > diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
> > index 85fa9869aec5..ec531de6 100644
> > --- a/arch/powerpc/lib/mem_
Use the newly introduced memset32() to pre-fill BPF page(s) with trap
instructions.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/net/bpf_jit_comp64.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp6
Based on Matthew Wilcox's patches for other architectures.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/string.h | 24
arch/powerpc/lib/mem_64.S | 19 ++-
2 files changed, 42 insertions(+), 1 de
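The idea is easy to see in a portable C sketch; the patch's powerpc version is hand-written assembly, so `memset32_sketch()` is only an illustration of the semantics. Pre-filling a BPF JIT page with the powerpc trap opcode (0x7fe00008, i.e. "trap") means any stray jump into unpopulated space faults immediately.

```c
#include <stddef.h>
#include <stdint.h>

#define PPC_INST_TRAP 0x7fe00008u /* powerpc "trap" (tw 31,0,0) encoding */

/* Portable sketch of memset32(): fill a buffer with a repeated
 * 32-bit value, one store per element. */
void *memset32_sketch(uint32_t *p, uint32_t v, size_t count)
{
	for (size_t i = 0; i < count; i++)
		p[i] = v;
	return p;
}
```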
is obviously non-critical, but given that we have
64K pages on powerpc64, it does help to speed up the BPF JIT.
- Naveen
Naveen N. Rao (2):
powerpc: string: implement optimized memset variants
powerpc: bpf: use memset32() to pre-fill traps in BPF page(s)
arch/powerpc/include/asm/string.h | 24