works properly on a Power8 machine. More details in
the patch. All other patches are unchanged from v4.
- Naveen
Naveen N. Rao (10):
powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
paths
powerpc64/ftrace: Rearrange #ifdef sections in ftrace.h
powerpc64/ftrace: Add
Benjamin Herrenschmidt wrote:
On Wed, 2018-04-18 at 14:32 +0530, Naveen N. Rao wrote:
+#ifdef CONFIG_PPC_BOOK3S_64
+static char *print_trap(unsigned long trapno)
+{
+ trapno &= 0xff0;
+ switch (trapno) {
+ case 0x100: return "SRESET";
+
: 90009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 2822 XER: 2000
CFAR: c06e4770 DAR: DSISR: 4200 SOFTE: 0
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
I find this useful to have in backtraces, instead of having to look it
up. Some
Michael Ellerman wrote:
Nicholas Piggin writes:
On Sun, 8 Apr 2018 20:17:47 +1000
Balbir Singh wrote:
On Fri, Apr 6, 2018 at 3:56 AM, Nicholas Piggin wrote:
> This crashes with a "Bad real address for load" attempting to load
>
as when it is disabled.
Signed-off-by: Anton Blanchard <an...@samba.org>
Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
tools/testing/selftests/powerpc/Makefile | 3 +-
tools/testing/selftes
CPU_FTR_DAWR enabled. Guard __set_breakpoint() within
hw_breakpoint_disable() with ppc_breakpoint_available() to address this.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/hw_breakpoint.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff
implementation for ftrace_caller() that is used when registers
are not required to be saved.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/ftrace.h | 2 -
arch/powerpc/include/asm/module.h | 3 +
arch/powerpc/
Our implementation matches that of the generic version, which also
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace.
ies as early.
Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace
ABI")
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/module_64.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/ar
during
kexec.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/machine_kexec.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/machine_kexec.c
b/arch/powerpc/kernel/machine_kexec.c
index 2694d078741d..936c7e2d421e 100644
---
Disable ftrace when a cpu is about to go offline. When the cpu is woken
up, ftrace will get enabled in start_secondary().
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/smp.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/
ondary() for secondary cpus.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/setup_64.c | 10 +++---
arch/powerpc/kernel/smp.c | 4
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kerne
Add some helpers to enable/disable ftrace through paca->ftrace_enabled.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/ftrace.h | 17 +
1 file changed, 17 insertions(+)
diff --git a/arch/powerpc/include/asm/ftrace.h
b/arc
Re-arrange the last #ifdef section in preparation for a subsequent
change.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/ftrace.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/ftrace.h
ftrace by setting paca->ftrace_enabled to zero. Once we exit the
guest and restore host MMU context, we re-enable ftrace.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8
1 file changed, 8 insertions(+)
diff --git a/arc
uses a 'trap' to do its job.
For such scenarios, introduce a new field in paca 'ftrace_enabled',
which is checked on ftrace entry before continuing. This field can then
be set to zero to disable/pause ftrace, and set to a non-zero value to
resume ftrace.
Signed-off-by: Naveen N. Rao <navee
new
implementation of ftrace_caller() that saves the minimum register state
is provided. We switch between the two variants through
ftrace_modify_call(). The necessary support to call into the two
different variants from modules is also added.
- Naveen
Naveen N. Rao (10):
powerpc64/ftrace:
Michael Ellerman wrote:
Michael Ellerman <m...@ellerman.id.au> writes:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
If function_graph tracer is enabled during kexec, we see the below
exception in the simulator:
root@(none):/# kexec -e
Our implementation matches that of the generic version, which also
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace.
implementation for ftrace_caller() that is used when registers
are not required to be saved.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Changes since v2:
- Disable ftrace when asked for, in ftrace_caller().
arch/powerpc/include/asm/ftrace.h | 2 -
ies as early.
Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace
ABI")
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/module_64.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/ar
-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/machine_kexec.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/machine_kexec.c
b/arch/powerpc/kernel/machine_kexec.c
index 2694d078741d..4a1b24a9dd61 100644
--- a/arch/p
ftrace by setting paca->ftrace_disabled. Once we exit the guest
and restore host MMU context, we re-enable ftrace.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/po
ost...@goodmis.org>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
Changes since v2:
- Move paca->ftrace_disabled out of CONFIG_BOOK3S_64.
- Disable tracing when asked for, not the other way around.
arch/powerpc/include/asm/paca.h| 1 +
arch/pow
different variants from modules is also added.
- Naveen
Naveen N. Rao (6):
powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
paths
powerpc64/ftrace: Disable ftrace during kvm guest entry/exit
powerpc/kexec: Disable ftrace before switching to the new kernel
powerpc64
Naveen N. Rao wrote:
We have some C code that we call into from real mode where we cannot
take any exceptions. Though the C functions themselves are mostly safe,
if these functions are traced, there is a possibility that we may take
an exception. For instance, in certain conditions, the ftrace
Steven Rostedt wrote:
On Wed, 21 Mar 2018 20:59:03 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
Thanks for the review!
You're welcome. Note, I did put "Acked-by" and not "Reviewed-by"
because my "Reviewed-by" is usually a bit
Steven Rostedt wrote:
On Wed, 21 Mar 2018 20:07:32 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
I think that will always be set here. ftrace_64_mprofile.S is only built
for -mprofile-kernel and we select HAVE_DYNAMIC_FTRACE_WITH_REGS if
MPROFILE_KERNEL is e
Steven Rostedt wrote:
On Wed, 21 Mar 2018 16:13:22 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
{
mod->arch.toc = my_r2(sechdrs, mod);
- mod->arch.tramp = create_ft
uses a 'trap' to do its job.
For such scenarios, introduce a new field in paca 'ftrace_disabled',
which is checked on ftrace entry before continuing. This field can then
be set to a non-zero value to disable/pause ftrace, and reset to zero to
resume ftrace.
Signed-off-by: Naveen N. Rao <navee
ftrace by setting paca->ftrace_disabled. Once we exit the guest
and restore host MMU context, we re-enable ftrace.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/po
implementation for ftrace_caller() that is used when registers
are not required to be saved.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/ftrace.h | 2 -
arch/powerpc/include/asm/module.h | 3 +
arch/powerpc/
Our implementation matches that of the generic version, which also
handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in
favor of the generic version.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/trace/ftrace.
ies as early.
Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace
ABI")
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/module_64.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/ar
register
state is provided. We switch between the two variants through
ftrace_modify_call(). The necessary support to call into the two
different variants from modules is also added.
- Naveen
Naveen N. Rao (5):
powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
paths
Michael Ellerman wrote:
Nicholas Piggin <npig...@gmail.com> writes:
On Mon, 19 Mar 2018 14:43:00 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
We have some C code that we call into from real mode where we cannot
take any exceptions. Though the
Steven Rostedt wrote:
On Mon, 19 Mar 2018 14:43:00 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 3f3e81852422..fdf702b4df25 100644
--- a/arc
Nicholas Piggin wrote:
On Mon, 19 Mar 2018 14:43:00 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
We have some C code that we call into from real mode where we cannot
take any exceptions. Though the C functions themselves are mostly safe,
if these func
ftrace by setting paca->ftrace_disabled. Once we exit the guest
and restore host MMU context, we re-enable ftrace.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/po
for this currently, we guard the
ftrace/mcount checks within CONFIG_KVM. This can later be removed
if/when there are other users.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/paca.h| 1 +
arch/powerpc/kernel/asm-offsets.c | 1 +
and as such, it is guarded in CONFIG_KVM as
suggested by Steven Rostedt. This has had some minimal testing, and I
will continue to test it this week and report back if I see any issues.
- Naveen
Naveen N. Rao (2):
powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code
paths
Michael Ellerman wrote:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
My earlier assumption was that we have other scenarios when we are in
realmode (specifically with MSR_RI unset) where we won't be able to
recover from a trap, during function tracing
Steven Rostedt wrote:
On Thu, 08 Mar 2018 00:07:07 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
Yes, that's negligible.
Though, to be honest, I will have to introduce a 'mfmsr' for the older
-pg variant. I still think that the improved reliability far outw
Michael Ellerman wrote:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
We can't take a trap in most parts of real mode code. Instead of adding
the 'notrace' annotation to all C functions that can be invoked from
real mode, detect that we are in real mode on ftrace
Hi Steve,
Steven Rostedt wrote:
On Wed, 7 Mar 2018 22:16:19 +0530
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
We can't take a trap in most parts of real mode code. Instead of adding
the 'notrace' annotation to all C functions that can be invoked from
real mode
We can't take a trap in most parts of real mode code. Instead of adding
the 'notrace' annotation to all C functions that can be invoked from
real mode, detect that we are in real mode on ftrace entry and return
back.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
This RF
ofile-kernel, and would need to be updated
to deal with other ftrace entry code.
Naveen N. Rao (1):
powerpc/ftrace: Exclude real mode code from being traced
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 15 +++
1 file changed, 15 insertions(+)
--
2.16.1
Madhavan Srinivasan wrote:
Sampled Data Address Register (SDAR) is a 64-bit
register that contains the effective address of
the storage operand of an instruction that was
being executed, possibly out-of-order, at or around
the time that the Performance Monitor alert occurred.
In certain
Hi Sasha,
Sasha Levin wrote:
From: "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com>
[ Upstream commit 90ec5e89e393c76e19afc845d8f88a5dc8315919 ]
Sorry if this is obvious, but why was this patch picked up for -stable?
I don't see the upstream commit tagging -stable
Daniel Borkmann wrote:
On 02/27/2018 01:13 PM, Sandipan Das wrote:
With this patch, it will look like this:
0: (85) call pc+2#bpf_prog_8f85936f29a7790a+3
(Note the +2 is the insn->off already.)
1: (b7) r0 = 1
2: (95) exit
3: (b7) r0 = 2
4: (95) exit
where 8f85936f29a7790a is
Mark Lord wrote:
On 18-02-21 07:52 AM, Mark Lord wrote:
On 18-02-21 03:35 AM, Naveen N. Rao wrote:
..
Looks good to me, but I am not able to apply this patch. There seems to be
whitespace damage.
Here (attached) is a clean copy.
Again, this time with the commit message included!
Thanks
Mark Lord wrote:
I am using SECCOMP to filter syscalls on a ppc32 platform,
and noticed that the JIT compiler was failing on the BPF
even though the interpreter was working fine.
The issue was that the compiler was missing one of the instructions
used by SECCOMP, so here is a patch to enable
Michael Ellerman wrote:
"Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
Daniel Borkmann wrote:
On 02/15/2018 05:25 PM, Daniel Borkmann wrote:
On 02/13/2018 05:05 AM, Sandipan Das wrote:
The imm field of a bpf_insn is a signed 32-bit integer. For
JIT-ed bpf-to-bp
Daniel Borkmann wrote:
On 02/15/2018 05:25 PM, Daniel Borkmann wrote:
On 02/13/2018 05:05 AM, Sandipan Das wrote:
The imm field of a bpf_insn is a signed 32-bit integer. For
JIT-ed bpf-to-bpf function calls, it stores the offset from
__bpf_call_base to the start of the callee function.
For
Naveen N. Rao wrote:
Alexei Starovoitov wrote:
On 2/8/18 4:03 AM, Sandipan Das wrote:
The imm field of a bpf_insn is a signed 32-bit integer. For
JIT-ed bpf-to-bpf function calls, it stores the offset from
__bpf_call_base to the start of the callee function.
For some architectures
Alexei Starovoitov wrote:
On 2/8/18 4:03 AM, Sandipan Das wrote:
The imm field of a bpf_insn is a signed 32-bit integer. For
JIT-ed bpf-to-bpf function calls, it stores the offset from
__bpf_call_base to the start of the callee function.
For some architectures, such as powerpc64, it was found
Michael Ellerman wrote:
Ananth N Mavinakayanahalli <ana...@linux.vnet.ibm.com> writes:
On Wed, Jan 17, 2018 at 05:52:24PM +0530, Naveen N. Rao wrote:
Michael Ellerman reported the following call trace when running
ftracetest:
BUG: using __this_cpu_write() in preemptible [
ing
preemption and resetting current kprobe to the probe handlers
(kprobe_handler() or optimized_callback()).
Reported-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 30 +--
] perf_event_interrupt+0x298/0x460
[c0027964] performance_monitor_exception+0x54/0x70
[c0009ba4] performance_monitor_common+0x114/0x120
Fix this by dereferencing them safely.
Suggested-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
Signed-off-by: Ravi Bangoria <r
Michael Ellerman wrote:
Balbir Singh writes:
On Thu, Nov 23, 2017 at 4:32 AM, Mahesh J Salgaonkar
wrote:
From: Mahesh Salgaonkar
Rebooting into a new kernel with kexec fails in trace_tlbie() which is
called from
Mahesh Jagannath Salgaonkar wrote:
On 11/23/2017 12:37 AM, Naveen N. Rao wrote:
Mahesh J Salgaonkar wrote:
From: Mahesh Salgaonkar <mah...@linux.vnet.ibm.com>
Rebooting into a new kernel with kexec fails in trace_tlbie() which is
called from native_hpte_clear(). This happens if the r
Mahesh J Salgaonkar wrote:
From: Mahesh Salgaonkar
Rebooting into a new kernel with kexec fails in trace_tlbie() which is
called from native_hpte_clear(). This happens if the running kernel has
CONFIG_LOCKDEP enabled. With lockdep enabled, the tracepoints always
Kamalesh Babulal wrote:
On Thursday 16 November 2017 11:15 PM, Josh Poimboeuf wrote:
On Thu, Nov 16, 2017 at 06:39:03PM +0530, Naveen N. Rao wrote:
Josh Poimboeuf wrote:
On Wed, Nov 15, 2017 at 02:58:33PM +0530, Naveen N. Rao wrote:
+int instr_is_link_branch(unsigned int instr
Josh Poimboeuf wrote:
On Wed, Nov 15, 2017 at 02:58:33PM +0530, Naveen N. Rao wrote:
> +int instr_is_link_branch(unsigned int instr)
> +{
> + return (instr_is_branch_iform(instr) || instr_is_branch_bform(instr)) &&
> + (instr & BRANCH_SET_LINK);
> +}
>
Josh Poimboeuf wrote:
On Tue, Nov 14, 2017 at 03:59:21PM +0530, Naveen N. Rao wrote:
Kamalesh Babulal wrote:
> From: Josh Poimboeuf <jpoim...@redhat.com>
>
> When attempting to load a livepatch module, I got the following error:
>
> module_64: patch_module: Expect noo
Kamalesh Babulal wrote:
From: Josh Poimboeuf
When attempting to load a livepatch module, I got the following error:
module_64: patch_module: Expect noop after relocate, got 3c82
The error was triggered by the following code in
unregister_netdevice_queue():
14c:
Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> writes:
>
>> On 2017/06/19 03:21PM, Aneesh Kumar K.V wrote:
>>> > @@ -1445,8 +1446,8 @@ do_hash_page:
>>> > handle_page_fault:
>>> > andis. r0,r4,DSISR_
Josh Poimboeuf wrote:
On Tue, Nov 07, 2017 at 12:31:05PM +0100, Torsten Duwe wrote:
On Tue, Nov 07, 2017 at 07:34:29PM +1100, Michael Ellerman wrote:
> > So, just brainstorming a bit, here are the possible solutions I can
> > think of:
> >
> > a) Create a special klp stub for such calls (as in
On 2017/10/31 03:30PM, Torsten Duwe wrote:
> On Tue, Oct 31, 2017 at 07:49:59PM +0530, Naveen N . Rao wrote:
> > Hi Kamalesh,
> > Sorry for the late review. Overall, the patch looks good to me.
>
> If you're good with a hammer...
>
> Maybe I failed to express my views
Hi Kamalesh,
Sorry for the late review. Overall, the patch looks good to me. So:
Acked-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
However, I have a few minor comments which can be addressed in a
subsequent patch.
On 2017/10/17 05:18AM, Kamalesh Babulal wrote:
> Livepatch re-us
it needed to be dereferenced. This is actually
only an issue for kprobe blacklisted asm labels (through use of
_ASM_NOKPROBE_SYMBOL) and can cause other issues with ftrace. Also, the
additional checks are not really necessary for our other uses.
As such, move this check to the kprobes subsystem.
Signed-off-by
a recursive
loop.
Reported-by: Chandan Rajendra <chan...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/code-patching.h | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/cod
in ppc_function_entry() for all users.
- Naveen
Naveen N. Rao (2):
Revert "powerpc64/elfv1: Only dereference function descriptor for
non-text symbols"
powerpc/kprobes: Dereference function pointers only if the address
does not belong to kernel text
arch/powerpc/includ
On 2017/10/25 04:35PM, Masami Hiramatsu wrote:
> On Mon, 23 Oct 2017 22:07:41 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > Use safer string manipulation functions when dealing with a
> > user-provided string in kprobe_lookup_name().
On 2017/10/25 02:18AM, Masami Hiramatsu wrote:
> On Mon, 23 Oct 2017 22:07:38 +0530
> "Naveen N. Rao" <naveen.n@linux.vnet.ibm.com> wrote:
>
> > Per Documentation/kprobes.txt, probe handlers need to be invoked with
> > preemption disabled. Update opt
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight <david.lai...@aculab.com>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kp
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/lib/sstep.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 8c3955e183d4..70274b7b4773 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/ss
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes-ftrace.c | 10 ++
arch/powerpc/kernel/optprobes.c | 10 --
2 files changed, 2 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerp
if
CONFIG_PREEMPT was enabled. Commit a30b85df7d599f ("kprobes: Use
synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT=y") changes
this.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/optprobes.c | 5 +++--
1 file changed, 3 insertions(+), 2 de
pendency for registers that are not used for
> CLOCK_REALTIME_COARSE (Naveen)
> - Reorder instructions to get proper dependency setup (Naveen)
>
> arch/powerpc/kernel/asm-offsets.c | 2 +
> arch/powerpc/kernel/vdso64/gettimeofday.S | 68
> ++-
> 2 files changed, 59 insertions(+), 11 deletions(-)
Looks good to me.
Reviewed-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
Hi Santosh,
This seems to have gone from v4 to v6 -- did I miss v5?
On 2017/10/10 11:10PM, Santosh Sivaraj wrote:
> Current vDSO64 implementation does not have support for coarse clocks
> (CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME_COARSE), for which it falls back
> to system call, increasing the
On 2017/10/10 09:03AM, Santosh Sivaraj wrote:
> * Naveen N. Rao <naveen.n@linux.vnet.ibm.com> wrote (on 2017-10-09
> 10:39:18 +):
>
> > On 2017/10/09 08:09AM, Santosh Sivaraj wrote:
[snip]
> > > + add r3,r3,r0
> > > + ld r0,CFG_TB_UPD
nge analyse_instr so it doesn't modify
> *regs")
> Signed-off-by: Sandipan Das <sandi...@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
> ---
> v2: Make zero-checking condition more compact.
> Add details of original commit that
On 2017/10/09 11:07AM, Sandipan Das wrote:
> According to the GCC documentation, the behaviour of __builtin_clz()
> and __builtin_clzl() is undefined if the value of the input argument
> is zero. Without handling this special case, these builtins have been
> used for emulating the following
On 2017/10/09 08:09AM, Santosh Sivaraj wrote:
> Current vDSO64 implementation does not have support for coarse clocks
> (CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME_COARSE), for which it falls back
> to system call, increasing the response time, vDSO implementation reduces
> the cycle time. Below is a
On 2017/09/18 09:23AM, Santosh Sivaraj wrote:
> Current vDSO64 implementation does not have support for coarse clocks
> (CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME_COARSE), for which it falls back
> to system call, increasing the response time, vDSO implementation reduces
> the cycle time. Below is a
On 2017/09/18 09:23AM, Santosh Sivaraj wrote:
> Current vDSO64 implementation does not have support for coarse clocks
> (CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME_COARSE), for which it falls back
> to system call, increasing the response time, vDSO implementation reduces
> the cycle time. Below is a
Hi Santosh,
On 2017/09/18 09:23AM, Santosh Sivaraj wrote:
> Reorganize code to make it easy to introduce CLOCK_REALTIME_COARSE and
> CLOCK_MONOTONIC_COARSE timer support.
>
> Signed-off-by: Santosh Sivaraj
> ---
> arch/powerpc/kernel/vdso64/gettimeofday.S | 14
egular stack and is used to store/restore TOC/LR values, other than
> the stub setup and branch. The additional instruction sequences to
> handle klp_stub increase the stub size and current ppc64_stub_insn[]
> is not sufficient to hold them. This patch also introduces new
> ppc6
ues, other than
> the stub setup and branch. The additional instruction sequences to handle
> klp_stub increase the stub size and current ppc64_stub_insn[] is not
> sufficient to hold them. This patch also introduces new
> ppc64le_klp_stub_entry[], along with the helpers to find/allocate
> livep
n behaviour of all these instructions needs to
> be updated to set these new bits accordingly.
>
> Signed-off-by: Sandipan Das <sandi...@linux.vnet.ibm.com>
For this series:
Acked-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
> ---
> arch/powerpc/lib/sstep.c | 2 ++
>
Fix a circa 2005 FIXME by implementing a check to ensure that we
actually got into the jprobe break handler() due to the trap in
jprobe_return().
Acked-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kp
dler(). Disable it.
Fixes: ead514d5fb30a0 ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
Acked-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes-ftrace.c | 15 +++
patch
renames is_current_kprobe_addr() to __is_active_jprobe() and adds a
comment to (hopefully) better clarify the purpose of this helper. The
helper has also now been moved to kprobes-ftrace.c so that it is only
available for KPROBES_ON_FTRACE.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ib
check done by __this_cpu_read().
Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
Reported-by: Kamalesh Babulal <kamal...@linux.vnet.ibm.com>
Tested-by: Kamalesh Babulal <kamal...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@li
1. This is only used in kprobes.c, so make it static.
2. Remove the unnecessary (ret == 0) comparison in the else clause.
Reviewed-by: Masami Hiramatsu <mhira...@kernel.org>
Reviewed-by: Kamalesh Babulal <kamal...@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n@lin
at least once, then we single step only this probe hit and
continue to try emulating the instruction in subsequent probe hits.
Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
---
arch/powerpc/kernel/kprobes.c | 17 ++---
1 file changed, 14 insertions(+), 3 del
detection of jprobe in
ftrace_caller() and that this is only for KPROBES_ON_FTRACE.
- Naveen
Naveen N. Rao (6):
powerpc/kprobes: Some cosmetic updates to try_to_emulate()
powerpc/kprobes: Do not suppress instruction emulation if a single run
failed
powerpc/kprobes: Clean up jprobe
On 2017/09/21 09:00PM, Balbir Singh wrote:
> On Thu, Sep 21, 2017 at 8:02 PM, Michael Ellerman wrote:
> > Kamalesh Babulal writes:
> >
> >> While running stress test with livepatch module loaded, kernel
> >> bug was triggered.
> >>
> >> cpu 0x5:
g task stack and livepatch stack into r1 register.
> Using r11 register also avoids disabling/enabling irq's while setting
> up the livepatch stack.
>
> Signed-off-by: Kamalesh Babulal <kamal...@linux.vnet.ibm.com>
> Cc: Balbir Singh <bsinghar...@gmail.com>
> Cc: Naveen N. Rao &