> ---
> v1 -> v2:
> 1. Remove helpers of extended_cede_processor()
Acked-by: Naveen N Rao
>
> arch/powerpc/include/asm/plpar_wrappers.h | 28 ---
> 1 file changed, 28 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/plpar_wrapp
On Tue, May 14, 2024 at 03:35:03PM GMT, Gautam Menghani wrote:
> Remove extended_cede_processor() definition as it has no callers since
> commit 48f6e7f6d948 ("powerpc/pseries: remove cede offline state for CPUs")
extended_cede_processor() was added in commit 69ddb57cbea0
("powerpc/pseries: Add
On Tue, May 14, 2024 at 04:39:30AM GMT, Christophe Leroy wrote:
>
>
> Le 14/05/2024 à 04:59, Benjamin Gray a écrit :
> > On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
> >> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> >>> This u
dbe6e2456fb0 ("powerpc/bpf/64: add support for atomic fetch operations")
Fixes: 1e82dfaa7819 ("powerpc/bpf/64: Add instructions for atomic_[cmp]xchg")
> Signed-off-by: Puranjay Mohan
> Acked-by: Paul E. McKenney
Cc: sta...@vger.kernel.org # v6.0+
I have tested this with test_bpf and test_progs.
Reviewed-by: Naveen N Rao
- Naveen
On Wed, May 08, 2024 at 11:54:04AM GMT, Puranjay Mohan wrote:
> The Linux Kernel Memory Model [1][2] requires RMW operations that have a
> return value to be fully ordered.
>
> BPF atomic operations with BPF_FETCH (including BPF_XCHG and
> BPF_CMPXCHG) return a value back so they need to be JITed
return -EINVAL;
>
> - if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
> - reladdr = func_addr - local_paca->kernelbase;
> +#ifdef CONFIG_PPC_KERNEL_PCREL
Would be good to retain use of IS_ENABLED().
Reviewed-by: Naveen N Rao
- Naveen
0x8000L)) {
> @@ -233,9 +235,9 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct
> codegen_context *ctx, u
>
> EMIT(PPC_RAW_ADDIS(_R12, _R2, PPC_HA(reladdr)));
> EMIT(PPC_RAW_ADDI(_R12, _R12, PPC_LO(reladdr)));
> - EMIT(PPC_RAW_MTCTR(_R12));
> - EMIT(PPC_RAW_BCTRL());
> }
> + EMIT(PPC_RAW_MTCTR(_R12));
> + EMIT(PPC_RAW_BCTRL());
This change shouldn't be necessary since these instructions are moved
back into the conditional in the next patch.
Other than those minor comments:
Reviewed-by: Naveen N Rao
- Naveen
_to_ns(be64_to_cpu(lp->l2_to_l1_cs_tb));
> + l2_runtime_ns = tb_to_ns(be64_to_cpu(lp->l2_runtime_tb));
> + trace_kvmppc_vcpu_stats(vcpu, l1_to_l2_ns - local_paca->l1_to_l2_cs,
> + l2_to_l1_ns - local_paca->l2_to_l1_cs,
> +
On Wed, Apr 24, 2024 at 11:08:38AM +0530, Gautam Menghani wrote:
> On Mon, Apr 22, 2024 at 09:15:02PM +0530, Naveen N Rao wrote:
> > On Tue, Apr 02, 2024 at 12:36:54PM +0530, Gautam Menghani wrote:
> > > static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64
powermac/smp.c | 2 +-
> 6 files changed, 132 insertions(+), 20 deletions(-)
Apart from the minor comments, for this series:
Acked-by: Naveen N Rao
Thanks for working on this.
- Naveen
On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> This use of patch_instruction() is working on 32 bit data, and can fail
> if the data looks like a prefixed instruction and the extra write
> crosses a page boundary. Use patch_u32() to fix the write size.
>
> Fixes: 8734b41b3efe
On Mon, Mar 25, 2024 at 04:53:02PM +1100, Benjamin Gray wrote:
> Extend the code patching selftests with some basic coverage of the new
> data patching variants too.
>
> Signed-off-by: Benjamin Gray
>
> ---
>
> v3: * New in v3
> ---
> arch/powerpc/lib/test-code-patching.c | 36
On Tue, Apr 02, 2024 at 12:36:54PM +0530, Gautam Menghani wrote:
> PAPR hypervisor has introduced three new counters in the VPA area of
> LPAR CPUs for KVM L2 guest (see [1] for terminology) observability - 2
> for context switches from host to guest and vice versa, and 1 counter
> for getting the
On Tue, Apr 02, 2024 at 04:28:06PM +0530, Hari Bathini wrote:
> Currently, bpf jit code on powerpc assumes all the bpf functions and
> helpers to be kernel text. This is false for kfunc case, as function
> addresses can be module addresses as well. So, ensure module addresses
> are supported to
On Tue, Feb 13, 2024 at 07:54:27AM +, Christophe Leroy wrote:
>
>
> Le 01/02/2024 à 18:12, Hari Bathini a écrit :
> > With module addresses supported, override bpf_jit_supports_kfunc_call()
> > to enable kfunc support. Module address offsets can be more than 32-bit
> > long, so override
On Thu, Feb 01, 2024 at 10:42:48PM +0530, Hari Bathini wrote:
> Currently, bpf jit code on powerpc assumes all the bpf functions and
> helpers to be kernel text. This is false for kfunc case, as function
> addresses are mostly module addresses in that case. Ensure module
> addresses are supported
;)
Cc: sta...@vger.kernel.org
Reported-by: Michael Ellerman
Signed-off-by: Naveen N Rao
Reviewed-by: Benjamin Gray
---
v2:
- Rename exit text section variable name to match other architectures
- Fix clang builds
I've collected Benjamin's Reviewed-by tag since those parts of the patch
remain the same.
On Mon, Feb 12, 2024 at 07:31:03PM +, Christophe Leroy wrote:
>
>
> Le 09/02/2024 à 08:59, Naveen N Rao a écrit :
> > diff --git a/arch/powerpc/include/asm/sections.h
> > b/arch/powerpc/include/asm/sections.h
> > index ea26665f82cf..d389dcecdb0b 100644
> >
;)
Cc: sta...@vger.kernel.org
Reported-by: Michael Ellerman
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 9 +
arch/powerpc/include/asm/sections.h | 1 +
arch/powerpc/kernel/trace/ftrace.c | 12
arch/powerpc/kernel/vmlinux.lds.S | 2 ++
4 files c
On Mon, Feb 05, 2024 at 01:30:46PM +1100, Benjamin Gray wrote:
> On Thu, 2023-11-30 at 15:55 +0530, Naveen N Rao wrote:
> > On Mon, Oct 16, 2023 at 04:01:45PM +1100, Benjamin Gray wrote:
> > >
> > > diff --git a/arch/powerpc/include/asm/code-patching.h
> >
[unknown]
[unknown]
__clone
-multipathd (698)
3001661
Fixes: 7fa95f9adaee ("powerpc/64s: system call support for scv/rfscv
instructions")
Cc: sta...@vger.kernel.org
Reported-by: Nysal Jan K.A
Signed-off-by: Naveen N Rao
---
v2: Update change log,
On Fri, Feb 02, 2024 at 01:02:39PM +1100, Michael Ellerman wrote:
> Segher Boessenkool writes:
> > Hi!
> >
> > On Thu, Jan 25, 2024 at 05:12:28PM +0530, Naveen N Rao wrote:
> >> diff --git a/arch/powerpc/kernel/interrupt_64.S
> >> b/arch/powerpc/kernel
-python (1293)
11
clock_nanosleep
clock_nanosleep
nanosleep
sleep
[unknown]
[unknown]
__clone
-multipathd (698)
3001661
Reported-by: Nysal Jan K.A
Signed-off-by: Naveen N Rao
All supported compilers today (gcc v5.1+ and clang v11+) have support for
-mcmodel=medium. As such, NO_MINIMAL_TOC is no longer being set. Remove
NO_MINIMAL_TOC as well as the fallback to -mminimal-toc.
Reviewed-by: Christophe Leroy
Signed-off-by: Naveen N Rao
---
v2: Drop the call to cc-option
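A rough sketch of the simplified Makefile logic, assuming the variable naming conventions of arch/powerpc/Makefile; with gcc v5.1+ / clang v11+ guaranteed, -mcmodel=medium can be passed unconditionally instead of going through cc-option with a -mminimal-toc fallback:

```make
# With all supported compilers handling -mcmodel=medium, no cc-option
# probe or -mminimal-toc fallback is needed (illustrative fragment).
CFLAGS-$(CONFIG_PPC64) += -mcmodel=medium
```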
On Tue, Jan 09, 2024 at 12:39:36PM -0600, Segher Boessenkool wrote:
> On Tue, Jan 09, 2024 at 03:15:35PM +, Christophe Leroy wrote:
> > > CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcall-aixdesc)
> > > endif
> > > endif
> > > -CFLAGS-$(CONFIG_PPC64) += $(call
All supported compilers today (gcc v5.1+ and clang v11+) have support for
-mcmodel=medium. As such, NO_MINIMAL_TOC is no longer being set. Remove
NO_MINIMAL_TOC as well as the fallback to -mminimal-toc.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Makefile | 6 +-
arch
On Thu, Dec 21, 2023 at 10:46:08AM +, Christophe Leroy wrote:
>
>
> Le 08/12/2023 à 17:30, Naveen N Rao a écrit :
> > Function profile sequence on powerpc includes two instructions at the
> > beginning of each function:
> >
> > mflrr0
> >
On Wed, Dec 20, 2023 at 10:26:21PM +0530, Hari Bathini wrote:
> Currently, bpf jit code on powerpc assumes all the bpf functions and
> helpers to be kernel text. This is false for kfunc case, as function
> addresses are mostly module addresses in that case. Ensure module
> addresses are supported
On Thu, Dec 14, 2023 at 05:55:33AM +, Nicholas Miehlbradt wrote:
> KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
> Manually unpoison it to prevent false positives.
>
> Signed-off-by: Nicholas Miehlbradt
> ---
> arch/powerpc/kernel/kprobes.c | 2 ++
> 1 file changed,
Michael Ellerman wrote:
Aneesh and Naveen are helping out with some aspects of upstream
maintenance; add them as reviewers.
Signed-off-by: Michael Ellerman
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
Acked-by: Naveen N. Rao
Thanks,
Naveen
diff --git a/MAINTAINERS b
Replace seven spaces with a tab character to fix an indentation issue
reported by the kernel test robot.
Reported-by: kernel test robot
Closes:
https://lore.kernel.org/oe-kbuild-all/202311221731.aluwtdim-...@intel.com/
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 2
Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to
show the sample instruction sequence to be used by ftrace direct calls
to adhere to the ftrace ABI.
On 64-bit powerpc, TOC setup requires some additional work.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig
pr3 that can then be tested on the
return path from the ftrace trampoline to branch into the direct caller.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ftrace.h| 15
arch/powerpc/kernel/asm-offsets.c| 3 +
arch/powe
into
ftrace_ops->func().
For 64-bit powerpc, we also select FUNCTION_ALIGNMENT_8B so that the
ftrace_ops pointer is double word aligned and can be updated atomically.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 2 +
arch/powerpc/kernel/asm-offsets.c|
). On 64-bit powerpc with the current
implementation of -fpatchable-function-entry though, this is not
avoidable since we are forced to emit 6 instructions between the GEP and
the LEP even if we are to only support DYNAMIC_FTRACE_WITH_CALL_OPS.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Makefile
fall back to using a fixed
offset of 8 (two instructions) to categorize a probe as being at
function entry for 64-bit elfv2.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/kprobes.c | 18 --
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel
From: Sathvika Vasireddy
Commit d49a0626216b95 ("arch: Introduce CONFIG_FUNCTION_ALIGNMENT")
introduced a generic function-alignment infrastructure. Move to using
FUNCTION_ALIGNMENT_4B on powerpc, to use the same alignment as that of
the existing _GLOBAL macro.
Signed-off-by: Sathvika Vasireddy
ftrace_stub is within the same CU, so there is no need for a subsequent
nop instruction.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace_entry.S | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_entry.S
b/arch/powerpc/kernel/trace
instruction sequence for function profiling (with -mprofile-kernel) with
a 'std' instruction to mimic the 'stw' above. Address that scenario also
by nop-ing out the 'std' instruction during ftrace init.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 6 --
arch/powerpc/kernel
.
- Naveen
Naveen N Rao (8):
powerpc/ftrace: Fix indentation in ftrace.h
powerpc/ftrace: Unify 32-bit and 64-bit ftrace entry code
powerpc/ftrace: Remove nops after the call to ftrace_stub
powerpc/kprobes: Use ftrace to determine if a probe is at function
entry
powerpc/ftrace: Update
On Mon, Oct 16, 2023 at 04:01:46PM +1100, Benjamin Gray wrote:
> This use of patch_instruction() is working on 32 bit data, and can fail
> if the data looks like a prefixed instruction and the extra write
> crosses a page boundary. Use patch_u32() to fix the write size.
>
> Fixes: 8734b41b3efe
On Mon, Oct 16, 2023 at 04:01:45PM +1100, Benjamin Gray wrote:
> patch_instruction() is designed for patching instructions in otherwise
> readonly memory. Other consumers also sometimes need to patch readonly
> memory, so have abused patch_instruction() for arbitrary data patches.
>
> This is a
In addition, the commit missed saving the correct stack pointer in
pt_regs. Update the same.
Fixes: 41a506ef71eb ("powerpc/ftrace: Create a dummy stackframe to fix stack
unwind")
Cc: sta...@vger.kernel.org
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace_entry.S | 4 ++-
On Thu, Nov 23, 2023 at 09:17:54AM -0600, Gustavo A. R. Silva wrote:
>
> > > To be honest I don't know how paranoid we want to get, we could end up
> > > putting WARN's all over the kernel :)
> > >
> > > In this case I guess if the size is too large we overflow the buffer on
> > > the kernel
kernel stack corruption.
Signed-off-by: Naveen N Rao
---
arch/powerpc/lib/sstep.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index a13f05cfc7db..5766180f5380 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
On Wed, Nov 22, 2023 at 03:44:07PM +1100, Michael Ellerman wrote:
> Naveen N Rao writes:
> > On Tue, Nov 21, 2023 at 10:54:36AM +1100, Michael Ellerman wrote:
> >> Building with GCC 13 (which has -Warray-bounds enabled) there are several
> >
> > Thanks, gcc13 indee
On Tue, Nov 21, 2023 at 10:54:36AM +1100, Michael Ellerman wrote:
> Building with GCC 13 (which has -Warray-bounds enabled) there are several
Thanks, gcc13 indeed helps reproduce the warnings.
> warnings in sstep.c along the lines of:
>
> In function ‘do_byte_reverse’,
> inlined from
On Mon, Nov 20, 2023 at 08:33:45AM -0600, Gustavo A. R. Silva wrote:
>
>
> On 11/20/23 08:25, Naveen N Rao wrote:
> > On Fri, Nov 17, 2023 at 12:36:01PM -0600, Gustavo A. R. Silva wrote:
> > > Hi all,
> > >
> > > I'm trying to fix the following
On Fri, Nov 17, 2023 at 12:36:01PM -0600, Gustavo A. R. Silva wrote:
> Hi all,
>
> I'm trying to fix the following -Wstringop-overflow warnings, and I'd like
> to get your feedback on this, please:
>
> In function 'do_byte_reverse',
> inlined from 'do_vec_store' at
>
ction-entry")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 54b9387c3691..3aaadfd2c8eb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/power
bench/breakpoint.c | 24 +---
> 1 file changed, 21 insertions(+), 3 deletions(-)
Thanks for fixing this to not report an error. A minor nit below, but
otherwise:
Acked-by: Naveen N Rao
>
> diff --git a/tools/perf/bench/breakpoint.c b/tools/perf/bench/breakpoint.c
Christophe Leroy wrote:
Le 19/06/2023 à 11:47, Naveen N Rao a écrit :
With ppc64 -mprofile-kernel and ppc32 -pg, profiling instructions to
call into ftrace are emitted right at function entry. The instruction
sequence used is minimal to reduce overhead. Crucially, a stackframe is
not created
Christophe Leroy wrote:
Le 19/06/2023 à 11:47, Naveen N Rao a écrit :
GCC v13.1 updated support for -fpatchable-function-entry on ppc64le to
emit nops after the local entry point, rather than before it. This
allows us to use this in the kernel for ftrace purposes. A new script is
added under
Hi Christophe,
Christophe Leroy wrote:
Le 19/06/2023 à 11:47, Naveen N Rao a écrit :
ftrace_low.S has just the _mcount stub and return_to_handler(). Merge
this back into ftrace_mprofile.S and ftrace_64_pg.S to keep all ftrace
code together, and to allow those to evolve independently
try code, but
produces reliable backtraces.
Fixes: 153086644fd1 ("powerpc/ftrace: Add support for -mprofile-kernel ftrace
ABI")
Cc: sta...@vger.kernel.org
Signed-off-by: Naveen N Rao
---
Per Nick's suggestion, I'm posting a minimal fix separately to make this
easier to backport.
- N
Christophe Leroy wrote:
Le 20/06/2023 à 08:04, Naveen N Rao a écrit :
Christophe Leroy wrote:
A lot of work is required in .S files in order to get them ready
for objtool checks.
For the time being, exclude them from the checks.
This is done with the script below:
#!/bin/sh
DIRS
Christophe Leroy wrote:
A lot of work is required in .S files in order to get them ready
for objtool checks.
For the time being, exclude them from the checks.
This is done with the script below:
#!/bin/sh
DIRS=`find arch/powerpc -name "*.S" -exec dirname {} \; | sort | uniq`
be better to disable it for now.
Acked-by: Naveen N Rao
- Naveen
Since we now support DYNAMIC_FTRACE_WITH_ARGS across ppc32 and ppc64
ELFv2, we can simplify function_graph tracer support code in ftrace.c
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 64 --
1 file changed, 7 insertions(+), 57 deletions
instruction at the ftrace location before
patching it with the updated branch destination.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 161 -
1 file changed, 21 insertions(+), 140 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b
patching it.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 187 +
1 file changed, 31 insertions(+), 156 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 05153a1038fdff..6ea8b90246a540 100644
at the ftrace location before nop-ing it out.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 220 +
1 file changed, 32 insertions(+), 188 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 98bd099c428ee0
.
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 6 +++
arch/powerpc/kernel/trace/ftrace.c | 71 ++
2 files changed, 77 insertions(+)
diff --git a/arch/powerpc/include/asm/ftrace.h
b/arch/powerpc/include/asm/ftrace.h
index 702aaf2efa966c
.
Stop re-purposing the linker-generated long branches for ftrace to
simplify the code. If there are good reasons to support ftrace on
kernels beyond 64MB, we can consider adding support by using
-fpatchable-function-entry.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftra
Split up ftrace_modify_code() into a few helpers for future use. Also
update error messages accordingly.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 51 +-
1 file changed, 29 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/kernel
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/Makefile| 17 +++--
arch/powerpc/kernel/trace/ftrace_64_pg.S | 67 ---
.../trace/{ftrace_pg.c => ftrace_64_pg.c} | 0
.../{ftrace_low.S => ftrace_64_pg_entry.S}| 58 +++-
.../{ftrace_mpro
stub from 64 bytes to 32
bytes since the different stub variants are all less than 8
instructions.
To reduce use of #ifdef, a stub implementation is provided for
kernel_toc_address() and -SZ_2G is cast to 'long long' to prevent
errors on ppc32.
Signed-off-by: Naveen N Rao
---
arch/powerpc/i
Instead of keying off DYNAMIC_FTRACE_WITH_REGS, use FTRACE_REGS_ADDR to
identify the proper ftrace trampoline address to use.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/trace
lts in two additional stores in the ftrace entry code, but
produces reliable backtraces. Note that this change now aligns with
other architectures (arm64, s390, x86).
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 6 --
arch/powerpc/kernel/trace/ftrace_entry.S |
'.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 14 +++---
arch/powerpc/Makefile | 5
arch/powerpc/include/asm/ftrace.h | 6 +++--
arch/powerpc/include/asm/vermagic.h | 4 ++-
arch/powerpc/kernel
should no longer be called.
This lays the groundwork to enable better control in patching ftrace
locations, including the ability to nop-out preceding profiling
instructions when ftrace is disabled.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 173
ftrace_create_branch_inst() is clearer about its intent than
ftrace_call_replace().
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 17 ++---
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel
Fixes: 7af82ff90a2b06 ("powerpc/ftrace: Ignore weak functions")
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/ftrace.h
b/arch/powerpc/include/asm/ftrace.h
index 91c0
.
ftrace.c can then be refactored and enhanced with a focus on ppc32 and
ppc64 ELFv2.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/Makefile| 13 +-
arch/powerpc/kernel/trace/ftrace.c| 10 -
arch/powerpc/kernel/trace/ftrace_pg.c | 846 ++
3 files
.ftrace.tramp section is not used for any purpose. This code was added
all the way back in the original commit introducing support for dynamic
ftrace on ppc64 modules. Remove it.
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/module.h | 4
1 file changed, 4 deletions(-)
diff
good to me. Christophe
mentioned that this results in a slowdown with ftrace [de-]activation on
ppc32, but that isn't performance critical and we can address that
separately.
(*) http://lore.kernel.org/cover.1686151854.git.nav...@kernel.org
- Naveen
Naveen N Rao (17):
powerpc/ftrace: Fix
Dominique Martinet wrote:
Naveen N Rao wrote on Fri, Jun 16, 2023 at 04:28:53PM +0530:
> We're not stripping anything in vmlinuz for other archs -- the linker
> script already should be including only the bare minimum to decompress
> itself (+compressed useful bits), so I guess it's
[Cc linuxppc-dev]
Dominique Martinet wrote:
Alan Maguire wrote on Thu, Jun 15, 2023 at 03:31:49PM +0100:
However the problem I suspect is this:
51 .debug_info 0a488b55 026f8d20
2**0
CONTENTS, READONLY, DEBUGGING
[...]
The debug info
lts in two additional stores in the ftrace entry code, but
produces reliable backtraces. Note that this change now aligns with
other architectures (arm64, s390, x86).
Signed-off-by: Naveen N Rao
---
This applies atop the below RFC patchset:
http://lore.kernel.org/cover.1686151854.git.
Fixes: 7af82ff90a2b06 ("powerpc/ftrace: Ignore weak functions")
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/ftrace.h
b/arch/powerpc/include/asm/ftrace.h
index 91c0
When creating a kprobe on function entry through tracefs, enable
arguments to be recorded to be specified using $argN syntax.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ptrace.h | 17 +
2 files changed, 18 insertions
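The $argN syntax being enabled is documented in Documentation/trace/kprobetrace.rst. A hedged usage example through tracefs (the probed function and probe name are illustrative; needs root and a kernel with kprobe events):

```sh
# Record the first two arguments of do_sys_openat2 at function entry.
cd /sys/kernel/tracing
echo 'p:myprobe do_sys_openat2 dfd=$arg1 name=$arg2' >> kprobe_events
echo 1 > events/kprobes/myprobe/enable
cat trace_pipe
```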
, similar to the pre
-mprofile-kernel ABI on ppc64. This is not supported.
Disable ftrace on ppc32 if using clang for now. This can be re-enabled
later if clang picks up support for -fpatchable-function-entry on ppc32.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 2 +-
1 file changed, 1
Christophe Leroy wrote:
Le 23/05/2023 à 11:31, Naveen N Rao a écrit :
Christophe Leroy wrote:
Ok, I simplified this further, and this is as close to the previous
fast path as we can get (applies atop the original RFC). The only
difference left is the ftrace_rec iterator.
That's
.ftrace.tramp section is not used for any purpose. This code was added
all the way back in the original commit introducing support for dynamic
ftrace on ppc64 modules. Remove it.
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/module.h | 4
1 file changed, 4 deletions(-)
diff
at the ftrace location before nop-ing it out.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 220 +
1 file changed, 32 insertions(+), 188 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index c0d185742c23ca
.
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 6 +++
arch/powerpc/kernel/trace/ftrace.c | 71 ++
2 files changed, 77 insertions(+)
diff --git a/arch/powerpc/include/asm/ftrace.h
b/arch/powerpc/include/asm/ftrace.h
index 1a5d365523e160
.
Stop re-purposing the linker-generated long branches for ftrace to
simplify the code. If there are good reasons to support ftrace on
kernels beyond 64MB, we can consider adding support by using
-fpatchable-function-entry.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftra
Split up ftrace_modify_code() into a few helpers for future use. Also
update error messages accordingly.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 51 +-
1 file changed, 29 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/kernel
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/Makefile| 17 +++--
arch/powerpc/kernel/trace/ftrace_64_pg.S | 67 ---
.../trace/{ftrace_pg.c => ftrace_64_pg.c} | 0
.../{ftrace_low.S => ftrace_64_pg_entry.S}| 58 +++-
.../{ftrace_mpro
stub from 64 bytes to 32
bytes since the different stub variants are all less than 8
instructions.
To reduce use of #ifdef, a stub implementation is provided for
kernel_toc_address() and -SZ_2G is cast to 'long long' to prevent
errors on ppc32.
Signed-off-by: Naveen N Rao
---
arch/powerpc/i
Instead of keying off DYNAMIC_FTRACE_WITH_REGS, use FTRACE_REGS_ADDR to
identify the proper ftrace trampoline address to use.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/trace
Since we now support DYNAMIC_FTRACE_WITH_ARGS across ppc32 and ppc64
ELFv2, we can simplify function_graph tracer support code in ftrace.c
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 64 --
1 file changed, 7 insertions(+), 57 deletions
.
ftrace.c can then be refactored and enhanced with a focus on ppc32 and
ppc64 ELFv2.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/Makefile| 13 +-
arch/powerpc/kernel/trace/ftrace.c| 10 -
arch/powerpc/kernel/trace/ftrace_pg.c | 846 ++
3 files
'.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 14 +++---
arch/powerpc/Makefile | 5
arch/powerpc/include/asm/ftrace.h | 6 +++--
arch/powerpc/include/asm/vermagic.h | 4 ++-
arch/powerpc/kernel
should no longer be called.
This lays the groundwork to enable better control in patching ftrace
locations, including the ability to nop-out preceding profiling
instructions when ftrace is disabled.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 173
ftrace_create_branch_inst() is clearer about its intent than
ftrace_call_replace().
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 17 ++---
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel
instruction at the ftrace location before
patching it with the updated branch destination.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 161 -
1 file changed, 21 insertions(+), 140 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b
patching it.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 187 +
1 file changed, 31 insertions(+), 156 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 67773cd14da71a..8d5d91b8ae85a0 100644
This is a follow-on RFC to the patch I previously posted here:
http://lore.kernel.org/20230519192600.2593506-1-nav...@kernel.org
Since then, I have split up the patches, picked up a few more changes
and given this more testing. More details in the individual patches.
- Naveen
Naveen N Rao
Toolchains don't always default to the ELFv2 ABI. This is true with at
least the kernel.org toolchains. As such, pass -mabi=elfv2 explicitly to
ensure that we are testing against the correct compiler output.
Signed-off-by: Naveen N Rao
---
The script works fine without this change, so