Hi Nick,
Nicholas Piggin wrote:
This warns and prevents tracing when attempted in a real-mode context.
Is this something you're seeing often? Last time we looked at this, KVM
was the biggest offender and we introduced paca->ftrace_enabled as a way
to disable ftrace while in KVM code.
But since this is for selftests, we don't need to enforce that.
Long term, we should also consider generalizing the macros across this
and the eBPF codebase so that we can reuse these.
Reviewed-by: Naveen N. Rao
- Naveen
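For context, a minimal sketch (not from the patch under discussion) of how
ftrace is typically fenced off around an untraceable region on powerpc64
using the paca->ftrace_enabled flag mentioned above; the field matches the
upstream paca_struct, but the helper below is hypothetical:

#include <linux/types.h>
#include <asm/paca.h>

/* Hypothetical helper: fence off ftrace across a real-mode section. */
static void with_ftrace_disabled(void (*realmode_fn)(void))
{
	u8 saved = local_paca->ftrace_enabled;

	local_paca->ftrace_enabled = 0;	/* ftrace entry code checks this */
	realmode_fn();			/* runs without being traced */
	local_paca->ftrace_enabled = saved;
}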
Nathan Lynch wrote:
"Naveen N. Rao" writes:
Gautham R Shenoy wrote:
On Fri, Feb 21, 2020 at 10:50:12AM -0600, Nathan Lynch wrote:
It's regrettable that we have to wake up potentially idle CPUs in order
to derive correct idle statistics for them, but I suppose the main user
-fno-asynchronous-unwind-tables to KBUILD_CFLAGS to suppress
generation of the .eh_frame section. Note that our VDSOs need .eh_frame, but
they are not affected by this change since our VDSO code is all in assembly.
Reported-by: Rasmus Villemoes
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Makefile | 3
pport for R_PPC64_REL32 relocations").
So, drop this flag from our Makefile.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Makefile | 5 -
1 file changed, 5 deletions(-)
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index cbe5ca4f0ee5..89956c4f1ce3 100644
--- a/arch/powerp
Naveen N. Rao wrote:
Naveen N. Rao wrote:
Rasmus Villemoes wrote:
Can you check if the below patch works? I have yet to test this in more
detail, but it would be good to know the implications for ppc32.
- Naveen
---
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index f35730548e42
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Rasmus Villemoes wrote:
I'm building a ppc32 kernel, and noticed that after upgrading from gcc-7
to gcc-8 all object files now end up having .eh_frame section. For
vmlinux, that's not a problem, because they all get discarded in
ar
Naveen N. Rao wrote:
Rasmus Villemoes wrote:
I'm building a ppc32 kernel, and noticed that after upgrading from gcc-7
to gcc-8 all object files now end up having .eh_frame section. For
vmlinux, that's not a problem, because they all get discarded in
arch/powerpc/kernel/vmlinux.lds.S. However
Segher Boessenkool wrote:
On Mon, Mar 02, 2020 at 11:56:05AM +0100, Rasmus Villemoes wrote:
I'm building a ppc32 kernel, and noticed that after upgrading from gcc-7
to gcc-8 all object files now end up having .eh_frame section.
Since GCC 8, we enable -fasynchronous-unwind-tables by default
Rasmus Villemoes wrote:
I'm building a ppc32 kernel, and noticed that after upgrading from gcc-7
to gcc-8 all object files now end up having .eh_frame section. For
vmlinux, that's not a problem, because they all get discarded in
arch/powerpc/kernel/vmlinux.lds.S. However, they stick around in
Gautham R Shenoy wrote:
On Fri, Feb 21, 2020 at 10:50:12AM -0600, Nathan Lynch wrote:
"Gautham R. Shenoy" writes:
> diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
> index 80a676d..5b4b450 100644
> --- a/arch/powerpc/kernel/sysfs.c
> +++ b/arch/powerpc/kernel/sysfs.c
>
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Selecting CONFIG_DEBUG_INFO_BTF results in the below warning from ld:
ld: warning: orphan section `.BTF' from `.btf.vmlinux.bin.o' being placed in
section `.BTF'
Include .BTF section in vmlinux explicitly to fix the same.
I don
Selecting CONFIG_DEBUG_INFO_BTF results in the below warning from ld:
ld: warning: orphan section `.BTF' from `.btf.vmlinux.bin.o' being placed in
section `.BTF'
Include .BTF section in vmlinux explicitly to fix the same.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/vmlinux.lds.S
ppened with MSR_IR cleared, return 0 immediately.
Reported-by: Larry Finger
Fixes: 6cc89bad60a6 ("powerpc/kprobes: Invoke handlers directly")
Cc: sta...@vger.kernel.org
Cc: Naveen N. Rao
Cc: Masami Hiramatsu
Signed-off-by: Christophe Leroy
---
v2: bailing out instead of converting real-time ad
Christophe Leroy wrote:
if (a) {
	if (b)
		do_something();
}
is equivalent to
if (a && b)
	do_something();
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/kprobes.c | 58 +--
1
Christophe Leroy wrote:
At the time being we have something like
if (something) {
	p = get();
	if (p) {
		if (something_wrong)
			goto out;
		...
		return;
Masami, Christophe,
Apologies for pitching in late here...
Masami Hiramatsu wrote:
On Tue, 18 Feb 2020 12:04:41 +0100
Christophe Leroy wrote:
>> Nevertheless, if one symbol has been forgotten in the blacklist, I think
>> it is a problem if it generates Oopses.
>
> There is a long history
Christophe Leroy wrote:
On 27/11/2019 at 13:01, Gautham R. Shenoy wrote:
From: "Gautham R. Shenoy"
On Pseries LPARs, to calculate utilization, we need to know the
[S]PURR ticks when the CPUs were busy or idle.
The total PURR and SPURR ticks are already exposed via the per-cpu
sysfs files
Gautham R Shenoy wrote:
With respect to lparstat, the read interval is user-specified and just gets
passed on to sleep().
Ok. So I guess currently you will be sending an smp_call_function every
time you read the PURR and SPURR. That number will now double when we read
idle_purr and
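To make the IPI cost concrete, here is a hedged sketch of why each read of
a per-CPU SPR such as PURR triggers an smp_call_function: the register has
to be read on the target CPU. Helper names are hypothetical; upstream
arch/powerpc/kernel/sysfs.c uses its own SYSFS_SPRSETUP machinery for this.

#include <linux/smp.h>
#include <asm/reg.h>

static void __read_purr(void *val)
{
	*(u64 *)val = mfspr(SPRN_PURR);	/* must run on the target CPU */
}

static u64 purr_on_cpu(int cpu)
{
	u64 purr;

	/* One IPI per read; also reading idle_purr would double this. */
	smp_call_function_single(cpu, __read_purr, &purr, 1);
	return purr;
}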
Gautham R Shenoy wrote:
Hi Naveen,
On Thu, Dec 05, 2019 at 10:23:58PM +0530, Naveen N. Rao wrote:
>diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
>index 80a676d..42ade55 100644
>--- a/arch/powerpc/kernel/sysfs.c
>+++ b/arch/powerpc/kernel/sysfs.c
>@@ -
Naveen N. Rao wrote:
Hi Nathan,
Nathan Lynch wrote:
Hi Kamalesh,
Kamalesh Babulal writes:
On 12/5/19 3:54 AM, Nathan Lynch wrote:
"Gautham R. Shenoy" writes:
Tools such as lparstat which are used to compute the utilization need
to know [S]PURR ticks when the cpu was busy or id
Hi Nathan,
Nathan Lynch wrote:
Hi Kamalesh,
Kamalesh Babulal writes:
On 12/5/19 3:54 AM, Nathan Lynch wrote:
"Gautham R. Shenoy" writes:
Tools such as lparstat which are used to compute the utilization need
to know [S]PURR ticks when the cpu was busy or idle. The [S]PURR
counters are
Gautham R. Shenoy wrote:
From: "Gautham R. Shenoy"
On Pseries LPARs, to calculate utilization, we need to know the
[S]PURR ticks when the CPUs were busy or idle.
The total PURR and SPURR ticks are already exposed via the per-cpu
sysfs files /sys/devices/system/cpu/cpuX/purr and
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor c
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor cede (also called extended cede)
- Returned to the Hypervisor via RTAS "stop-self"
t on instruction that can't be emulated.
"
"Breakpoint at 0x%lx will be disabled.\n",
addr);
Otherwise:
Acked-by: Naveen N. Rao
- Naveen
+ goto disable;
+ }
/* Do not emulate user-space instructions, instead single-step them */
if (user_mode(regs)) {
@@
] return_to_handler+0x0/0x40
(vfs_read+0xb8/0x1b0)
[c000d1e33dd0] [c006ab58] return_to_handler+0x0/0x40
(ksys_read+0x7c/0x140)
[c000d1e33e20] [c006ab58] return_to_handler+0x0/0x40
(system_call+0x5c/0x68)
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/process.c
This associates entries in the ftrace_ret_stack with corresponding stack
frames, enabling more robust stack unwinding. Also update the only user
of ftrace_graph_ret_addr() to pass the stack pointer.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++-
arch
This ensures that we use the right address on architectures that use
function descriptors.
Signed-off-by: Naveen N. Rao
---
kernel/trace/fgraph.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8dfd5021b933
Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR for more robust stack unwinding
when function graph tracer is in use. Convert powerpc show_stack() to
use ftrace_graph_ret_addr() for better stack unwinding.
- Naveen
Naveen N. Rao (3):
ftrace: Look up the address of return_to_handler() using helpers
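For reference, a sketch of the show_stack() conversion the series
describes: ftrace_graph_ret_addr() maps a return_to_handler address back to
the real caller, and with HAVE_FUNCTION_GRAPH_RET_ADDR_PTR the stack slot
itself is passed in to disambiguate nested entries. The function signature
is the upstream one; the surrounding helper is illustrative.

#include <linux/ftrace.h>
#include <linux/sched.h>

static unsigned long real_return_address(struct task_struct *tsk,
					 unsigned long *stack_slot,
					 int *graph_idx)
{
	unsigned long ip = *stack_slot;

	/* Undo the return_to_handler substitution, if any. */
	return ftrace_graph_ret_addr(tsk, graph_idx, ip, stack_slot);
}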
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
Currently if we oops or warn while function_graph is active the stack
trace looks like:
.trace_graph_return+0xac/0x100
.ftrace_return_to_handler+0x98/0x140
.return_to_handler+0x20/0x40
.return_to_handle
Ravi Bangoria wrote:
On Powerpc64, watchpoint match range is double-word granular. On
a watchpoint hit, DAR is set to the first byte of overlap between
actual access and watched range. And thus it's quite possible that
DAR does not point inside the user-specified range. E.g., say the user
creates a
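A hedged sketch of the check this implies, loosely following
arch/powerpc/kernel/hw_breakpoint.c: before reporting a hit, compare the
DAR against the exact range the user asked for rather than the double-word
match window.

static bool dar_within_user_range(unsigned long dar, unsigned long addr,
				  unsigned long len)
{
	/* True only if DAR falls inside the user-specified range. */
	return addr <= dar && (dar - addr) < len;
}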
Steven Rostedt wrote:
On Thu, 4 Jul 2019 20:04:41 +0530
"Naveen N. Rao" wrote:
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7b037295a1f1..0791eafb693d 100644
--- a/kernel/trace/ftrace.c
+++ b/ke
The following commit has been merged into the perf/core branch of tip:
Commit-ID: 0a56e0603fa13af08816d673f6f71b68cda2fb2e
Gitweb:
https://git.kernel.org/tip/0a56e0603fa13af08816d673f6f71b68cda2fb2e
Author: Naveen N. Rao
AuthorDate: Tue, 27 Aug 2019 12:44:58 +05:30
cccd0 ("y2038: rename old time and utime syscalls")
commit 00bf25d693e7 ("y2038: use time32 syscall names on 32-bit")
commit 8dabe7245bbc ("y2038: syscalls: rename y2038 compat syscalls")
commit 0d6040d46817 ("arch: add split IPC system calls where needed"
Jiong Wang wrote:
Naveen N. Rao writes:
Since BPF constant blinding is performed after the verifier pass, the
ALU32 instructions inserted for doubleword immediate loads don't have a
corresponding zext instruction. This is causing a kernel oops on powerpc
and can be reproduced by running
ad? It will be a nop for
ABIv2, which would be nice, but not really a major deal.
In either case:
Reviewed-by: Naveen N. Rao
- Naveen
this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
Changes since RFC:
- Removed
Jiong Wang wrote:
Michael Ellerman writes:
"Naveen N. Rao" writes:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops
Naveen N. Rao wrote:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops on
powerpc and can be reproduced by running 'test_cgroup_storage
Fix this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
This approach (the location whe
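An illustrative fragment of the fix, modelled on bpf_jit_blind_insn() in
kernel/bpf/core.c: the ALU32 rewrites used for blinding are followed by an
explicit zero-extension when the verifier's zext analysis is in effect, so
JITs relying on that analysis stay correct. The helper below is a sketch,
not the literal patch.

#include <linux/filter.h>

static struct bpf_insn *blind_mov32(struct bpf_insn *to,
				    const struct bpf_insn *from,
				    u32 imm_rnd, bool emit_zext)
{
	*to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
	*to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
	if (emit_zext)				/* prog->aux->verifier_zext */
		*to++ = BPF_ZEXT_REG(BPF_REG_AX);
	*to++ = BPF_ALU64_REG(BPF_MOV, from->dst_reg, BPF_REG_AX);
	return to;
}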
Naveen N. Rao wrote:
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
Steven,
Can you
In register_ftrace_function_probe(), we are not checking the return
value of alloc_and_copy_ftrace_hash(). The subsequent call to
ftrace_match_records() may end up dereferencing the same. Add a check to
ensure this doesn't happen.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 5
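A sketch of the added check, following the register_ftrace_function_probe()
flow in kernel/trace/ftrace.c (the error path shown is illustrative):

	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, old_hash);
	if (!hash) {
		ret = -ENOMEM;	/* previously fell through to a NULL deref */
		goto out;
	}

	ret = ftrace_match_records(hash, glob, strlen(glob));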
a NULL
filter_hash.
Fix this by just checking for a NULL filter_hash in t_probe_next(). If
the filter_hash is NULL, then this probe is just being added and we can
simply return from here.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+)
diff --
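A sketch of the guard described above, at the top of t_probe_next()'s hash
walk in kernel/trace/ftrace.c (reconstructed from the description, so treat
the exact placement as approximate):

	hash = iter->probe->ops.func_hash->filter_hash;

	/* A probe being registered may temporarily have a NULL filter_hash
	 * while it is at the start of a new list; bail out in that case. */
	if (!hash)
		return NULL;

	size = 1 << hash->size_bits;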
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
- Naveen
Naveen N. Rao (2):
ftrace: Fix
Add a document describing the fields provided by
/proc/powerpc/vcpudispatch_stats.
Signed-off-by: Naveen N. Rao
---
Documentation/powerpc/vcpudispatch_stats.txt | 68
1 file changed, 68 insertions(+)
create mode 100644 Documentation/powerpc/vcpudispatch_stats.txt
diff
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/topology.h | 6 +
arch/powerpc/mm/numa.c| 16 +
arch/powerpc/platforms/pseries/lpar.c | 525 +-
3 files changed, 545 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/topology
hcall_vphn() is specific to pseries and will be used in a subsequent
patch. So, move it to a more appropriate place under
arch/powerpc/platforms/pseries. Also merge vphn.h into lppaca.h
and update vphn selftest to use the new files.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm
-by: Naveen N. Rao
---
arch/powerpc/mm/book3s64/vphn.h | 8
arch/powerpc/mm/numa.c | 27 +--
2 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/vphn.h b/arch/powerpc/mm/book3s64/vphn.h
index f0b93c2dd578..f7ff1e0c3801
-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 2 ++
arch/powerpc/platforms/pseries/dtl.c | 11 ++-
arch/powerpc/platforms/pseries/lpar.c | 4
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/lppaca.h
Introduce new helpers for DTL buffer allocation and registration and
have the existing code use those.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 3 ++
arch/powerpc/platforms/pseries/lpar.c | 66 +++---
arch/powerpc/platforms/pseries/setup.c
need to save and restore the earlier mask value if
CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not enabled. So, remove the field
from the structure as well.
Acked-by: Nathan Lynch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/platforms/pseries/dtl.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions
Introduce macros to encode the DTL enable mask fields and use those
instead of hardcoding numbers.
Acked-by: Nathan Lynch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 11 +++
arch/powerpc/platforms/pseries/dtl.c | 8 +---
arch/powerpc/platforms
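The macros in question encode the DTL enable mask bits in the lppaca, as
described above (values reproduced from the patch, so treat as a sketch):

#define DTL_LOG_CEDE		0x1
#define DTL_LOG_PREEMPT		0x2
#define DTL_LOG_FAULT		0x4
#define DTL_LOG_ALL		(DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)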
were on a different chip compared to
its last dispatch.
Also, out of the total of 6839 dispatches, we see that there have been
6821 dispatches on the vcpu's home node, while 18 dispatches were
outside its home node, on a neighbouring chip.
- Naveen
Naveen N. Rao (9):
powerpc/pseries: Use
Steven Rostedt wrote:
On Thu, 27 Jun 2019 20:58:20 +0530
"Naveen N. Rao" wrote:
> But interesting, I don't see a synchronize_rcu_tasks() call
> there.
We felt we don't need it in this case. We patch the branch to ftrace
with a nop first. Other cpus should see that first.
Nathan Lynch wrote:
Aravinda Prasad writes:
Calculating the maximum memory based on the number of lmbs
and lmb size does not account for the RMA region. Hence
use memory_hotplug_max(), which already accounts for the
RMA region, to fetch the maximum memory value. Thanks to
Nathan Lynch for
Hi Steven,
Thanks for the review!
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:52 +0530
"Naveen N. Rao" wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
used to only patch out
Naveen N. Rao wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
used to only patch out the branch to _mcount(). However, mflr is
executed by the branch unit, which can only execute one per cycle
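A sketch of the patching step this implies, using the upstream powerpc
primitives patch_instruction() and PPC_INST_NOP; the helper itself is
illustrative, and (as noted elsewhere in the thread) the branch is patched
to a nop before the mflr:

#include <asm/code-patching.h>
#include <asm/ppc-opcode.h>

static int nop_out_profile_sequence(unsigned int *bl_mcount)
{
	int ret;

	/* Sequence at function entry:  mflr r0 ; bl _mcount */
	ret = patch_instruction(bl_mcount, PPC_INST_NOP);
	if (!ret)
		ret = patch_instruction(bl_mcount - 1, PPC_INST_NOP);
	return ret;
}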
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:50 +0530
"Naveen N. Rao" wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable"), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable
Naveen N. Rao wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable"), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable' flag. However, the
x86 version of this function was not updated. Fix the
up ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To
resolve this, have [dis]arm_kprobe_ftrace() use ftrace_function() to
identify the filter IP.
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
kernel
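A sketch of the lookup, assuming the helper intended above is
ftrace_location(), the upstream API that maps an address to its ftrace call
site; kprobe_ftrace_ops follows the name used in kernel/kprobes.c:

#include <linux/ftrace.h>
#include <linux/kprobes.h>

static int arm_kprobe_ftrace_sketch(struct kprobe *p)
{
	unsigned long ftrace_ip = ftrace_location((unsigned long)p->addr);

	if (!ftrace_ip)
		return -EINVAL;		/* not an ftrace location */
	return ftrace_set_filter_ip(&kprobe_ftrace_ops, ftrace_ip, 0, 0);
}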
to the pre and post probe handlers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes-ftrace.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerpc/kernel/kprobes-ftrace.c
index 972cb28174b2
the
'mflr r0'. Earlier -mprofile-kernel ABI included a 'std r0,stack'
instruction between the 'mflr r0' and the 'bl _mcount'. This is harmless
as the 'std r0,stack' instruction is inconsequential and is not relied
upon.
Suggested-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
arch
(). We override
ftrace_replace_code() with a powerpc64 variant for this purpose.
Suggested-by: Nicholas Piggin
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 258 ++---
1 file changed, 236 insertions(+), 22 deletions
While overriding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
7fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..f34005a17051 100644
--- a/
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5
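For reference, the flags being moved look like this (names per commit
a0572f687fb3; placement in include/linux/ftrace.h is what this patch
proposes):

enum {
	FTRACE_MODIFY_ENABLE_FL		= (1 << 0),
	FTRACE_MODIFY_MAY_SLEEP_FL	= (1 << 1),
};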
Naveen N. Rao (7):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix use of flags in ftrace_replace_code()
ftrace: Expose __ftrace_replace_code()
powerpc/ftrace: Additionally nop out the preceding mflr with
-mprofile-kernel
ftrace: Update ftrace_location() for powerpc
"powerpc/xmon: Disable tracing when entering xmon")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/xmon/xmon.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index d0620d762a5a..4a721fd62406 100644
--- a/arch/power
Fixes: c7d64b560ce80 ("powerpc/ftrace: Enable C Version of recordmcount")
Signed-off-by: Naveen N. Rao
---
scripts/recordmcount.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
index 13c5e6c8829c..47fca2c69a73 100644
--- a/script
Masami Hiramatsu wrote:
On Tue, 18 Jun 2019 20:17:06 +0530
"Naveen N. Rao" wrote:
With KPROBES_ON_FTRACE, kprobe is allowed to be inserted on instructions
that branch to _mcount (referred to as ftrace location). With
-mprofile-kernel, we now include the preceding 'mflr r0' as
Nicholas Piggin wrote:
Naveen N. Rao's on June 19, 2019 7:53 pm:
Nicholas Piggin wrote:
Michael Ellerman's on June 19, 2019 3:14 pm:
I'm also not convinced the ordering between the two patches is
guaranteed by the ISA, given that there's possibly no isync on the other
CPU.
Will they go
Nicholas Piggin wrote:
Michael Ellerman's on June 19, 2019 3:14 pm:
Hi Naveen,
Sorry I meant to reply to this earlier .. :/
No problem. Thanks for the questions.
"Naveen N. Rao" writes:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable functi
Steven Rostedt wrote:
On Tue, 18 Jun 2019 23:53:11 +0530
"Naveen N. Rao" wrote:
Naveen N. Rao wrote:
> Steven Rostedt wrote:
>> On Tue, 18 Jun 2019 20:17:04 +0530
>> "Naveen N. Rao" wrote:
>>
>>> @@ -1551,7 +1551,7 @@ unsigned long f
Naveen N. Rao wrote:
Steven Rostedt wrote:
On Tue, 18 Jun 2019 20:17:04 +0530
"Naveen N. Rao" wrote:
@@ -1551,7 +1551,7 @@ unsigned long ftrace_location_range(unsigned long start,
unsigned long end)
key.flags = end;	/* overload flags, as it is unsigned long */
Steven Rostedt wrote:
On Tue, 18 Jun 2019 20:17:04 +0530
"Naveen N. Rao" wrote:
@@ -1551,7 +1551,7 @@ unsigned long ftrace_location_range(unsigned long start,
unsigned long end)
key.flags = end;	/* overload flags, as it is unsigned long */
for (pg = ftrace_pages
While overriding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
a custom version of ftrace_cmp_recs() which
looks at the instruction preceding the branch to _mcount() and marks
that instruction as belonging to ftrace if it is a 'nop' or 'mflr r0'.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 31 ++
include
up ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To
resolve this, have [dis]arm_kprobe_ftrace() use ftrace_function() to
identify the filter IP.
Signed-off-by: Naveen N. Rao
---
kernel/kprobes.c | 10 +-
1 file
to the pre and post probe handlers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes-ftrace.c | 30
1 file changed, 30 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerpc/kernel/kprobes-ftrace.c
index 972cb28174b2..6a0bd3c16cb6
ftrace_replace_code() with a powerpc64 variant for this
purpose.
Suggested-by: Nicholas Piggin
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 241 ++---
1 file changed, 219 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc
7fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..f34005a17051 100644
--- a/
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5
in two instructions being
emitted: 'mflr r0' and 'bl _mcount'. So far, we were only nop'ing out
the branch to _mcount(). This series implements an approach to also nop
out the preceding mflr.
- Naveen
Naveen N. Rao (7):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix
-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +-
arch/powerpc/platforms/pseries/lpar.c | 29 ---
arch/powerpc/platforms/pseries/setup.c| 2 +-
3 files changed, 22 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/include/asm
Add a document describing the fields provided by
/proc/powerpc/vcpudispatch_stats.
Signed-off-by: Naveen N. Rao
---
Documentation/powerpc/vcpudispatch_stats.txt | 68
1 file changed, 68 insertions(+)
create mode 100644 Documentation/powerpc/vcpudispatch_stats.txt
diff
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/topology.h | 6 +
arch/powerpc/mm/numa.c| 16 +
arch/powerpc/platforms/pseries/lpar.c | 536 +-
3 files changed, 556 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/topology
hcall_vphn() is specific to pseries and will be used in a subsequent
patch. So, move it to a more appropriate place under
arch/powerpc/platforms/pseries. Also merge vphn.h into plpar_wrappers.h
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 19
-by: Naveen N. Rao
---
arch/powerpc/mm/book3s64/vphn.h | 8
arch/powerpc/mm/numa.c | 27 +--
2 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/vphn.h b/arch/powerpc/mm/book3s64/vphn.h
index f0b93c2dd578..f7ff1e0c3801
Introduce new helpers for DTL buffer allocation and registration and
have the existing code use those.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +
arch/powerpc/platforms/pseries/lpar.c | 66 ---
arch/powerpc/platforms/pseries
/accessing DTLB for all online cpus. These
helpers allow any number of per-cpu users, or a single global user
exclusively.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 ++
arch/powerpc/platforms/pseries/dtl.c | 10 +-
arch/powerpc/platforms/pseries
need to save and restore the earlier mask value if
CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not enabled. So, remove the field
from the structure as well.
Acked-by: Nathan Lynch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/platforms/pseries/dtl.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions
Introduce macros to encode the DTL enable mask fields and use those
instead of hardcoding numbers.
Acked-by: Nathan Lynch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 11 +++
arch/powerpc/platforms/pseries/dtl.c | 8 +---
arch/powerpc/platforms
home node, while 18 dispatches were
outside its home node, on a neighbouring chip.
- Naveen
Naveen N. Rao (9):
powerpc/pseries: Use macros for referring to the DTL enable mask
powerpc/pseries: Do not save the previous DTL mask value
powerpc/pseries: Factor out DTL buffer allocation
If the result of the division is LLONG_MIN, current tests do not detect
the error since the return value is truncated to a 32-bit value and ends
up being 0.
Signed-off-by: Naveen N. Rao
---
.../testing/selftests/bpf/verifier/div_overflow.c | 14 ++
1 file changed, 10 insertions
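A small standalone demonstration of the truncation: LLONG_MIN is
0x8000000000000000, so its low 32 bits are all zero, and a return value
truncated to 32 bits compares equal to 0, which the old tests took as
success.

#include <limits.h>
#include <stdio.h>

int main(void)
{
	long long div_result = LLONG_MIN; /* e.g. LLONG_MIN / -1 under BPF */
	int truncated = (int)div_result;  /* keeps only the low 32 bits */

	printf("%d\n", truncated);	  /* prints 0 */
	return 0;
}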
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/net/bpf_jit.h| 2 +-
arch/powerpc/net/bpf_jit_comp64.c | 8
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h
b/arch/powerpc/inclu
The first patch updates DIV64 overflow tests to properly detect error
conditions. The second patch fixes powerpc64 JIT to generate the proper
unsigned division instruction for BPF_ALU64.
- Naveen
Naveen N. Rao (2):
bpf: fix div64 overflow tests to properly detect errors
powerpc/bpf: use
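A fragment sketching the JIT change: BPF_ALU64 division is unsigned, so
the JIT must emit the unsigned divide (divdu) rather than the signed divd.
The PPC_DIVDU() emitter macro is assumed here from the description:

	case BPF_ALU64 | BPF_DIV | BPF_X:	/* dst /= src */
		PPC_DIVDU(dst_reg, dst_reg, src_reg);
		break;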
Paul Clarke wrote:
What are the circumstances in which raw_syscalls:sys_exit reports "-1" for the
syscall ID?
perf 5375 [007] 59632.478528: raw_syscalls:sys_enter: NR 1 (3, 9fb888,
8, 2d83740, 1, 7)
perf 5375 [007] 59632.478532: raw_syscalls:sys_exit: NR 1 = 8
perf
"powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit
server processors")
Reviewed-by: Naveen N. Rao
- Naveen
Hi Steven,
Steven Rostedt wrote:
On Mon, 20 May 2019 09:13:20 -0400
Steven Rostedt wrote:
> I haven't yet tested this patch on x86, but this looked wrong so sending
> this as a RFC.
This code has been through a number of updates, and I need to go through
and clean it up. I'll have to take