> On 9 Mar 2026, at 9:59 PM, Saket Kumar Bhaskar <[email protected]> wrote:
> 
> On Mon, Mar 09, 2026 at 03:40:45PM +0100, Viktor Malik wrote:
>> It may happen that mm is already released, which leads to kernel panic.
>> This adds the NULL check for current->mm, similarly to 20afc60f892d
>> ("x86, perf: Check that current->mm is alive before getting user
>> callchain").
>> 
>> I was getting this panic when running a profiling BPF program
>> (profile.py from bcc-tools):
>> 
>>    [26215.051935] Kernel attempted to read user page (588) - exploit attempt? (uid: 0)
>>    [26215.051950] BUG: Kernel NULL pointer dereference on read at 0x00000588
>>    [26215.051952] Faulting instruction address: 0xc00000000020fac0
>>    [26215.051957] Oops: Kernel access of bad area, sig: 11 [#1]
>>    [...]
>>    [26215.052049] Call Trace:
>>    [26215.052050] [c000000061da6d30] [c00000000020fc10] perf_callchain_user_64+0x2d0/0x490 (unreliable)
>>    [26215.052054] [c000000061da6dc0] [c00000000020f92c] perf_callchain_user+0x1c/0x30
>>    [26215.052057] [c000000061da6de0] [c0000000005ab2a0] get_perf_callchain+0x100/0x360
>>    [26215.052063] [c000000061da6e70] [c000000000573bc8] bpf_get_stackid+0x88/0xf0
>>    [26215.052067] [c000000061da6ea0] [c008000000042258] bpf_prog_16d4ab9ab662f669_do_perf_event+0xf8/0x274
>>    [...]
>> 
>> In addition, move storing the top-level stack entry to the generic
>> perf_callchain_user to make sure the top-level entry is always captured,
>> even if current->mm is NULL.
>> 
>> Fixes: 20002ded4d93 ("perf_counter: powerpc: Add callchain support")
>> Signed-off-by: Viktor Malik <[email protected]>
>> ---

Tested-by: Venkat Rao Bagalkote <[email protected]>

This patch fixes the reported issue.

Regards,
Venkat.
>> Changes in v2:
>> - Move call to perf_callchain_store() for the top-level stack entry to
>>  common perf_callchain_user (Saket)
>> 
>> arch/powerpc/perf/callchain.c    | 5 +++++
>> arch/powerpc/perf/callchain_32.c | 1 -
>> arch/powerpc/perf/callchain_64.c | 1 -
>> 3 files changed, 5 insertions(+), 2 deletions(-)
>> 
>> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
>> index 26aa26482c9a..992cc5c98214 100644
>> --- a/arch/powerpc/perf/callchain.c
>> +++ b/arch/powerpc/perf/callchain.c
>> @@ -103,6 +103,11 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
>>  void
>>  perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
>>  {
>> +	perf_callchain_store(entry, perf_arch_instruction_pointer(regs));
>> +
>> +	if (!current->mm)
>> +		return;
>> +
>>  	if (!is_32bit_task())
>>  		perf_callchain_user_64(entry, regs);
>>  	else
>> diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c
>> index ddcc2d8aa64a..0de21c5d272c 100644
>> --- a/arch/powerpc/perf/callchain_32.c
>> +++ b/arch/powerpc/perf/callchain_32.c
>> @@ -142,7 +142,6 @@ void perf_callchain_user_32(struct 
>> perf_callchain_entry_ctx *entry,
>> next_ip = perf_arch_instruction_pointer(regs);
>> lr = regs->link;
>> sp = regs->gpr[1];
>> - perf_callchain_store(entry, next_ip);
>> 
>> while (entry->nr < entry->max_stack) {
>> fp = (unsigned int __user *) (unsigned long) sp;
>> diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
>> index 115d1c105e8a..30fb61c5f0cb 100644
>> --- a/arch/powerpc/perf/callchain_64.c
>> +++ b/arch/powerpc/perf/callchain_64.c
>> @@ -77,7 +77,6 @@ void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>>  	next_ip = perf_arch_instruction_pointer(regs);
>>  	lr = regs->link;
>>  	sp = regs->gpr[1];
>> -	perf_callchain_store(entry, next_ip);
>>  
>>  	while (entry->nr < entry->max_stack) {
>>  		fp = (unsigned long __user *) sp;
>> -- 
>> 2.53.0
> LGTM, feel free to add below tag:
> Reviewed-by: Saket Kumar Bhaskar <[email protected]>