Hi, Mark
On 2024/9/6 17:39, Mark Rutland wrote:
> On Tue, Aug 27, 2024 at 07:33:55PM +0800, Liao, Chang wrote:
>> Hi, Mark
>>
>> Would you like to discuss this patch further, or do you still believe
>> emulating
>> STP to push FP/LR into the stack in kernel is not a good idea?
>
> I'm happy with the
On Mon, Sep 9, 2024 at 11:04 PM Steven Rostedt wrote:
>
> On Sun, 8 Sep 2024 07:25:44 -0700
> Donglin Peng wrote:
>
> Hi Donglin!
>
> > When using function_graph tracer to analyze the flow of kernel function
> > execution, it is often necessary to quickly locate the exact line of code
> > where
This patch is the second part of a series to improve the selftest bench
of uprobe/uretprobe [0]. The lack of simulating 'stp fp, lr, [sp, #imm]'
significantly impacts uprobe/uretprobe performance at function entry in
most use cases. Profiling results below reveal the STP that executes
in the xol s
On Mon, Sep 9, 2024 at 7:52 PM Alexei Starovoitov
wrote:
>
> On Mon, Sep 9, 2024 at 3:49 PM Andrii Nakryiko wrote:
> >
> > Currently put_uprobe() might trigger mutex_lock()/mutex_unlock(), which
> > makes it unsuitable to be called from more restricted context like softirq.
> >
> > Let's make put
On Mon, Sep 9, 2024 at 3:49 PM Andrii Nakryiko wrote:
>
> Currently put_uprobe() might trigger mutex_lock()/mutex_unlock(), which
> makes it unsuitable to be called from more restricted context like softirq.
>
> Let's make put_uprobe() agnostic to the context in which it is called,
> and use work
On Mon, Sep 9, 2024 at 5:35 AM Jann Horn wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko wrote:
> > +static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
> > +{
> > + /* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
> > + return seq == smp_load
On Mon, 9 Sep 2024 05:43:15 + Mina Almasry wrote:
> --- a/include/uapi/linux/uio.h
> +++ b/include/uapi/linux/uio.h
> @@ -33,6 +33,10 @@ struct dmabuf_cmsg {
>*/
> };
>
> +struct dmabuf_token {
> + __u32 token_start;
> + __u32 token_count;
> +};
> /*
On Mon, 9 Sep 2024 05:43:13 + Mina Almasry wrote:
> For device memory TCP, we expect the skb headers to be available in host
> memory for access, and we expect the skb frags to be in device memory
> and unaccessible to the host. We expect there to be no mixing and
> matching of device memory f
On Mon, 9 Sep 2024 05:43:11 + Mina Almasry wrote:
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index 5eccc40df92d..8a6e20be4b9d 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -8,6 +8,7 @@
> #ifndef _NET_NETMEM_H
> #define _NET_NETMEM_H
>
> +#include
On Sun, Sep 8, 2024 at 10:26 PM Donglin Peng wrote:
>
> When using function_graph tracer to analyze the flow of kernel function
> execution, it is often necessary to quickly locate the exact line of code
> where the call occurs. While this may be easy at times, it can be more
> time-consuming when
On 2024-09-09 19:53, Andrii Nakryiko wrote:
On Mon, Sep 9, 2024 at 1:17 PM Mathieu Desnoyers
wrote:
Wire up the system call tracepoints with Tasks Trace RCU to allow
the ftrace, perf, and eBPF tracers to handle page faults.
This series does the initial wire-up allowing tracers to handle page
On Tue, 10 Sep 2024 00:15:12 +0900
Masami Hiramatsu (Google) wrote:
> > <3> 31 Yes INFO: task hung in blk_trace_ioctl (4)
> >
> > https://syzkaller.appspot.com/bug?extid=ed812ed461471ab17a0c
>
> This is a bug in blk_trace.
[..]
>
> > <5> 11 Yes WARNING in ge
On Mon, Sep 9, 2024 at 1:17 PM Mathieu Desnoyers
wrote:
>
> In preparation for allowing system call enter/exit instrumentation to
> handle page faults, make sure that bpf can handle this change by
> explicitly disabling preemption within the bpf system call tracepoint
> probes to respect the curre
On Mon, Sep 9, 2024 at 1:17 PM Mathieu Desnoyers
wrote:
>
> Add a might_fault() check to validate that the bpf sys_enter/sys_exit
> probe callbacks are indeed called from a context where page faults can
> be handled.
>
> Signed-off-by: Mathieu Desnoyers
> Cc: Michael Jeanson
> Cc: Steven Rostedt
On Mon, Sep 9, 2024 at 1:17 PM Mathieu Desnoyers
wrote:
>
> Wire up the system call tracepoints with Tasks Trace RCU to allow
> the ftrace, perf, and eBPF tracers to handle page faults.
>
> This series does the initial wire-up allowing tracers to handle page
> faults, but leaves out the actual han
On Mon, Sep 9, 2024 at 12:47 AM Jiri Olsa wrote:
>
> Adding uprobe session test and testing that the entry program
> return value controls execution of the return probe program.
>
> Signed-off-by: Jiri Olsa
> ---
> .../bpf/prog_tests/uprobe_multi_test.c| 47
> .../bpf/progs/
On Mon, Sep 9, 2024 at 12:46 AM Jiri Olsa wrote:
>
> Adding support to attach a bpf program for entry and return probes
> of the same function. This is a common use case which at the moment
> requires creating two uprobe multi links.
>
> Adding new BPF_TRACE_UPROBE_SESSION attach type that instructs
>
On Mon, Sep 9, 2024 at 12:46 AM Jiri Olsa wrote:
>
> Adding support to attach a program in uprobe session mode
> with the bpf_program__attach_uprobe_multi function.
>
> Adding a session bool to the bpf_uprobe_multi_opts struct that allows
> loading and attaching the bpf program via uprobe session.
> the attachme
On Mon, Sep 9, 2024 at 12:46 AM Jiri Olsa wrote:
>
> Adding support for uprobe consumer to be defined as session and have
> new behaviour for consumer's 'handler' and 'ret_handler' callbacks.
>
> The session means that 'handler' and 'ret_handler' callbacks are
> connected in a way that allows to:
Similarly to how we SRCU-protect uprobe instance (and avoid refcounting
it unnecessarily) when waiting for return probe hit, use hprobe approach
to do the same with single-stepped uprobe. Same hprobe_* primitives are
used. We also reuse ri_timer() callback to expire both pending
single-step uprobe
Avoid taking refcount on uprobe in prepare_uretprobe(), instead take
uretprobe-specific SRCU lock and keep it active as kernel transfers
control back to user space.
Given we can't rely on user space returning from traced function within
reasonable time period, we need to make sure not to keep SRCU
Currently put_uprobe() might trigger mutex_lock()/mutex_unlock(), which
makes it unsuitable to be called from more restricted context like softirq.
Let's make put_uprobe() agnostic to the context in which it is called,
and use work queue to defer the mutex-protected clean up steps.
To avoid unnec
Recently landed changes made the uprobe entry hot code path use RCU Tasks
Trace to avoid touching the uprobe refcount, which at a high frequency of
uprobe triggering leads to excessive cache line bouncing and limited
scalability with an increased number of CPUs that simultaneously execute
uprobe handle
On Mon, Sep 9, 2024 at 6:13 AM Jann Horn wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko wrote:
> > Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> > access vma->vm_file->f_inode field locklessly under just rcu_read_lock()
>
> No, not every file is SLAB_TYPESAF
missed = true;
+ }
+ if (lh_xsk) {
+ __xsk_map_flush(lh_xsk);
+ missed = true;
+ }
+ } while (redirect > 0);
WARN_ONCE(missed, "Missing xdp_do_flush() invocation after NAPI by %ps\n",
	  napi->poll);
---
base-commit: 8e69c96df771ab469cec278edb47009351de4da6
change-id: 20240909-devel-koalo-fix-redirect-684639694951
prerequisite-patch-id: 6928ae7741727e3b2ab4a8c4256b06a861040a01
Best regards,
--
Florian Kauer
Add a might_fault() check to validate that the perf sys_enter/sys_exit
probe callbacks are indeed called from a context where page faults can
be handled.
Signed-off-by: Mathieu Desnoyers
Cc: Michael Jeanson
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
Cc:
Use Tasks Trace RCU to protect iteration of system call enter/exit
tracepoint probes to allow those probes to handle page faults.
In preparation for this change, all tracers registering to system call
enter/exit tracepoints should expect those to be called with preemption
enabled.
This allows tra
Add a might_fault() check to validate that the ftrace sys_enter/sys_exit
probe callbacks are indeed called from a context where page faults can
be handled.
Signed-off-by: Mathieu Desnoyers
Cc: Michael Jeanson
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
C
Add a might_fault() check to validate that the bpf sys_enter/sys_exit
probe callbacks are indeed called from a context where page faults can
be handled.
Signed-off-by: Mathieu Desnoyers
Cc: Michael Jeanson
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
Cc:
In preparation for allowing system call tracepoints to handle page
faults, introduce TRACE_EVENT_SYSCALL to declare the sys_enter/sys_exit
tracepoints.
Emit the static inlines register_trace_syscall_##name for events
declared with TRACE_EVENT_SYSCALL, allowing source-level validation
that only pro
In preparation for allowing system call enter/exit instrumentation to
handle page faults, make sure that bpf can handle this change by
explicitly disabling preemption within the bpf system call tracepoint
probes to respect the current expectations within bpf tracing code.
This change does not yet
In preparation for allowing system call enter/exit instrumentation to
handle page faults, make sure that perf can handle this change by
explicitly disabling preemption within the perf system call tracepoint
probes to respect the current expectations within perf ring buffer code.
This change does n
Wire up the system call tracepoints with Tasks Trace RCU to allow
the ftrace, perf, and eBPF tracers to handle page faults.
This series does the initial wire-up allowing tracers to handle page
faults, but leaves out the actual handling of said page faults as future
work.
This series was compile a
In preparation for allowing system call enter/exit instrumentation to
handle page faults, make sure that ftrace can handle this change by
explicitly disabling preemption within the ftrace system call tracepoint
probes to respect the current expectations within ftrace ring buffer
code.
This change
On Mon, Sep 9, 2024 at 10:22 AM Mathieu Desnoyers
wrote:
>
> On 2024-09-09 12:53, Andrii Nakryiko wrote:
> > On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
> [...]
> >>>
> >>> I wonder if it would be better to just do this, instead of that
> >>> preempt guard. I think we don't strictly need pree
On Sun, Sep 08, 2024 at 11:50:51AM +0900, Masahiro Yamada wrote:
> On Fri, Sep 6, 2024 at 11:45 PM Kris Van Hees
> wrote:
> >
> > Create file module.builtin.ranges that can be used to find where
> > built-in modules are located by their addresses. This will be useful for
> > tracing tools to fi
Hi Kris,
On Fri, Sep 06, 2024 at 10:45:01AM -0400, Kris Van Hees wrote:
> At build time, create the file modules.builtin.ranges that will hold
> address range data of the built-in modules that can be used by tracers.
>
> Especially for tracing applications, it is convenient to be able to
> refer
On 09/09, Jiri Olsa wrote:
>
> On Fri, Sep 06, 2024 at 09:18:15PM +0200, Oleg Nesterov wrote:
> >
> > And btw... Can bpftrace attach to the uprobe tp?
> >
> > # perf probe -x ./test -a func
> > Added new event:
> > probe_test:func (on func in /root/TTT/test)
> >
> > You can n
On 2024-09-09 12:53, Andrii Nakryiko wrote:
On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
[...]
I wonder if it would be better to just do this, instead of that
preempt guard. I think we don't strictly need preemption to be
disabled, we just need to stay on the same CPU, just like we do that
On Mon, Sep 9, 2024 at 3:18 AM Mark Rutland wrote:
>
> On Fri, Sep 06, 2024 at 10:46:00AM -0700, Andrii Nakryiko wrote:
> > On Fri, Sep 6, 2024 at 2:39 AM Mark Rutland wrote:
> > >
> > > On Tue, Aug 27, 2024 at 07:33:55PM +0800, Liao, Chang wrote:
> > > > Hi, Mark
> > > >
> > > > Would you like t
On Mon, Sep 9, 2024 at 4:21 AM Yunsheng Lin wrote:
>
> On 2024/9/9 13:43, Mina Almasry wrote:
>
> >
> > Perf - page-pool benchmark:
> > ---
> >
> > bench_page_pool_simple.ko tests with and without these changes:
> > https://pastebin.com/raw/ncHDwAbn
> >
> > AFAIK the number
On Mon, Sep 9, 2024 at 8:11 AM Mathieu Desnoyers
wrote:
>
> On 2024-09-04 21:21, Andrii Nakryiko wrote:
> > On Wed, Aug 28, 2024 at 7:42 AM Mathieu Desnoyers
> > wrote:
> >>
> >> In preparation for converting system call enter/exit instrumentation
> >> into faultable tracepoints, make sure that b
September 9, 2024 at 11:13 PM, "Masami Hiramatsu" wrote:
Hi Masami,
>
> On Thu, 22 Aug 2024 07:30:21 +0800
>
> Jeff Xie wrote:
>
> >
> > Currently, when using both function tracer and function graph
> > simultaneously,
> >
> > it is found that function tracer sometimes captures a fake p
On Thu, 05 Sep 2024 14:33:07 +0100
Jiaxun Yang wrote:
> Enable rust for linux by implementing generate_rust_target.rs
> and selecting relevant Kconfig options.
>
> We don't use a builtin target as there is no suitable baremetal
> target for us that can cover all ISA variants supported by the kernel.
>
> Li
On Thu, 05 Sep 2024 14:33:05 +0100
Jiaxun Yang wrote:
> scripts/generate_rust_target.rs is used by several architectures
> to generate target.json target spec file.
>
> However the enablement of this feature was controlled by target
> specific Makefile pieces spreading everywhere.
>
> Introduce
On Mon, 9 Sep 2024 13:53:20 +
Arnd Bergmann wrote:
> From: Arnd Bergmann
>
> The definition was previously moved into an #ifdef block by
> accident and now causes a build failure when CONFIG_TIMERLAT_TRACER
> is disabled:
>
> In file included from include/linux/seqlock.h:19,
>
On Thu, 22 Aug 2024 07:30:21 +0800
Jeff Xie wrote:
> Currently, when using both function tracer and function graph simultaneously,
> it is found that function tracer sometimes captures a fake parent
> ip(return_to_handler)
> instead of the true parent ip.
>
> This issue is easy to reproduce. Be
On Mon, 09 Sep 2024 01:12:20 -0700
syzbot wrote:
> Hello trace maintainers/developers,
>
> This is a 31-day syzbot report for the trace subsystem.
> All related reports/information can be found at:
> https://syzkaller.appspot.com/upstream/s/trace
>
> During the period, 1 new issue was detecte
On 2024-09-04 21:21, Andrii Nakryiko wrote:
On Wed, Aug 28, 2024 at 7:42 AM Mathieu Desnoyers
wrote:
In preparation for converting system call enter/exit instrumentation
into faultable tracepoints, make sure that bpf can handle registering to
such tracepoints by explicitly disabling preemption
On Sun, 8 Sep 2024 07:25:44 -0700
Donglin Peng wrote:
Hi Donglin!
> When using function_graph tracer to analyze the flow of kernel function
> execution, it is often necessary to quickly locate the exact line of code
> where the call occurs. While this may be easy at times, it can be more
> time
On Mon, 9 Sep 2024 17:34:48 +0300
Mike Rapoport wrote:
> > This is insane, just force BUILDTIME_MCOUNT_SORT
>
> The comment in ftrace.c says "... while mcount loc in modules can not be
> sorted at build time"
>
> I don't know enough about objtool, but I'd presume it's because the sorting
> s
On Mon, Sep 09, 2024 at 11:29:23AM +0200, Peter Zijlstra wrote:
> On Mon, Sep 09, 2024 at 09:47:28AM +0300, Mike Rapoport wrote:
> > diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> > index 8da0e66ca22d..563d9a890ce2 100644
> > --- a/arch/x86/kernel/ftrace.c
> > +++ b/arch/x86/ker
From: Steven Rostedt
To fix some critical section races, the interface_lock was added to a few
locations. One of those locations was above where the interface_lock was
declared, so the declaration was moved up before that usage.
Unfortunately, where it was placed was inside a CONFIG_TIMERLAT_TRAC
August 22, 2024 at 7:30 AM, "Jeff Xie" wrote:
Kindly ping, any comments here? Thanks.
>
> Currently, when using both function tracer and function graph simultaneously,
>
> it is found that function tracer sometimes captures a fake parent
> ip(return_to_handler)
>
> instead of the true pare
On Mon, 09 Sep 2024 10:16:55 +0200
Sven Schnelle wrote:
> Masami Hiramatsu (Google) writes:
>
> > On Fri, 6 Sep 2024 11:36:11 +0800
> > Zheng Yejian wrote:
> >
> >> On 2024/9/4 14:58, Sven Schnelle wrote:
> >> > Add a config option to disable/enable function argument
> >> > printing support du
From: Arnd Bergmann
The definition was previously moved into an #ifdef block by
accident and now causes a build failure when CONFIG_TIMERLAT_TRACER
is disabled:
In file included from include/linux/seqlock.h:19,
from kernel/trace/trace_osnoise.c:20:
kernel/trace/trace_osnoise.c:
On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko wrote:
> Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> access vma->vm_file->f_inode field locklessly under just rcu_read_lock()
No, not every file is SLAB_TYPESAFE_BY_RCU - see for example
ovl_mmap(), which uses backing_fi
On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko wrote:
> +static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
> +{
> + /* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
> + return seq == smp_load_acquire(&mm->mm_lock_seq);
> +}
A load-acquire can't provid
On 2024/9/9 13:43, Mina Almasry wrote:
>
> Perf - page-pool benchmark:
> ---
>
> bench_page_pool_simple.ko tests with and without these changes:
> https://pastebin.com/raw/ncHDwAbn
>
> AFAIK the number that really matters in the perf tests is the
> 'tasklet_page_pool01_f
A helper function is defined but not used. This, in particular,
prevents kernel builds with clang, `make W=1` and CONFIG_WERROR=y:
kernel/trace/trace.c:2229:19: error: unused function 'run_tracer_selftest'
[-Werror,-Wunused-function]
2229 | static inline int run_tracer_selftest(struct tracer *type)
On Fri, Sep 06, 2024 at 09:18:15PM +0200, Oleg Nesterov wrote:
> On 09/06, Jiri Olsa wrote:
> >
> > On Mon, Sep 02, 2024 at 03:22:25AM +0800, Tianyi Liu wrote:
> > >
> > > For now, please forget the original patch as we need a new solution ;)
> >
> > hi,
> > any chance we could go with your fix unt
On Fri, Sep 06, 2024 at 10:46:00AM -0700, Andrii Nakryiko wrote:
> On Fri, Sep 6, 2024 at 2:39 AM Mark Rutland wrote:
> >
> > On Tue, Aug 27, 2024 at 07:33:55PM +0800, Liao, Chang wrote:
> > > Hi, Mark
> > >
> > > Would you like to discuss this patch further, or do you still believe
> > > emulati
On Mon, Sep 09, 2024 at 09:47:28AM +0300, Mike Rapoport wrote:
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8da0e66ca22d..563d9a890ce2 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -654,4 +656,15 @@ void ftrace_graph_func(unsigned long
Masami Hiramatsu (Google) writes:
> On Fri, 6 Sep 2024 11:36:11 +0800
> Zheng Yejian wrote:
>
>> On 2024/9/4 14:58, Sven Schnelle wrote:
>> > Add a config option to disable/enable function argument
>> > printing support during runtime.
>> >
>> > Signed-off-by: Sven Schnelle
>> > ---
>> > ker
Hello trace maintainers/developers,
This is a 31-day syzbot report for the trace subsystem.
All related reports/information can be found at:
https://syzkaller.appspot.com/upstream/s/trace
During the period, 1 new issue was detected and 0 were fixed.
In total, 8 issues are still open and 37 have
Masami Hiramatsu (Google) writes:
> On Fri, 6 Sep 2024 10:07:38 -0400
> Steven Rostedt wrote:
>
>> On Fri, 06 Sep 2024 08:18:02 +0200
>> Sven Schnelle wrote:
>>
>>
>> > One thing i learned after submitting the series is that struct
>> > ftrace_regs depends on CONFIG_FUNCTION_TRACER, so it can
Adding uprobe session test that verifies the cookie value is stored
properly when a single uprobe-ed function is executed recursively.
Acked-by: Andrii Nakryiko
Signed-off-by: Jiri Olsa
---
.../bpf/prog_tests/uprobe_multi_test.c| 57 +++
.../progs/uprobe_multi_session_recu
Adding uprobe session test that verifies the cookie value
gets properly propagated from the entry to the return program.
Acked-by: Andrii Nakryiko
Signed-off-by: Jiri Olsa
---
.../bpf/prog_tests/uprobe_multi_test.c| 31
.../bpf/progs/uprobe_multi_session_cookie.c | 48
Adding uprobe session test and testing that the entry program
return value controls execution of the return probe program.
Signed-off-by: Jiri Olsa
---
.../bpf/prog_tests/uprobe_multi_test.c| 47
.../bpf/progs/uprobe_multi_session.c | 71 +++
2 files
Adding support to attach a program in uprobe session mode
with the bpf_program__attach_uprobe_multi function.
Adding a session bool to the bpf_uprobe_multi_opts struct that allows
loading and attaching the bpf program via uprobe session;
the attachment creates a uprobe multi session.
Also adding new program loa
Placing bpf_session_run_ctx layer in between bpf_run_ctx and
bpf_uprobe_multi_run_ctx, so the session data can be retrieved
from uprobe_multi link.
Plus granting session kfuncs access to uprobe session programs.
Acked-by: Andrii Nakryiko
Signed-off-by: Jiri Olsa
---
kernel/trace/bpf_trace.c |
Adding support to attach a bpf program for entry and return probes
of the same function. This is a common use case which at the moment
requires creating two uprobe multi links.
Adding new BPF_TRACE_UPROBE_SESSION attach type that instructs
kernel to attach single link program to both entry and exit pr
Adding support for uprobe consumer to be defined as session and have
new behaviour for consumer's 'handler' and 'ret_handler' callbacks.
The session means that 'handler' and 'ret_handler' callbacks are
connected in a way that allows to:
- control execution of 'ret_handler' from 'handler' callba
hi,
this patchset is adding support for session uprobe attachment and
using it through bpf link for bpf programs.
The session means that the uprobe consumer is executed on entry
and return of probed function with additional control:
- entry callback can control execution of the return callback
On 2024/9/6 17:39, Mark Rutland wrote:
> On Tue, Aug 27, 2024 at 07:33:55PM +0800, Liao, Chang wrote:
>> Hi, Mark
>>
>> Would you like to discuss this patch further, or do you still believe
>> emulating
>> STP to push FP/LR into the stack in kernel is not a good idea?
>
> I'm happy with the NOP em
On 2024/9/6 4:17, Andrii Nakryiko wrote:
> On Fri, Aug 30, 2024 at 2:25 AM Liao, Chang wrote:
>>
>>
>>
>> On 2024/8/30 3:26, Andrii Nakryiko wrote:
>>> On Tue, Aug 27, 2024 at 4:34 AM Liao, Chang wrote:
Hi, Mark
Would you like to discuss this patch further, or do you still believe
On 2024/9/7 1:46, Andrii Nakryiko wrote:
> On Fri, Sep 6, 2024 at 2:39 AM Mark Rutland wrote:
>>
>> On Tue, Aug 27, 2024 at 07:33:55PM +0800, Liao, Chang wrote:
>>> Hi, Mark
>>>
>>> Would you like to discuss this patch further, or do you still believe
>>> emulating
>>> STP to push FP/LR into the s
v2->v1:
1. Remove the simulation of STP and the related bits.
2. Use arm64_skip_faulting_instruction for single-stepping or FEAT_BTI
scenario.
As Andrii pointed out, the uprobe/uretprobe selftest bench ran into a
counterintuitive result: the nop and push variants are much slower than the
ret variant