On Wed, Oct 23, 2019 at 12:23:06PM -0400, Steven Rostedt wrote:
> On Tue, 22 Oct 2019 14:58:43 -0700
> Alexei Starovoitov wrote:
>
> > Neither of those two statements is true. The per-function generated trampoline
> > I'm talking about is bpf specific. For a function with two arguments it's
> > just:
On Wed, 23 Oct 2019 12:23:06 -0400
Steven Rostedt wrote:
> All you need to do is:
>
> register_ftrace_direct((unsigned long)func_you_want_to_trace,
>                        (unsigned long)your_trampoline);
>
>
> Alexei,
>
> Would this work for you?
I just pushed a test branch u
On Tue, 22 Oct 2019 14:58:43 -0700
Alexei Starovoitov wrote:
> Neither of those two statements is true. The per-function generated trampoline
> I'm talking about is bpf specific. For a function with two arguments it's
> just:
> push rbp
> mov rbp, rsp
> push rdi
> push rsi
> lea rdi,[rbp-0x10]
> ca
On Tue, Oct 22, 2019 at 09:20:27PM -0700, Andy Lutomirski wrote:
> Also, Alexei, are you testing on a CONFIG_FRAME_POINTER=y kernel? The
> ftrace code has a somewhat nasty special case to make
> CONFIG_FRAME_POINTER=y work right, and your example trampoline does
> not but arguably should have exac
On Tue, Oct 22, 2019 at 4:49 PM Alexei Starovoitov
wrote:
>
> On Tue, Oct 22, 2019 at 03:45:26PM -0700, Andy Lutomirski wrote:
> >
> >
> > >> On Oct 22, 2019, at 2:58 PM, Alexei Starovoitov
> > >> wrote:
> > >>
> > >> On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
> > >> I gave
On Tue, 22 Oct 2019 18:17:40 -0400
Steven Rostedt wrote:
> > your solution is to reduce the overhead.
> > my solution is to remove it completely. See the difference?
>
> You're just trimming it down. I'm curious what overhead you save by
> not saving all parameter registers, and doing a case
On Tue, Oct 22, 2019 at 03:45:26PM -0700, Andy Lutomirski wrote:
>
>
> >> On Oct 22, 2019, at 2:58 PM, Alexei Starovoitov
> >> wrote:
> >>
> >> On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
> >> I gave a solution for this. And that is to add another flag to allow
> >> for ju
On Tue, 22 Oct 2019 15:45:26 -0700
Andy Lutomirski wrote:
> >> On Oct 22, 2019, at 2:58 PM, Alexei Starovoitov
> >> wrote:
> >>
> >> On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
> >> I gave a solution for this. And that is to add another flag to allow
> >> for just the mini
>> On Oct 22, 2019, at 2:58 PM, Alexei Starovoitov
>> wrote:
>>
>> On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
>> I gave a solution for this. And that is to add another flag to allow
>> for just the minimum to change the ip. And we can even add another flag
>> to allow fo
On Tue, 22 Oct 2019 14:58:43 -0700
Alexei Starovoitov wrote:
> On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
> >
> > I gave a solution for this. And that is to add another flag to allow
> > for just the minimum to change the ip. And we can even add another flag
> > to allow for
On Tue, Oct 22, 2019 at 05:04:30PM -0400, Steven Rostedt wrote:
>
> I gave a solution for this. And that is to add another flag to allow
> for just the minimum to change the ip. And we can even add another flag
> to allow for changing the stack if needed (to emulate a call with the
> same paramete
On Tue, 22 Oct 2019 13:46:23 -0700
Alexei Starovoitov wrote:
> On Tue, Oct 22, 2019 at 02:10:21PM -0400, Steven Rostedt wrote:
> > On Tue, 22 Oct 2019 10:50:56 -0700
> > Alexei Starovoitov wrote:
> >
> > > > +static void my_hijack_func(unsigned long ip, unsigned long pip,
> > > > +
On Tue, Oct 22, 2019 at 02:10:21PM -0400, Steven Rostedt wrote:
> On Tue, 22 Oct 2019 10:50:56 -0700
> Alexei Starovoitov wrote:
>
> > > +static void my_hijack_func(unsigned long ip, unsigned long pip,
> > > +                           struct ftrace_ops *ops, struct pt_regs *regs)
> >
> > 1.
> > To p
On Tue, 22 Oct 2019 10:50:56 -0700
Alexei Starovoitov wrote:
> > +static void my_hijack_func(unsigned long ip, unsigned long pip,
> > + struct ftrace_ops *ops, struct pt_regs *regs)
>
> 1.
> To pass regs into the callback ftrace_regs_caller() has huge amount
> of stores to
On Tue, Oct 22, 2019 at 09:44:55AM -0400, Steven Rostedt wrote:
> On Tue, 22 Oct 2019 07:19:56 -0400
> Steven Rostedt wrote:
>
> > > I'm not touching dyn_ftrace.
> > > Actually calling my stuff ftrace+bpf is probably not correct either.
> > > I'm reusing code patching of nop into call that ftrace
On Tue, 22 Oct 2019 07:19:56 -0400
Steven Rostedt wrote:
> > I'm not touching dyn_ftrace.
> > Actually calling my stuff ftrace+bpf is probably not correct either.
> > I'm reusing code patching of nop into call that ftrace does. That's it.
> > Turned out I cannot use 99% of ftrace facilities.
> >
On Mon, 21 Oct 2019 21:05:33 -0700
Alexei Starovoitov wrote:
> On Mon, Oct 21, 2019 at 11:19:04PM -0400, Steven Rostedt wrote:
> > On Mon, 21 Oct 2019 23:16:30 -0400
> > Steven Rostedt wrote:
> >
> > > > What bugs are you seeing?
> > > > The IPI frequency that was mentioned in this thread or s
On Mon, Oct 21, 2019 at 11:19:04PM -0400, Steven Rostedt wrote:
> On Mon, 21 Oct 2019 23:16:30 -0400
> Steven Rostedt wrote:
>
> > > What bugs are you seeing?
> > > The IPI frequency that was mentioned in this thread or something else?
> > > I'm hacking ftrace+bpf stuff in the same spot and would
On Mon, Oct 21, 2019 at 11:16:30PM -0400, Steven Rostedt wrote:
> On Mon, 21 Oct 2019 20:10:09 -0700
> Alexei Starovoitov wrote:
>
> > On Mon, Oct 21, 2019 at 5:43 PM Steven Rostedt wrote:
> > >
> > > On Mon, 21 Oct 2019 17:36:54 -0700
> > > Alexei Starovoitov wrote:
> > >
> > >
> > > > What
On Mon, 21 Oct 2019 23:16:30 -0400
Steven Rostedt wrote:
> > What bugs are you seeing?
> > The IPI frequency that was mentioned in this thread or something else?
> > I'm hacking ftrace+bpf stuff in the same spot and would like to
> > base my work on the latest and greatest.
I'm also going to be
On Mon, 21 Oct 2019 20:10:09 -0700
Alexei Starovoitov wrote:
> On Mon, Oct 21, 2019 at 5:43 PM Steven Rostedt wrote:
> >
> > On Mon, 21 Oct 2019 17:36:54 -0700
> > Alexei Starovoitov wrote:
> >
> >
> > > What is the status of this set ?
> > > Steven, did you apply it ?
> >
> > There's still
On Mon, Oct 21, 2019 at 5:43 PM Steven Rostedt wrote:
>
> On Mon, 21 Oct 2019 17:36:54 -0700
> Alexei Starovoitov wrote:
>
>
> > What is the status of this set ?
> > Steven, did you apply it ?
>
> There's still bugs to figure out.
What bugs are you seeing?
The IPI frequency that was mentioned in
On Mon, 21 Oct 2019 17:36:54 -0700
Alexei Starovoitov wrote:
> What is the status of this set ?
> Steven, did you apply it ?
There's still bugs to figure out.
-- Steve
On Fri, Oct 4, 2019 at 6:45 AM Steven Rostedt wrote:
>
> On Fri, 4 Oct 2019 13:22:37 +0200
> Peter Zijlstra wrote:
>
> > On Thu, Oct 03, 2019 at 06:10:45PM -0400, Steven Rostedt wrote:
> > > But still, we are going from 120 to 660 IPIs for every CPU. Not saying
> > > it's a problem, but something
On Fri, 11 Oct 2019 09:37:10 +0200
Daniel Bristot de Oliveira wrote:
> But, yes, we will need [ as an optimization ] to sort the addresses right before
> inserting them in the batch. Still, having the ftrace_pages ordered seems to be
> a good thing, as in many cases, the ftrace_pages are disjoin
On Fri, Oct 11, 2019 at 09:37:10AM +0200, Daniel Bristot de Oliveira wrote:
> On 11/10/2019 09:01, Peter Zijlstra wrote:
> > On Fri, Oct 04, 2019 at 10:10:47AM +0200, Daniel Bristot de Oliveira wrote:
> >> Currently, ftrace_rec entries are ordered inside the group of functions, but
> >> "grou
On 11/10/2019 09:01, Peter Zijlstra wrote:
> On Fri, Oct 04, 2019 at 10:10:47AM +0200, Daniel Bristot de Oliveira wrote:
>> Currently, ftrace_rec entries are ordered inside the group of functions, but
>> "groups of function" are not ordered. So, the current int3 handler does a (*):
> We can ins
On Fri, Oct 04, 2019 at 10:10:47AM +0200, Daniel Bristot de Oliveira wrote:
> Currently, ftrace_rec entries are ordered inside the group of functions, but
> "groups of function" are not ordered. So, the current int3 handler does a (*):
We can insert a sort() of the vector right before doing
text_p
On Fri, Oct 04, 2019 at 10:10:47AM +0200, Daniel Bristot de Oliveira wrote:
> 1) the enabling/disabling ftrace path
> 2) the int3 path - if a thread/irq is running a kernel function
> 3) the IPI - that affects all CPUs, even those that are not "hitting" trace
> code, e.g., user-space.
>
> The firs
On Fri, 4 Oct 2019 16:44:35 +0200
Daniel Bristot de Oliveira wrote:
> On 04/10/2019 15:40, Steven Rostedt wrote:
> > On Fri, 4 Oct 2019 10:10:47 +0200
> > Daniel Bristot de Oliveira wrote:
> >
> >> [ In addition ]
> >>
> >> Currently, ftrace_rec entries are ordered inside the group of functio
On 04/10/2019 15:40, Steven Rostedt wrote:
> On Fri, 4 Oct 2019 10:10:47 +0200
> Daniel Bristot de Oliveira wrote:
>
>> [ In addition ]
>>
>> Currently, ftrace_rec entries are ordered inside the group of functions, but
>> "groups of function" are not ordered. So, the current int3 handler does a
On Fri, 4 Oct 2019 13:22:37 +0200
Peter Zijlstra wrote:
> On Thu, Oct 03, 2019 at 06:10:45PM -0400, Steven Rostedt wrote:
> > But still, we are going from 120 to 660 IPIs for every CPU. Not saying
> > it's a problem, but something that we should note. Someone (those that
> > don't like kernel int
On Fri, 4 Oct 2019 10:10:47 +0200
Daniel Bristot de Oliveira wrote:
> [ In addition ]
>
> Currently, ftrace_rec entries are ordered inside the group of functions, but
> "groups of function" are not ordered. So, the current int3 handler does a (*):
>
> for_each_group_of_functions:
> check
On Thu, Oct 03, 2019 at 06:10:45PM -0400, Steven Rostedt wrote:
> But still, we are going from 120 to 660 IPIs for every CPU. Not saying
> it's a problem, but something that we should note. Someone (those that
> don't like kernel interference) may complain.
It is machine wide function tracing, int
On 04/10/2019 00:10, Steven Rostedt wrote:
> On Wed, 2 Oct 2019 20:21:06 +0200
> Peter Zijlstra wrote:
>
>> On Wed, Oct 02, 2019 at 06:35:26PM +0200, Daniel Bristot de Oliveira wrote:
>>
>>> ftrace was already batching the updates, for instance, causing 3 IPIs to enable
>>> all functions. Th
On Wed, 2 Oct 2019 20:21:06 +0200
Peter Zijlstra wrote:
> On Wed, Oct 02, 2019 at 06:35:26PM +0200, Daniel Bristot de Oliveira wrote:
>
> > ftrace was already batching the updates, for instance, causing 3 IPIs to enable
> > all functions. The text_poke() batching also works. But because of
On Wed, 2 Oct 2019 18:35:26 +0200
Daniel Bristot de Oliveira wrote:
> ftrace was already batching the updates, for instance, causing 3 IPIs to enable
> all functions. The text_poke() batching also works. But because of the limited
> buffer [ see the reply to the patch 2/3 ], it is flushing the
On Wed, Oct 02, 2019 at 06:35:26PM +0200, Daniel Bristot de Oliveira wrote:
> ftrace was already batching the updates, for instance, causing 3 IPIs to enable
> all functions. The text_poke() batching also works. But because of the limited
> buffer [ see the reply to the patch 2/3 ], it is flush
On 27/08/2019 20:06, Peter Zijlstra wrote:
> Move ftrace over to using the generic x86 text_poke functions; this
> avoids having a second/different copy of that code around.
>
> This also avoids ftrace violating the (new) W^X rule and avoids
> fragmenting the kernel text page-tables, due to no lon
Move ftrace over to using the generic x86 text_poke functions; this
avoids having a second/different copy of that code around.
This also avoids ftrace violating the (new) W^X rule and avoids
fragmenting the kernel text page-tables, due to no longer having to
toggle them RW.
Cc: Steven Rostedt
Cc
On Mon, Aug 26, 2019 at 02:51:41PM +0200, Peter Zijlstra wrote:
> Move ftrace over to using the generic x86 text_poke functions; this
> avoids having a second/different copy of that code around.
>
> Cc: Daniel Bristot de Oliveira
> Cc: Steven Rostedt
> Signed-off-by: Peter Zijlstra (Intel)
*si
Move ftrace over to using the generic x86 text_poke functions; this
avoids having a second/different copy of that code around.
Cc: Daniel Bristot de Oliveira
Cc: Steven Rostedt
Signed-off-by: Peter Zijlstra (Intel)
---
arch/x86/include/asm/ftrace.h |   2
arch/x86/kernel/ftrace.c      | 571