On Wed, 6 Feb 2019 08:59:44 +0000
Julien Thierry <julien.thie...@arm.com> wrote:

> Hi Torsten,
> 
> On 18/01/2019 16:39, Torsten Duwe wrote:
> > Once gcc8 adds 2 NOPs at the beginning of each function, replace the
> > first NOP thus generated with a quick LR saver (move it to scratch reg
> > x9), so the 2nd replacement insn, the call to ftrace, does not clobber
> > the value. Ftrace will then generate the standard stack frames.
> > 
> > Note that patchable-function-entry in GCC disables IPA-RA, which means
> > ABI register calling conventions are obeyed *and* scratch registers
> > such as x9 are available.
> > 
> > Introduce and handle an ftrace_regs_trampoline for module PLTs, right
> > after ftrace_trampoline, and double the size of this special section.
> > 
> > Signed-off-by: Torsten Duwe <d...@suse.de>
> > 
> > ---
> > 
> > Mark, if you see your ftrace entry macro code being represented correctly
> > here, please add your sign-off, as I initially copied it from your mail.
> > 
> > ---
> >  arch/arm64/include/asm/ftrace.h  |   17 ++++-
> >  arch/arm64/include/asm/module.h  |    3 
> >  arch/arm64/kernel/entry-ftrace.S |  125 +++++++++++++++++++++++++++++++++++++--
> >  arch/arm64/kernel/ftrace.c       |  114 ++++++++++++++++++++++++++---------
> >  arch/arm64/kernel/module-plts.c  |    3 
> >  arch/arm64/kernel/module.c       |    2 
> >  6 files changed, 227 insertions(+), 37 deletions(-)  
> 
> [...]
> 
> > --- a/arch/arm64/kernel/ftrace.c
> > +++ b/arch/arm64/kernel/ftrace.c
> > @@ -133,17 +163,45 @@ int ftrace_make_call(struct dyn_ftrace *
> >     return ftrace_modify_code(pc, old, new, true);
> >  }
> >  
> > +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> > +int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
> > +                   unsigned long addr)
> > +{
> > +   unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
> > +   u32 old, new;
> > +
> > +   old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
> > +   new = aarch64_insn_gen_branch_imm(pc, addr, true);
> > +
> > +   return ftrace_modify_code(pc, old, new, true);
> > +}
> > +#endif
> > +
> >  /*
> >   * Turn off the call to ftrace_caller() in instrumented function
> >   */
> >  int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
> >                 unsigned long addr)
> >  {
> > -   unsigned long pc = rec->ip;
> > +   unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;  
> 
> Sorry to come back to this patch again, but I was looking at the ftrace
> code a bit, and I see that when processing the ftrace call locations,
> ftrace calls ftrace_call_adjust() on every ip registered as an mcount
> caller (or, in our case, a patchable entry). This ftrace_call_adjust()
> is arch-specific, so I was thinking we could apply the offset there once
> and for all, so we don't have to worry about it in the future.

The ftrace_call_adjust() hook is there for the case where what is saved
in the mcount table differs from the address that is actually needed,
which looks to be the case here.
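
For illustration only, a minimal sketch of what that could look like in
arch/arm64/include/asm/ftrace.h (untested; it assumes the
REC_IP_BRANCH_OFFSET from the patch is one AArch64 instruction when
CONFIG_DYNAMIC_FTRACE_WITH_REGS is set, and zero otherwise):

	#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
	/* rec->ip points at the first NOP; the branch lives one insn later */
	#define REC_IP_BRANCH_OFFSET	AARCH64_INSN_SIZE
	#else
	#define REC_IP_BRANCH_OFFSET	0
	#endif

	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		/*
		 * Apply the offset once, up front, so ftrace_make_call(),
		 * ftrace_make_nop() and ftrace_modify_call() can use
		 * rec->ip directly instead of adding REC_IP_BRANCH_OFFSET
		 * at every call site.
		 */
		return addr + REC_IP_BRANCH_OFFSET;
	}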

-- Steve
