On Mon, Jun 10, 2019 at 01:33:57PM -0500, Josh Poimboeuf wrote:
> On Wed, Jun 05, 2019 at 03:08:06PM +0200, Peter Zijlstra wrote:
> > --- a/arch/x86/include/asm/static_call.h
> > +++ b/arch/x86/include/asm/static_call.h
> > @@ -2,6 +2,20 @@
> >  #ifndef _ASM_STATIC_CALL_H
> >  #define _ASM_STATIC_CALL_H
> >  
> > +#include <asm/asm-offsets.h>
> > +
> > +#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
> > +
> > +/*
> > + * This trampoline is only used during boot / module init, so it's safe to use
> > + * the indirect branch without a retpoline.
> > + */
> > +#define __ARCH_STATIC_CALL_TRAMP_JMP(key, func)			\
> > +   ANNOTATE_RETPOLINE_SAFE                                         \
> > +   "jmpq *" __stringify(key) "+" __stringify(SC_KEY_func) "(%rip) \n"
> > +
> > +#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
> 
> I wonder if we can simplify this (and drop the indirect branch) by
> getting rid of the above cruft, and instead just use the out-of-line
> trampoline as the default for inline as well.
> 
> Then the inline case could fall back to the out-of-line implementation
> (by patching the trampoline's jmp dest) before static_call_initialized
> is set.

I think I've got that covered. I changed arch_static_call_transform() to
(always) first rewrite the trampoline and then the in-line site.

That way, when early/module crud comes in that still points at the
trampoline, it will jump to the right place.

I've not even compiled yet, but it ought to work ;-)
