Hi Daniel,

On Mon, 4 Feb 2019 20:59:02 +0100
Daniel Bristot de Oliveira <[email protected]> wrote:
> Currently, the jump label of a static key is transformed via the arch
> specific function:
>
>  void arch_jump_label_transform(struct jump_entry *entry,
>                                 enum jump_label_type type)
>
> The new approach (batch mode) uses two arch functions, the first has the
> same arguments as arch_jump_label_transform(), and is the function:
>
>  void arch_jump_label_transform_queue(struct jump_entry *entry,
>                                       enum jump_label_type type)

This function actually returns an "int" value. Also, it seems the function
returns 0 for failure and 1 for success, but an "int" return is usually
expected to be 0 for success and -errno for failure. So could you update
the interface to return -ENOSPC for vector overflow and -EINVAL for an
invalid (out-of-order) entry?
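For illustration, a rough, untested sketch of what that convention could
look like; the caller-side handling below is only a guess at how the
generic jump_label code might react to each value, it is not taken from
this patch:

        int arch_jump_label_transform_queue(struct jump_entry *entry,
                                            enum jump_label_type type)
        {
                ...
                /* No more space in the vector: caller must apply the queue first. */
                if (entry_vector_nr_elem == entry_vector_max_elem)
                        return -ENOSPC;
                ...
                /* Unsorted entry: reject it so the caller applies the queue. */
                if (WARN_ON_ONCE(prev_tp->addr > entry_code))
                        return -EINVAL;
                ...
                return 0;       /* successfully queued */
        }

and the arch-independent caller could then do something like:

        ret = arch_jump_label_transform_queue(entry, type);
        if (ret == -ENOSPC) {
                /* The queue is full: flush it and retry this entry. */
                arch_jump_label_transform_apply();
                ret = arch_jump_label_transform_queue(entry, type);
        }
        if (ret)
                /* e.g. -EINVAL: patch this entry directly as a fallback. */
                arch_jump_label_transform(entry, type);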
Thank you,

>
> Rather than transforming the code, it adds the jump_entry in a queue of
> entries to be updated. This function returns 1 in the case of a
> successful enqueue of an entry. If it returns 0, the caller must
> apply the queue and then try to queue again, for instance because the
> queue is full.
>
> This function expects the caller to sort the entries by address before
> enqueuing them. This is already done by the arch independent code, though.
>
> After queuing all jump_entries, the function:
>
>  void arch_jump_label_transform_apply(void)
>
> applies the changes in the queue.
>
> Signed-off-by: Daniel Bristot de Oliveira <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: Masami Hiramatsu <[email protected]>
> Cc: "Steven Rostedt (VMware)" <[email protected]>
> Cc: Jiri Kosina <[email protected]>
> Cc: Josh Poimboeuf <[email protected]>
> Cc: "Peter Zijlstra (Intel)" <[email protected]>
> Cc: Chris von Recklinghausen <[email protected]>
> Cc: Jason Baron <[email protected]>
> Cc: Scott Wood <[email protected]>
> Cc: Marcelo Tosatti <[email protected]>
> Cc: Clark Williams <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> ---
>  arch/x86/include/asm/jump_label.h |  2 +
>  arch/x86/kernel/jump_label.c      | 88 +++++++++++++++++++++++++++++++
>  2 files changed, 90 insertions(+)
>
> diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
> index 65191ce8e1cf..06c3cc22a058 100644
> --- a/arch/x86/include/asm/jump_label.h
> +++ b/arch/x86/include/asm/jump_label.h
> @@ -2,6 +2,8 @@
>  #ifndef _ASM_X86_JUMP_LABEL_H
>  #define _ASM_X86_JUMP_LABEL_H
>
> +#define HAVE_JUMP_LABEL_BATCH
> +
>  #define JUMP_LABEL_NOP_SIZE 5
>
>  #ifdef CONFIG_X86_64
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index 2ef687db5a87..3c81cf8f06ca 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -15,6 +15,7 @@
>  #include <asm/kprobes.h>
>  #include <asm/alternative.h>
>  #include <asm/text-patching.h>
> +#include <linux/slab.h>
>
>  union jump_code_union {
>          char code[JUMP_LABEL_NOP_SIZE];
> @@ -130,6 +131,93 @@ void arch_jump_label_transform(struct jump_entry *entry,
>          mutex_unlock(&text_mutex);
>  }
>
> +struct text_to_poke *entry_vector;
> +unsigned int entry_vector_max_elem __read_mostly;
> +unsigned int entry_vector_nr_elem;
> +
> +void arch_jump_label_init(void)
> +{
> +        entry_vector = (void *) __get_free_page(GFP_KERNEL);
> +
> +        if (WARN_ON_ONCE(!entry_vector))
> +                return;
> +
> +        entry_vector_max_elem = PAGE_SIZE / sizeof(struct text_to_poke);
> +        return;
> +}
> +
> +int arch_jump_label_transform_queue(struct jump_entry *entry,
> +                                    enum jump_label_type type)
> +{
> +        void *entry_code;
> +        struct text_to_poke *tp;
> +
> +        /*
> +         * Batch mode disabled before being able to allocate memory:
> +         * Fallback to the non-batching mode.
> +         */
> +        if (unlikely(!entry_vector_max_elem)) {
> +                if (!slab_is_available() || early_boot_irqs_disabled)
> +                        goto fallback;
> +
> +                arch_jump_label_init();
> +        }
> +
> +        /*
> +         * No more space in the vector, tell upper layer to apply
> +         * the queue before continuing.
> +         */
> +        if (entry_vector_nr_elem == entry_vector_max_elem)
> +                return 0;
> +
> +        tp = &entry_vector[entry_vector_nr_elem];
> +
> +        entry_code = (void *)jump_entry_code(entry);
> +
> +        /*
> +         * The int3 handler will do a bsearch in the queue, so we need entries
> +         * to be sorted. We can survive an unsorted list by rejecting the entry,
> +         * forcing the generic jump_label code to apply the queue. Warning once,
> +         * to raise the attention to the case of an unsorted entry that is
> +         * better not happen, because, in the worst case we will perform in the
> +         * same way as we do without batching - with some more overhead.
> +         */
> +        if (entry_vector_nr_elem > 0) {
> +                int prev_idx = entry_vector_nr_elem - 1;
> +                struct text_to_poke *prev_tp = &entry_vector[prev_idx];
> +
> +                if (WARN_ON_ONCE(prev_tp->addr > entry_code))
> +                        return 0;
> +        }
> +
> +        __jump_label_set_jump_code(entry, type,
> +                                   (union jump_code_union *) &tp->opcode, 0);
> +
> +        tp->addr = entry_code;
> +        tp->handler = entry_code + JUMP_LABEL_NOP_SIZE;
> +        tp->len = JUMP_LABEL_NOP_SIZE;
> +
> +        entry_vector_nr_elem++;
> +
> +        return 1;
> +
> +fallback:
> +        arch_jump_label_transform(entry, type);
> +        return 1;
> +}
> +
> +void arch_jump_label_transform_apply(void)
> +{
> +        if (early_boot_irqs_disabled || !entry_vector_nr_elem)
> +                return;
> +
> +        mutex_lock(&text_mutex);
> +        text_poke_bp_batch(entry_vector, entry_vector_nr_elem);
> +        mutex_unlock(&text_mutex);
> +
> +        entry_vector_nr_elem = 0;
> +}
> +
>  static enum {
>          JL_STATE_START,
>          JL_STATE_NO_UPDATE,
> --
> 2.17.1

-- 
Masami Hiramatsu <[email protected]>

