Re: [PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-18 Thread Masami Hiramatsu
On Mon, 18 Dec 2017 16:09:30 +0100
Daniel Borkmann  wrote:

> On 12/18/2017 10:51 AM, Masami Hiramatsu wrote:
> > On Fri, 15 Dec 2017 14:12:54 -0500
> > Josef Bacik  wrote:
> >> From: Josef Bacik 
> >>
> >> Error injection is sloppy and very ad-hoc.  BPF could fill this niche
> >> perfectly with its kprobe functionality.  We could make sure errors are
> >> only triggered in specific call chains that we care about, in very
> >> specific situations.  Accomplish this with the bpf_override_function
> >> helper.  This will modify the probed function's return value to the
> >> specified value and set the PC to an override function that simply
> >> returns, bypassing the originally probed function.  This gives us a nice
> >> clean way to implement systematic error injection for all of our code
> >> paths.
> > 
> > OK, got it. I think the error_injectable function list should be defined
> > in kernel/trace/bpf_trace.c because only bpf calls it and needs to care
> > about its "safeness".
> > 
> > [...]
> >> diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
> >> index 8dc0161cec8f..1ea748d682fd 100644
> >> --- a/arch/x86/kernel/kprobes/ftrace.c
> >> +++ b/arch/x86/kernel/kprobes/ftrace.c
> >> @@ -97,3 +97,17 @@ int arch_prepare_kprobe_ftrace(struct kprobe *p)
> >>p->ainsn.boostable = false;
> >>return 0;
> >>  }
> >> +
> >> +asmlinkage void override_func(void);
> >> +asm(
> >> +  ".type override_func, @function\n"
> >> +  "override_func:\n"
> >> +  "   ret\n"
> >> +  ".size override_func, .-override_func\n"
> >> +);
> >> +
> >> +void arch_ftrace_kprobe_override_function(struct pt_regs *regs)
> >> +{
> >> +  regs->ip = (unsigned long)&override_func;
> >> +}
> >> +NOKPROBE_SYMBOL(arch_ftrace_kprobe_override_function);
> > 
> > Calling this "override_function" is meaningless. This is a function
> > which just returns. So I think a combination of just_return_func() and
> > arch_bpf_override_func_just_return() would be better.
> > 
> > Moreover, this arch/x86/kernel/kprobes/ftrace.c is an architecture-dependent
> > implementation of kprobes, not bpf.
> 
> Josef, please work out any necessary cleanups that would still need
> to be addressed based on Masami's feedback and send them as follow-up
> patches, thanks.
> 
> > Hmm, arch/x86/net/bpf_jit_comp.c will be better place?
> 
> (No, it's JIT only and I'd really prefer to keep it that way, mixing
>  this would result in a huge mess.)

OK, the same applies to kprobes. kernel/kprobes.c and arch/x86/kernel/kprobes/*
are the instrumentation code, and kernel/trace/trace_kprobe.c is ftrace's
kprobe user interface, just one implementation of kprobe usage. So please
do not mix them up. That would result in a huge mess for me.

Thank you,

-- 
Masami Hiramatsu 


Re: [PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-18 Thread Daniel Borkmann
On 12/18/2017 10:51 AM, Masami Hiramatsu wrote:
> On Fri, 15 Dec 2017 14:12:54 -0500
> Josef Bacik  wrote:
>> From: Josef Bacik 
>>
>> Error injection is sloppy and very ad-hoc.  BPF could fill this niche
>> perfectly with its kprobe functionality.  We could make sure errors are
>> only triggered in specific call chains that we care about, in very
>> specific situations.  Accomplish this with the bpf_override_function
>> helper.  This will modify the probed function's return value to the
>> specified value and set the PC to an override function that simply
>> returns, bypassing the originally probed function.  This gives us a nice
>> clean way to implement systematic error injection for all of our code
>> paths.
> 
> OK, got it. I think the error_injectable function list should be defined
> in kernel/trace/bpf_trace.c because only bpf calls it and needs to care
> about its "safeness".
> 
> [...]
>> diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
>> index 8dc0161cec8f..1ea748d682fd 100644
>> --- a/arch/x86/kernel/kprobes/ftrace.c
>> +++ b/arch/x86/kernel/kprobes/ftrace.c
>> @@ -97,3 +97,17 @@ int arch_prepare_kprobe_ftrace(struct kprobe *p)
>>  p->ainsn.boostable = false;
>>  return 0;
>>  }
>> +
>> +asmlinkage void override_func(void);
>> +asm(
>> +".type override_func, @function\n"
>> +"override_func:\n"
>> +"   ret\n"
>> +".size override_func, .-override_func\n"
>> +);
>> +
>> +void arch_ftrace_kprobe_override_function(struct pt_regs *regs)
>> +{
>> +regs->ip = (unsigned long)&override_func;
>> +}
>> +NOKPROBE_SYMBOL(arch_ftrace_kprobe_override_function);
> 
> Calling this "override_function" is meaningless. This is a function
> which just returns. So I think a combination of just_return_func() and
> arch_bpf_override_func_just_return() would be better.
> 
> Moreover, this arch/x86/kernel/kprobes/ftrace.c is an architecture-dependent
> implementation of kprobes, not bpf.

Josef, please work out any necessary cleanups that would still need
to be addressed based on Masami's feedback and send them as follow-up
patches, thanks.

> Hmm, arch/x86/net/bpf_jit_comp.c will be better place?

(No, it's JIT only and I'd really prefer to keep it that way, mixing
 this would result in a huge mess.)


Re: [PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-18 Thread Masami Hiramatsu
On Fri, 15 Dec 2017 14:12:54 -0500
Josef Bacik  wrote:

> From: Josef Bacik 
> 
> Error injection is sloppy and very ad-hoc.  BPF could fill this niche
> perfectly with its kprobe functionality.  We could make sure errors are
> only triggered in specific call chains that we care about, in very
> specific situations.  Accomplish this with the bpf_override_function
> helper.  This will modify the probed function's return value to the
> specified value and set the PC to an override function that simply
> returns, bypassing the originally probed function.  This gives us a nice
> clean way to implement systematic error injection for all of our code
> paths.

OK, got it. I think the error_injectable function list should be defined
in kernel/trace/bpf_trace.c because only bpf calls it and needs to care
about its "safeness".

[...]
> diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
> index 8dc0161cec8f..1ea748d682fd 100644
> --- a/arch/x86/kernel/kprobes/ftrace.c
> +++ b/arch/x86/kernel/kprobes/ftrace.c
> @@ -97,3 +97,17 @@ int arch_prepare_kprobe_ftrace(struct kprobe *p)
>   p->ainsn.boostable = false;
>   return 0;
>  }
> +
> +asmlinkage void override_func(void);
> +asm(
> + ".type override_func, @function\n"
> + "override_func:\n"
> + "   ret\n"
> + ".size override_func, .-override_func\n"
> +);
> +
> +void arch_ftrace_kprobe_override_function(struct pt_regs *regs)
> +{
> + regs->ip = (unsigned long)&override_func;
> +}
> +NOKPROBE_SYMBOL(arch_ftrace_kprobe_override_function);

Calling this "override_function" is meaningless. This is a function
which just returns. So I think a combination of just_return_func() and
arch_bpf_override_func_just_return() would be better.
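
For example, something like this (sketch only; same behaviour as the quoted
stub, just with the proposed names):

asmlinkage void just_return_func(void);
asm(
	".type just_return_func, @function\n"
	"just_return_func:\n"
	"	ret\n"
	".size just_return_func, .-just_return_func\n"
);

/* Arch hook that redirects the probed function to the ret-only stub above. */
void arch_bpf_override_func_just_return(struct pt_regs *regs)
{
	regs->ip = (unsigned long)&just_return_func;
}
NOKPROBE_SYMBOL(arch_bpf_override_func_just_return);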

Moreover, this arch/x86/kernel/kprobes/ftrace.c is an architecture-dependent
implementation of kprobes, not bpf.

Hmm, arch/x86/net/bpf_jit_comp.c will be better place?
(why don't we have arch/x86/kernel/bpf.c?)

[..]
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 492700c5fb4d..91f4b57dab82 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -42,6 +42,7 @@ struct trace_kprobe {
>   (offsetof(struct trace_kprobe, tp.args) +   \
>   (sizeof(struct probe_arg) * (n)))
>  
> +DEFINE_PER_CPU(int, bpf_kprobe_override);
>  
>  static nokprobe_inline bool trace_kprobe_is_return(struct trace_kprobe *tk)
>  {
> @@ -87,6 +88,27 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
>   return nhit;
>  }
>  
> +int trace_kprobe_ftrace(struct trace_event_call *call)
> +{
> + struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> + return kprobe_ftrace(&tk->rp.kp);
> +}
> +
> +int trace_kprobe_error_injectable(struct trace_event_call *call)
> +{
> + struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> + unsigned long addr;
> +
> + if (tk->symbol) {
> + addr = (unsigned long)
> + kallsyms_lookup_name(trace_kprobe_symbol(tk));
> + addr += tk->rp.kp.offset;

If the tk is already registered, you don't need to look up the address;
you can use kp.addr. Anyway, kprobe_ftrace() also requires the kprobe to be
registered already.
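
A minimal sketch of that simplification (illustrative only; assumes the
kprobe has been registered so kp.addr is already resolved):

int trace_kprobe_error_injectable(struct trace_event_call *call)
{
	struct trace_kprobe *tk = (struct trace_kprobe *)call->data;

	/* kp.addr is filled in at registration time, so no kallsyms lookup
	 * or manual offset handling is needed here. */
	return within_kprobe_error_injection_list((unsigned long)tk->rp.kp.addr);
}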

> + } else {
> + addr = (unsigned long)tk->rp.kp.addr;
> + }
> + return within_kprobe_error_injection_list(addr);
> +}
> +
>  static int register_kprobe_event(struct trace_kprobe *tk);
>  static int unregister_kprobe_event(struct trace_kprobe *tk);
>  
> @@ -1170,7 +1192,7 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
>  #ifdef CONFIG_PERF_EVENTS
>  
>  /* Kprobe profile handler */
> -static void
> +static int
>  kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>  {
>   struct trace_event_call *call = &tk->tp.call;
> @@ -1179,12 +1201,29 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>   int size, __size, dsize;
>   int rctx;
>  
> - if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> - return;
> + if (bpf_prog_array_valid(call)) {
> + int ret;
> +
> + ret = trace_call_bpf(call, regs);
> +
> + /*
> +  * We need to check and see if we modified the pc of the
> +  * pt_regs, and if so clear the kprobe and return 1 so that we
> +  * don't do the instruction skipping.  Also reset our state so
> +  * we are clean the next pass through.
> +  */
> + if (__this_cpu_read(bpf_kprobe_override)) {
> + __this_cpu_write(bpf_kprobe_override, 0);
> + reset_current_kprobe();

OK, I will fix this issue (reset kprobe and preempt-enable) by removing
jprobe soon.
(Currently waiting for the {tcp,sctp,dccp}_probe code to be removed, which
 are the only users of jprobe in the kernel.)

Thank you,


> + return 1;
> + }
> + if (!ret)
> 
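
For reference, the quoted hunks, taken together with the helper itself
(quoted in Alexei's reply further down), fit together roughly like this
(condensed sketch, not a literal excerpt from the patch):

/* Per-cpu flag set by the helper, consumed by the kprobe handler. */
DEFINE_PER_CPU(int, bpf_kprobe_override);

/* BPF side: remember that we overrode, stash the return value in regs->ax
 * (via regs_set_return_value()) and point regs->ip at the ret-only stub. */
BPF_CALL_2(bpf_override_return, struct pt_regs *, regs, unsigned long, rc)
{
	__this_cpu_write(bpf_kprobe_override, 1);
	regs_set_return_value(regs, rc);
	arch_ftrace_kprobe_override_function(regs);
	return 0;
}

/* kprobe side, in kprobe_perf_func() after trace_call_bpf(): if the pc was
 * changed, clear the per-cpu flag and the current kprobe and return 1 so the
 * caller skips the usual single-stepping.  This is the spot Masami flags
 * above: reset_current_kprobe() alone is not sufficient here. */
	if (__this_cpu_read(bpf_kprobe_override)) {
		__this_cpu_write(bpf_kprobe_override, 0);
		reset_current_kprobe();
		return 1;
	}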

Re: [PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-15 Thread Daniel Borkmann
On 12/15/2017 09:34 PM, Alexei Starovoitov wrote:
[...]
> Also, how big is the v9-v10 change?
> Maybe do it as a separate patch, since the previous set is already sitting
> in bpf-next and there are patches on top?

+1



Re: [PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-15 Thread Alexei Starovoitov

On 12/15/17 11:12 AM, Josef Bacik wrote:

+#ifdef CONFIG_BPF_KPROBE_OVERRIDE
+BPF_CALL_2(bpf_override_return, struct pt_regs *, regs, unsigned long, rc)
+{
+   __this_cpu_write(bpf_kprobe_override, 1);
+   regs_set_return_value(regs, rc);
+   arch_ftrace_kprobe_override_function(regs);
+   return 0;
+}


Since you're doing a respin, can you adopt the change I did to make
this helper fail at load time if that config is not set, instead of at
runtime?
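
One way to do that (a sketch only, assuming the series names the func id
BPF_FUNC_override_return and its proto bpf_override_return_proto): only hand
out the helper's proto when the config is enabled, so the verifier rejects
the call while the program is being loaded rather than the helper failing at
run time.

static const struct bpf_func_proto *
kprobe_prog_func_proto(enum bpf_func_id func_id)
{
	switch (func_id) {
#ifdef CONFIG_BPF_KPROBE_OVERRIDE
	case BPF_FUNC_override_return:
		return &bpf_override_return_proto;
#endif
	/* ... other kprobe helpers elided ... */
	default:
		/* Unknown or unavailable helper: the verifier rejects the
		 * program at load time. */
		return NULL;
	}
}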

Also, how big is the v9-v10 change?
Maybe do it as a separate patch, since the previous set is already sitting
in bpf-next and there are patches on top?



[PATCH v10 3/5] bpf: add a bpf_override_function helper

2017-12-15 Thread Josef Bacik
From: Josef Bacik 

Error injection is sloppy and very ad-hoc.  BPF could fill this niche
perfectly with its kprobe functionality.  We could make sure errors are
only triggered in specific call chains that we care about, in very
specific situations.  Accomplish this with the bpf_override_function
helper.  This will modify the probed function's return value to the
specified value and set the PC to an override function that simply
returns, bypassing the originally probed function.  This gives us a nice
clean way to implement systematic error injection for all of our code
paths.

Acked-by: Alexei Starovoitov 
Acked-by: Ingo Molnar 
Signed-off-by: Josef Bacik 
---
 arch/Kconfig |  3 ++
 arch/x86/Kconfig |  1 +
 arch/x86/include/asm/kprobes.h   |  4 +++
 arch/x86/include/asm/ptrace.h|  5 
 arch/x86/kernel/kprobes/ftrace.c | 14 +
 include/linux/filter.h   |  3 +-
 include/linux/trace_events.h |  1 +
 include/uapi/linux/bpf.h |  7 -
 kernel/bpf/core.c|  3 ++
 kernel/bpf/verifier.c|  2 ++
 kernel/events/core.c |  7 +
 kernel/trace/Kconfig | 11 +++
 kernel/trace/bpf_trace.c | 38 
 kernel/trace/trace_kprobe.c  | 64 +++-
 kernel/trace/trace_probe.h   | 12 
 15 files changed, 165 insertions(+), 10 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 400b9e1b2f27..d3f4aaf9cb7a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -196,6 +196,9 @@ config HAVE_OPTPROBES
 config HAVE_KPROBES_ON_FTRACE
bool
 
+config HAVE_KPROBE_OVERRIDE
+   bool
+
 config HAVE_NMI
bool
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8eed3f94bfc7..04d66e6fa447 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -154,6 +154,7 @@ config X86
select HAVE_KERNEL_XZ
select HAVE_KPROBES
select HAVE_KPROBES_ON_FTRACE
+   select HAVE_KPROBE_OVERRIDE
select HAVE_KRETPROBES
select HAVE_KVM
select HAVE_LIVEPATCH   if X86_64
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 9f2e3102e0bb..36abb23a7a35 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -67,6 +67,10 @@ extern const int kretprobe_blacklist_size;
 void arch_remove_kprobe(struct kprobe *p);
 asmlinkage void kretprobe_trampoline(void);
 
+#ifdef CONFIG_KPROBES_ON_FTRACE
+extern void arch_ftrace_kprobe_override_function(struct pt_regs *regs);
+#endif
+
 /* Architecture specific copy of original instruction*/
 struct arch_specific_insn {
/* copy of the original instruction */
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 14131dd06b29..6de1fd3d0097 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -109,6 +109,11 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
return regs->ax;
 }
 
+static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+{
+   regs->ax = rc;
+}
+
 /*
  * user_mode(regs) determines whether a register set came from user
  * mode.  On x86_32, this is true if V8086 mode was enabled OR if the
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 8dc0161cec8f..1ea748d682fd 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -97,3 +97,17 @@ int arch_prepare_kprobe_ftrace(struct kprobe *p)
p->ainsn.boostable = false;
return 0;
 }
+
+asmlinkage void override_func(void);
+asm(
+   ".type override_func, @function\n"
+   "override_func:\n"
+   "   ret\n"
+   ".size override_func, .-override_func\n"
+);
+
+void arch_ftrace_kprobe_override_function(struct pt_regs *regs)
+{
+   regs->ip = (unsigned long)&override_func;
+}
+NOKPROBE_SYMBOL(arch_ftrace_kprobe_override_function);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 0062302e1285..5feb441d3dd9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -458,7 +458,8 @@ struct bpf_prog {
locked:1,   /* Program image locked? */
gpl_compatible:1, /* Is filter GPL compatible? */
cb_access:1,/* Is control block accessed? */
-   dst_needed:1;   /* Do we need dst entry? */
+   dst_needed:1,   /* Do we need dst entry? */
+   kprobe_override:1; /* Do we override a kprobe? */
enum bpf_prog_type  type;   /* Type of BPF program */
u32 len;/* Number of filter blocks */
u32 jited_len;  /* Size of jited insns in bytes */
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index af44e7c2d577..5fea451f6e28 100644
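
For illustration, here is roughly how a BPF program might use the helper this
series adds, assuming the usual libbpf/samples SEC() conventions and a target
function that is on the error-injection list; the probed symbol and header
paths are examples, not part of the patch:

#include <linux/ptrace.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Force the (assumed error-injectable) target to return -EIO whenever the
 * kprobe fires; the kernel side then skips the real function body. */
SEC("kprobe/should_fail_bio")
int inject_eio(struct pt_regs *ctx)
{
	bpf_override_return(ctx, -5 /* -EIO */);
	return 0;
}

char _license[] SEC("license") = "GPL";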