Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On 10/17/2017 12:10 AM, Alexei Starovoitov wrote:
> On Mon, Oct 16, 2017 at 2:10 PM, Richard Weinberger wrote:
>> On Monday, October 16, 2017, 23:02:06 CEST, Daniel Borkmann wrote:
>>> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
>>>> On Monday, October 16, 2017, 22:50:43 CEST, Daniel Borkmann wrote:
>>>>>> 	struct task_struct *task = current;
>>>>>>
>>>>>> +	task_lock(task);
>>>>>> 	strncpy(buf, task->comm, size);
>>>>>> +	task_unlock(task);
>>>>>
>>>>> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>>>>> to task_lock() / spin_lock() / etc., and then the BPF prog triggers
>>>>> bpf_get_current_comm(), taking the lock again ...
>>>>
>>>> Yes, but doesn't the same apply to the use case where I attach to
>>>> strncpy() and run bpf_get_current_comm()?
>>>
>>> You mean due to recursion? In that case trace_call_bpf() would bail out
>>> due to the bpf_prog_active counter.
>>
>> Ah, that's true.
>> So, when someone wants to use bpf_get_current_comm() while tracing
>> task_lock(), we have a problem. I agree.
>> On the other hand, without locking the function may return wrong results.
>
> It will surely race with somebody else setting the task comm, and that's
> fine. All of bpf tracing is read-only, so locks are only allowed inside
> bpf core bits like maps. Taking core locks like task_lock() is quite
> scary. bpf scripts rely on bpf_probe_read() of all sorts of kernel
> fields, so reading comm here without a lock is fine.

Yeah, and perf_event_comm() -> perf_event_comm_event() out of
__set_task_comm() takes the same approach wrt the comm read-out.
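For reference, the perf read-out Daniel points to also copies ->comm
without task_lock(). Below is a condensed sketch of that path
(perf_event_comm_event() in kernel/events/core.c, circa v4.14), pulled
into a hypothetical standalone helper for illustration; it is not a
verbatim quote of the kernel source:

/* Hypothetical helper condensing the comm read-out done by
 * perf_event_comm_event(); approximate, for illustration only.
 */
static void perf_comm_readout(struct task_struct *task,
			      char comm[TASK_COMM_LEN])
{
	memset(comm, 0, TASK_COMM_LEN);
	/*
	 * Racy by design: no task_lock() around the copy. A concurrent
	 * __set_task_comm() may rewrite ->comm underneath us; the worst
	 * case is a mix of old and new bytes, never an out-of-bounds
	 * read, because ->comm stays NUL-terminated within
	 * TASK_COMM_LEN at all times.
	 */
	strlcpy(comm, task->comm, TASK_COMM_LEN);
}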
Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On Mon, Oct 16, 2017 at 2:10 PM, Richard Weinberger wrote:
> On Monday, October 16, 2017, 23:02:06 CEST, Daniel Borkmann wrote:
>> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
>>> On Monday, October 16, 2017, 22:50:43 CEST, Daniel Borkmann wrote:
>>>>> 	struct task_struct *task = current;
>>>>>
>>>>> +	task_lock(task);
>>>>> 	strncpy(buf, task->comm, size);
>>>>> +	task_unlock(task);
>>>>
>>>> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>>>> to task_lock() / spin_lock() / etc., and then the BPF prog triggers
>>>> bpf_get_current_comm(), taking the lock again ...
>>>
>>> Yes, but doesn't the same apply to the use case where I attach to
>>> strncpy() and run bpf_get_current_comm()?
>>
>> You mean due to recursion? In that case trace_call_bpf() would bail out
>> due to the bpf_prog_active counter.
>
> Ah, that's true.
> So, when someone wants to use bpf_get_current_comm() while tracing
> task_lock(), we have a problem. I agree.
> On the other hand, without locking the function may return wrong results.

It will surely race with somebody else setting the task comm, and that's
fine. All of bpf tracing is read-only, so locks are only allowed inside
bpf core bits like maps. Taking core locks like task_lock() is quite
scary. bpf scripts rely on bpf_probe_read() of all sorts of kernel
fields, so reading comm here without a lock is fine.
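A lockless helper along the lines Alexei describes only needs to
guarantee a bounded, NUL-terminated copy, not a torn-read-free one. A
minimal sketch consistent with the quoted ~v4.14 helper layout (this is
essentially the pre-patch behavior, shown for illustration, not the
committed resolution):

/* Sketch of a lockless bpf_get_current_comm(); illustrative only. */
BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
{
	struct task_struct *task = current;

	/* No task_lock(): a concurrent comm update may race with this
	 * copy, which is acceptable for read-only tracing. The copy is
	 * still bounded by size.
	 */
	strncpy(buf, task->comm, size);

	/* The verifier guarantees size > 0, so this index is safe and
	 * unconditionally NUL-terminates the result even when
	 * task->comm was truncated.
	 */
	buf[size - 1] = 0;
	return 0;
}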
Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On Monday, October 16, 2017, 23:02:06 CEST, Daniel Borkmann wrote:
> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
>> On Monday, October 16, 2017, 22:50:43 CEST, Daniel Borkmann wrote:
>>>> 	struct task_struct *task = current;
>>>>
>>>> +	task_lock(task);
>>>> 	strncpy(buf, task->comm, size);
>>>> +	task_unlock(task);
>>>
>>> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>>> to task_lock() / spin_lock() / etc., and then the BPF prog triggers
>>> bpf_get_current_comm(), taking the lock again ...
>>
>> Yes, but doesn't the same apply to the use case where I attach to
>> strncpy() and run bpf_get_current_comm()?
>
> You mean due to recursion? In that case trace_call_bpf() would bail out
> due to the bpf_prog_active counter.

Ah, that's true.
So, when someone wants to use bpf_get_current_comm() while tracing
task_lock(), we have a problem. I agree.
On the other hand, without locking the function may return wrong results.

Thanks,
//richard
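The write side of the race Richard worries about does take the lock; a
sketch of __set_task_comm() (fs/exec.c, circa v4.14; approximate, not a
verbatim quote) makes the asymmetry concrete: writers serialize on
task_lock(), only readers are lockless.

/* Approximate sketch of the comm writer, for context. */
void __set_task_comm(struct task_struct *tsk, const char *buf, bool exec)
{
	task_lock(tsk);
	trace_task_rename(tsk, buf);
	/* strlcpy() keeps ->comm NUL-terminated at every point in
	 * time, so a lockless reader can observe a mix of old and new
	 * bytes, but never an unterminated string.
	 */
	strlcpy(tsk->comm, buf, sizeof(tsk->comm));
	task_unlock(tsk);
	perf_event_comm(tsk, exec);
}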
Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On 10/16/2017 10:55 PM, Richard Weinberger wrote:
> On Monday, October 16, 2017, 22:50:43 CEST, Daniel Borkmann wrote:
>>> 	struct task_struct *task = current;
>>>
>>> +	task_lock(task);
>>> 	strncpy(buf, task->comm, size);
>>> +	task_unlock(task);
>>
>> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>> to task_lock() / spin_lock() / etc., and then the BPF prog triggers
>> bpf_get_current_comm(), taking the lock again ...
>
> Yes, but doesn't the same apply to the use case where I attach to
> strncpy() and run bpf_get_current_comm()?

You mean due to recursion? In that case trace_call_bpf() would bail out
due to the bpf_prog_active counter.
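The bail-out Daniel describes is the per-CPU recursion guard in
trace_call_bpf(); a condensed sketch of the ~v4.14 logic in
kernel/trace/bpf_trace.c (approximate, not a verbatim quote):

/* Sketch of the recursion guard in trace_call_bpf(). */
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
{
	unsigned int ret;

	if (in_nmi()) /* not supported yet */
		return 1;

	preempt_disable();

	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* A BPF program is already running on this CPU, e.g.
		 * because a helper it called hit another kprobe. Bail
		 * out instead of recursing into a second program.
		 */
		ret = 0;
		goto out;
	}

	rcu_read_lock();
	ret = BPF_PROG_RUN(prog, ctx);
	rcu_read_unlock();

 out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();

	return ret;
}

Note this only stops BPF-on-BPF recursion; it does not help when the
program legitimately runs for the first time inside task_lock() and
then tries to take the same lock again.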
Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On Monday, October 16, 2017, 22:50:43 CEST, Daniel Borkmann wrote:
>> 	struct task_struct *task = current;
>>
>> +	task_lock(task);
>> 	strncpy(buf, task->comm, size);
>> +	task_unlock(task);
>
> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
> to task_lock() / spin_lock() / etc., and then the BPF prog triggers
> bpf_get_current_comm(), taking the lock again ...

Yes, but doesn't the same apply to the use case where I attach to
strncpy() and run bpf_get_current_comm()?

Thanks,
//richard
Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On 10/16/2017 08:18 PM, Richard Weinberger wrote:
> Sadly we cannot use get_task_comm() since bpf_get_current_comm()
> allows truncation.
>
> Signed-off-by: Richard Weinberger
> ---
>  kernel/bpf/helpers.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 511c9d522cfc..4b042b24524d 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -18,6 +18,7 @@
>  #include <linux/sched.h>
>  #include <linux/uidgid.h>
>  #include <linux/filter.h>
> +#include <linux/sched/task.h>
>
>  /* If kernel subsystem is allowing eBPF programs to call this function,
>   * inside its own verifier_ops->get_func_proto() callback it should return
> @@ -149,7 +150,9 @@ BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
>  {
>  	struct task_struct *task = current;
>
> +	task_lock(task);
>  	strncpy(buf, task->comm, size);
> +	task_unlock(task);

Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
to task_lock() / spin_lock() / etc., and then the BPF prog triggers
bpf_get_current_comm(), taking the lock again ...

>  	/* Verifier guarantees that size > 0. For task->comm exceeding
>  	 * size, guarantee that buf is %NUL-terminated. Unconditionally
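The deadlock concern is concrete because task_lock() is a plain,
non-recursive spinlock wrapper; roughly, from include/linux/sched/task.h
(circa v4.14, approximate):

/* ->alloc_lock is an ordinary, non-recursive spinlock: acquiring it
 * again from a BPF program attached to task_lock() on the same task
 * would self-deadlock.
 */
static inline void task_lock(struct task_struct *p)
{
	spin_lock(&p->alloc_lock);
}

static inline void task_unlock(struct task_struct *p)
{
	spin_unlock(&p->alloc_lock);
}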
[PATCH 3/3] bpf: Make sure that ->comm does not change under us.
Sadly we cannot use get_task_comm() since bpf_get_current_comm()
allows truncation.

Signed-off-by: Richard Weinberger
---
 kernel/bpf/helpers.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 511c9d522cfc..4b042b24524d 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -18,6 +18,7 @@
 #include <linux/sched.h>
 #include <linux/uidgid.h>
 #include <linux/filter.h>
+#include <linux/sched/task.h>
 
 /* If kernel subsystem is allowing eBPF programs to call this function,
  * inside its own verifier_ops->get_func_proto() callback it should return
@@ -149,7 +150,9 @@ BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
 {
 	struct task_struct *task = current;
 
+	task_lock(task);
 	strncpy(buf, task->comm, size);
+	task_unlock(task);
 
 	/* Verifier guarantees that size > 0. For task->comm exceeding
 	 * size, guarantee that buf is %NUL-terminated. Unconditionally
-- 
2.13.6
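For comparison, the get_task_comm() the changelog rules out requires a
destination buffer of at least sizeof(tsk->comm) bytes and has no notion
of truncation; a sketch from fs/exec.c (circa v4.14, approximate):

/* Shown to illustrate why the changelog cannot reuse it: the contract
 * demands a full-size buffer, while the BPF helper must honor an
 * arbitrary, possibly smaller, caller-supplied size.
 */
char *get_task_comm(char *buf, struct task_struct *tsk)
{
	/* buf must be at least sizeof(tsk->comm) in size */
	task_lock(tsk);
	strncpy(buf, tsk->comm, sizeof(tsk->comm));
	task_unlock(tsk);
	return buf;
}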