
[3/3] bpf: Make sure that ->comm does not change under us.

Message ID: 20171016181856.12497-3-richard@nod.at
State: Changes Requested, archived
Delegated to: David Miller
Series: [1/3] bpf: Don't check for current being NULL

Commit Message

Richard Weinberger Oct. 16, 2017, 6:18 p.m. UTC
Sadly we cannot use get_task_comm() since bpf_get_current_comm()
allows truncation.

Signed-off-by: Richard Weinberger <richard@nod.at>
---
 kernel/bpf/helpers.c | 3 +++
 1 file changed, 3 insertions(+)
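
Background for the commit message: get_task_comm() copies a fixed
sizeof(tsk->comm) bytes and therefore requires a destination buffer of
at least TASK_COMM_LEN, while bpf_get_current_comm() accepts any
size > 0 and truncates. A rough sketch of the 4.14-era fs/exec.c
implementation, paraphrased from memory rather than quoted verbatim:

char *get_task_comm(char *buf, struct task_struct *tsk)
{
	/* buf must be at least sizeof(tsk->comm) in size */
	task_lock(tsk);
	strncpy(buf, tsk->comm, sizeof(tsk->comm));
	task_unlock(tsk);
	return buf;
}

Because the BPF helper cannot guarantee that buffer contract, it
open-codes the copy, and this patch adds the matching locking around it.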

Comments

Daniel Borkmann Oct. 16, 2017, 8:50 p.m. UTC | #1
On 10/16/2017 08:18 PM, Richard Weinberger wrote:
> Sadly we cannot use get_task_comm() since bpf_get_current_comm()
> allows truncation.
>
> Signed-off-by: Richard Weinberger <richard@nod.at>
> ---
>   kernel/bpf/helpers.c | 3 +++
>   1 file changed, 3 insertions(+)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 511c9d522cfc..4b042b24524d 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -18,6 +18,7 @@
>   #include <linux/sched.h>
>   #include <linux/uidgid.h>
>   #include <linux/filter.h>
> +#include <linux/sched/task.h>
>
>   /* If kernel subsystem is allowing eBPF programs to call this function,
>    * inside its own verifier_ops->get_func_proto() callback it should return
> @@ -149,7 +150,9 @@ BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
>   {
>   	struct task_struct *task = current;
>
> +	task_lock(task);
>   	strncpy(buf, task->comm, size);
> +	task_unlock(task);

Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
to task_lock() / spin_lock() / etc, and then the BPF prog triggers the
bpf_get_current_comm() taking the lock again ...

>   	/* Verifier guarantees that size > 0. For task->comm exceeding
>   	 * size, guarantee that buf is %NUL-terminated. Unconditionally
>
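The hazard Daniel raises is easiest to see as a call chain. task_lock()
is just a spin_lock() on the task's ->alloc_lock, so a program that runs
while that lock is already held and then calls bpf_get_current_comm()
self-deadlocks. A hypothetical trace, assuming a probe that fires inside
a task_lock(current) critical section:

/* include/linux/sched/task.h */
static inline void task_lock(struct task_struct *p)
{
	spin_lock(&p->alloc_lock);
}

/*
 * task_lock(current)                caller acquires ->alloc_lock
 *   ...traced code runs under the lock...
 *     probe fires -> BPF program runs
 *       bpf_get_current_comm(buf, size)
 *         task_lock(current)        second spin_lock() on the
 *                                   already-held ->alloc_lock: deadlock
 */
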
Richard Weinberger Oct. 16, 2017, 8:55 p.m. UTC | #2
On Monday, 16 October 2017 at 22:50:43 CEST, Daniel Borkmann wrote:
> >   	struct task_struct *task = current;
> > 
> > +	task_lock(task);
> > 
> >   	strncpy(buf, task->comm, size);
> > 
> > +	task_unlock(task);
> 
> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
> to task_lock() / spin_lock() / etc, and then the BPF prog triggers the
> bpf_get_current_comm() taking the lock again ...

Yes, but doesn't the same apply to the use case when I attach to strncpy()
and run bpf_get_current_comm()?

Thanks,
//richard
Daniel Borkmann Oct. 16, 2017, 9:02 p.m. UTC | #3
On 10/16/2017 10:55 PM, Richard Weinberger wrote:
> On Monday, 16 October 2017 at 22:50:43 CEST, Daniel Borkmann wrote:
>>>    	struct task_struct *task = current;
>>>
>>> +	task_lock(task);
>>>
>>>    	strncpy(buf, task->comm, size);
>>>
>>> +	task_unlock(task);
>>
>> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>> to task_lock() / spin_lock() / etc, and then the BPF prog triggers the
>> bpf_get_current_comm() taking the lock again ...
>
> Yes, but doesn't the same apply to the use case when I attach to strncpy()
> and run bpf_get_current_comm()?

You mean due to recursion? In that case trace_call_bpf() would bail out
due to the bpf_prog_active counter.
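
The guard Daniel refers to sits at the top of trace_call_bpf(). Condensed
from the 4.14-era kernel/trace/bpf_trace.c with some details omitted, so
treat it as a sketch rather than the exact code:

unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
{
	unsigned int ret;

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* A program is already running on this CPU, so a probe
		 * hit inside the helper's own call chain (e.g. on
		 * strncpy()) does not invoke a second program.
		 */
		ret = 0;
		goto out;
	}
	ret = BPF_PROG_RUN(prog, ctx);
 out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();
	return ret;
}

Note that this only prevents nested program invocations; it cannot stop
the helper itself from spinning on a lock the traced context already
holds.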
Richard Weinberger Oct. 16, 2017, 9:10 p.m. UTC | #4
On Monday, 16 October 2017 at 23:02:06 CEST, Daniel Borkmann wrote:
> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
> > On Monday, 16 October 2017 at 22:50:43 CEST, Daniel Borkmann wrote:
> >>>    	struct task_struct *task = current;
> >>> 
> >>> +	task_lock(task);
> >>> 
> >>>    	strncpy(buf, task->comm, size);
> >>> 
> >>> +	task_unlock(task);
> >> 
> >> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
> >> to task_lock() / spin_lock() / etc, and then the BPF prog triggers the
> >> bpf_get_current_comm() taking the lock again ...
> > 
> > Yes, but doesn't the same apply to the use case when I attach to strncpy()
> > and run bpf_get_current_comm()?
> 
> You mean due to recursion? In that case trace_call_bpf() would bail out
> due to the bpf_prog_active counter.

Ah, that's true.
So, when someone wants to use bpf_get_current_comm() while tracing task_lock,
we have a problem. I agree.
On the other hand, without locking the function may return wrong results.

Thanks,
//richard
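
The wrong results Richard alludes to are torn reads: ->comm is rewritten
under task_lock(), e.g. via prctl(PR_SET_NAME), so an unlocked reader
racing with a rename can copy a mix of old and new bytes. A condensed
sketch of the writer side in fs/exec.c of that era, again paraphrased
rather than verbatim:

void __set_task_comm(struct task_struct *tsk, const char *buf, bool exec)
{
	task_lock(tsk);
	strlcpy(tsk->comm, buf, sizeof(tsk->comm));
	task_unlock(tsk);
	perf_event_comm(tsk, exec);
}

/* Without the lock on the reader side, a rename from "aaaaaaaa" to
 * "bbbbbbbb" can leave bpf_get_current_comm() returning something like
 * "bbbbaaaa": neither the old name nor the new one.
 */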

Patch

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 511c9d522cfc..4b042b24524d 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -18,6 +18,7 @@ 
 #include <linux/sched.h>
 #include <linux/uidgid.h>
 #include <linux/filter.h>
+#include <linux/sched/task.h>
 
 /* If kernel subsystem is allowing eBPF programs to call this function,
  * inside its own verifier_ops->get_func_proto() callback it should return
@@ -149,7 +150,9 @@ BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
 {
 	struct task_struct *task = current;
 
+	task_lock(task);
 	strncpy(buf, task->comm, size);
+	task_unlock(task);
 
 	/* Verifier guarantees that size > 0. For task->comm exceeding
 	 * size, guarantee that buf is %NUL-terminated. Unconditionally