Message ID | 20200904095904.612390-2-lmb@cloudflare.com
State | Changes Requested
Delegated to: | BPF Maintainers
Series | Sockmap iterator
On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> Tracing programs can derive struct sock pointers from a variety
> of sources, e.g. a bpf_iter for sk_storage maps receives one as
> part of the context. It's desirable to be able to pass these to
> functions that expect PTR_TO_SOCKET. For example, it enables us
> to insert such a socket into a sockmap via map_elem_update.
>
> Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> equivalent to PTR_TO_SOCKET. There is one hazard here:
> bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
> refcounted. Since this isn't the case for pointers derived from
> BTF we must prevent them from being passed to the function.
> Luckily, we can simply check that the ref_obj_id is not zero
> in release_reference, and return an error otherwise.
>
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>  kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
>  1 file changed, 36 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b4e9c56b8b32..509754c3aa7d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
>  	return 0;
>  }
>
> +BTF_ID_LIST(btf_fullsock_ids)
> +BTF_ID(struct, sock)

It may be fine for the sockmap iter case to treat the "struct sock"
BTF_ID as a fullsock (i.e. PTR_TO_SOCKET). This is a generic verifier
change though. For tracing, it is not always the case: it cannot always
be assumed that a "struct sock *" in the function being traced is a
fullsock.

Currently, a func_proto taking ARG_PTR_TO_SOCKET can safely assume it
must be a fullsock. If it is also allowed to take the BTF_ID of
"struct sock" at verification time, I think a runtime sk_fullsock()
check is needed, and this check should be pretty cheap to do.

[ ... ]

> @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
>  	int err;
>  	int i;
>
> +	if (!ref_obj_id)
> +		return -EINVAL;

hmm... are you sure this is needed? If it is, it seems there was an
existing bug in release_reference_state() below, which could not catch
the case where bpf_sk_release() is called on a pointer that had no
reference acquired before.

Can you write a verifier test to demonstrate the issue?

> +
>  	err = release_reference_state(cur_func(env), ref_obj_id);
>  	if (err)
>  		return err;
> --
> 2.25.1
On Sun, 6 Sep 2020 at 23:40, Martin KaFai Lau <kafai@fb.com> wrote:
> On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> > Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> > equivalent to PTR_TO_SOCKET.

[ ... ]

> > +BTF_ID_LIST(btf_fullsock_ids)
> > +BTF_ID(struct, sock)
> It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
> as a fullsock (i.e. PTR_TO_SOCKET).

I think it's unsafe even for the sockmap iter. Since it's a tracing
prog there might be other ways for it to obtain a struct sock * in the
future.

> This is a generic verifier change though. For tracing, it is not always the
> case. It cannot always assume that the "struct sock *" in the function being
> traced is always a fullsock.

Yes, I see, thanks for reminding me. What a footgun. I think the
problem boils down to the fact that we can't express "this is a full
socket" in BTF, since there is no such type in the kernel.

Which makes me wonder: how do tracing programs deal with a struct sock *
that really is a request sock or something?

> Currently, the func_proto taking ARG_PTR_TO_SOCKET can safely
> assume it must be a fullsock. If it is allowed to also take BTF_ID
> "struct sock" in verification time, I think the sk_fullsock()
> check in runtime is needed and this check should be pretty
> cheap to do.

Can you elaborate a little? Where do you think the check could happen?

I could change the patch to treat struct sock * as PTR_TO_SOCK_COMMON,
and adjust the sockmap helpers accordingly. The implication is that
over time, helpers will migrate to PTR_TO_SOCK_COMMON because that is
compatible with BTF. PTR_TO_SOCKET will become unused except to
maintain the ABI for access to struct bpf_sock. Maybe that's OK though?

[ ... ]

> > @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
> > +	if (!ref_obj_id)
> > +		return -EINVAL;
> hmm...... Is it sure this is needed? If it was, it seems there was
> an existing bug in release_reference_state() below which could not catch
> the case where "bpf_sk_release()" is called on a pointer that has no
> reference acquired before.

Since sk_release takes a PTR_TO_SOCKET, it's possible to pass a tracing
struct sock * to it after this patch. Adding this check prevents the
release from succeeding.

> Can you write a verifier test to demonstrate the issue?

There is a selftest in this series that ensures calling sk_release
doesn't work, which exercises this.
On Mon, Sep 07, 2020 at 09:57:06AM +0100, Lorenz Bauer wrote:
> On Sun, 6 Sep 2020 at 23:40, Martin KaFai Lau <kafai@fb.com> wrote:
> > It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
> > as a fullsock (i.e. PTR_TO_SOCKET).
>
> I think it's unsafe even for the sockmap iter. Since it's a tracing
> prog there might be other ways for it to obtain a struct sock * in
> the future.
>
> > This is a generic verifier change though. For tracing, it is not always the
> > case. It cannot always assume that the "struct sock *" in the function being
> > traced is always a fullsock.
>
> Yes, I see, thanks for reminding me. What a footgun. I think the
> problem boils down to the fact that we can't express "this is a full
> socket" in BTF, since there is no such type in the kernel.
>
> Which makes me wonder: how do tracing programs deal with struct sock*
> that really is a request sock or something?

PTR_TO_BTF_ID is handled differently, by BPF_PROBE_MEM, to take care
of cases like this. bpf_jit_comp.c has some more details.

[ ... ]

> > > @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
> > > +	if (!ref_obj_id)
> > > +		return -EINVAL;
> > hmm...... Is it sure this is needed? If it was, it seems there was
> > an existing bug in release_reference_state() below which could not catch
> > the case where "bpf_sk_release()" is called on a pointer that has no
> > reference acquired before.
>
> Since sk_release takes a PTR_TO_SOCKET, it's possible to pass a tracing
> struct sock * to it after this patch. Adding this check prevents the
> release from succeeding.

Not all existing PTR_TO_SOCK_COMMON pointers take a reference either.
Does it mean all these existing cases are broken? For example,
bpf_sk_release(__sk_buff->sk) is allowed now?

> > Can you write a verifier test to demonstrate the issue?
>
> There is a selftest in this series that ensures calling sk_release
> doesn't work, which exercises this.

I am not sure what Patch 4 of this series is testing. bpf_sk_release
is not even available in a bpf tracing iter program.

There are ref tracking tests in
tools/testing/selftests/bpf/verifier/ref_tracking.c. Please add all
ref count related tests there to catch the issue.
On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:
> On Mon, Sep 07, 2020 at 09:57:06AM +0100, Lorenz Bauer wrote:
> > Which makes me wonder: how do tracing programs deal with struct sock*
> > that really is a request sock or something?
> PTR_TO_BTF_ID is handled differently, by BPF_PROBE_MEM, to take care
> of cases like this. bpf_jit_comp.c has some more details.

Thanks, that helps a lot. I also dug into the BTF pointer patchset,
and now your comment about PTR_TO_BTF_ID being NULL makes sense as
well. Sigh, I should've looked at this from the start.

What I still don't understand is how PTR_TO_BTF_ID is safe for a
struct sock * that points at a smaller reqsk, for example. How do we
prevent a valid, non-faulting BPF read from accessing memory beyond
the reqsk?

[ ... ]

> Not all existing PTR_TO_SOCK_COMMON takes a reference also.
> Does it mean all these existing cases are broken?
> For example, bpf_sk_release(__sk_buff->sk) is allowed now?

I'll look into this. It's very possible I got the refcounting logic
wrong, again.

> > There is a selftest in this series that ensures calling sk_release
> > doesn't work, which exercises this.
> I am not sure what Patch 4 of this series is testing.
> bpf_sk_release is not even available in bpf tracing iter program.

I built a patched kernel where sk_release is available, and verified
the behaviour that way. My idea was that as long as the test fails,
we've proven that releasing the sk is not possible. I realize this is
counterintuitive and kind of brittle. Maybe your point about
__sk_buff->sk will allow me to write a better test.

> There are ref tracking tests in tools/testing/selftests/bpf/verifier/ref_tracking.c.
> Please add all ref count related test there to catch the issue.

Ack.
On Wed, 9 Sep 2020 at 10:16, Lorenz Bauer <lmb@cloudflare.com> wrote:
>
> On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:

[...]

> > Not all existing PTR_TO_SOCK_COMMON takes a reference also.
> > Does it mean all these existing cases are broken?
> > For example, bpf_sk_release(__sk_buff->sk) is allowed now?
>
> I'll look into this. It's very possible I got the refcounting logic
> wrong, again.

bpf_sk_release(__sk_buff->sk) is fine, and there is a test from Martin
in verifier/sock.c that exercises this. The case I was worried about
can't happen, since release_reference_state returns -EINVAL if it
can't find a reference for the given ref_obj_id. Since we never
allocate a reference with id 0, this ends up as the same thing, just
less explicit than checking for id == 0.
On Wed, Sep 09, 2020 at 04:42:57PM +0100, Lorenz Bauer wrote:
> On Wed, 9 Sep 2020 at 10:16, Lorenz Bauer <lmb@cloudflare.com> wrote:
> >
> > On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:
>
> [...]
>
> > > Not all existing PTR_TO_SOCK_COMMON takes a reference also.
> > > Does it mean all these existing cases are broken?
> > > For example, bpf_sk_release(__sk_buff->sk) is allowed now?
> >
> > I'll look into this. It's very possible I got the refcounting logic
> > wrong, again.
>
> bpf_sk_release(__sk_buff->sk) is fine, and there is a test from Martin
> in verifier/sock.c that exercises this. The case I was worried about
> can't happen since release_reference_state returns EINVAL if it can't
> find a reference for the given ref_obj_id. Since we never allocate a
> reference with id 0 this ends up as the same thing, just less explicit
> than checking for id == 0.

Thanks for checking!
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b4e9c56b8b32..509754c3aa7d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
 	return 0;
 }
 
+BTF_ID_LIST(btf_fullsock_ids)
+BTF_ID(struct, sock)
+
 static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			  struct bpf_call_arg_meta *meta,
 			  const struct bpf_func_proto *fn)
@@ -4000,37 +4003,15 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		expected_type = PTR_TO_SOCKET;
 		if (!(register_is_null(reg) &&
 		      arg_type == ARG_PTR_TO_SOCKET_OR_NULL)) {
-			if (type != expected_type)
+			if (type != expected_type &&
+			    type != PTR_TO_BTF_ID)
 				goto err_type;
 		}
+		meta->btf_id = btf_fullsock_ids[0];
 	} else if (arg_type == ARG_PTR_TO_BTF_ID) {
-		bool ids_match = false;
-
 		expected_type = PTR_TO_BTF_ID;
 		if (type != expected_type)
 			goto err_type;
-		if (!fn->check_btf_id) {
-			if (reg->btf_id != meta->btf_id) {
-				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
-								 meta->btf_id);
-				if (!ids_match) {
-					verbose(env, "Helper has type %s got %s in R%d\n",
-						kernel_type_name(meta->btf_id),
-						kernel_type_name(reg->btf_id), regno);
-					return -EACCES;
-				}
-			}
-		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
-			verbose(env, "Helper does not support %s in R%d\n",
-				kernel_type_name(reg->btf_id), regno);
-
-			return -EACCES;
-		}
-		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
-			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
-				regno);
-			return -EACCES;
-		}
 	} else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
 		if (meta->func_id == BPF_FUNC_spin_lock) {
 			if (process_spin_lock(env, regno, true))
@@ -4085,6 +4066,33 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		return -EFAULT;
 	}
 
+	if (type == PTR_TO_BTF_ID) {
+		bool ids_match = false;
+
+		if (!fn->check_btf_id) {
+			if (reg->btf_id != meta->btf_id) {
+				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
+								 meta->btf_id);
+				if (!ids_match) {
+					verbose(env, "Helper has type %s got %s in R%d\n",
+						kernel_type_name(meta->btf_id),
+						kernel_type_name(reg->btf_id), regno);
+					return -EACCES;
+				}
+			}
+		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
+			verbose(env, "Helper does not support %s in R%d\n",
+				kernel_type_name(reg->btf_id), regno);
+
+			return -EACCES;
+		}
+		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
+			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
+				regno);
+			return -EACCES;
+		}
+	}
+
 	if (arg_type == ARG_CONST_MAP_PTR) {
 		/* bpf_map_xxx(map_ptr) call: remember that map_ptr */
 		meta->map_ptr = reg->map_ptr;
@@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
 	int err;
 	int i;
 
+	if (!ref_obj_id)
+		return -EINVAL;
+
 	err = release_reference_state(cur_func(env), ref_obj_id);
 	if (err)
 		return err;
Tracing programs can derive struct sock pointers from a variety
of sources, e.g. a bpf_iter for sk_storage maps receives one as
part of the context. It's desirable to be able to pass these to
functions that expect PTR_TO_SOCKET. For example, it enables us
to insert such a socket into a sockmap via map_elem_update.

Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
equivalent to PTR_TO_SOCKET. There is one hazard here:
bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
refcounted. Since this isn't the case for pointers derived from
BTF we must prevent them from being passed to the function.
Luckily, we can simply check that the ref_obj_id is not zero
in release_reference, and return an error otherwise.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 25 deletions(-)