Message ID:   20200912045930.2993219-1-kafai@fb.com
State:        RFC
Delegated to: BPF Maintainers
Series:       bpf: Enable bpf_skc_to_* sock casting helper to networking prog type
On Sat, 12 Sep 2020 at 05:59, Martin KaFai Lau <kafai@fb.com> wrote:
>
> There is a constant need to add more fields into the bpf_tcp_sock
> for the bpf programs running at tc, sock_ops...etc.
>
> A current workaround could be to use bpf_probe_read_kernel(). However,
> other than making another helper call for reading each field and missing
> CO-RE, it is also not as intuitive to use as directly reading
> "tp->lsndtime" for example. While already having perfmon cap to do
> bpf_probe_read_kernel(), it will be much easier if the bpf prog can
> directly read from the tcp_sock.
>
> This patch tries to do that by using the existing casting-helpers
> bpf_skc_to_*() whose func_proto returns a btf_id. For example, the
> func_proto of bpf_skc_to_tcp_sock returns the btf_id of the
> kernel "struct tcp_sock".
>
> [ One approach is to make a separate copy of the bpf_skc_to_*
>   func_proto and use ARG_PTR_TO_SOCK_COMMON instead of ARG_PTR_TO_BTF_ID.
>   More on this later (1). ]
>
> This patch modifies the existing bpf_skc_to_* func_proto to take
> ARG_PTR_TO_SOCK_COMMON instead of taking
> "ARG_PTR_TO_BTF_ID + &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON]".
> That will allow tc, sock_ops,...etc to call these casting helpers
> because they already hold the PTR_TO_SOCK_COMMON (or its
> equivalent). For example:
>
>     sk = sock_ops->sk;
>     if (!sk)
>         return;
>     tp = bpf_skc_to_tcp_sock(sk);
>     if (!tp)
>         return;
>     /* Read tp as a PTR_TO_BTF_ID */
>     lsndtime = tp->lsndtime;
>
> To ensure the current bpf prog passing a PTR_TO_BTF_ID to
> bpf_skc_to_*() still works as is, the verifier is modified such that
> ARG_PTR_TO_SOCK_COMMON can accept a reg with reg->type == PTR_TO_BTF_ID
> and reg->btf_id is btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON].
>
> To do that, an idea is borrowed from one of Lorenz's patches:
> https://lore.kernel.org/bpf/20200904112401.667645-12-lmb@cloudflare.com/
> It adds PTR_TO_BTF_ID as one of the acceptable reg->type for
> ARG_PTR_TO_SOCK_COMMON and also specifies what btf_id it can take.
> By doing this, the bpf_skc_to_* will work as before and can still
> take PTR_TO_BTF_ID as the arg. e.g. The bpf tcp iter will work
> as is.
>
> This will also make other existing helper taking ARG_PTR_TO_SOCK_COMMON
> works with the pointer obtained from bpf_skc_to_*(). For example:

Unfortunately, I think that we need to introduce a new
ARG_PTR_TO_SOCK_COMMON_OR_NULL for this to work. This is because
dereferencing a "tracing" pointer can yield NULL at runtime due to a
page fault, so the helper has to deal with this. Other than that I
think this is a really nice approach: we can gradually move helpers to
PTR_TO_SOCK_COMMON_OR_NULL and in doing so make them compatible with
BTF pointers.

[...]

> @@ -4014,7 +4022,17 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 arg,
>
>  found:
>  	if (type == PTR_TO_BTF_ID) {
> -		u32 *expected_btf_id = fn->arg_btf_id[arg];
> +		u32 *expected_btf_id;
> +
> +		if (arg_type == ARG_PTR_TO_BTF_ID) {
> +			expected_btf_id = fn->arg_btf_id[arg];

Personal preference, but what do you think about moving this to after
the assignment of compatible?

    btf_id = compatible->btf_id;
    if (arg_type == ARG_PTR_TO_BTF_ID)
        btf_id = fn->arg_btf_id[arg];

That makes it clearer that we have to special case ARG_PTR_TO_BTF_ID since
it doesn't make sense to use compatible->btf_id in that case.
On Mon, Sep 14, 2020 at 10:26:18AM +0100, Lorenz Bauer wrote:
> On Sat, 12 Sep 2020 at 05:59, Martin KaFai Lau <kafai@fb.com> wrote:
[...]
> > This will also make other existing helper taking ARG_PTR_TO_SOCK_COMMON
> > works with the pointer obtained from bpf_skc_to_*(). For example:
>
> Unfortunately, I think that we need to introduce a new
> ARG_PTR_TO_SOCK_COMMON_OR_NULL for this to work. This is because
> dereferencing a "tracing" pointer can yield NULL at runtime due to a
> page fault, so the helper has to deal with this. Other than that I
> think this is a really nice approach: we can gradually move helpers to
> PTR_TO_SOCK_COMMON_OR_NULL and in doing so make them compatible with
> BTF pointers.
Good point on the sk could be NULL. I think the existing bpf_skc_to_*()
helpers are missing the "!sk" check regardless of this patch.

For other ARG_PTR_TO_SOCK_COMMON helpers, they are not available to
the tracing prog type. Hence, they are fine to accept PTR_TO_BTF_ID
as ARG_PTR_TO_SOCK_COMMON since the only way for a non tracing prog to
get a PTR_TO_BTF_ID is from the casting helpers bpf_skc_to_*, and
the NULL check on the return value must be done first. If these
ARG_PTR_TO_* helpers were ever made available to tracing progs, it
might be better off to have another func_proto taking
ARG_PTR_TO_BTF_ID instead.

For the verifier, I think the PTR_TO_BTF_ID should only be accepted
as ARG_PTR_TO_* for non tracing programs. That means the bpf_skc_to_*
proto has to be duplicated to take ARG_PTR_TO_SOCK_COMMON. I think
that may be cleaner going forward. Then the verifier does not need
to worry about what btf_id can be taken as a fullsock
ARG_PTR_TO_SOCKET. The helper taking ARG_PTR_TO_BTF_ID will decide
where it can be called from and how it wants to treat
"struct sock *sk". For example, the sk_storage_get_btf_proto takes
&btf_sock_ids[BTF_SOCK_TYPE_SOCK] and is only used from the LSM
context that is holding a fullsock.

The same goes for the sock_map iter: how about the map_update and
map_lookup use ARG_PTR_TO_BTF_ID and PTR_TO_BTF_ID instead? For other
prog types, they can keep using ARG_PTR_TO_SOCKET and PTR_TO_SOCKET.

[...]

> > @@ -4014,7 +4022,17 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 arg,
> >
> >  found:
> >  	if (type == PTR_TO_BTF_ID) {
> > -		u32 *expected_btf_id = fn->arg_btf_id[arg];
> > +		u32 *expected_btf_id;
> > +
> > +		if (arg_type == ARG_PTR_TO_BTF_ID) {
> > +			expected_btf_id = fn->arg_btf_id[arg];
>
> Personal preference, but what do you think about moving this to after
> the assignment of compatible?
>
>     btf_id = compatible->btf_id;
>     if (arg_type == ARG_PTR_TO_BTF_ID)
>         btf_id = fn->arg_btf_id[arg];
>
> That makes it clearer that we have to special case ARG_PTR_TO_BTF_ID since
> it doesn't make sense to use compatible->btf_id in that case.
Sure. I don't mind moving this assignment out of the else. However, I
think resorting to the "compatible" btf_id is the exception instead.
On Mon, Sep 14, 2020 at 12:43:04PM -0700, Martin KaFai Lau wrote:
> For other ARG_PTR_TO_SOCK_COMMON helpers, they are not available to
> the tracing prog type. Hence, they are fine to accept PTR_TO_BTF_ID
> as ARG_PTR_TO_SOCK_COMMON since the only way for non tracing prog to
> get a PTR_TO_BTF_ID is from casting helpers bpf_skc_to_* and
> the NULL check on return value must be done first.
After looking more, it may not work well, e.g. when reading
request_sock->sk. I need to think a bit more. I may try out another
idea tomorrow, or consider adding the OR_NULL arg as you mentioned.
On Mon, 14 Sep 2020 at 20:43, Martin KaFai Lau <kafai@fb.com> wrote:

[...]

> For other ARG_PTR_TO_SOCK_COMMON helpers, they are not available to
> the tracing prog type. Hence, they are fine to accept PTR_TO_BTF_ID
> as ARG_PTR_TO_SOCK_COMMON since the only way for non tracing prog to
> get a PTR_TO_BTF_ID is from casting helpers bpf_skc_to_* and
> the NULL check on return value must be done first. If these
> ARG_PTR_TO_* helpers were ever made available to tracing prog,
> it might be better off to have another func_proto taking
> ARG_PTR_TO_BTF_ID instead.

I think such special cases increase the maintenance burden going
forward; I'd prefer it if we had one set of rules that applies to all
program types.

> For the verifier, I think the PTR_TO_BTF_ID should only be accepted
> as ARG_TO_* for non tracing program. That means the bpf_skc_to_*
> proto has to be duplicated to take ARG_PTR_TO_SOCK_COMMON. I think
> that may be cleaner going forward. Then the verifier does not need
> to worry about how to deal with what btf_id can be taken as fullsock
> ARG_PTR_TO_SOCKET. The helper taking ARG_PTR_TO_BTF_ID will decide
> where it could be called from and see how it wants to treat
> "struct sock *sk".

So basically, we allow function prototypes to be specialised according
to context type?

> For example, the sk_storage_get_btf_proto
> is taking &btf_sock_ids[BTF_SOCK_TYPE_SOCK] and is only used from
> the LSM context that is holding a fullsock.

That is a tempting simplification, but it makes future extensions of
the LSM context harder. Also, how could we enforce that we only have
fullsocks in LSM? By looking at the list of helpers? I worry that this
is very brittle.

> The same goes for the sock_map iter, how about the map_update
> and map_lookup use a ARG_PTR_TO_BTF_ID and PTR_TO_BTF_ID instead?
> For other prog types, they can keep using ARG_PTR_TO_SOCKET and
> PTR_TO_SOCKET.

Yeah, I've thought about that approach as well. The upside is that
it's a much more limited change, and therefore I know that it is
fairly safe to do. It relies on maps being able to override the
"common" map_lookup_elem function proto, which is something we already
do for sockmap and which could use some cleaning up. So if
PTR_TO_BTF_ID aliasing with ARG_PTR_TO_SOCK_COMMON doesn't work out
I'll propose this.

The downside of specialised function protos is that we'll have to do
it for every helper, context type, etc. To me it sounds like we want
to use BTF to define context objects going forward as much as
possible, so having a general solution to unify socket helpers across
old-style and BTF contexts seems really useful to me. If we introduce
ARG_PTR_TO_SOCK_COMMON_OR_NULL we can do this in a gradual way, by
changing helpers that we want to be cross-compatible to take the new
arg type and adding a few NULL and fullsock checks here and there.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3a5932bd7c22..9eace789e88c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -486,7 +486,12 @@ static bool is_acquire_function(enum bpf_func_id func_id,
 
 static bool is_ptr_cast_function(enum bpf_func_id func_id)
 {
 	return func_id == BPF_FUNC_tcp_sock ||
-	       func_id == BPF_FUNC_sk_fullsock;
+	       func_id == BPF_FUNC_sk_fullsock ||
+	       func_id == BPF_FUNC_skc_to_tcp_sock ||
+	       func_id == BPF_FUNC_skc_to_tcp6_sock ||
+	       func_id == BPF_FUNC_skc_to_udp6_sock ||
+	       func_id == BPF_FUNC_skc_to_tcp_timewait_sock ||
+	       func_id == BPF_FUNC_skc_to_tcp_request_sock;
 }
 
 /* string representation of 'enum bpf_reg_type' */
@@ -3906,6 +3911,7 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
 
 struct bpf_reg_types {
 	const enum bpf_reg_type types[10];
+	u32 *btf_id;
 };
 
 static const struct bpf_reg_types map_key_value_types = {
@@ -3923,7 +3929,9 @@ static const struct bpf_reg_types sock_types = {
 		PTR_TO_SOCKET,
 		PTR_TO_TCP_SOCK,
 		PTR_TO_XDP_SOCK,
+		PTR_TO_BTF_ID,
 	},
+	.btf_id = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
 };
 
 static const struct bpf_reg_types mem_types = {
@@ -4014,7 +4022,17 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 arg,
 
 found:
 	if (type == PTR_TO_BTF_ID) {
-		u32 *expected_btf_id = fn->arg_btf_id[arg];
+		u32 *expected_btf_id;
+
+		if (arg_type == ARG_PTR_TO_BTF_ID) {
+			expected_btf_id = fn->arg_btf_id[arg];
+		} else {
+			expected_btf_id = compatible->btf_id;
+			if (!expected_btf_id) {
+				verbose(env, "verifier internal error: missing arg compatible BTF ID\n");
+				return -EFAULT;
+			}
+		}
 
 		if (!btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
 					  *expected_btf_id)) {
diff --git a/net/core/filter.c b/net/core/filter.c
index 6014e5f40c58..115cde5aaf90 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -77,6 +77,9 @@
 #include <net/transp_v6.h>
 #include <linux/btf_ids.h>
 
+static const struct bpf_func_proto *
+bpf_sk_base_func_proto(enum bpf_func_id func_id);
+
 int copy_bpf_fprog_from_user(struct sock_fprog *dst, sockptr_t src, int len)
 {
 	if (in_compat_syscall()) {
@@ -6619,7 +6622,7 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 			return NULL;
 		}
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6638,7 +6641,7 @@ sk_filter_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_perf_event_output:
 		return &bpf_skb_event_output_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6799,7 +6802,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_sk_assign_proto;
 #endif
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6840,7 +6843,7 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_tcp_gen_syncookie_proto;
 #endif
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6882,7 +6885,7 @@ sock_ops_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_tcp_sock_proto;
 #endif /* CONFIG_INET */
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6928,7 +6931,7 @@ sk_msg_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_get_cgroup_classid_curr_proto;
 #endif
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6970,7 +6973,7 @@ sk_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skc_lookup_tcp_proto;
 #endif
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -6981,7 +6984,7 @@ flow_dissector_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_skb_load_bytes:
 		return &bpf_flow_dissector_load_bytes_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -7008,7 +7011,7 @@ lwt_out_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_skb_under_cgroup:
 		return &bpf_skb_under_cgroup_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -9745,7 +9748,7 @@ sk_lookup_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_sk_release:
 		return &bpf_sk_release_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id);
 	}
 }
 
@@ -9912,8 +9915,7 @@ const struct bpf_func_proto bpf_skc_to_tcp6_sock_proto = {
 	.func		= bpf_skc_to_tcp6_sock,
 	.gpl_only	= false,
 	.ret_type	= RET_PTR_TO_BTF_ID_OR_NULL,
-	.arg1_type	= ARG_PTR_TO_BTF_ID,
-	.arg1_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
 	.ret_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_TCP6],
 };
 
@@ -9929,8 +9931,7 @@ const struct bpf_func_proto bpf_skc_to_tcp_sock_proto = {
 	.func		= bpf_skc_to_tcp_sock,
 	.gpl_only	= false,
 	.ret_type	= RET_PTR_TO_BTF_ID_OR_NULL,
-	.arg1_type	= ARG_PTR_TO_BTF_ID,
-	.arg1_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
 	.ret_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_TCP],
 };
 
@@ -9953,8 +9954,7 @@ const struct bpf_func_proto bpf_skc_to_tcp_timewait_sock_proto = {
 	.func		= bpf_skc_to_tcp_timewait_sock,
 	.gpl_only	= false,
 	.ret_type	= RET_PTR_TO_BTF_ID_OR_NULL,
-	.arg1_type	= ARG_PTR_TO_BTF_ID,
-	.arg1_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
 	.ret_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_TCP_TW],
 };
 
@@ -9977,8 +9977,7 @@ const struct bpf_func_proto bpf_skc_to_tcp_request_sock_proto = {
 	.func		= bpf_skc_to_tcp_request_sock,
 	.gpl_only	= false,
 	.ret_type	= RET_PTR_TO_BTF_ID_OR_NULL,
-	.arg1_type	= ARG_PTR_TO_BTF_ID,
-	.arg1_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
 	.ret_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_TCP_REQ],
 };
 
@@ -9999,7 +9998,37 @@ const struct bpf_func_proto bpf_skc_to_udp6_sock_proto = {
 	.func		= bpf_skc_to_udp6_sock,
 	.gpl_only	= false,
 	.ret_type	= RET_PTR_TO_BTF_ID_OR_NULL,
-	.arg1_type	= ARG_PTR_TO_BTF_ID,
-	.arg1_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg1_type	= ARG_PTR_TO_SOCK_COMMON,
 	.ret_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_UDP6],
 };
+
+static const struct bpf_func_proto *
+bpf_sk_base_func_proto(enum bpf_func_id func_id)
+{
+	const struct bpf_func_proto *func;
+
+	switch (func_id) {
+	case BPF_FUNC_skc_to_tcp6_sock:
+		func = &bpf_skc_to_tcp6_sock_proto;
+		break;
+	case BPF_FUNC_skc_to_tcp_sock:
+		func = &bpf_skc_to_tcp_sock_proto;
+		break;
+	case BPF_FUNC_skc_to_tcp_timewait_sock:
+		func = &bpf_skc_to_tcp_timewait_sock_proto;
+		break;
+	case BPF_FUNC_skc_to_tcp_request_sock:
+		func = &bpf_skc_to_tcp_request_sock_proto;
+		break;
+	case BPF_FUNC_skc_to_udp6_sock:
+		func = &bpf_skc_to_udp6_sock_proto;
+		break;
+	default:
+		return bpf_base_func_proto(func_id);
+	}
+
+	if (!perfmon_capable())
+		return NULL;
+
+	return func;
+}
There is a constant need to add more fields into the bpf_tcp_sock
for the bpf programs running at tc, sock_ops...etc.

A current workaround could be to use bpf_probe_read_kernel(). However,
other than making another helper call for reading each field and
missing CO-RE, it is also not as intuitive to use as directly reading
"tp->lsndtime" for example. While already having perfmon cap to do
bpf_probe_read_kernel(), it will be much easier if the bpf prog can
directly read from the tcp_sock.

This patch tries to do that by using the existing casting-helpers
bpf_skc_to_*() whose func_proto returns a btf_id. For example, the
func_proto of bpf_skc_to_tcp_sock returns the btf_id of the
kernel "struct tcp_sock".

[ One approach is to make a separate copy of the bpf_skc_to_*
  func_proto and use ARG_PTR_TO_SOCK_COMMON instead of
  ARG_PTR_TO_BTF_ID. More on this later (1). ]

This patch modifies the existing bpf_skc_to_* func_proto to take
ARG_PTR_TO_SOCK_COMMON instead of taking
"ARG_PTR_TO_BTF_ID + &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON]".
That will allow tc, sock_ops,...etc to call these casting helpers
because they already hold the PTR_TO_SOCK_COMMON (or its
equivalent). For example:

    sk = sock_ops->sk;
    if (!sk)
        return;
    tp = bpf_skc_to_tcp_sock(sk);
    if (!tp)
        return;
    /* Read tp as a PTR_TO_BTF_ID */
    lsndtime = tp->lsndtime;

To ensure a current bpf prog passing a PTR_TO_BTF_ID to
bpf_skc_to_*() still works as is, the verifier is modified such that
ARG_PTR_TO_SOCK_COMMON can accept a reg with reg->type == PTR_TO_BTF_ID
and reg->btf_id == btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON].

To do that, an idea is borrowed from one of Lorenz's patches:
https://lore.kernel.org/bpf/20200904112401.667645-12-lmb@cloudflare.com/
It adds PTR_TO_BTF_ID as one of the acceptable reg->type for
ARG_PTR_TO_SOCK_COMMON and also specifies what btf_id it can take.
By doing this, the bpf_skc_to_* helpers will work as before and can
still take PTR_TO_BTF_ID as the arg, e.g. the bpf tcp iter will work
as is.

This will also make other existing helpers taking
ARG_PTR_TO_SOCK_COMMON work with the pointer obtained from
bpf_skc_to_*(). For example:

    sk = bpf_skc_lookup_tcp(skb, tuple, tuplen, BPF_F_CURRENT_NETNS, 0);
    if (!sk)
        return;
    tp = bpf_skc_to_tcp_sock(sk);
    if (!tp) {
        bpf_sk_release(sk);
        return;
    }
    lsndtime = tp->lsndtime;
    /* Passing tp to bpf_sk_release() will also work */
    bpf_sk_release(tp);

[ (1) Since we need to make the existing helpers taking
  ARG_PTR_TO_SOCK_COMMON work with PTR_TO_BTF_ID, the existing
  bpf_skc_to_*() func_proto are modified instead of making a
  separate copy. ]

A similar change has not been done for ARG_PTR_TO_SOCKET yet. Thus,
ARG_PTR_TO_SOCKET cannot take PTR_TO_BTF_ID for now.

The BPF_FUNC_skc_to_* func_ids are added to is_ptr_cast_function().
That ensures the returned reg (BPF_REG_0), which is a
PTR_TO_BTF_ID_OR_NULL, also carries the ref_obj_id, so ref-tracking
keeps working properly. It also allows bpf_sk_release() to be called
on the ptr returned by bpf_skc_to_*(), as shown in the previous
example.

The bpf_skc_to_* helpers are made available to most of the bpf prog
types in filter.c. They are gated by perfmon cap.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 kernel/bpf/verifier.c | 22 ++++++++++++--
 net/core/filter.c     | 69 ++++++++++++++++++++++++++++++-------------
 2 files changed, 69 insertions(+), 22 deletions(-)