
[bpf-next] xsk: fix possible segfault at xskmap entry insertion

Message ID 1599037569-26690-1-git-send-email-magnus.karlsson@intel.com
State Accepted
Delegated to: BPF Maintainers
Series [bpf-next] xsk: fix possible segfault at xskmap entry insertion

Commit Message

Magnus Karlsson Sept. 2, 2020, 9:06 a.m. UTC
Fix a possible segfault when an entry is inserted into the xskmap. This
can happen if the socket is in a state where the umem has been set up
and the Rx ring created, but the socket has yet to be bound to a
device. In this case the pool has not yet been created, so we cannot
dereference it to check for the existence of the fill ring. Fix this by
removing the xsk_is_setup_for_bpf_map function entirely. Once upon a
time, it was used to make sure that the Rx and fill rings were set up
before the driver could call xsk_rcv, since there are no tests for the
existence of these rings in the data path. But these days, we have a
state variable that we test instead. When it is XSK_BOUND, everything
has been set up correctly and the socket has been bound, so there is no
reason to keep the xsk_is_setup_for_bpf_map function anymore.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Reported-by: syzbot+febe51d44243fbc564ee@syzkaller.appspotmail.com
Fixes: 7361f9c3d719 ("xsk: move fill and completion rings to buffer pool")
---
 net/xdp/xsk.c    | 6 ------
 net/xdp/xsk.h    | 1 -
 net/xdp/xskmap.c | 5 -----
 3 files changed, 12 deletions(-)

Comments

Daniel Borkmann Sept. 2, 2020, 2:58 p.m. UTC | #1
On 9/2/20 11:06 AM, Magnus Karlsson wrote:
> Fix a possible segfault when an entry is inserted into the xskmap. This
> can happen if the socket is in a state where the umem has been set up
> and the Rx ring created, but the socket has yet to be bound to a
> device. In this case the pool has not yet been created, so we cannot
> dereference it to check for the existence of the fill ring. Fix this by
> removing the xsk_is_setup_for_bpf_map function entirely. Once upon a
> time, it was used to make sure that the Rx and fill rings were set up
> before the driver could call xsk_rcv, since there are no tests for the
> existence of these rings in the data path. But these days, we have a
> state variable that we test instead. When it is XSK_BOUND, everything
> has been set up correctly and the socket has been bound, so there is no
> reason to keep the xsk_is_setup_for_bpf_map function anymore.
> 
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> Reported-by: syzbot+febe51d44243fbc564ee@syzkaller.appspotmail.com
> Fixes: 7361f9c3d719 ("xsk: move fill and completion rings to buffer pool")

Applied & corrected Fixes tag, thanks!

Patch

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 5eb6662..07c3227 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -33,12 +33,6 @@ 
 
 static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
 
-bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
-{
-	return READ_ONCE(xs->rx) &&  READ_ONCE(xs->umem) &&
-		(xs->pool->fq || READ_ONCE(xs->fq_tmp));
-}
-
 void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
 {
 	if (pool->cached_need_wakeup & XDP_WAKEUP_RX)
diff --git a/net/xdp/xsk.h b/net/xdp/xsk.h
index da1f73e..b9e896c 100644
--- a/net/xdp/xsk.h
+++ b/net/xdp/xsk.h
@@ -39,7 +39,6 @@  static inline struct xdp_sock *xdp_sk(struct sock *sk)
 	return (struct xdp_sock *)sk;
 }
 
-bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs);
 void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
 			     struct xdp_sock **map_entry);
 int xsk_map_inc(struct xsk_map *map);
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 2a4fd66..0c5df59 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -185,11 +185,6 @@  static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 
 	xs = (struct xdp_sock *)sock->sk;
 
-	if (!xsk_is_setup_for_bpf_map(xs)) {
-		sockfd_put(sock);
-		return -EOPNOTSUPP;
-	}
-
 	map_entry = &m->xsk_map[i];
 	node = xsk_map_node_alloc(m, map_entry);
 	if (IS_ERR(node)) {