Message ID   | 20180104024711.257600-2-soheil.kdev@gmail.com
State        | Accepted, archived
Delegated to | David Miller
Series       | [net-next,1/2] ip: do not set RFS core on error queue reads
From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
Date: Wed, 3 Jan 2018 21:47:11 -0500

> From: Soheil Hassas Yeganeh <soheil@google.com>
>
> In multi-threaded processes, one common architecture is to have
> one (or a small number of) threads polling sockets, and a
> considerably larger pool of threads reading from and writing to the
> sockets. When we set the RPS core in tcp_poll() or udp_poll() we
> essentially steer all packets of all the polled FDs to one (or a
> small number of) cores, creating a bottleneck and/or RPS
> misprediction.
>
> Another common architecture is to shard FDs among threads pinned
> to cores. In such a setting, setting the RPS core in tcp_poll() and
> udp_poll() is redundant because the RFS core is correctly
> set in recvmsg and sendmsg.
>
> Thus, revert the following commit:
> c3f1dbaf6e28 ("net: Update RFS target at poll for tcp/udp").
>
> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
> Signed-off-by: Willem de Bruijn <willemb@google.com>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Neal Cardwell <ncardwell@google.com>

Applied.
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 7ac583a2b9fe..f68cb33d50d1 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -498,8 +498,6 @@ unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
 	const struct tcp_sock *tp = tcp_sk(sk);
 	int state;
 
-	sock_rps_record_flow(sk);
-
 	sock_poll_wait(file, sk_sleep(sk), wait);
 
 	state = inet_sk_state_load(sk);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index e9c0d1e1772e..db72619e07e4 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -2490,8 +2490,6 @@ unsigned int udp_poll(struct file *file, struct socket *sock, poll_table *wait)
 	if (!skb_queue_empty(&udp_sk(sk)->reader_queue))
 		mask |= POLLIN | POLLRDNORM;
 
-	sock_rps_record_flow(sk);
-
 	/* Check for false positives due to checksum errors */
 	if ((mask & POLLRDNORM) && !(file->f_flags & O_NONBLOCK) &&
 	    !(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1)