Message ID | 20180427185733.36855-1-soheil.kdev@gmail.com
---|---
State | Changes Requested, archived
Delegated to: | David Miller
Series | [V2,net-next,1/2] tcp: send in-queue bytes in cmsg upon read
From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
Date: Fri, 27 Apr 2018 14:57:32 -0400

> Since the socket lock is not held when calculating the size of
> receive queue, TCP_INQ is a hint. For example, it can overestimate
> the queue size by one byte, if FIN is received.

I think it is even worse than that.

If another application comes in and does a recvmsg() in parallel with
these calculations, you could even report a negative value.

These READ_ONCE() make it look like some of these issues are being
addressed but they are not.

You could freeze the values just by taking sk->sk_lock.slock, but I
don't know if that cost is considered acceptable or not.

Another idea is to sample both values in a loop, similar to a sequence
lock sequence:

again:
	tmp1 = A;
	tmp2 = B;
	barrier();
	tmp3 = A;
	if (tmp1 != tmp3)
		goto again;

But the current state of affairs is not going to work well.
On 04/30/2018 08:38 AM, David Miller wrote:
> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
> Date: Fri, 27 Apr 2018 14:57:32 -0400
>
>> Since the socket lock is not held when calculating the size of
>> receive queue, TCP_INQ is a hint. For example, it can overestimate
>> the queue size by one byte, if FIN is received.
>
> I think it is even worse than that.
>
> If another application comes in and does a recvmsg() in parallel with
> these calculations, you could even report a negative value.
>
> These READ_ONCE() make it look like some of these issues are being
> addressed but they are not.
>
> You could freeze the values just by taking sk->sk_lock.slock, but I
> don't know if that cost is considered acceptable or not.
>
> Another idea is to sample both values in a loop, similar to a sequence
> lock sequence:
>
> again:
> 	tmp1 = A;
> 	tmp2 = B;
> 	barrier();
> 	tmp3 = A;
> 	if (tmp1 != tmp3)
> 		goto again;
>
> But the current state of affairs is not going to work well.

We want a hint, and max_t(int, 0, ....) does not return a negative value?

If the hint is wrong in 0.1% of the cases, we really do not care; it is
not meant to replace the existing precise (well, sort of) mechanism.

I say sort of, because by the time we have any number, TCP might have
received more packets anyway.
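Eric's point about the clamp can be illustrated with a small userspace model (the function name and parameters here are stand-ins for the kernel's `tp->rcv_nxt` and `tp->copied_seq`, not actual kernel code): if a parallel reader advances `copied_seq` past the `rcv_nxt` snapshot, the raw u32 difference reinterpreted as a signed int comes out negative, and `max_t(int, 0, ...)` floors it at zero.

```c
#include <assert.h>

/* Userspace model of the clamp in tcp_inq_hint().  rcv_nxt and
 * copied_seq model the kernel's u32 sequence counters; if a racing
 * reader has consumed past our rcv_nxt sample, the raw difference is
 * negative, and the clamp reports 0 instead of garbage. */
static int inq_hint_model(unsigned int rcv_nxt, unsigned int copied_seq)
{
	int queued = (int)(rcv_nxt - copied_seq);

	return queued > 0 ? queued : 0;	/* max_t(int, 0, queued) */
}
```

This is only Eric's "never negative" guarantee; as David notes below, it does not make the pair of samples mutually consistent.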
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Mon, 30 Apr 2018 08:43:50 -0700

> I say sort of, because by the time we have any number, TCP might
> have received more packets anyway.

That's fine.

However, the number reported should have been true at least at some
finite point in time.

If you allow overlapping changes to either of the two variables during
the sampling, then you are reporting a number which was never true at
any point in time.

It is essentially garbage.
On Mon, Apr 30, 2018 at 11:43 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On 04/30/2018 08:38 AM, David Miller wrote:
>> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
>> Date: Fri, 27 Apr 2018 14:57:32 -0400
>>
>>> Since the socket lock is not held when calculating the size of
>>> receive queue, TCP_INQ is a hint. For example, it can overestimate
>>> the queue size by one byte, if FIN is received.
>>
>> I think it is even worse than that.
>>
>> If another application comes in and does a recvmsg() in parallel with
>> these calculations, you could even report a negative value.

Thank you, David. In addition to Eric's point: for TCP specifically, it
is quite uncommon to have multiple threads calling recvmsg() on the same
socket in parallel, because the application is interested in the
streamed, in-sequence bytes. This would only be an issue when the
application just wants to discard the incoming stream or has predefined
frame sizes, and for such cases the proposed INQ hint is not going to be
useful anyway.

Could you please let me know whether you have any other examples in mind?

Thanks!
Soheil
On 04/30/2018 08:56 AM, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Mon, 30 Apr 2018 08:43:50 -0700
>
>> I say sort of, because by the time we have any number, TCP might
>> have received more packets anyway.
>
> That's fine.
>
> However, the number reported should have been true at least at some
> finite point in time.
>
> If you allow overlapping changes to either of the two variables during
> the sampling, then you are reporting a number which was never true at
> any point in time.
>
> It is essentially garbage.

Correct.

TCP sockets are really read by a single thread (or by synchronized
threads); otherwise garbage is ensured, regardless of how the kernel
handles locking while reporting the "queue length".
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Mon, 30 Apr 2018 09:01:47 -0700

> TCP sockets are read by a single thread really (or synchronized
> threads), or garbage is ensured, regardless of how the kernel
> ensures locking while reporting "queue length"

Whatever applications "typically do", we should never return garbage,
and that is what this code is allowing to happen.

Everything else in recvmsg() operates on state under the proper socket
lock, to ensure consistency.

The only reason we are releasing the socket lock first is to make sure
the backlog is processed and we have the most up-to-date information
available.

It seems like one is striving for correctness and better accuracy, no?
:-)

Look, this can be fixed really simply. And if you are worried about
unbounded loops when two apps maliciously do recvmsg() in parallel,
then don't even loop; just fall back to full socket locking and make
the "non-typical" application pay the price:

	tmp1 = A;
	tmp2 = B;
	barrier();
	tmp3 = A;
	if (unlikely(tmp1 != tmp3)) {
		lock_sock(sk);
		tmp1 = A;
		tmp2 = B;
		release_sock(sk);
	}

I'm seriously not applying the patch as-is, sorry. This issue must be
addressed somehow.

Thank you.
On Mon, Apr 30, 2018 at 12:10 PM, David Miller <davem@davemloft.net> wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Mon, 30 Apr 2018 09:01:47 -0700
>
>> TCP sockets are read by a single thread really (or synchronized
>> threads), or garbage is ensured, regardless of how the kernel
>> ensures locking while reporting "queue length"
>
> Whatever applications "typically do", we should never return
> garbage, and that is what this code is allowing to happen.
>
> Everything else in recvmsg() operates on state under the proper socket
> lock, to ensure consistency.
>
> The only reason we are releasing the socket lock first is to make sure
> the backlog is processed and we have the most up-to-date information
> available.
>
> It seems like one is striving for correctness and better accuracy, no?
> :-)
>
> Look, this can be fixed really simply. And if you are worried about
> unbounded loops when two apps maliciously do recvmsg() in parallel,
> then don't even loop; just fall back to full socket locking and make
> the "non-typical" application pay the price:
>
> 	tmp1 = A;
> 	tmp2 = B;
> 	barrier();
> 	tmp3 = A;
> 	if (unlikely(tmp1 != tmp3)) {
> 		lock_sock(sk);
> 		tmp1 = A;
> 		tmp2 = B;
> 		release_sock(sk);
> 	}
>
> I'm seriously not applying the patch as-is, sorry. This issue must be
> addressed somehow.

Thank you, David, for the suggestion. Sure, I'll send a V3 with what you
suggested above.

Thanks,
Soheil
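David's sample-twice-then-lock scheme can be sketched as a userspace model (this is not the actual V3 patch: `struct model_sock`, the field names, and `inq_hint()` are stand-ins; a pthread mutex models `lock_sock()`/`release_sock()`, C11 relaxed atomic loads model `READ_ONCE()`, and `atomic_signal_fence()` models `barrier()`):

```c
#include <pthread.h>
#include <stdatomic.h>

/* Userspace model of the suggested fix: sample rcv_nxt twice around the
 * copied_seq read; if rcv_nxt moved in between, the pair may be torn, so
 * fall back to the (modeled) socket lock and resample consistently. */
struct model_sock {
	_Atomic unsigned int rcv_nxt;	/* models tp->rcv_nxt */
	_Atomic unsigned int copied_seq;/* models tp->copied_seq */
	pthread_mutex_t lock;		/* models lock_sock()/release_sock() */
};

static int inq_hint(struct model_sock *sk)
{
	unsigned int nxt, seq, nxt2;
	int queued;

	nxt = atomic_load_explicit(&sk->rcv_nxt, memory_order_relaxed);
	seq = atomic_load_explicit(&sk->copied_seq, memory_order_relaxed);
	atomic_signal_fence(memory_order_seq_cst);	/* barrier() */
	nxt2 = atomic_load_explicit(&sk->rcv_nxt, memory_order_relaxed);

	if (nxt != nxt2) {	/* torn sample: pay for the full lock */
		pthread_mutex_lock(&sk->lock);
		nxt = atomic_load_explicit(&sk->rcv_nxt, memory_order_relaxed);
		seq = atomic_load_explicit(&sk->copied_seq, memory_order_relaxed);
		pthread_mutex_unlock(&sk->lock);
	}

	queued = (int)(nxt - seq);
	return queued > 0 ? queued : 0;	/* keep the max_t(int, 0, ...) clamp */
}
```

Unlike the unbounded retry loop from the earlier message, the slow path here is taken at most once per call, so a malicious pair of parallel readers only costs one lock acquisition rather than livelock.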
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 20585d5c4e1c3..807776928cb86 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -228,7 +228,7 @@ struct tcp_sock {
 		unused:2;
 	u8	nonagle     : 4,/* Disable Nagle algorithm?             */
 		thin_lto    : 1,/* Use linear timeouts for thin streams */
-		unused1	    : 1,
+		recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
 		repair      : 1,
 		frto        : 1;/* F-RTO (RFC5682) activated in CA_Loss */
 	u8	repair_queue;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 833154e3df173..0986836b5df5b 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1951,6 +1951,14 @@ static inline int tcp_inq(struct sock *sk)
 	return answ;
 }
 
+static inline int tcp_inq_hint(const struct sock *sk)
+{
+	const struct tcp_sock *tp = tcp_sk(sk);
+
+	return max_t(int, 0,
+		     READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq));
+}
+
 int tcp_peek_len(struct socket *sock);
 
 static inline void tcp_segs_in(struct tcp_sock *tp, const struct sk_buff *skb)
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
index 379b08700a542..d4cdd25a7bd48 100644
--- a/include/uapi/linux/tcp.h
+++ b/include/uapi/linux/tcp.h
@@ -122,6 +122,9 @@ enum {
 #define TCP_MD5SIG_EXT		32	/* TCP MD5 Signature with extensions */
 #define TCP_FASTOPEN_KEY	33	/* Set the key for Fast Open (cookie) */
 #define TCP_FASTOPEN_NO_COOKIE	34	/* Enable TFO without a TFO cookie */
+#define TCP_INQ			35	/* Notify bytes available to read as a cmsg on read */
+
+#define TCP_CM_INQ		TCP_INQ
 
 struct tcp_repair_opt {
 	__u32	opt_code;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index dfd090ea54ad4..5a7056980f730 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1910,13 +1910,14 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	u32 peek_seq;
 	u32 *seq;
 	unsigned long used;
-	int err;
+	int err, inq;
 	int target;		/* Read at least this many bytes */
 	long timeo;
 	struct sk_buff *skb, *last;
 	u32 urg_hole = 0;
 	struct scm_timestamping tss;
 	bool has_tss = false;
+	bool has_cmsg;
 
 	if (unlikely(flags & MSG_ERRQUEUE))
 		return inet_recv_error(sk, msg, len, addr_len);
@@ -1931,6 +1932,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	if (sk->sk_state == TCP_LISTEN)
 		goto out;
 
+	has_cmsg = tp->recvmsg_inq;
 	timeo = sock_rcvtimeo(sk, nonblock);
 
 	/* Urgent data needs to be handled specially. */
@@ -2117,6 +2119,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 		if (TCP_SKB_CB(skb)->has_rxtstamp) {
 			tcp_update_recv_tstamps(skb, &tss);
 			has_tss = true;
+			has_cmsg = true;
 		}
 		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
 			goto found_fin_ok;
@@ -2136,13 +2139,20 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	 * on connected socket. I was just happy when found this 8) --ANK
 	 */
 
-	if (has_tss)
-		tcp_recv_timestamp(msg, sk, &tss);
-
 	/* Clean up data we have read: This will do ACK frames. */
 	tcp_cleanup_rbuf(sk, copied);
 
 	release_sock(sk);
+
+	if (has_cmsg) {
+		if (has_tss)
+			tcp_recv_timestamp(msg, sk, &tss);
+		if (tp->recvmsg_inq) {
+			inq = tcp_inq_hint(sk);
+			put_cmsg(msg, SOL_TCP, TCP_CM_INQ, sizeof(inq), &inq);
+		}
+	}
+
 	return copied;
 
 out:
@@ -3011,6 +3021,12 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
 		tp->notsent_lowat = val;
 		sk->sk_write_space(sk);
 		break;
+	case TCP_INQ:
+		if (val > 1 || val < 0)
+			err = -EINVAL;
+		else
+			tp->recvmsg_inq = val;
+		break;
 	default:
 		err = -ENOPROTOOPT;
 		break;
@@ -3436,6 +3452,9 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
 	case TCP_NOTSENT_LOWAT:
 		val = tp->notsent_lowat;
 		break;
+	case TCP_INQ:
+		val = tp->recvmsg_inq;
+		break;
 	case TCP_SAVE_SYN:
 		val = tp->save_syn;
 		break;
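For completeness, here is a sketch of how an application could consume the new cmsg on a kernel carrying this patch (`recv_with_inq()` is a hypothetical helper name; the `#define` fallback covers systems whose uapi headers predate TCP_INQ):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_INQ			/* values from the patch; absent in older headers */
#define TCP_INQ		35
#define TCP_CM_INQ	TCP_INQ
#endif

/* Hypothetical helper: read from a TCP socket that has the TCP_INQ
 * sockopt enabled and extract the in-queue byte count from the cmsg.
 * Returns recvmsg()'s result; *inq is set to the hint, or -1 if the
 * kernel did not attach one. */
static ssize_t recv_with_inq(int fd, void *buf, size_t len, int *inq)
{
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	ssize_t n = recvmsg(fd, &msg, 0);
	struct cmsghdr *cm;

	*inq = -1;
	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm))
		if (cm->cmsg_level == IPPROTO_TCP && cm->cmsg_type == TCP_CM_INQ)
			memcpy(inq, CMSG_DATA(cm), sizeof(*inq));
	return n;
}
```

The receiving side opts in with `setsockopt(fd, IPPROTO_TCP, TCP_INQ, &(int){1}, sizeof(int))`; per the thread above, the returned count is a hint, not a precise value.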