From patchwork Tue May 1 14:11:27 2018
X-Patchwork-Submitter: Soheil Hassas Yeganeh
X-Patchwork-Id: 907069
X-Patchwork-Delegate: davem@davemloft.net
From: Soheil Hassas Yeganeh
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: ycheng@google.com, ncardwell@google.com, edumazet@google.com,
    willemb@google.com, Soheil Hassas Yeganeh
Subject: [PATCH V3 net-next 1/2] tcp: send in-queue bytes in cmsg upon read
Date: Tue, 1 May 2018 10:11:27 -0400
Message-Id: <20180501141128.208705-1-soheil.kdev@gmail.com>
X-Mailer: git-send-email 2.17.0.441.gb46fe60e1d-goog
From: Soheil Hassas Yeganeh

Applications with many concurrent connections, high variance in
receive queue length, and tight memory bounds cannot allocate a
worst-case buffer size to drain sockets. Knowing the length of the
receive queue, applications can optimize how they allocate buffers to
read from the socket.

The number of bytes pending on the socket is directly available
through ioctl(FIONREAD/SIOCINQ) and can be approximated using
getsockopt(MEMINFO) (rmem_alloc includes skb overheads in addition to
application data). However, both of these options add an extra
syscall per recvmsg. Moreover, ioctl(FIONREAD/SIOCINQ) takes the
socket lock.

Add the TCP_INQ socket option to TCP. When this socket option is set,
recvmsg() relays the number of bytes available on the socket for
reading to the application via the TCP_CM_INQ control message.

Calculate the number of bytes after releasing the socket lock to
include the processed backlog, if any. To avoid an extra branch in
the hot path of recvmsg() for this new control message, move all cmsg
processing inside an existing branch for processing receive
timestamps.

Since the socket lock is not held when calculating the size of the
receive queue, TCP_INQ is a hint. For example, it can overestimate
the queue size by one byte if a FIN is received.

With this method, applications can start reading from the socket
using a small buffer, and then use larger buffers based on the
remaining data when needed.

V3 change-log:
	As suggested by David Miller, added loads with barrier
	to check whether we have multiple threads calling recvmsg
	in parallel. When that happens, we lock the socket to
	calculate inq.

Signed-off-by: Soheil Hassas Yeganeh
Signed-off-by: Yuchung Cheng
Signed-off-by: Willem de Bruijn
Reviewed-by: Eric Dumazet
Reviewed-by: Neal Cardwell
Suggested-by: David Miller
---
 include/linux/tcp.h      |  2 +-
 include/uapi/linux/tcp.h |  3 +++
 net/ipv4/tcp.c           | 43 ++++++++++++++++++++++++++++++++++++----
 3 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 20585d5c4e1c3..807776928cb86 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -228,7 +228,7 @@ struct tcp_sock {
 		unused:2;
 	u8	nonagle     : 4,/* Disable Nagle algorithm?             */
 		thin_lto    : 1,/* Use linear timeouts for thin streams */
-		unused1	    : 1,
+		recvmsg_inq : 1,/* Indicate # of bytes in queue upon recvmsg */
 		repair      : 1,
 		frto	    : 1;/* F-RTO (RFC5682) activated in CA_Loss */
 	u8	repair_queue;
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
index e9e8373b34b9d..29eb659aa77a1 100644
--- a/include/uapi/linux/tcp.h
+++ b/include/uapi/linux/tcp.h
@@ -123,6 +123,9 @@ enum {
 #define TCP_FASTOPEN_KEY	33	/* Set the key for Fast Open (cookie) */
 #define TCP_FASTOPEN_NO_COOKIE	34	/* Enable TFO without a TFO cookie */
 #define TCP_ZEROCOPY_RECEIVE	35
+#define TCP_INQ			36	/* Notify bytes available to read as a cmsg on read */
+
+#define TCP_CM_INQ		TCP_INQ
 
 struct tcp_repair_opt {
 	__u32	opt_code;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4028ddd14dd5a..ca7365db59dff 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1889,6 +1889,22 @@ static void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk,
 	}
 }
 
+static inline int tcp_inq_hint(struct sock *sk)
+{
+	const struct tcp_sock *tp = tcp_sk(sk);
+	u32 copied_seq = READ_ONCE(tp->copied_seq);
+	u32 rcv_nxt = READ_ONCE(tp->rcv_nxt);
+	int inq;
+
+	inq = rcv_nxt - copied_seq;
+	if (unlikely(inq < 0 || copied_seq != READ_ONCE(tp->copied_seq))) {
+		lock_sock(sk);
+		inq = tp->rcv_nxt - tp->copied_seq;
+		release_sock(sk);
+	}
+	return inq;
+}
+
 /*
  *	This routine copies from a sock struct into the user buffer.
  *
@@ -1905,13 +1921,14 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	u32 peek_seq;
 	u32 *seq;
 	unsigned long used;
-	int err;
+	int err, inq;
 	int target;		/* Read at least this many bytes */
 	long timeo;
 	struct sk_buff *skb, *last;
 	u32 urg_hole = 0;
 	struct scm_timestamping tss;
 	bool has_tss = false;
+	bool has_cmsg;
 
 	if (unlikely(flags & MSG_ERRQUEUE))
 		return inet_recv_error(sk, msg, len, addr_len);
@@ -1926,6 +1943,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	if (sk->sk_state == TCP_LISTEN)
 		goto out;
 
+	has_cmsg = tp->recvmsg_inq;
 	timeo = sock_rcvtimeo(sk, nonblock);
 
 	/* Urgent data needs to be handled specially. */
@@ -2112,6 +2130,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 		if (TCP_SKB_CB(skb)->has_rxtstamp) {
 			tcp_update_recv_tstamps(skb, &tss);
 			has_tss = true;
+			has_cmsg = true;
 		}
 		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
 			goto found_fin_ok;
@@ -2131,13 +2150,20 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	 * on connected socket. I was just happy when found this 8) --ANK
 	 */
 
-	if (has_tss)
-		tcp_recv_timestamp(msg, sk, &tss);
-
 	/* Clean up data we have read: This will do ACK frames. */
 	tcp_cleanup_rbuf(sk, copied);
 
 	release_sock(sk);
+
+	if (has_cmsg) {
+		if (has_tss)
+			tcp_recv_timestamp(msg, sk, &tss);
+		if (tp->recvmsg_inq) {
+			inq = tcp_inq_hint(sk);
+			put_cmsg(msg, SOL_TCP, TCP_CM_INQ, sizeof(inq), &inq);
+		}
+	}
+
 	return copied;
 
 out:
@@ -3006,6 +3032,12 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
 		tp->notsent_lowat = val;
 		sk->sk_write_space(sk);
 		break;
+	case TCP_INQ:
+		if (val > 1 || val < 0)
+			err = -EINVAL;
+		else
+			tp->recvmsg_inq = val;
+		break;
 	default:
 		err = -ENOPROTOOPT;
 		break;
@@ -3431,6 +3463,9 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
 	case TCP_NOTSENT_LOWAT:
 		val = tp->notsent_lowat;
 		break;
+	case TCP_INQ:
+		val = tp->recvmsg_inq;
+		break;
 	case TCP_SAVE_SYN:
 		val = tp->save_syn;
 		break;
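
For readers wiring this up from userspace, here is a minimal sketch (not part
of the patch) of how an application might enable TCP_INQ and read the hint
from the cmsg returned by recvmsg(). It relies only on the UAPI values added
above (TCP_INQ == TCP_CM_INQ == 36); the fallback defines are for systems
whose headers predate this patch, and recv_with_inq()/read_example() are
hypothetical helper names used purely for illustration.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Fallbacks in case the installed headers predate this patch. */
#ifndef TCP_INQ
#define TCP_INQ		36
#define TCP_CM_INQ	TCP_INQ
#endif
#ifndef SOL_TCP
#define SOL_TCP		IPPROTO_TCP	/* both are 6 */
#endif

/* Read from a connected TCP socket and, if present, extract the
 * TCP_CM_INQ hint (bytes still queued for reading) from the cmsg.
 */
static ssize_t recv_with_inq(int fd, void *buf, size_t len, int *inq)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = cbuf,
		.msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cm;
	ssize_t ret;

	ret = recvmsg(fd, &msg, 0);
	if (ret < 0)
		return ret;

	*inq = -1;	/* no hint seen */
	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level == SOL_TCP &&
		    cm->cmsg_type == TCP_CM_INQ &&
		    cm->cmsg_len == CMSG_LEN(sizeof(int)))
			memcpy(inq, CMSG_DATA(cm), sizeof(int));
	}
	return ret;
}

/* Usage: enable the option once, start with a small buffer, and use the
 * returned hint to decide how much to allocate for the next read.
 */
int read_example(int fd)
{
	char small_buf[256];
	int one = 1, inq = 0;
	ssize_t n;

	if (setsockopt(fd, SOL_TCP, TCP_INQ, &one, sizeof(one)) < 0)
		return -1;

	n = recv_with_inq(fd, small_buf, sizeof(small_buf), &inq);
	if (n > 0 && inq > 0)
		printf("read %zd bytes, about %d more queued\n", n, inq);
	return n < 0 ? -1 : 0;
}

As the commit message notes, the hint is computed without the socket lock, so
treat inq as approximate (it may be off by one byte around a FIN) rather than
as an exact allocation bound.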