From patchwork Sat Sep 17 17:35:46 2016
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 671248
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Yuchung Cheng, Van Jacobson, Neal Cardwell,
    Nandita Dukkipati, Eric Dumazet, Soheil Hassas Yeganeh
Subject: [PATCH v2 net-next 13/16] tcp: allow congestion control to expand send buffer differently
Date: Sat, 17 Sep 2016 13:35:46 -0400
Message-Id: <1474133749-12895-14-git-send-email-ncardwell@google.com>
In-Reply-To: <1474133749-12895-1-git-send-email-ncardwell@google.com>
References: <1474133749-12895-1-git-send-email-ncardwell@google.com>
List-ID: <netdev.vger.kernel.org>

From: Yuchung Cheng

Currently the TCP send buffer expands to twice cwnd, in order to allow
limited transmits in the CA_Recovery state. This assumes that cwnd does
not increase in CA_Recovery.

For some congestion control algorithms, like the upcoming BBR module,
if the losses in recovery do not indicate congestion then we may
continue to raise cwnd multiplicatively in recovery. In such cases the
current multiplier will falsely limit the sending rate, much as if it
were limited by the application.
This commit adds an optional congestion control callback to use a
different multiplier to expand the TCP send buffer. For congestion
control modules that do not specify this callback, TCP continues to use
the previous default of 2.

Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
 include/net/tcp.h    | 2 ++
 net/ipv4/tcp_input.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3492041..1aa9628 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -917,6 +917,8 @@ struct tcp_congestion_ops {
 	void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
 	/* suggest number of segments for each skb to transmit (optional) */
 	u32 (*tso_segs_goal)(struct sock *sk);
+	/* returns the multiplier used in tcp_sndbuf_expand (optional) */
+	u32 (*sndbuf_expand)(struct sock *sk);
 	/* get info for inet_diag (optional) */
 	size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
 			   union tcp_cc_info *info);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index df26af0..a134e66 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -289,6 +289,7 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr
 static void tcp_sndbuf_expand(struct sock *sk)
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
+	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
 	int sndmem, per_mss;
 	u32 nr_segs;
@@ -309,7 +310,8 @@ static void tcp_sndbuf_expand(struct sock *sk)
 	 * Cubic needs 1.7 factor, rounded to 2 to include
 	 * extra cushion (application might react slowly to POLLOUT)
 	 */
-	sndmem = 2 * nr_segs * per_mss;
+	sndmem = ca_ops->sndbuf_expand ? ca_ops->sndbuf_expand(sk) : 2;
+	sndmem *= nr_segs * per_mss;
 	if (sk->sk_sndbuf < sndmem)
 		sk->sk_sndbuf = min(sndmem, sysctl_tcp_wmem[2]);