From patchwork Sun Sep 18 22:03:50 2016
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 671496
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Yuchung Cheng, Van Jacobson, Neal Cardwell,
 Nandita Dukkipati, Eric Dumazet, Soheil Hassas Yeganeh
Subject: [PATCH v3 net-next 13/16] tcp: allow congestion control to expand send buffer differently
Date: Sun, 18 Sep 2016 18:03:50 -0400
Message-Id: <1474236233-28511-14-git-send-email-ncardwell@google.com>
In-Reply-To: <1474236233-28511-1-git-send-email-ncardwell@google.com>
References: <1474236233-28511-1-git-send-email-ncardwell@google.com>

From: Yuchung Cheng

Currently the TCP send buffer expands to twice cwnd, in order to allow
limited transmits in the CA_Recovery state. This assumes that cwnd does
not increase in CA_Recovery. For some congestion control algorithms,
like the upcoming BBR module, if the losses in recovery do not indicate
congestion then we may continue to raise cwnd multiplicatively in
recovery. In such cases the current multiplier will falsely limit the
sending rate, much as if it were limited by the application.
This commit adds an optional congestion control callback to use a
different multiplier to expand the TCP send buffer. For congestion
control modules that do not specify this callback, TCP continues to use
the previous default of 2.

Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
 include/net/tcp.h    | 2 ++
 net/ipv4/tcp_input.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3492041..1aa9628 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -917,6 +917,8 @@ struct tcp_congestion_ops {
 	void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
 	/* suggest number of segments for each skb to transmit (optional) */
 	u32 (*tso_segs_goal)(struct sock *sk);
+	/* returns the multiplier used in tcp_sndbuf_expand (optional) */
+	u32 (*sndbuf_expand)(struct sock *sk);
 	/* get info for inet_diag (optional) */
 	size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
 			   union tcp_cc_info *info);

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 17de77d..5af0bf3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -289,6 +289,7 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr
 static void tcp_sndbuf_expand(struct sock *sk)
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
+	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
 	int sndmem, per_mss;
 	u32 nr_segs;
@@ -309,7 +310,8 @@ static void tcp_sndbuf_expand(struct sock *sk)
 	 * Cubic needs 1.7 factor, rounded to 2 to include
 	 * extra cushion (application might react slowly to POLLOUT)
 	 */
-	sndmem = 2 * nr_segs * per_mss;
+	sndmem = ca_ops->sndbuf_expand ? ca_ops->sndbuf_expand(sk) : 2;
+	sndmem *= nr_segs * per_mss;
 
 	if (sk->sk_sndbuf < sndmem)
 		sk->sk_sndbuf = min(sndmem, sysctl_tcp_wmem[2]);