From patchwork Sun Sep 18 22:03:48 2016
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 671498
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Van Jacobson, Yuchung Cheng,
    Nandita Dukkipati, Eric Dumazet, Soheil Hassas Yeganeh
Subject: [PATCH v3 net-next 11/16] tcp: export tcp_tso_autosize() and parameterize minimum number of TSO segments
Date: Sun, 18 Sep 2016 18:03:48 -0400
Message-Id: <1474236233-28511-12-git-send-email-ncardwell@google.com>
In-Reply-To: <1474236233-28511-1-git-send-email-ncardwell@google.com>
References: <1474236233-28511-1-git-send-email-ncardwell@google.com>
X-Mailing-List: netdev@vger.kernel.org

To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO
sizing:

1) Export tcp_tso_autosize() so that CC modules can use it.

2) Change tcp_tso_autosize() to allow callers to specify a minimum
   number of segments per TSO skb, in case the congestion control
   module has a different notion of the best floor for TSO skbs for
   the connection right now. For very low-rate paths or policed
   connections it can be appropriate to use smaller TSO skbs.
Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
 include/net/tcp.h     | 2 ++
 net/ipv4/tcp_output.c | 9 ++++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index f8f581f..3492041 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -533,6 +533,8 @@ __u32 cookie_v6_init_sequence(const struct sk_buff *skb, __u16 *mss);
 #endif
 
 /* tcp_output.c */
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs);
 void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
			       int nonagle);
 bool tcp_may_send_now(struct sock *sk);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0137956..0bf3d48 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1549,7 +1549,8 @@ static bool tcp_nagle_check(bool partial, const struct tcp_sock *tp,
 /* Return how many segs we'd like on a TSO packet,
  * to send one TSO packet per ms
  */
-static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs)
 {
	u32 bytes, segs;
 
@@ -1561,10 +1562,11 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
	 * This preserves ACK clocking and is consistent
	 * with tcp_tso_should_defer() heuristic.
	 */
-	segs = max_t(u32, bytes / mss_now, sysctl_tcp_min_tso_segs);
+	segs = max_t(u32, bytes / mss_now, min_tso_segs);
 
	return min_t(u32, segs, sk->sk_gso_max_segs);
 }
+EXPORT_SYMBOL(tcp_tso_autosize);
 
 /* Return the number of segments we want in the skb we are transmitting.
  * See if congestion control module wants to decide; otherwise, autosize.
@@ -1574,7 +1576,8 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
	u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
 
-	return tso_segs ? : tcp_tso_autosize(sk, mss_now);
+	return tso_segs ? :
+		tcp_tso_autosize(sk, mss_now, sysctl_tcp_min_tso_segs);
 }
 
 /* Returns the portion of skb which can be sent right away */