From patchwork Sat Sep 17 17:35:44 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 671250
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Van Jacobson, Yuchung Cheng,
    Nandita Dukkipati, Eric Dumazet, Soheil Hassas Yeganeh
Subject: [PATCH v2 net-next 11/16] tcp: export tcp_tso_autosize() and
 parameterize minimum number of TSO segments
Date: Sat, 17 Sep 2016 13:35:44 -0400
Message-Id: <1474133749-12895-12-git-send-email-ncardwell@google.com>
X-Mailer: git-send-email 2.8.0.rc3.226.g39d4020
In-Reply-To: <1474133749-12895-1-git-send-email-ncardwell@google.com>
References: <1474133749-12895-1-git-send-email-ncardwell@google.com>
X-Mailing-List: netdev@vger.kernel.org

To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO sizing:

1) Export tcp_tso_autosize() so that CC modules can use it.

2) Change tcp_tso_autosize() to allow callers to specify a minimum number
   of segments per TSO skb, in case the congestion control module has a
   different notion of the best floor for TSO skbs for the connection
   right now. For very low-rate paths or policed connections it can be
   appropriate to use smaller TSO skbs.
Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
 include/net/tcp.h     | 2 ++
 net/ipv4/tcp_output.c | 9 ++++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index f8f581f..3492041 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -533,6 +533,8 @@ __u32 cookie_v6_init_sequence(const struct sk_buff *skb, __u16 *mss);
 #endif
 /* tcp_output.c */
 
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs);
 void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
 			       int nonagle);
 bool tcp_may_send_now(struct sock *sk);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0137956..0bf3d48 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1549,7 +1549,8 @@ static bool tcp_nagle_check(bool partial, const struct tcp_sock *tp,
 /* Return how many segs we'd like on a TSO packet,
  * to send one TSO packet per ms
  */
-static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs)
 {
 	u32 bytes, segs;
 
@@ -1561,10 +1562,11 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
 	 * This preserves ACK clocking and is consistent
 	 * with tcp_tso_should_defer() heuristic.
 	 */
-	segs = max_t(u32, bytes / mss_now, sysctl_tcp_min_tso_segs);
+	segs = max_t(u32, bytes / mss_now, min_tso_segs);
 	return min_t(u32, segs, sk->sk_gso_max_segs);
 }
+EXPORT_SYMBOL(tcp_tso_autosize);
 
 /* Return the number of segments we want in the skb we are transmitting.
  * See if congestion control module wants to decide; otherwise, autosize.
@@ -1574,7 +1576,8 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
 {
 	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
 	u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
 
-	return tso_segs ? : tcp_tso_autosize(sk, mss_now);
+	return tso_segs ? :
+		tcp_tso_autosize(sk, mss_now, sysctl_tcp_min_tso_segs);
 }
 /* Returns the portion of skb which can be sent right away */