From patchwork Tue Sep 20 03:39:18 2016
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 672070
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Van Jacobson, Yuchung Cheng,
    Nandita Dukkipati, Eric Dumazet, Soheil Hassas Yeganeh
Subject: [PATCH v4 net-next 11/16] tcp: export tcp_tso_autosize() and parameterize minimum number of TSO segments
Date: Mon, 19 Sep 2016 23:39:18 -0400
Message-Id: <1474342763-16715-12-git-send-email-ncardwell@google.com>
In-Reply-To: <1474342763-16715-1-git-send-email-ncardwell@google.com>
References: <1474342763-16715-1-git-send-email-ncardwell@google.com>
X-Mailing-List: netdev@vger.kernel.org

To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO
sizing:

1) Export tcp_tso_autosize() so that CC modules can use it.

2) Change tcp_tso_autosize() to allow callers to specify a minimum
   number of segments per TSO skb, in case the congestion control
   module has a different notion of the best floor for TSO skbs for
   the connection right now. For very low-rate paths or policed
   connections it can be appropriate to use smaller TSO skbs.
Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
 include/net/tcp.h     | 2 ++
 net/ipv4/tcp_output.c | 9 ++++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index f8f581f..3492041 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -533,6 +533,8 @@ __u32 cookie_v6_init_sequence(const struct sk_buff *skb, __u16 *mss);
 #endif
 /* tcp_output.c */
 
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs);
 void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
 			       int nonagle);
 bool tcp_may_send_now(struct sock *sk);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0137956..0bf3d48 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1549,7 +1549,8 @@ static bool tcp_nagle_check(bool partial, const struct tcp_sock *tp,
 /* Return how many segs we'd like on a TSO packet,
  * to send one TSO packet per ms
  */
-static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+		     int min_tso_segs)
 {
 	u32 bytes, segs;
 
@@ -1561,10 +1562,11 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
 	 * This preserves ACK clocking and is consistent
 	 * with tcp_tso_should_defer() heuristic.
 	 */
-	segs = max_t(u32, bytes / mss_now, sysctl_tcp_min_tso_segs);
+	segs = max_t(u32, bytes / mss_now, min_tso_segs);
 	return min_t(u32, segs, sk->sk_gso_max_segs);
 }
+EXPORT_SYMBOL(tcp_tso_autosize);
 
 /* Return the number of segments we want in the skb we are transmitting.
  * See if congestion control module wants to decide; otherwise, autosize.
@@ -1574,7 +1576,8 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
 	const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
 	u32 tso_segs = ca_ops->tso_segs_goal ?
 			ca_ops->tso_segs_goal(sk) : 0;
 
-	return tso_segs ? : tcp_tso_autosize(sk, mss_now);
+	return tso_segs ? :
+		tcp_tso_autosize(sk, mss_now, sysctl_tcp_min_tso_segs);
 }
 
 /* Returns the portion of skb which can be sent right away */