From patchwork Fri Jul 14 21:49:23 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 788816
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Yuchung Cheng,
 Soheil Hassas Yeganeh
Subject: [PATCH net 3/5] tcp_bbr: introduce bbr_init_pacing_rate_from_rtt() helper
Date: Fri, 14 Jul 2017 17:49:23 -0400
Message-Id: <20170714214925.30720-3-ncardwell@google.com>
X-Mailer: git-send-email 2.13.2.932.g7449e964c-goog
In-Reply-To: <20170714214925.30720-1-ncardwell@google.com>
References: <20170714214925.30720-1-ncardwell@google.com>

Introduce a helper to initialize the BBR pacing rate unconditionally,
based on the current cwnd and RTT estimate. This is a pure refactor,
but it is needed by the two fixes that follow.
Fixes: 0f8782ea1497 ("tcp_bbr: add BBR congestion control")
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Soheil Hassas Yeganeh
---
 net/ipv4/tcp_bbr.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 29e23b851b97..3276140c2506 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -221,6 +221,23 @@ static u32 bbr_bw_to_pacing_rate(struct sock *sk, u32 bw, int gain)
 	return rate;
 }
 
+/* Initialize pacing rate to: high_gain * init_cwnd / RTT. */
+static void bbr_init_pacing_rate_from_rtt(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	u64 bw;
+	u32 rtt_us;
+
+	if (tp->srtt_us) {		/* any RTT sample yet? */
+		rtt_us = max(tp->srtt_us >> 3, 1U);
+	} else {			 /* no RTT sample yet */
+		rtt_us = USEC_PER_MSEC;	 /* use nominal default RTT */
+	}
+	bw = (u64)tp->snd_cwnd * BW_UNIT;
+	do_div(bw, rtt_us);
+	sk->sk_pacing_rate = bbr_bw_to_pacing_rate(sk, bw, bbr_high_gain);
+}
+
 /* Pace using current bw estimate and a gain factor. In order to help drive the
  * network toward lower queues while maintaining high utilization and low
  * latency, the average pacing rate aims to be slightly (~1%) lower than the
@@ -805,7 +822,6 @@ static void bbr_init(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct bbr *bbr = inet_csk_ca(sk);
-	u64 bw;
 
 	bbr->prior_cwnd = 0;
 	bbr->tso_segs_goal = 0;	 /* default segs per skb until first ACK */
@@ -821,11 +837,8 @@ static void bbr_init(struct sock *sk)
 
 	minmax_reset(&bbr->bw, bbr->rtt_cnt, 0);  /* init max bw to 0 */
 
-	/* Initialize pacing rate to: high_gain * init_cwnd / RTT. */
-	bw = (u64)tp->snd_cwnd * BW_UNIT;
-	do_div(bw, (tp->srtt_us >> 3) ? : USEC_PER_MSEC);
 	sk->sk_pacing_rate = 0;		/* force an update of sk_pacing_rate */
-	bbr_set_pacing_rate(sk, bw, bbr_high_gain);
+	bbr_init_pacing_rate_from_rtt(sk);
 
 	bbr->restore_cwnd = 0;
 	bbr->round_start = 0;
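
[Editor's illustration] For readers following the arithmetic, below is a
minimal standalone userspace sketch of the computation the new helper
performs, assuming the BW_SCALE/BBR_SCALE fixed-point conventions mirror
their definitions in net/ipv4/tcp_bbr.c. The connection values (snd_cwnd,
mss, srtt_us) are hypothetical examples, not taken from the patch, and the
rate conversion is only a rough equivalent of bbr_bw_to_pacing_rate().

#include <stdio.h>
#include <stdint.h>

#define BW_SCALE	24			/* bw is packets/usec << BW_SCALE */
#define BW_UNIT		(1ULL << BW_SCALE)
#define BBR_SCALE	8			/* gains are scaled by 1 << BBR_SCALE */
#define BBR_UNIT	(1U << BBR_SCALE)
#define USEC_PER_SEC	1000000ULL
#define USEC_PER_MSEC	1000U

int main(void)
{
	/* Hypothetical connection state; in the kernel this lives in tcp_sock. */
	uint32_t snd_cwnd = 10;		/* initial cwnd, in packets */
	uint32_t mss = 1448;		/* payload bytes per packet */
	uint32_t srtt_us = 0;		/* srtt << 3; 0 => no RTT sample yet */
	uint32_t high_gain = BBR_UNIT * 2885 / 1000 + 1;  /* ~2.89x, as in tcp_bbr.c */
	uint32_t rtt_us;
	uint64_t bw, rate;

	/* Same RTT selection as the new helper: srtt is stored left-shifted
	 * by 3; fall back to a nominal 1 ms when there is no sample yet.
	 */
	if (srtt_us)
		rtt_us = (srtt_us >> 3) ? (srtt_us >> 3) : 1U;
	else
		rtt_us = USEC_PER_MSEC;

	/* bw = cwnd * BW_UNIT / rtt: packets per usec, in fixed point. */
	bw = (uint64_t)snd_cwnd * BW_UNIT / rtt_us;

	/* Scale by MSS and gain, then convert to bytes per second, keeping
	 * the multiply-before-shift order to preserve precision.
	 */
	rate = bw * mss;
	rate *= high_gain;
	rate >>= BBR_SCALE;
	rate *= USEC_PER_SEC;
	rate >>= BW_SCALE;

	printf("initial pacing rate: %llu bytes/sec (~%.1f MB/s)\n",
	       (unsigned long long)rate, rate / 1e6);
	return 0;
}

With the defaults above this prints roughly 41.8 MB/s: 10 packets of 1448
bytes per 1 ms is ~14.5 MB/s, multiplied by the ~2.89x high_gain. That is
exactly the "high_gain * init_cwnd / RTT" described in the helper's comment.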