From patchwork Wed Oct 17 00:16:45 2018
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 985071
X-Patchwork-Delegate: davem@davemloft.net
Date: Tue, 16 Oct 2018 20:16:45 -0400
In-Reply-To: <20181017001645.261770-1-ncardwell@google.com>
Message-Id: <20181017001645.261770-3-ncardwell@google.com>
References: <20181017001645.261770-1-ncardwell@google.com>
Subject: [PATCH net-next 2/2] tcp_bbr: centralize code to set gains
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Yuchung Cheng,
    Soheil Hassas Yeganeh, Priyaranjan Jha, Eric Dumazet

Centralize the code that sets gains used for
computing cwnd and pacing rate. This simplifies the code and makes it
easier to change the state machine or (in the future) dynamically change
the gain values and ensure that the correct gain values are always used.

Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Soheil Hassas Yeganeh
Signed-off-by: Priyaranjan Jha
Signed-off-by: Eric Dumazet
---
 net/ipv4/tcp_bbr.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 4cc2223d2cd54..9277abdd822a0 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -521,8 +521,6 @@ static void bbr_advance_cycle_phase(struct sock *sk)
 
         bbr->cycle_idx = (bbr->cycle_idx + 1) & (CYCLE_LEN - 1);
         bbr->cycle_mstamp = tp->delivered_mstamp;
-        bbr->pacing_gain = bbr->lt_use_bw ? BBR_UNIT :
-                                            bbr_pacing_gain[bbr->cycle_idx];
 }
 
 /* Gain cycling: cycle pacing gain to converge to fair share of available bw. */
@@ -540,8 +538,6 @@ static void bbr_reset_startup_mode(struct sock *sk)
         struct bbr *bbr = inet_csk_ca(sk);
 
         bbr->mode = BBR_STARTUP;
-        bbr->pacing_gain = bbr_high_gain;
-        bbr->cwnd_gain   = bbr_high_gain;
 }
 
 static void bbr_reset_probe_bw_mode(struct sock *sk)
@@ -549,8 +545,6 @@ static void bbr_reset_probe_bw_mode(struct sock *sk)
         struct bbr *bbr = inet_csk_ca(sk);
 
         bbr->mode = BBR_PROBE_BW;
-        bbr->pacing_gain = BBR_UNIT;
-        bbr->cwnd_gain = bbr_cwnd_gain;
         bbr->cycle_idx = CYCLE_LEN - 1 - prandom_u32_max(bbr_cycle_rand);
         bbr_advance_cycle_phase(sk);    /* flip to next phase of gain cycle */
 }
@@ -768,8 +762,6 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
 
         if (bbr->mode == BBR_STARTUP && bbr_full_bw_reached(sk)) {
                 bbr->mode = BBR_DRAIN;  /* drain queue we created */
-                bbr->pacing_gain = bbr_drain_gain;      /* pace slow to drain */
-                bbr->cwnd_gain = bbr_high_gain; /* maintain cwnd */
                 tcp_sk(sk)->snd_ssthresh =
                                 bbr_target_cwnd(sk, bbr_max_bw(sk), BBR_UNIT);
         }       /* fall through to check if in-flight is already small: */
@@ -831,8 +823,6 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
         if (bbr_probe_rtt_mode_ms > 0 && filter_expired &&
             !bbr->idle_restart && bbr->mode != BBR_PROBE_RTT) {
                 bbr->mode = BBR_PROBE_RTT;  /* dip, drain queue */
-                bbr->pacing_gain = BBR_UNIT;
-                bbr->cwnd_gain = BBR_UNIT;
                 bbr_save_cwnd(sk);  /* note cwnd so we can restore it */
                 bbr->probe_rtt_done_stamp = 0;
         }
@@ -860,6 +850,35 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
         bbr->idle_restart = 0;
 }
 
+static void bbr_update_gains(struct sock *sk)
+{
+        struct bbr *bbr = inet_csk_ca(sk);
+
+        switch (bbr->mode) {
+        case BBR_STARTUP:
+                bbr->pacing_gain = bbr_high_gain;
+                bbr->cwnd_gain   = bbr_high_gain;
+                break;
+        case BBR_DRAIN:
+                bbr->pacing_gain = bbr_drain_gain;      /* slow, to drain */
+                bbr->cwnd_gain   = bbr_high_gain;       /* keep cwnd */
+                break;
+        case BBR_PROBE_BW:
+                bbr->pacing_gain = (bbr->lt_use_bw ?
+                                    BBR_UNIT :
+                                    bbr_pacing_gain[bbr->cycle_idx]);
+                bbr->cwnd_gain   = bbr_cwnd_gain;
+                break;
+        case BBR_PROBE_RTT:
+                bbr->pacing_gain = BBR_UNIT;
+                bbr->cwnd_gain   = BBR_UNIT;
+                break;
+        default:
+                WARN_ONCE(1, "BBR bad mode: %u\n", bbr->mode);
+                break;
+        }
+}
+
 static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
 {
         bbr_update_bw(sk, rs);
@@ -867,6 +886,7 @@ static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
         bbr_check_full_bw_reached(sk, rs);
         bbr_check_drain(sk, rs);
         bbr_update_min_rtt(sk, rs);
+        bbr_update_gains(sk);
 }
 
 static void bbr_main(struct sock *sk, const struct rate_sample *rs)
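
For readers who want to experiment with the pattern outside the kernel tree,
below is a minimal, standalone C sketch of the idea this patch implements:
every mode transition funnels through one switch that derives both
pacing_gain and cwnd_gain from the current mode, so no call site can update
one gain and forget the other. The names here (enum mode, struct state,
update_gains) and the struct layout are simplified stand-ins invented for the
sketch, and the numeric gain constants only approximate the kernel's
bbr_high_gain / bbr_drain_gain / bbr_cwnd_gain values; this is an
illustration of the technique, not the tcp_bbr.c code.

#include <stdio.h>

/* Simplified stand-ins for the kernel's BBR modes and gain constants.
 * Gains are fixed-point, scaled by UNIT = 256 (mirroring BBR_UNIT). */
enum mode { STARTUP, DRAIN, PROBE_BW, PROBE_RTT };

#define UNIT 256
static const int high_gain  = UNIT * 2885 / 1000 + 1; /* ~2/ln(2) */
static const int drain_gain = UNIT * 1000 / 2885;     /* inverse of high_gain */
static const int cwnd_gain  = UNIT * 2;
static const int pacing_gain_cycle[8] = {
        UNIT * 5 / 4, UNIT * 3 / 4, UNIT, UNIT, UNIT, UNIT, UNIT, UNIT
};

struct state {
        enum mode mode;
        int cycle_idx;     /* index into pacing_gain_cycle in PROBE_BW */
        int lt_use_bw;     /* using a long-term (policed) bandwidth estimate? */
        int pacing_gain;
        int cwnd_gain;
};

/* One place that maps mode -> (pacing_gain, cwnd_gain): callers only set
 * ->mode (and, for PROBE_BW, ->cycle_idx) and then call this. */
static void update_gains(struct state *s)
{
        switch (s->mode) {
        case STARTUP:
                s->pacing_gain = high_gain;
                s->cwnd_gain   = high_gain;
                break;
        case DRAIN:
                s->pacing_gain = drain_gain; /* slow, to drain the queue */
                s->cwnd_gain   = high_gain;  /* keep cwnd */
                break;
        case PROBE_BW:
                s->pacing_gain = s->lt_use_bw ? UNIT
                                              : pacing_gain_cycle[s->cycle_idx];
                s->cwnd_gain   = cwnd_gain;
                break;
        case PROBE_RTT:
                s->pacing_gain = UNIT;
                s->cwnd_gain   = UNIT;
                break;
        default:
                fprintf(stderr, "bad mode: %d\n", s->mode);
                break;
        }
}

int main(void)
{
        struct state s = { .mode = STARTUP };

        update_gains(&s);
        printf("STARTUP:  pacing=%d cwnd=%d\n", s.pacing_gain, s.cwnd_gain);

        s.mode = PROBE_BW;
        s.cycle_idx = 0;
        update_gains(&s);
        printf("PROBE_BW: pacing=%d cwnd=%d\n", s.pacing_gain, s.cwnd_gain);
        return 0;
}

The payoff is the same as in the patch: retuning a gain or adding a mode
touches exactly one function, and the gains are recomputed on every model
update rather than at scattered transition sites.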