Message ID: 20190926143005.106045-1-yyd@google.com
State: Accepted
Delegated to: David Miller
Series: [net] tcp_bbr: fix quantization code to not raise cwnd if not probing bandwidth
From: "Kevin(Yudong) Yang" <yyd@google.com>
Date: Thu, 26 Sep 2019 10:30:05 -0400

> There was a bug in the previous logic that attempted to ensure gain cycling
> gets inflight above BDP even for small BDPs. This code correctly raised and
> lowered target inflight values during the gain cycle. And this code
> correctly ensured that cwnd was raised when probing bandwidth. However, it
> did not correspondingly ensure that cwnd was *not* raised in this way when
> *not* probing for bandwidth. The result was that small-BDP flows that were
> always cwnd-bound could go for many cycles with a fixed cwnd, and not probe
> or yield bandwidth at all. This meant that multiple small-BDP flows could
> fail to converge in their bandwidth allocations.
>
> Fixes: 383d470 ("tcp_bbr: fix bw probing to raise in-flight data for very small BDPs")

Always use 12 digits of significance for SHA1 IDs, there are already 6 digit conflicts.

> Signed-off-by: Kevin(Yudong) Yang <yyd@google.com>
> Acked-by: Neal Cardwell <ncardwell@google.com>
> Acked-by: Yuchung Cheng <ycheng@google.com>
> Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
> Acked-by: Priyaranjan Jha <priyarjha@google.com>

Applied and queued up for -stable.
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 95b59540eee1..32772d6ded4e 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -388,7 +388,7 @@ static u32 bbr_bdp(struct sock *sk, u32 bw, int gain)
  * which allows 2 outstanding 2-packet sequences, to try to keep pipe
  * full even with ACK-every-other-packet delayed ACKs.
  */
-static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd, int gain)
+static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd)
 {
 	struct bbr *bbr = inet_csk_ca(sk);
 
@@ -399,7 +399,7 @@ static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd, int gain)
 	cwnd = (cwnd + 1) & ~1U;
 
 	/* Ensure gain cycling gets inflight above BDP even for small BDPs. */
-	if (bbr->mode == BBR_PROBE_BW && gain > BBR_UNIT)
+	if (bbr->mode == BBR_PROBE_BW && bbr->cycle_idx == 0)
 		cwnd += 2;
 
 	return cwnd;
@@ -411,7 +411,7 @@ static u32 bbr_inflight(struct sock *sk, u32 bw, int gain)
 	u32 inflight;
 
 	inflight = bbr_bdp(sk, bw, gain);
-	inflight = bbr_quantization_budget(sk, inflight, gain);
+	inflight = bbr_quantization_budget(sk, inflight);
 
 	return inflight;
 }
@@ -531,7 +531,7 @@ static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs,
 	 * due to aggregation (of data and/or ACKs) visible in the ACK stream.
 	 */
 	target_cwnd += bbr_ack_aggregation_cwnd(sk);
-	target_cwnd = bbr_quantization_budget(sk, target_cwnd, gain);
+	target_cwnd = bbr_quantization_budget(sk, target_cwnd);
 
 	/* If we're below target cwnd, slow start cwnd toward target cwnd. */
 	if (bbr_full_bw_reached(sk))  /* only cut cwnd if we filled the pipe */