| Message ID | 20201031013412.1973112-1-ycheng@google.com |
|---|---|
| State | Accepted |
| Delegated to | David Miller |
| Series | [net-next] tcp: avoid slow start during fast recovery on new losses |
| Context | Check | Description |
|---|---|---|
| jkicinski/cover_letter | success | Link |
| jkicinski/fixes_present | success | Link |
| jkicinski/patch_count | success | Link |
| jkicinski/tree_selection | success | Clearly marked for net-next |
| jkicinski/subject_prefix | success | Link |
| jkicinski/source_inline | success | Was 0 now: 0 |
| jkicinski/verify_signedoff | success | Link |
| jkicinski/module_param | success | Was 0 now: 0 |
| jkicinski/build_32bit | success | Errors and warnings before: 1537 this patch: 1537 |
| jkicinski/kdoc | success | Errors and warnings before: 0 this patch: 0 |
| jkicinski/verify_fixes | success | Link |
| jkicinski/checkpatch | warning | WARNING: line length of 90 exceeds 80 columns; WARNING: line length of 91 exceeds 80 columns |
| jkicinski/build_allmodconfig_warn | success | Errors and warnings before: 1524 this patch: 1524 |
| jkicinski/header_inline | success | Link |
| jkicinski/stable | success | Stable not CCed |
On Fri, 30 Oct 2020 18:34:12 -0700 Yuchung Cheng wrote:
> During TCP fast recovery, the congestion control in charge is by
> default Proportional Rate Reduction (PRR), unless the congestion
> control module specifies otherwise (e.g. BBR).
>
> Previously, when tcp_packets_in_flight() is below snd_ssthresh, PRR
> would slow start upon receiving an ACK that
> 1) cumulatively acknowledges retransmitted data, and
> 2) does not detect further lost retransmissions.
>
> Such conditions indicate the repair is in good steady progress
> after the first round trip of recovery. Otherwise PRR adopts the
> packet conservation principle, sending only the amount that was
> newly delivered (as indicated by this ACK).
>
> This patch generalizes the previous design principle to also
> account for newly sent data besides retransmissions: as long as
> the delivery is making good progress, both retransmissions and
> new data are counted, making PRR more cautious about slow
> starting.
>
> Suggested-by: Matt Mathis <mattmathis@google.com>
> Suggested-by: Neal Cardwell <ncardwell@google.com>
> Signed-off-by: Yuchung Cheng <ycheng@google.com>
> Signed-off-by: Neal Cardwell <ncardwell@google.com>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Applied, thanks!
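For context on the PRR logic the changelog describes, here is a minimal userspace sketch of the post-patch sndcnt decision. This is an illustration, not kernel code: `struct prr_state`, `prr_sndcnt()`, and the scalar parameters are hypothetical names, and the kernel's additional clamp that forces one fast retransmit on recovery entry is omitted; the real logic lives in tcp_cwnd_reduction() in net/ipv4/tcp_input.c.

```c
/* Simplified userspace model of the PRR decision in tcp_cwnd_reduction().
 * Names and structure are illustrative; the kernel operates on struct
 * tcp_sock and the ACK flag word.
 */
#include <stdint.h>
#include <stdio.h>

struct prr_state {
	uint32_t snd_ssthresh;	/* ssthresh set at the start of recovery */
	uint32_t prior_cwnd;	/* cwnd when recovery started */
	uint32_t prr_delivered;	/* packets delivered since recovery began */
	uint32_t prr_out;	/* packets sent since recovery began */
};

/* Returns how many packets PRR allows to be sent on this ACK. */
static int prr_sndcnt(struct prr_state *s, uint32_t inflight,
		      int newly_acked_sacked, int newly_lost,
		      int snd_una_advanced)
{
	int delta = (int)s->snd_ssthresh - (int)inflight;
	int sndcnt;

	s->prr_delivered += newly_acked_sacked;

	if (delta < 0) {
		/* 1) In-flight above ssthresh: spread the cwnd
		 * reduction proportionally across one RTT of deliveries.
		 */
		uint64_t dividend = (uint64_t)s->snd_ssthresh *
				    s->prr_delivered + s->prior_cwnd - 1;
		sndcnt = (int)(dividend / s->prior_cwnd) - (int)s->prr_out;
	} else if (snd_una_advanced && !newly_lost) {
		/* 2a) Repair is progressing with no new losses: allow a
		 * slow-start-like increase (the "+ 1") up to ssthresh.
		 */
		int conserve = (int)(s->prr_delivered - s->prr_out);

		if (conserve < newly_acked_sacked)
			conserve = newly_acked_sacked;
		sndcnt = conserve + 1;
		if (sndcnt > delta)
			sndcnt = delta;
	} else {
		/* 2b) Otherwise, packet conservation: send at most what
		 * this ACK reported as newly delivered.
		 */
		sndcnt = delta < newly_acked_sacked ? delta
						    : newly_acked_sacked;
	}
	if (sndcnt < 0)
		sndcnt = 0;
	s->prr_out += sndcnt;
	return sndcnt;
}

int main(void)
{
	struct prr_state a = { .snd_ssthresh = 10, .prior_cwnd = 20 };
	struct prr_state b = { .snd_ssthresh = 10, .prior_cwnd = 20 };

	/* Cumulative ACK of 2 packets, no new losses: slow-start branch. */
	printf("no new loss: sndcnt = %d\n", prr_sndcnt(&a, 6, 2, 0, 1));
	/* Same ACK, but a fresh loss was detected: packet conservation. */
	printf("new loss:    sndcnt = %d\n", prr_sndcnt(&b, 6, 2, 1, 1));
	return 0;
}
```

With in-flight below ssthresh, the same cumulative ACK yields sndcnt = 3 when nothing was newly lost but only 2 once a fresh loss is detected, which is precisely the extra caution the patch introduces.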
```diff
diff --git a/include/net/tcp.h b/include/net/tcp.h
index d4ef5bf94168..4aba0f069b05 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -386,7 +386,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 int tcp_child_process(struct sock *parent, struct sock *child,
                       struct sk_buff *skb);
 void tcp_enter_loss(struct sock *sk);
-void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int flag);
+void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int newly_lost, int flag);
 void tcp_clear_retrans(struct tcp_sock *tp);
 void tcp_update_metrics(struct sock *sk);
 void tcp_init_metrics(struct sock *sk);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 389d1b340248..fb3a7750f623 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2546,7 +2546,7 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
  * 1) If the packets in flight is larger than ssthresh, PRR spreads the
  *    cwnd reductions across a full RTT.
  * 2) Otherwise PRR uses packet conservation to send as much as delivered.
- *    But when the retransmits are acked without further losses, PRR
+ *    But when SND_UNA is acked without further losses,
  *    slow starts cwnd up to ssthresh to speed up the recovery.
  */
 static void tcp_init_cwnd_reduction(struct sock *sk)
@@ -2563,7 +2563,7 @@ static void tcp_init_cwnd_reduction(struct sock *sk)
 	tcp_ecn_queue_cwr(tp);
 }
 
-void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int flag)
+void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int newly_lost, int flag)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	int sndcnt = 0;
@@ -2577,8 +2577,7 @@ void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int flag)
 		u64 dividend = (u64)tp->snd_ssthresh * tp->prr_delivered +
 			       tp->prior_cwnd - 1;
 		sndcnt = div_u64(dividend, tp->prior_cwnd) - tp->prr_out;
-	} else if ((flag & (FLAG_RETRANS_DATA_ACKED | FLAG_LOST_RETRANS)) ==
-		   FLAG_RETRANS_DATA_ACKED) {
+	} else if (flag & FLAG_SND_UNA_ADVANCED && !newly_lost) {
 		sndcnt = min_t(int, delta,
 			       max_t(int, tp->prr_delivered - tp->prr_out,
 				     newly_acked_sacked) + 1);
@@ -3419,7 +3418,7 @@ static void tcp_cong_control(struct sock *sk, u32 ack, u32 acked_sacked,
 
 	if (tcp_in_cwnd_reduction(sk)) {
 		/* Reduce cwnd if state mandates */
-		tcp_cwnd_reduction(sk, acked_sacked, flag);
+		tcp_cwnd_reduction(sk, acked_sacked, rs->losses, flag);
 	} else if (tcp_may_raise_cwnd(sk, flag)) {
 		/* Advance cwnd if state allows */
 		tcp_cong_avoid(sk, ack, acked_sacked);
diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
index f65a3ddd0d58..177307a3081f 100644
--- a/net/ipv4/tcp_recovery.c
+++ b/net/ipv4/tcp_recovery.c
@@ -153,6 +153,7 @@ void tcp_rack_reo_timeout(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	u32 timeout, prior_inflight;
+	u32 lost = tp->lost;
 
 	prior_inflight = tcp_packets_in_flight(tp);
 	tcp_rack_detect_loss(sk, &timeout);
@@ -160,7 +161,7 @@ void tcp_rack_reo_timeout(struct sock *sk)
 	if (inet_csk(sk)->icsk_ca_state != TCP_CA_Recovery) {
 		tcp_enter_recovery(sk, false);
 		if (!inet_csk(sk)->icsk_ca_ops->cong_control)
-			tcp_cwnd_reduction(sk, 1, 0);
+			tcp_cwnd_reduction(sk, 1, tp->lost - lost, 0);
 	}
 	tcp_xmit_retransmit_queue(sk);
 }
```
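To isolate what actually changed in the slow-start guard, the sketch below contrasts the old and new conditions side by side. The FLAG_* bit values here are made up for the demo (the kernel defines its own), and `newly_lost` stands in for the `rs->losses` and `tp->lost - lost` deltas the callers pass in the hunks above.

```c
/* Hypothetical comparison of the pre- and post-patch guards for the
 * slow-start branch of PRR. Only the decision logic is the point.
 */
#include <stdbool.h>
#include <stdio.h>

#define FLAG_SND_UNA_ADVANCED	0x01	/* illustrative values only */
#define FLAG_RETRANS_DATA_ACKED	0x02
#define FLAG_LOST_RETRANS	0x04

/* Pre-patch: retransmitted data was cumulatively acked and no
 * retransmission was newly marked lost.
 */
static bool old_slow_start_ok(int flag)
{
	return (flag & (FLAG_RETRANS_DATA_ACKED | FLAG_LOST_RETRANS)) ==
	       FLAG_RETRANS_DATA_ACKED;
}

/* Post-patch: SND_UNA advanced and nothing at all, retransmitted or
 * newly sent, was newly marked lost on this ACK.
 */
static bool new_slow_start_ok(int flag, int newly_lost)
{
	return (flag & FLAG_SND_UNA_ADVANCED) && !newly_lost;
}

int main(void)
{
	/* An ACK that cumulatively acks retransmitted data while some new
	 * (never-retransmitted) packets are newly marked lost.
	 */
	int flag = FLAG_SND_UNA_ADVANCED | FLAG_RETRANS_DATA_ACKED;
	int newly_lost = 2;

	printf("old guard: %d  new guard: %d\n",
	       old_slow_start_ok(flag), new_slow_start_ok(flag, newly_lost));
	return 0;
}
```

The printed case slow started before the patch (no retransmission was lost) but now falls back to packet conservation; conversely, a cumulative ACK of data that was never retransmitted can now enter the slow-start branch, which the old FLAG_RETRANS_DATA_ACKED test never allowed.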