diff mbox

[net-next,3/4] ipv4: Use probe_size to check write queue data length

Message ID 1425978986-8512-4-git-send-email-fan.du@intel.com
State Changes Requested, archived
Delegated to: David Miller
Headers show

Commit Message

Fan Du March 10, 2015, 9:16 a.m. UTC
When building a probe segment, only 'probe_size' bytes are needed,
so check the write queue data length against probe_size instead of
size_needed.

Signed-off-by: Fan Du <fan.du@intel.com>
---
 net/ipv4/tcp_output.c |    8 +++-----
 1 files changed, 3 insertions(+), 5 deletions(-)

Comments

John Heffner March 10, 2015, 12:26 p.m. UTC | #1
NACK.  From RFC4821:

   In addition, the timely loss detection algorithms in most protocols
   have pre-conditions that SHOULD be satisfied before sending a probe.
   For example, TCP Fast Retransmit is not robust unless there are
   sufficient segments following a probe; that is, the sender SHOULD
   have enough data queued and sufficient receiver window to send the
   probe plus at least Tcprexmtthresh [RFC2760] additional segments.
   This restriction may inhibit probing in some protocol states, such as
   too close to the end of a connection, or when the window is too
   small.
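
To make the precondition concrete, here is a minimal sketch (hypothetical
names, much simplified from the kernel's tcp_mtu_probe()) of the check the
RFC describes: the probe plus the follow-on segments must fit in both the
send queue and the receiver window.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified version of the precondition tcp_mtu_probe()
 * enforces (names are illustrative, not the kernel's):
 *   queued      - bytes waiting in the send queue (write_seq - snd_nxt)
 *   snd_wnd     - receiver window in bytes
 *   probe_size  - size of the MTU probe segment
 *   reordering  - dupack threshold (tcprexmtthresh), typically 3
 *   mss         - current MSS, used to size the follow-on segments
 */
static bool can_send_probe(unsigned queued, unsigned snd_wnd,
                           unsigned probe_size, unsigned reordering,
                           unsigned mss)
{
    /* The probe plus (reordering + 1) full-sized follow-on segments
     * must be available and must fit in the receiver window. */
    unsigned size_needed = probe_size + (reordering + 1) * mss;

    return queued >= size_needed && snd_wnd >= size_needed;
}
```

With tp->reordering = 3 and an MSS of 1448 bytes, a 1500-byte probe needs
size_needed = 1500 + 4 * 1448 = 7292 bytes queued and within the window.
Checking against probe_size alone, as the patch does, would allow a probe
with no follow-on segments left to generate duplicate ACKs if the probe
is lost.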


On Tue, Mar 10, 2015 at 5:16 AM, Fan Du <fan.du@intel.com> wrote:
> When building a probe segment, only 'probe_size' bytes are needed,
> so check the write queue data length against probe_size instead of
> size_needed.
>
> Signed-off-by: Fan Du <fan.du@intel.com>
> ---
>  net/ipv4/tcp_output.c |    8 +++-----
>  1 files changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index 79977d1..89d3f84 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -1872,7 +1872,6 @@ static int tcp_mtu_probe(struct sock *sk)
>         struct net *net = sock_net(sk);
>         int len;
>         int probe_size;
> -       int size_needed;
>         int copy;
>         int mss_now;
>         int interval;
> @@ -1895,7 +1894,6 @@ static int tcp_mtu_probe(struct sock *sk)
>         mss_now = tcp_current_mss(sk);
>         probe_size = tcp_mtu_to_mss(sk, (icsk->icsk_mtup.search_high +
>                                     icsk->icsk_mtup.search_low) >> 1);
> -       size_needed = probe_size + (tp->reordering + 1) * tp->mss_cache;
>         interval = icsk->icsk_mtup.search_high - icsk->icsk_mtup.search_low;
>         /* When misfortune happens, we are reprobing actively,
>          * and then reprobe timer has expired. We stick with current
> @@ -1911,12 +1909,12 @@ static int tcp_mtu_probe(struct sock *sk)
>         }
>
>         /* Have enough data in the send queue to probe? */
> -       if (tp->write_seq - tp->snd_nxt < size_needed)
> +       if (tp->write_seq - tp->snd_nxt < probe_size)
>                 return -1;
>
> -       if (tp->snd_wnd < size_needed)
> +       if (tp->snd_wnd < probe_size)
>                 return -1;
> -       if (after(tp->snd_nxt + size_needed, tcp_wnd_end(tp)))
> +       if (after(tp->snd_nxt + probe_size, tcp_wnd_end(tp)))
>                 return 0;
>
>         /* Do we need to wait to drain cwnd? With none in flight, don't stall */
> --
> 1.7.1
>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
FengYu LeiDian March 18, 2015, 1:37 a.m. UTC | #2
On 2015/03/10 20:26, John Heffner wrote:
> NACK.  From RFC4821:
>
>     In addition, the timely loss detection algorithms in most protocols
>     have pre-conditions that SHOULD be satisfied before sending a probe.
>     For example, TCP Fast Retransmit is not robust unless there are
>     sufficient segments following a probe; that is, the sender SHOULD
>     have enough data queued and sufficient receiver window to send the
>     probe plus at least Tcprexmtthresh [RFC2760] additional segments.
>     This restriction may inhibit probing in some protocol states, such as
>     too close to the end of a connection, or when the window is too
>     small.
>

Thanks for pointing this out.

My limited understanding is that the extra segments are used to trigger fast
retransmit, and the condition is the count of duplicate ACKs. Then why is one
more segment needed here, beyond the 'reordering' segments, to trigger this?

size_needed = probe_size + (tp->reordering + 1) * tp->mss_cache;
                                             ^^^
btw, any comments for the rest of patches?
John Heffner March 18, 2015, 12:41 p.m. UTC | #3
On Tue, Mar 17, 2015 at 9:37 PM, Fan Du <fengyuleidian0615@gmail.com> wrote:
> On 2015/03/10 20:26, John Heffner wrote:
>>
>> NACK.  From RFC4821:
>>
>>     In addition, the timely loss detection algorithms in most protocols
>>     have pre-conditions that SHOULD be satisfied before sending a probe.
>>     For example, TCP Fast Retransmit is not robust unless there are
>>     sufficient segments following a probe; that is, the sender SHOULD
>>     have enough data queued and sufficient receiver window to send the
>>     probe plus at least Tcprexmtthresh [RFC2760] additional segments.
>>     This restriction may inhibit probing in some protocol states, such as
>>     too close to the end of a connection, or when the window is too
>>     small.
>>
>
> Thanks for pointing this out.
>
> My limited understanding is that the extra segments are used to trigger fast
> retransmit, and the condition is the count of duplicate ACKs. Then why is one
> more segment needed here, beyond the 'reordering' segments, to trigger this?
>
> size_needed = probe_size + (tp->reordering + 1) * tp->mss_cache;
>                                             ^^^

I can't recall a particular reason for the plus one.  It may be
unnecessarily conservative.
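
In an idealized model (a toy sketch, not kernel code), the reasoning behind
the threshold goes like this: if the probe is lost, each segment sent after
it elicits one duplicate ACK, and fast retransmit fires once the dupack
count reaches the reordering threshold.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model, not kernel code: if the probe segment is lost, each
 * follow-on segment that arrives at the receiver elicits one duplicate
 * ACK.  Fast retransmit fires once the duplicate-ACK count reaches the
 * dupack threshold (tcprexmtthresh, typically 3).
 */
static bool fast_retransmit_fires(unsigned followon_segments,
                                  unsigned dupack_threshold)
{
    return followon_segments >= dupack_threshold;
}
```

Under this model, tp->reordering follow-on segments are already sufficient,
so the '+ 1' reads as one segment of extra margin, for example against a
single lost or delayed duplicate ACK.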

Patch

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 79977d1..89d3f84 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1872,7 +1872,6 @@  static int tcp_mtu_probe(struct sock *sk)
 	struct net *net = sock_net(sk);
 	int len;
 	int probe_size;
-	int size_needed;
 	int copy;
 	int mss_now;
 	int interval;
@@ -1895,7 +1894,6 @@  static int tcp_mtu_probe(struct sock *sk)
 	mss_now = tcp_current_mss(sk);
 	probe_size = tcp_mtu_to_mss(sk, (icsk->icsk_mtup.search_high +
 				    icsk->icsk_mtup.search_low) >> 1);
-	size_needed = probe_size + (tp->reordering + 1) * tp->mss_cache;
 	interval = icsk->icsk_mtup.search_high - icsk->icsk_mtup.search_low;
 	/* When misfortune happens, we are reprobing actively,
 	 * and then reprobe timer has expired. We stick with current
@@ -1911,12 +1909,12 @@  static int tcp_mtu_probe(struct sock *sk)
 	}
 
 	/* Have enough data in the send queue to probe? */
-	if (tp->write_seq - tp->snd_nxt < size_needed)
+	if (tp->write_seq - tp->snd_nxt < probe_size)
 		return -1;
 
-	if (tp->snd_wnd < size_needed)
+	if (tp->snd_wnd < probe_size)
 		return -1;
-	if (after(tp->snd_nxt + size_needed, tcp_wnd_end(tp)))
+	if (after(tp->snd_nxt + probe_size, tcp_wnd_end(tp)))
 		return 0;
 
 	/* Do we need to wait to drain cwnd? With none in flight, don't stall */