From patchwork Wed Jan 4 16:19:34 2017
X-Patchwork-Submitter: Soheil Hassas Yeganeh
X-Patchwork-Id: 711053
X-Patchwork-Delegate: davem@davemloft.net
From: Soheil Hassas Yeganeh
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: Soheil Hassas Yeganeh, Willem de Bruijn, Eric Dumazet, Neal Cardwell,
 Martin KaFai Lau
Subject: [PATCH net-next v2] tcp: provide timestamps for partial writes
Date: Wed, 4 Jan 2017 11:19:34 -0500
Message-Id: <20170104161934.26849-1-soheil.kdev@gmail.com>
X-Mailer: git-send-email 2.11.0.390.gc69c2f50cf-goog
X-Mailing-List: netdev@vger.kernel.org

From: Soheil Hassas Yeganeh

For TCP sockets, TX timestamps are only captured when the user data is
successfully and fully written to the socket. In many cases, however,
TCP writes can be partial, in which case no timestamp is collected.

Collect timestamps whenever any user data is (fully or partially)
copied into the socket.
Pass tcp_write_queue_tail to tcp_tx_timestamp instead of the local skb
pointer, since the local pointer can be set to NULL on the error path.

Note that tcp_write_queue_tail can be NULL even if bytes have been
copied to the socket. This is because acknowledgements are being
processed in tcp_sendmsg(), so by the time tcp_tx_timestamp is called
the tail of the write queue may already be gone. For such cases, this
patch does not collect any timestamps (i.e., it is best-effort).

This patch is written with suggestions from Willem de Bruijn and
Eric Dumazet.

Change-log v1 -> v2:
	- Use sockc.tsflags instead of sk->sk_tsflags.
	- Use the same code path for normal writes and errors.

Signed-off-by: Soheil Hassas Yeganeh
Acked-by: Yuchung Cheng
Cc: Willem de Bruijn
Cc: Eric Dumazet
Cc: Neal Cardwell
Cc: Martin KaFai Lau
Acked-by: Willem de Bruijn
---
 net/ipv4/tcp.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 2e3807d8eba8..ec97e4b4a62f 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -429,7 +429,7 @@ EXPORT_SYMBOL(tcp_init_sock);
 
 static void tcp_tx_timestamp(struct sock *sk, u16 tsflags, struct sk_buff *skb)
 {
-	if (tsflags) {
+	if (tsflags && skb) {
 		struct skb_shared_info *shinfo = skb_shinfo(skb);
 		struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
 
@@ -958,10 +958,8 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
 		copied += copy;
 		offset += copy;
 		size -= copy;
-		if (!size) {
-			tcp_tx_timestamp(sk, sk->sk_tsflags, skb);
+		if (!size)
 			goto out;
-		}
 
 		if (skb->len < size_goal || (flags & MSG_OOB))
 			continue;
@@ -987,8 +985,11 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
 	}
 
 out:
-	if (copied && !(flags & MSG_SENDPAGE_NOTLAST))
-		tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+	if (copied) {
+		tcp_tx_timestamp(sk, sk->sk_tsflags, tcp_write_queue_tail(sk));
+		if (!(flags & MSG_SENDPAGE_NOTLAST))
+			tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+	}
 	return copied;
 
 do_error:
@@ -1281,7 +1282,6 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 
 		copied += copy;
 		if (!msg_data_left(msg)) {
-			tcp_tx_timestamp(sk, sockc.tsflags, skb);
 			if (unlikely(flags & MSG_EOR))
 				TCP_SKB_CB(skb)->eor = 1;
 			goto out;
@@ -1312,8 +1312,10 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	}
 
 out:
-	if (copied)
+	if (copied) {
+		tcp_tx_timestamp(sk, sockc.tsflags, tcp_write_queue_tail(sk));
 		tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+	}
 out_nopush:
 	release_sock(sk);
 	return copied + copied_syn;
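
For context only (not part of the patch): below is a minimal userspace sketch
of the path this change affects. It requests software TX ACK timestamps on a
TCP socket with SO_TIMESTAMPING and reads them back from the socket error
queue. With this patch, a timestamp is recorded even when send() copies only
part of the buffer. The loopback address, port 8000, buffer size, and timeout
are arbitrary assumptions for illustration, and error handling is trimmed.

/*
 * Illustration sketch, not part of this patch. Assumes a TCP server is
 * already listening on 127.0.0.1:8000.
 */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/errqueue.h>
#include <linux/net_tstamp.h>

int main(void)
{
	static char buf[1 << 20];	/* large nonblocking write: usually copied only partially */
	char control[256];
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8000),		/* assumed test server */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	struct msghdr msg = {
		.msg_control = control,
		.msg_controllen = sizeof(control),
	};
	struct pollfd pfd;
	struct cmsghdr *cm;
	ssize_t sent;
	int fd, tsflags;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)))
		return 1;

	/* Request a software timestamp when the peer ACKs our data. */
	tsflags = SOF_TIMESTAMPING_TX_ACK |
		  SOF_TIMESTAMPING_SOFTWARE |
		  SOF_TIMESTAMPING_OPT_ID |
		  SOF_TIMESTAMPING_OPT_TSONLY;
	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &tsflags, sizeof(tsflags));

	/* A nonblocking 1MB send typically copies only part of buf; with this
	 * patch the copied bytes still get a timestamp. */
	sent = send(fd, buf, sizeof(buf), MSG_DONTWAIT);
	printf("send() copied %zd of %zu bytes\n", sent, sizeof(buf));

	/* Pending error-queue data (the timestamp) is signalled as POLLERR. */
	pfd.fd = fd;
	pfd.events = POLLERR;
	poll(&pfd, 1, 1000);

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) >= 0) {
		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
			if (cm->cmsg_level == SOL_SOCKET &&
			    cm->cmsg_type == SCM_TIMESTAMPING) {
				struct scm_timestamping *tss =
					(struct scm_timestamping *)CMSG_DATA(cm);

				printf("ack timestamp: %lld.%09ld\n",
				       (long long)tss->ts[0].tv_sec,
				       tss->ts[0].tv_nsec);
			}
		}
	}

	close(fd);
	return 0;
}

A real application would also parse the accompanying sock_extended_err control
message to map each timestamp back to a byte offset via ee_data (enabled by
SOF_TIMESTAMPING_OPT_ID above); that is omitted here for brevity.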