From patchwork Thu Dec 1 09:34:40 2016
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 701423
X-Patchwork-Delegate: davem@davemloft.net
From: Andrey Konovalov <andreyknvl@google.com>
To: Herbert Xu, "David S. Miller" <davem@davemloft.net>, Jason Wang,
    Eric Dumazet, Peter Klausler, Paolo Abeni, "Michael S. Tsirkin",
    Soheil Hassas Yeganeh, Markus Elfring, Mike Rapoport,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Dmitry Vyukov, Kostya Serebryany, syzkaller@googlegroups.com,
    Andrey Konovalov <andreyknvl@google.com>
Subject: [PATCH v2] tun: Use netif_receive_skb instead of netif_rx
Date: Thu, 1 Dec 2016 10:34:40 +0100
Message-Id: <1480584880-48651-1-git-send-email-andreyknvl@google.com>

This patch changes tun.c to call netif_receive_skb instead of netif_rx
when a packet is received (unless CONFIG_4KSTACKS is enabled, to avoid
stack exhaustion). The difference between the two is that netif_rx
queues the packet into the backlog, while netif_receive_skb processes
the packet in the current context.

This patch is required for syzkaller [1] to collect coverage from
packet receive paths when a packet is being received through tun
(syzkaller collects coverage per process, in the process context).

As mentioned by Eric, this change also speeds up tun/tap. As measured
by Peter, it speeds up his closed-loop single-stream tap/OVS benchmark
by about 23%, from 700k packets/second to 867k packets/second.

A similar patch was introduced back in 2010 [2, 3], but the author
found that it didn't help with the task he had in mind (letting
cgroups shape network traffic based on the original process) and
decided not to pursue it further. The main concern back then was
possible stack exhaustion with 4K stacks.

[1] https://github.com/google/syzkaller
[2] https://www.spinics.net/lists/netdev/thrd440.html#130570
[3] https://www.spinics.net/lists/netdev/msg130570.html

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Jason Wang
Acked-by: Michael S. Tsirkin
---
Changes since v1:
- incorporate Eric's note about the speed improvements in the commit
  description
- use netif_receive_skb only when CONFIG_4KSTACKS is not enabled

 drivers/net/tun.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 8093e39..d310b13 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1304,7 +1304,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
 	skb_probe_transport_header(skb, 0);
 
 	rxhash = skb_get_hash(skb);
+#ifndef CONFIG_4KSTACKS
+	local_bh_disable();
+	netif_receive_skb(skb);
+	local_bh_enable();
+#else
 	netif_rx_ni(skb);
+#endif
 
 	stats = get_cpu_ptr(tun->pcpu_stats);
 	u64_stats_update_begin(&stats->syncp);
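
For context, below is a minimal userspace sketch (not part of the
patch) of the write path that ends up in tun_get_user(), i.e. the path
whose behavior this patch changes. The interface name "tun0" and the
zeroed packet buffer are placeholders; a real sender would write a
valid raw IP packet.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/if.h>
	#include <linux/if_tun.h>

	int main(void)
	{
		struct ifreq ifr;
		char pkt[1500] = { 0 };	/* placeholder; would hold a raw IP packet */
		int fd = open("/dev/net/tun", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		memset(&ifr, 0, sizeof(ifr));
		ifr.ifr_flags = IFF_TUN | IFF_NO_PI;	/* IP packets, no extra header */
		strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);
		if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
			perror("TUNSETIFF");
			return 1;
		}
		/* Each write() injects one packet into the kernel via
		 * tun_get_user(). With this patch the packet is processed
		 * synchronously via netif_receive_skb() in this process's
		 * context, instead of being queued to the backlog by
		 * netif_rx_ni(). */
		if (write(fd, pkt, sizeof(pkt)) < 0)
			perror("write");
		close(fd);
		return 0;
	}

Processing the packet in the caller's context is what lets syzkaller
attribute receive-path coverage to the writing process, and it is also
where the tap/OVS speedup comes from.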