From patchwork Tue May 30 15:00:24 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1787685
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Date: Tue, 30 May 2023 17:00:24 +0200
Message-Id: <20230530150035.1943669-2-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v3 01/12] net: intel: introduce Intel Ethernet common library
Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson

It is no secret that there is a ton of code duplicated across two or
more Intel Ethernet modules. Before introducing new changes that would
need to be copied over yet again, start decoupling the existing
duplicate functionality into a new module, to be shared between several
Intel Ethernet drivers.

Add the lookup table which converts an 8/10-bit hardware packet type
into a parsed bitfield structure, so that packet format parameters such
as payload level, IP version, etc. are easy to check. It is currently
used by i40e, ice and iavf, and is identical in all three drivers. The
only difference in this implementation is that instead of defining a
256-element (or, for ice, 1024-element) array, an unlikely() check
limits the input to 154, the current maximum non-reserved packet type.
There is no reason to waste 600 (or even 3600) bytes only to avoid
penalizing very unlikely exception packets.

The hash computation function now takes the payload level directly as a
pkt_hash_type. There are a couple of cases where non-IP ptypes are
marked as L3 payload, and in the previous versions their hash level
would have been 2 rather than 3. But skb_set_hash() only distinguishes
L4 from non-L4, so this changes nothing in practice.

The module is hidden behind a Kconfig symbol, which the drivers select
when needed.
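As a quick illustration before the diff: below is a minimal sketch of
the resulting API. The libie_parse_rx_ptype() and libie_skb_set_hash()
names, the struct name and the LIBIE namespace match the callers
visible in the diff; the struct layout, the table name and the
LIBIE_RX_PTYPE_NUM bound are simplified stand-ins, since the actual
rx.h/rx.c hunks defining them come later in the patch.

    /* Sketch only: simplified versions of the libie Rx definitions */
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Current maximum non-reserved HW packet type + 1 (placeholder name) */
    #define LIBIE_RX_PTYPE_NUM	154

    struct libie_rx_ptype_parsed {
    	u32	outer_ip:2;	/* LIBIE_RX_PTYPE_OUTER_* */
    	u32	tunnel_type:3;	/* LIBIE_RX_PTYPE_TUNNEL_* */
    	u32	payload_layer:3;	/* maps 1:1 onto PKT_HASH_TYPE_* */
    	/* ...the real struct carries more fields... */
    };

    /* 154 entries instead of 256 (i40e/iavf) or 1024 (ice) */
    extern const struct libie_rx_ptype_parsed
    libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM];

    static inline struct libie_rx_ptype_parsed libie_parse_rx_ptype(u32 ptype)
    {
    	/* Reserved/unknown ptypes are exceptional: fall back to the
    	 * all-zero "unknown" entry instead of spending 600 (or 3600)
    	 * bytes of table on them
    	 */
    	if (unlikely(ptype >= LIBIE_RX_PTYPE_NUM))
    		ptype = 0;

    	return libie_rx_ptype_lut[ptype];
    }

    static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
    				      struct libie_rx_ptype_parsed parsed)
    {
    	/* The payload level is defined so it can be passed directly
    	 * as a pkt_hash_type, so no switch-case is needed anymore
    	 */
    	skb_set_hash(skb, hash, parsed.payload_layer);
    }

    /* rx.c then exports the table/helpers in the 'LIBIE' namespace */
    EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);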
The exports are behind 'LIBIE' namespace to limit the scope of the functions. Signed-off-by: Alexander Lobakin --- MAINTAINERS | 3 +- drivers/net/ethernet/intel/Kconfig | 11 +- drivers/net/ethernet/intel/Makefile | 1 + drivers/net/ethernet/intel/i40e/i40e_common.c | 253 -------------- drivers/net/ethernet/intel/i40e/i40e_main.c | 1 + .../net/ethernet/intel/i40e/i40e_prototype.h | 7 - drivers/net/ethernet/intel/i40e/i40e_txrx.c | 74 +--- drivers/net/ethernet/intel/i40e/i40e_type.h | 88 ----- drivers/net/ethernet/intel/iavf/iavf_common.c | 253 -------------- drivers/net/ethernet/intel/iavf/iavf_main.c | 1 + .../net/ethernet/intel/iavf/iavf_prototype.h | 7 - drivers/net/ethernet/intel/iavf/iavf_txrx.c | 70 +--- drivers/net/ethernet/intel/iavf/iavf_type.h | 88 ----- .../net/ethernet/intel/ice/ice_lan_tx_rx.h | 316 ------------------ drivers/net/ethernet/intel/ice/ice_main.c | 1 + drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 74 +--- drivers/net/ethernet/intel/libie/Makefile | 6 + drivers/net/ethernet/intel/libie/rx.c | 110 ++++++ include/linux/net/intel/libie/rx.h | 128 +++++++ 19 files changed, 312 insertions(+), 1180 deletions(-) create mode 100644 drivers/net/ethernet/intel/libie/Makefile create mode 100644 drivers/net/ethernet/intel/libie/rx.c create mode 100644 include/linux/net/intel/libie/rx.h diff --git a/MAINTAINERS b/MAINTAINERS index c904dba1733b..bd65fa9e39c8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10340,7 +10340,8 @@ F: Documentation/networking/device_drivers/ethernet/intel/ F: drivers/net/ethernet/intel/ F: drivers/net/ethernet/intel/*/ F: include/linux/avf/virtchnl.h -F: include/linux/net/intel/iidc.h +F: include/linux/net/intel/ +F: include/linux/net/intel/*/ INTEL ETHERNET PROTOCOL DRIVER FOR RDMA M: Mustafa Ismail diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 9bc0a9519899..cec4a938fbd0 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -84,6 +84,12 @@ config E1000E_HWTS devices. The cross-timestamp is available through the PTP clock driver precise cross-timestamp ioctl (PTP_SYS_OFFSET_PRECISE). +config LIBIE + tristate + help + libie (Intel Ethernet library) is a common library containing + routines shared by several Intel Ethernet drivers. + config IGB tristate "Intel(R) 82575/82576 PCI-Express Gigabit Ethernet support" depends on PCI @@ -225,6 +231,7 @@ config I40E depends on PTP_1588_CLOCK_OPTIONAL depends on PCI select AUXILIARY_BUS + select LIBIE help This driver supports Intel(R) Ethernet Controller XL710 Family of devices. 
For more information on how to identify your adapter, go @@ -254,8 +261,9 @@ config IAVF tristate config I40EVF tristate "Intel(R) Ethernet Adaptive Virtual Function support" - select IAVF depends on PCI_MSI + select IAVF + select LIBIE help This driver supports virtual functions for Intel XL710, X710, X722, XXV710, and all devices advertising support for @@ -282,6 +290,7 @@ config ICE depends on GNSS || GNSS = n select AUXILIARY_BUS select DIMLIB + select LIBIE select NET_DEVLINK select PLDMFW help diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index d80d04132073..ce622b4d825d 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -15,3 +15,4 @@ obj-$(CONFIG_I40E) += i40e/ obj-$(CONFIG_IAVF) += iavf/ obj-$(CONFIG_FM10K) += fm10k/ obj-$(CONFIG_ICE) += ice/ +obj-$(CONFIG_LIBIE) += libie/ diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index ed88e38d488b..25bb858268fc 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -383,259 +383,6 @@ int i40e_aq_set_rss_key(struct i40e_hw *hw, return i40e_aq_get_set_rss_key(hw, vsi_id, key, true); } -/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. - * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT i40e_ptype_lookup[ptype].known - * THEN - * Packet is unknown - * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum i40e_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - I40E_RX_PTYPE_##OUTER_FRAG, \ - I40E_RX_PTYPE_TUNNEL_##T, \ - I40E_RX_PTYPE_TUNNEL_END_##TE, \ - I40E_RX_PTYPE_##TEF, \ - I40E_RX_PTYPE_INNER_PROT_##I, \ - I40E_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define I40E_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define I40E_RX_PTYPE_NOF I40E_RX_PTYPE_NOT_FRAG -#define I40E_RX_PTYPE_FRG I40E_RX_PTYPE_FRAG -#define I40E_RX_PTYPE_INNER_PROT_TS I40E_RX_PTYPE_INNER_PROT_TIMESYNC - -/* Lookup table mapping in the 8-bit HW PTYPE to the bit field for decoding */ -struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = { - /* L2 Packet types */ - I40E_PTT_UNUSED_ENTRY(0), - I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(4), - I40E_PTT_UNUSED_ENTRY(5), - I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(8), - I40E_PTT_UNUSED_ENTRY(9), - I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - I40E_PTT(12, L2, NONE, 
NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - - /* Non Tunneled IPv4 */ - I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(25), - I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(32), - I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(39), - I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(47), - I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(54), - I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(62), - I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(67, IP, IPV4, NOF, 
IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(69), - I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(77), - I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(84), - I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 */ - I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(91), - I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(98), - I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(105), - I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(113), - I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - 
I40E_PTT_UNUSED_ENTRY(120), - I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(128), - I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ - I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(135), - I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(143), - I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(150), - I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - [154 ... 
255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - /** * i40e_init_shared_code - Initialize the shared code * @hw: pointer to hardware structure diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index b847bd105b16..f3bca6e9bd18 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -97,6 +97,7 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION("Intel(R) Ethernet Connection XL710 Network Driver"); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); static struct workqueue_struct *i40e_wq; diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h index fe845987d99a..5287d0ef32d5 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h @@ -380,13 +380,6 @@ void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status); int i40e_set_mac_type(struct i40e_hw *hw); -extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[]; - -static inline struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) -{ - return i40e_ptype_lookup[ptype]; -} - /** * i40e_virtchnl_link_speed - Convert AdminQ link_speed to virtchnl definition * @link_speed: the speed to convert diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index 8b8bf4880faa..c4b6cdbf3611 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -1,8 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ -#include #include +#include +#include #include #include #include "i40e.h" @@ -1758,40 +1759,32 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi, struct sk_buff *skb, union i40e_rx_desc *rx_desc) { - struct i40e_rx_ptype_decoded decoded; + struct libie_rx_ptype_parsed parsed; u32 rx_error, rx_status; bool ipv4, ipv6; u8 ptype; u64 qword; + skb->ip_summed = CHECKSUM_NONE; + qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT; + + parsed = libie_parse_rx_ptype(ptype); + if (!libie_has_rx_checksum(vsi->netdev, parsed)) + return; + rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >> I40E_RXD_QW1_ERROR_SHIFT; rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT; - decoded = decode_rx_desc_ptype(ptype); - - skb->ip_summed = CHECKSUM_NONE; - - skb_checksum_none_assert(skb); - - /* Rx csum enabled and ip headers found? */ - if (!(vsi->netdev->features & NETIF_F_RXCSUM)) - return; /* did the hardware decode the packet and checksum? */ if (!(rx_status & BIT(I40E_RX_DESC_STATUS_L3L4P_SHIFT))) return; - /* both known and outer_ip must be set for the below code to work */ - if (!(decoded.known && decoded.outer_ip)) - return; - - ipv4 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; if (ipv4 && (rx_error & (BIT(I40E_RX_DESC_ERROR_IPE_SHIFT) | @@ -1819,49 +1812,16 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi, * we need to bump the checksum level by 1 to reflect the fact that * we are indicating we validated the inner checksum. 
*/ - if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT) + if (parsed.tunnel_type >= LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT) skb->csum_level = 1; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case I40E_RX_PTYPE_INNER_PROT_TCP: - case I40E_RX_PTYPE_INNER_PROT_UDP: - case I40E_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - fallthrough; - default: - break; - } - + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: vsi->back->hw_csum_rx_error++; } -/** - * i40e_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns a hash type to be used by skb_set_hash - **/ -static inline int i40e_ptype_to_htype(u8 ptype) -{ - struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - - if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP && - decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - else if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP && - decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - else - return PKT_HASH_TYPE_L2; -} - /** * i40e_rx_hash - set the hash value in the skb * @ring: descriptor ring @@ -1874,17 +1834,19 @@ static inline void i40e_rx_hash(struct i40e_ring *ring, struct sk_buff *skb, u8 rx_ptype) { + struct libie_rx_ptype_parsed parsed; u32 hash; const __le64 rss_mask = cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH << I40E_RX_DESC_STATUS_FLTSTAT_SHIFT); - if (!(ring->netdev->features & NETIF_F_RXHASH)) + parsed = libie_parse_rx_ptype(rx_ptype); + if (!libie_has_rx_hash(ring->netdev, parsed)) return; if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) { hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss); - skb_set_hash(skb, hash, i40e_ptype_to_htype(rx_ptype)); + libie_skb_set_hash(skb, hash, parsed); } } diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h index 388c3d36d96a..05b8510f99a9 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_type.h +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h @@ -773,94 +773,6 @@ enum i40e_rx_desc_error_l3l4e_fcoe_masks { #define I40E_RXD_QW1_PTYPE_SHIFT 30 #define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT) -/* Packet type non-ip values */ -enum i40e_rx_l2_ptype { - I40E_RX_PTYPE_L2_RESERVED = 0, - I40E_RX_PTYPE_L2_MAC_PAY2 = 1, - I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - I40E_RX_PTYPE_L2_FIP_PAY2 = 3, - I40E_RX_PTYPE_L2_OUI_PAY2 = 4, - I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - I40E_RX_PTYPE_L2_LLDP_PAY2 = 6, - I40E_RX_PTYPE_L2_ECP_PAY2 = 7, - I40E_RX_PTYPE_L2_EVB_PAY2 = 8, - I40E_RX_PTYPE_L2_QCN_PAY2 = 9, - I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10, - I40E_RX_PTYPE_L2_ARP = 11, - I40E_RX_PTYPE_L2_FCOE_PAY3 = 12, - I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct i40e_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:1; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 
tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum i40e_rx_ptype_outer_ip { - I40E_RX_PTYPE_OUTER_L2 = 0, - I40E_RX_PTYPE_OUTER_IP = 1 -}; - -enum i40e_rx_ptype_outer_ip_ver { - I40E_RX_PTYPE_OUTER_NONE = 0, - I40E_RX_PTYPE_OUTER_IPV4 = 0, - I40E_RX_PTYPE_OUTER_IPV6 = 1 -}; - -enum i40e_rx_ptype_outer_fragmented { - I40E_RX_PTYPE_NOT_FRAG = 0, - I40E_RX_PTYPE_FRAG = 1 -}; - -enum i40e_rx_ptype_tunnel_type { - I40E_RX_PTYPE_TUNNEL_NONE = 0, - I40E_RX_PTYPE_TUNNEL_IP_IP = 1, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum i40e_rx_ptype_tunnel_end_prot { - I40E_RX_PTYPE_TUNNEL_END_NONE = 0, - I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1, - I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum i40e_rx_ptype_inner_prot { - I40E_RX_PTYPE_INNER_PROT_NONE = 0, - I40E_RX_PTYPE_INNER_PROT_UDP = 1, - I40E_RX_PTYPE_INNER_PROT_TCP = 2, - I40E_RX_PTYPE_INNER_PROT_SCTP = 3, - I40E_RX_PTYPE_INNER_PROT_ICMP = 4, - I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5 -}; - -enum i40e_rx_ptype_payload_layer { - I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - #define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38 #define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ I40E_RXD_QW1_LENGTH_PBUF_SHIFT) diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c index dd11dbbd5551..ba6c9f154d18 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_common.c +++ b/drivers/net/ethernet/intel/iavf/iavf_common.c @@ -499,259 +499,6 @@ enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id, return iavf_aq_get_set_rss_key(hw, vsi_id, key, true); } -/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. - * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. 
- * - * Typical work flow: - * - * IF NOT iavf_ptype_lookup[ptype].known - * THEN - * Packet is unknown - * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum iavf_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - IAVF_RX_PTYPE_OUTER_##OUTER_IP, \ - IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - IAVF_RX_PTYPE_##OUTER_FRAG, \ - IAVF_RX_PTYPE_TUNNEL_##T, \ - IAVF_RX_PTYPE_TUNNEL_END_##TE, \ - IAVF_RX_PTYPE_##TEF, \ - IAVF_RX_PTYPE_INNER_PROT_##I, \ - IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define IAVF_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define IAVF_RX_PTYPE_NOF IAVF_RX_PTYPE_NOT_FRAG -#define IAVF_RX_PTYPE_FRG IAVF_RX_PTYPE_FRAG -#define IAVF_RX_PTYPE_INNER_PROT_TS IAVF_RX_PTYPE_INNER_PROT_TIMESYNC - -/* Lookup table mapping the 8-bit HW PTYPE to the bit field for decoding */ -struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = { - /* L2 Packet types */ - IAVF_PTT_UNUSED_ENTRY(0), - IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT_UNUSED_ENTRY(4), - IAVF_PTT_UNUSED_ENTRY(5), - IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT_UNUSED_ENTRY(8), - IAVF_PTT_UNUSED_ENTRY(9), - IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - - /* Non Tunneled IPv4 */ - IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(25), - IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(32), - IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - IAVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(39), - IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - IAVF_PTT(41, 
IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(47), - IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(54), - IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(62), - IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(69), - IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(77), - IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(84), - IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 */ - IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), - 
IAVF_PTT_UNUSED_ENTRY(91), - IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(98), - IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(105), - IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(113), - IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(120), - IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(128), - IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ - IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(135), - IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - IAVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - 
IAVF_PTT_UNUSED_ENTRY(143), - IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(150), - IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - /** * iavf_aq_send_msg_to_pf * @hw: pointer to the hardware structure diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 2de4baff4c20..c17e909d3ff0 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -46,6 +46,7 @@ MODULE_DEVICE_TABLE(pci, iavf_pci_tbl); MODULE_ALIAS("i40evf"); MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver"); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); static const struct net_device_ops iavf_netdev_ops; diff --git a/drivers/net/ethernet/intel/iavf/iavf_prototype.h b/drivers/net/ethernet/intel/iavf/iavf_prototype.h index edebfbbcffdc..c2e5dbc0a75a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_prototype.h +++ b/drivers/net/ethernet/intel/iavf/iavf_prototype.h @@ -51,13 +51,6 @@ enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid, enum iavf_status iavf_set_mac_type(struct iavf_hw *hw); -extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[]; - -static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) -{ - return iavf_ptype_lookup[ptype]; -} - void iavf_vf_parse_hw_config(struct iavf_hw *hw, struct virtchnl_vf_resource *msg); enum iavf_status iavf_vf_reset(struct iavf_hw *hw); diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index e989feda133c..a83b96e9b6fc 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include #include #include "iavf.h" @@ -982,40 +983,32 @@ static inline void iavf_rx_checksum(struct iavf_vsi *vsi, struct sk_buff *skb, union iavf_rx_desc *rx_desc) { - struct iavf_rx_ptype_decoded decoded; + struct libie_rx_ptype_parsed parsed; u32 rx_error, rx_status; bool ipv4, ipv6; u8 ptype; u64 qword; + skb->ip_summed = CHECKSUM_NONE; + qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT; + + parsed = libie_parse_rx_ptype(ptype); + if (!libie_has_rx_checksum(vsi->netdev, parsed)) + return; + rx_error = (qword & IAVF_RXD_QW1_ERROR_MASK) >> IAVF_RXD_QW1_ERROR_SHIFT; rx_status = (qword & IAVF_RXD_QW1_STATUS_MASK) >> IAVF_RXD_QW1_STATUS_SHIFT; - decoded = decode_rx_desc_ptype(ptype); - - skb->ip_summed = CHECKSUM_NONE; - - skb_checksum_none_assert(skb); - - /* Rx csum enabled and ip headers found? */ - if (!(vsi->netdev->features & NETIF_F_RXCSUM)) - return; /* did the hardware decode the packet and checksum? 
*/ if (!(rx_status & BIT(IAVF_RX_DESC_STATUS_L3L4P_SHIFT))) return; - /* both known and outer_ip must be set for the below code to work */ - if (!(decoded.known && decoded.outer_ip)) - return; - - ipv4 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; if (ipv4 && (rx_error & (BIT(IAVF_RX_DESC_ERROR_IPE_SHIFT) | @@ -1039,46 +1032,13 @@ static inline void iavf_rx_checksum(struct iavf_vsi *vsi, if (rx_error & BIT(IAVF_RX_DESC_ERROR_PPRS_SHIFT)) return; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case IAVF_RX_PTYPE_INNER_PROT_TCP: - case IAVF_RX_PTYPE_INNER_PROT_UDP: - case IAVF_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - fallthrough; - default: - break; - } - + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: vsi->back->hw_csum_rx_error++; } -/** - * iavf_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns a hash type to be used by skb_set_hash - **/ -static inline int iavf_ptype_to_htype(u8 ptype) -{ - struct iavf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - - if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP && - decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - else if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP && - decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - else - return PKT_HASH_TYPE_L2; -} - /** * iavf_rx_hash - set the hash value in the skb * @ring: descriptor ring @@ -1091,17 +1051,19 @@ static inline void iavf_rx_hash(struct iavf_ring *ring, struct sk_buff *skb, u8 rx_ptype) { + struct libie_rx_ptype_parsed parsed; u32 hash; const __le64 rss_mask = cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH << IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT); - if (!(ring->netdev->features & NETIF_F_RXHASH)) + parsed = libie_parse_rx_ptype(rx_ptype); + if (!libie_has_rx_hash(ring->netdev, parsed)) return; if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) { hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss); - skb_set_hash(skb, hash, iavf_ptype_to_htype(rx_ptype)); + libie_skb_set_hash(skb, hash, parsed); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h index 9f1f523807c4..3030ba330326 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_type.h +++ b/drivers/net/ethernet/intel/iavf/iavf_type.h @@ -339,94 +339,6 @@ enum iavf_rx_desc_error_l3l4e_fcoe_masks { #define IAVF_RXD_QW1_PTYPE_SHIFT 30 #define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT) -/* Packet type non-ip values */ -enum iavf_rx_l2_ptype { - IAVF_RX_PTYPE_L2_RESERVED = 0, - IAVF_RX_PTYPE_L2_MAC_PAY2 = 1, - IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - IAVF_RX_PTYPE_L2_FIP_PAY2 = 3, - IAVF_RX_PTYPE_L2_OUI_PAY2 = 4, - IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6, - IAVF_RX_PTYPE_L2_ECP_PAY2 = 7, - IAVF_RX_PTYPE_L2_EVB_PAY2 = 8, - IAVF_RX_PTYPE_L2_QCN_PAY2 = 9, - IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10, - IAVF_RX_PTYPE_L2_ARP = 11, - IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12, - IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - 
IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct iavf_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:1; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum iavf_rx_ptype_outer_ip { - IAVF_RX_PTYPE_OUTER_L2 = 0, - IAVF_RX_PTYPE_OUTER_IP = 1 -}; - -enum iavf_rx_ptype_outer_ip_ver { - IAVF_RX_PTYPE_OUTER_NONE = 0, - IAVF_RX_PTYPE_OUTER_IPV4 = 0, - IAVF_RX_PTYPE_OUTER_IPV6 = 1 -}; - -enum iavf_rx_ptype_outer_fragmented { - IAVF_RX_PTYPE_NOT_FRAG = 0, - IAVF_RX_PTYPE_FRAG = 1 -}; - -enum iavf_rx_ptype_tunnel_type { - IAVF_RX_PTYPE_TUNNEL_NONE = 0, - IAVF_RX_PTYPE_TUNNEL_IP_IP = 1, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum iavf_rx_ptype_tunnel_end_prot { - IAVF_RX_PTYPE_TUNNEL_END_NONE = 0, - IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1, - IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum iavf_rx_ptype_inner_prot { - IAVF_RX_PTYPE_INNER_PROT_NONE = 0, - IAVF_RX_PTYPE_INNER_PROT_UDP = 1, - IAVF_RX_PTYPE_INNER_PROT_TCP = 2, - IAVF_RX_PTYPE_INNER_PROT_SCTP = 3, - IAVF_RX_PTYPE_INNER_PROT_ICMP = 4, - IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5 -}; - -enum iavf_rx_ptype_payload_layer { - IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - #define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38 #define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ IAVF_RXD_QW1_LENGTH_PBUF_SHIFT) diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 89f986a75cc8..611577ebc29d 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -160,64 +160,6 @@ struct ice_fltr_desc { (0x1ULL << ICE_FXD_FLTR_WB_QW1_FAIL_PROF_S) #define ICE_FXD_FLTR_WB_QW1_FAIL_PROF_YES 0x1ULL -struct ice_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:2; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum ice_rx_ptype_outer_ip { - ICE_RX_PTYPE_OUTER_L2 = 0, - ICE_RX_PTYPE_OUTER_IP = 1, -}; - -enum ice_rx_ptype_outer_ip_ver { - ICE_RX_PTYPE_OUTER_NONE = 0, - ICE_RX_PTYPE_OUTER_IPV4 = 1, - ICE_RX_PTYPE_OUTER_IPV6 = 2, -}; - -enum ice_rx_ptype_outer_fragmented { - ICE_RX_PTYPE_NOT_FRAG = 0, - ICE_RX_PTYPE_FRAG = 1, -}; - -enum ice_rx_ptype_tunnel_type { - ICE_RX_PTYPE_TUNNEL_NONE = 0, - ICE_RX_PTYPE_TUNNEL_IP_IP = 1, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum ice_rx_ptype_tunnel_end_prot { - ICE_RX_PTYPE_TUNNEL_END_NONE = 0, - ICE_RX_PTYPE_TUNNEL_END_IPV4 = 1, - ICE_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum ice_rx_ptype_inner_prot { - ICE_RX_PTYPE_INNER_PROT_NONE = 0, - ICE_RX_PTYPE_INNER_PROT_UDP = 1, - ICE_RX_PTYPE_INNER_PROT_TCP = 2, - ICE_RX_PTYPE_INNER_PROT_SCTP = 3, - ICE_RX_PTYPE_INNER_PROT_ICMP = 4, - ICE_RX_PTYPE_INNER_PROT_TIMESYNC = 5, -}; - 
-enum ice_rx_ptype_payload_layer { - ICE_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - /* Rx Flex Descriptor * This descriptor is used instead of the legacy version descriptor when * ice_rlan_ctx.adv_desc is set @@ -651,262 +593,4 @@ struct ice_tlan_ctx { u8 int_q_state; /* width not needed - internal - DO NOT WRITE!!! */ }; -/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. - * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT ice_ptype_lkup[ptype].known - * THEN - * Packet is unknown - * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum ice_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - ICE_RX_PTYPE_OUTER_##OUTER_IP, \ - ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - ICE_RX_PTYPE_##OUTER_FRAG, \ - ICE_RX_PTYPE_TUNNEL_##T, \ - ICE_RX_PTYPE_TUNNEL_END_##TE, \ - ICE_RX_PTYPE_##TEF, \ - ICE_RX_PTYPE_INNER_PROT_##I, \ - ICE_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define ICE_RX_PTYPE_NOF ICE_RX_PTYPE_NOT_FRAG -#define ICE_RX_PTYPE_FRG ICE_RX_PTYPE_FRAG - -/* Lookup table mapping in the 10-bit HW PTYPE to the bit field for decoding */ -static const struct ice_rx_ptype_decoded ice_ptype_lkup[BIT(10)] = { - /* L2 Packet types */ - ICE_PTT_UNUSED_ENTRY(0), - ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - ICE_PTT_UNUSED_ENTRY(2), - ICE_PTT_UNUSED_ENTRY(3), - ICE_PTT_UNUSED_ENTRY(4), - ICE_PTT_UNUSED_ENTRY(5), - ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - ICE_PTT_UNUSED_ENTRY(8), - ICE_PTT_UNUSED_ENTRY(9), - ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - ICE_PTT_UNUSED_ENTRY(12), - ICE_PTT_UNUSED_ENTRY(13), - ICE_PTT_UNUSED_ENTRY(14), - ICE_PTT_UNUSED_ENTRY(15), - ICE_PTT_UNUSED_ENTRY(16), - ICE_PTT_UNUSED_ENTRY(17), - ICE_PTT_UNUSED_ENTRY(18), - ICE_PTT_UNUSED_ENTRY(19), - ICE_PTT_UNUSED_ENTRY(20), - ICE_PTT_UNUSED_ENTRY(21), - - /* Non Tunneled IPv4 */ - ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(25), - ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(32), - ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - ICE_PTT(34, IP, 
IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(39), - ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(47), - ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(54), - ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(62), - ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(69), - ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(77), - ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(84), - ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - 
ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 */ - ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(91), - ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(98), - ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(105), - ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(113), - ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(120), - ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(128), - ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ - ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(135), - ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, 
PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(143), - ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - ICE_PTT_UNUSED_ENTRY(150), - ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - [154 ... 1023] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - -static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype) -{ - return ice_ptype_lkup[ptype]; -} - - #endif /* _ICE_LAN_TX_RX_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 62e91512aeab..2f039f48d7a0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -34,6 +34,7 @@ static const char ice_copyright[] = "Copyright (c) 2018, Intel Corporation."; MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); MODULE_FIRMWARE(ICE_DDP_PKG_FILE); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index c8322fb6f2b3..7543aba4ff9f 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -2,6 +2,7 @@ /* Copyright (c) 2019, Intel Corporation. */ #include +#include #include "ice_txrx_lib.h" #include "ice_eswitch.h" @@ -38,30 +39,6 @@ void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val) } } -/** - * ice_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by - * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of - * Rx desc. 
- */ -static enum pkt_hash_types ice_ptype_to_htype(u16 ptype) -{ - struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_L2) - return PKT_HASH_TYPE_L2; - - return PKT_HASH_TYPE_NONE; -} - /** * ice_rx_hash - set the hash value in the skb * @rx_ring: descriptor ring @@ -74,9 +51,11 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc, struct sk_buff *skb, u16 rx_ptype) { struct ice_32b_rx_flex_desc_nic *nic_mdid; + struct libie_rx_ptype_parsed parsed; u32 hash; - if (!(rx_ring->netdev->features & NETIF_F_RXHASH)) + parsed = libie_parse_rx_ptype(rx_ptype); + if (!libie_has_rx_hash(rx_ring->netdev, parsed)) return; if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC) @@ -84,7 +63,7 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc, nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc; hash = le32_to_cpu(nic_mdid->rss_hash); - skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype)); + libie_skb_set_hash(skb, hash, parsed); } /** @@ -92,7 +71,7 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc, * @ring: the ring we care about * @skb: skb currently being received and modified * @rx_desc: the receive descriptor - * @ptype: the packet type decoded by hardware + * @ptype: the packet type parsed by hardware * * skb->protocol must be set before this function is called */ @@ -100,34 +79,26 @@ static void ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, union ice_32b_rx_flex_desc *rx_desc, u16 ptype) { - struct ice_rx_ptype_decoded decoded; + struct libie_rx_ptype_parsed parsed; u16 rx_status0, rx_status1; bool ipv4, ipv6; - rx_status0 = le16_to_cpu(rx_desc->wb.status_error0); - rx_status1 = le16_to_cpu(rx_desc->wb.status_error1); - - decoded = ice_decode_rx_desc_ptype(ptype); - /* Start with CHECKSUM_NONE and by default csum_level = 0 */ skb->ip_summed = CHECKSUM_NONE; - skb_checksum_none_assert(skb); - /* check if Rx checksum is enabled */ - if (!(ring->netdev->features & NETIF_F_RXCSUM)) + parsed = libie_parse_rx_ptype(ptype); + if (!libie_has_rx_checksum(ring->netdev, parsed)) return; - /* check if HW has decoded the packet and checksum */ - if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S))) - return; + rx_status0 = le16_to_cpu(rx_desc->wb.status_error0); + rx_status1 = le16_to_cpu(rx_desc->wb.status_error1); - if (!(decoded.known && decoded.outer_ip)) + /* check if HW has parsed the packet and checksum */ + if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S))) return; - ipv4 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; if (ipv4 && (rx_status0 & (BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))) @@ -151,19 +122,10 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, * we need to bump the checksum level by 1 to reflect the fact that * we are indicating we validated the inner checksum. 
*/ - if (decoded.tunnel_type >= ICE_RX_PTYPE_TUNNEL_IP_GRENAT) + if (parsed.tunnel_type >= LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT) skb->csum_level = 1; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case ICE_RX_PTYPE_INNER_PROT_TCP: - case ICE_RX_PTYPE_INNER_PROT_UDP: - case ICE_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - break; - default: - break; - } + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: @@ -175,7 +137,7 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, * @rx_ring: Rx descriptor ring packet is being transacted on * @rx_desc: pointer to the EOP Rx descriptor * @skb: pointer to current skb being populated - * @ptype: the packet type decoded by hardware + * @ptype: the packet type parsed by hardware * * This function checks the ring, descriptor, and packet information in * order to populate the hash, checksum, VLAN, protocol, and diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile new file mode 100644 index 000000000000..95e81d09b474 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright(c) 2023 Intel Corporation. + +obj-$(CONFIG_LIBIE) += libie.o + +libie-objs += rx.o diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c new file mode 100644 index 000000000000..f503476d8eef --- /dev/null +++ b/drivers/net/ethernet/intel/libie/rx.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Intel Corporation. */ + +#include + +/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed + * bitfield struct. + */ + +#define LIBIE_RX_PTYPE(oip, ofrag, tun, tp, tefr, iprot, pl) { \ + .outer_ip = LIBIE_RX_PTYPE_OUTER_##oip, \ + .outer_frag = LIBIE_RX_PTYPE_##ofrag, \ + .tunnel_type = LIBIE_RX_PTYPE_TUNNEL_IP_##tun, \ + .tunnel_end_prot = LIBIE_RX_PTYPE_TUNNEL_END_##tp, \ + .tunnel_end_frag = LIBIE_RX_PTYPE_##tefr, \ + .inner_prot = LIBIE_RX_PTYPE_INNER_##iprot, \ + .payload_layer = LIBIE_RX_PTYPE_PAYLOAD_##pl, \ + } + +#define LIBIE_RX_PTYPE_UNUSED { } + +#define __LIBIE_RX_PTYPE_L2(iprot, pl) \ + LIBIE_RX_PTYPE(L2, NOT_FRAG, NONE, NONE, NOT_FRAG, iprot, pl) +#define LIBIE_RX_PTYPE_L2 __LIBIE_RX_PTYPE_L2(NONE, L2) +#define LIBIE_RX_PTYPE_TS __LIBIE_RX_PTYPE_L2(TIMESYNC, L2) +#define LIBIE_RX_PTYPE_L3 __LIBIE_RX_PTYPE_L2(NONE, L3) + +#define LIBIE_RX_PTYPE_IP_FRAG(oip) \ + LIBIE_RX_PTYPE(IPV##oip, FRAG, NONE, NONE, NOT_FRAG, NONE, L3) +#define LIBIE_RX_PTYPE_IP_L3(oip, tun, teprot, tefr) \ + LIBIE_RX_PTYPE(IPV##oip, NOT_FRAG, tun, teprot, tefr, NONE, L3) +#define LIBIE_RX_PTYPE_IP_L4(oip, tun, teprot, iprot) \ + LIBIE_RX_PTYPE(IPV##oip, NOT_FRAG, tun, teprot, NOT_FRAG, iprot, L4) + +#define LIBIE_RX_PTYPE_IP_NOF(oip, tun, ver) \ + LIBIE_RX_PTYPE_IP_L3(oip, tun, ver, NOT_FRAG), \ + LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, UDP), \ + LIBIE_RX_PTYPE_UNUSED, \ + LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, TCP), \ + LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, SCTP), \ + LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, ICMP) + +/* IPv oip --> tun --> IPv ver */ +#define LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, ver) \ + LIBIE_RX_PTYPE_IP_L3(oip, tun, ver, FRAG), \ + LIBIE_RX_PTYPE_IP_NOF(oip, tun, ver) + +/* Non Tunneled IPv oip */ +#define LIBIE_RX_PTYPE_IP_RAW(oip) \ + LIBIE_RX_PTYPE_IP_FRAG(oip), \ + LIBIE_RX_PTYPE_IP_NOF(oip, NONE, NONE) + +/* IPv oip --> tun --> { IPv4, IPv6 } */ +#define LIBIE_RX_PTYPE_IP_TUN(oip, tun) \ + 
LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, IPV4), \ + LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, IPV6) + +/* IPv oip --> GRE/NAT tun --> { x, IPv4, IPv6 } */ +#define LIBIE_RX_PTYPE_IP_GRE(oip, tun) \ + LIBIE_RX_PTYPE_IP_L3(oip, tun, NONE, NOT_FRAG), \ + LIBIE_RX_PTYPE_IP_TUN(oip, tun) + +/* Non Tunneled IPv oip + * IPv oip --> { IPv4, IPv6 } + * IPv oip --> GRE/NAT --> { x, IPv4, IPv6 } + * IPv oip --> GRE/NAT --> MAC --> { x, IPv4, IPv6 } + * IPv oip --> GRE/NAT --> MAC/VLAN --> { x, IPv4, IPv6 } + */ +#define LIBIE_RX_PTYPE_IP(oip) \ + LIBIE_RX_PTYPE_IP_RAW(oip), \ + LIBIE_RX_PTYPE_IP_TUN(oip, IP), \ + LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT), \ + LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT_MAC), \ + LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT_MAC_VLAN) + +/* Lookup table mapping for O(1) parsing */ +const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = { + /* L2 packet types */ + LIBIE_RX_PTYPE_UNUSED, + LIBIE_RX_PTYPE_L2, + LIBIE_RX_PTYPE_TS, + LIBIE_RX_PTYPE_L2, + LIBIE_RX_PTYPE_UNUSED, + LIBIE_RX_PTYPE_UNUSED, + LIBIE_RX_PTYPE_L2, + LIBIE_RX_PTYPE_L2, + LIBIE_RX_PTYPE_UNUSED, + LIBIE_RX_PTYPE_UNUSED, + LIBIE_RX_PTYPE_L2, + LIBIE_RX_PTYPE_UNUSED, + + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + LIBIE_RX_PTYPE_L3, + + LIBIE_RX_PTYPE_IP(4), + LIBIE_RX_PTYPE_IP(6), +}; +EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE); + +MODULE_AUTHOR("Intel Corporation"); +MODULE_DESCRIPTION("Intel(R) Ethernet common library"); +MODULE_LICENSE("GPL"); diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h new file mode 100644 index 000000000000..58bd0f35d025 --- /dev/null +++ b/include/linux/net/intel/libie/rx.h @@ -0,0 +1,128 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Intel Corporation. */ + +#ifndef __LIBIE_RX_H +#define __LIBIE_RX_H + +#include + +/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed + * bitfield struct. 
+ */ + +struct libie_rx_ptype_parsed { + u16 outer_ip:2; + u16 outer_frag:1; + u16 tunnel_type:3; + u16 tunnel_end_prot:2; + u16 tunnel_end_frag:1; + u16 inner_prot:3; + u16 payload_layer:2; +}; + +enum libie_rx_ptype_outer_ip { + LIBIE_RX_PTYPE_OUTER_L2 = 0U, + LIBIE_RX_PTYPE_OUTER_IPV4, + LIBIE_RX_PTYPE_OUTER_IPV6, +}; + +enum libie_rx_ptype_outer_fragmented { + LIBIE_RX_PTYPE_NOT_FRAG = 0U, + LIBIE_RX_PTYPE_FRAG, +}; + +enum libie_rx_ptype_tunnel_type { + LIBIE_RX_PTYPE_TUNNEL_IP_NONE = 0U, + LIBIE_RX_PTYPE_TUNNEL_IP_IP, + LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT, + LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC, + LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN, +}; + +enum libie_rx_ptype_tunnel_end_prot { + LIBIE_RX_PTYPE_TUNNEL_END_NONE = 0U, + LIBIE_RX_PTYPE_TUNNEL_END_IPV4, + LIBIE_RX_PTYPE_TUNNEL_END_IPV6, +}; + +enum libie_rx_ptype_inner_prot { + LIBIE_RX_PTYPE_INNER_NONE = 0U, + LIBIE_RX_PTYPE_INNER_UDP, + LIBIE_RX_PTYPE_INNER_TCP, + LIBIE_RX_PTYPE_INNER_SCTP, + LIBIE_RX_PTYPE_INNER_ICMP, + LIBIE_RX_PTYPE_INNER_TIMESYNC, +}; + +enum libie_rx_ptype_payload_layer { + LIBIE_RX_PTYPE_PAYLOAD_NONE = PKT_HASH_TYPE_NONE, + LIBIE_RX_PTYPE_PAYLOAD_L2 = PKT_HASH_TYPE_L2, + LIBIE_RX_PTYPE_PAYLOAD_L3 = PKT_HASH_TYPE_L3, + LIBIE_RX_PTYPE_PAYLOAD_L4 = PKT_HASH_TYPE_L4, +}; + +#define LIBIE_RX_PTYPE_NUM 154 + +extern const struct libie_rx_ptype_parsed +libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM]; + +/** + * libie_parse_rx_ptype - convert HW packet type to software bitfield structure + * @ptype: 10-bit hardware packet type value from the descriptor + * + * @libie_rx_ptype_lut must be accessed only using this wrapper. + * + * Returns the parsed bitfield struct corresponding to the provided ptype. + */ +static inline struct libie_rx_ptype_parsed libie_parse_rx_ptype(u32 ptype) +{ + if (unlikely(ptype >= LIBIE_RX_PTYPE_NUM)) + ptype = 0; + + return libie_rx_ptype_lut[ptype]; +} + +/* libie_has_*() can be used to quickly check whether the HW metadata is + * available to avoid further expensive processing such as descriptor reads. + * They already check for the corresponding netdev feature to be enabled, + * thus can be used as drop-in replacements. + */ + +static inline bool libie_has_rx_checksum(const struct net_device *dev, + struct libie_rx_ptype_parsed parsed) +{ + /* _INNER_{SCTP,TCP,UDP} are possible only when _OUTER_IPV* is set, + * it is enough to check only for the L4 type. 
+ */
+	switch (parsed.inner_prot) {
+	case LIBIE_RX_PTYPE_INNER_TCP:
+	case LIBIE_RX_PTYPE_INNER_UDP:
+	case LIBIE_RX_PTYPE_INNER_SCTP:
+		return dev->features & NETIF_F_RXCSUM;
+	default:
+		return false;
+	}
+}
+
+static inline bool libie_has_rx_hash(const struct net_device *dev,
+				     struct libie_rx_ptype_parsed parsed)
+{
+	if (parsed.payload_layer < LIBIE_RX_PTYPE_PAYLOAD_L2)
+		return false;
+
+	return dev->features & NETIF_F_RXHASH;
+}
+
+/**
+ * libie_skb_set_hash - fill in skb hash value based on the parsed ptype
+ * @skb: skb to fill the hash in
+ * @hash: 32-bit hash value from the descriptor
+ * @parsed: parsed packet type
+ */
+static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
+				      struct libie_rx_ptype_parsed parsed)
+{
+	skb_set_hash(skb, hash, parsed.payload_layer);
+}
+
+#endif /* __LIBIE_RX_H */

From patchwork Tue May 30 15:00:25 2023
From: Alexander Lobakin
Date: Tue, 30 May 2023 17:00:25 +0200
Message-Id: <20230530150035.1943669-3-aleksander.lobakin@intel.com>
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v3 02/12] iavf: kill "legacy-rx" for good

Ever since build_skb() became stable, the old way of allocating an skb to store the headers separately, which are then copied over manually, has been slower, less flexible and thus obsolete.
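To make the tradeoff concrete, here is a minimal sketch of the two Rx models. This is hypothetical, heavily simplified code, not the driver's: the demo_* names, DEMO_RX_HDR_SIZE and the fixed half-page truesize are illustrative assumptions only.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define DEMO_RX_HDR_SIZE	256	/* assumed header-copy budget */

/* "legacy-rx": allocate a fresh skb for each frame, copy the headers
 * into it, attach the rest of the buffer as a page frag
 */
static struct sk_buff *demo_construct_skb(struct napi_struct *napi,
					  struct page *page, void *va,
					  u32 size)
{
	struct sk_buff *skb;
	u32 headlen;

	skb = napi_alloc_skb(napi, DEMO_RX_HDR_SIZE);
	if (unlikely(!skb))
		return NULL;

	/* Flow Dissector walk plus a memcpy() on every single frame */
	headlen = eth_get_headlen(skb->dev, va, DEMO_RX_HDR_SIZE);
	memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));

	if (size > headlen)
		skb_add_rx_frag(skb, 0, page,
				va - page_address(page) + headlen,
				size - headlen, PAGE_SIZE / 2);

	return skb;
}

/* build_skb(): wrap the already-DMA-filled buffer, no allocation for
 * the data and no copies, at the cost of headroom (and tailroom for
 * the skb_shared_info) reserved in the buffer upfront
 */
static struct sk_buff *demo_build_skb(void *va, u32 size)
{
	struct sk_buff *skb = build_skb(va - NET_SKB_PAD, PAGE_SIZE / 2);

	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, NET_SKB_PAD);
	__skb_put(skb, size);

	return skb;
}

To recap what made the copying flavour above lose out: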
* it had higher pressure on MM since it actually allocates new pages, which then get split and refcount-biased (NAPI page cache);
* it implies a memcpy() of the packet headers (40+ bytes per frame);
* the actual header length was calculated via eth_get_headlen(), which invokes Flow Dissector and thus wastes a bunch of CPU cycles;
* XDP makes it even more awkward, since it has long required headroom and, now that multi-buffer support has landed, tailroom as well. Take a look at the ice driver, which is built around work-arounds to make XDP work with it.

Even on some quite low-end hardware (not a common case for 100G NICs), it performed worse. The only advantage "legacy-rx" had is that it didn't require any reserved headroom or tailroom. But iavf never benefited from that, as it always splits pages into two halves of 2k, and that saving would only be useful when striding. And again, XDP effectively removes that sole pro.

There's a train of features about to land in IAVF: Page Pool, XDP, XSk, multi-buffer etc. Each new one would require adding more and more of this Danse Macabre for absolutely no reason, while making the hotpath less and less efficient.

Remove the "feature" with all the related code. This includes at least one very hot branch (typically hit on each new frame), which was either always-true or always-false for at least a complete NAPI bulk of 64 frames, the whole private-flags cruft and so on. Some stats:

Function: add/remove: 0/2 grow/shrink: 0/7 up/down: 0/-774 (-774)
RO Data: add/remove: 0/1 grow/shrink: 0/0 up/down: 0/-40 (-40)

Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/iavf/iavf.h          |   2 +-
 .../net/ethernet/intel/iavf/iavf_ethtool.c      | 140 ------------------
 drivers/net/ethernet/intel/iavf/iavf_main.c    |  10 +-
 drivers/net/ethernet/intel/iavf/iavf_txrx.c    |  84 +----------
 drivers/net/ethernet/intel/iavf/iavf_txrx.h    |  18 +--
 .../net/ethernet/intel/iavf/iavf_virtchnl.c    |   3 +-
 6 files changed, 8 insertions(+), 249 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 9abaff1f2aff..a780e7aa1c2f 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -298,7 +298,7 @@ struct iavf_adapter { #define IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS BIT(12) #define IAVF_FLAG_PROMISC_ON BIT(13) #define IAVF_FLAG_ALLMULTI_ON BIT(14) -#define IAVF_FLAG_LEGACY_RX BIT(15) +/* BIT(15) is free, was IAVF_FLAG_LEGACY_RX */ #define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16) #define IAVF_FLAG_QUEUES_DISABLED BIT(17) #define IAVF_FLAG_SETUP_NETDEV_FEATURES BIT(18) diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c index 6f171d1d85b7..de3050c02b6f 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c @@ -239,29 +239,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = { #define IAVF_QUEUE_STATS_LEN ARRAY_SIZE(iavf_gstrings_queue_stats) -/* For now we have one and only one private flag and it is only defined - * when we have support for the SKIP_CPU_SYNC DMA attribute. Instead - * of leaving all this code sitting around empty we will strip it unless - * our one private flag is actually available.
- */ -struct iavf_priv_flags { - char flag_string[ETH_GSTRING_LEN]; - u32 flag; - bool read_only; -}; - -#define IAVF_PRIV_FLAG(_name, _flag, _read_only) { \ - .flag_string = _name, \ - .flag = _flag, \ - .read_only = _read_only, \ -} - -static const struct iavf_priv_flags iavf_gstrings_priv_flags[] = { - IAVF_PRIV_FLAG("legacy-rx", IAVF_FLAG_LEGACY_RX, 0), -}; - -#define IAVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(iavf_gstrings_priv_flags) - /** * iavf_get_link_ksettings - Get Link Speed and Duplex settings * @netdev: network interface device structure @@ -341,8 +318,6 @@ static int iavf_get_sset_count(struct net_device *netdev, int sset) return IAVF_STATS_LEN + (IAVF_QUEUE_STATS_LEN * 2 * netdev->real_num_tx_queues); - else if (sset == ETH_SS_PRIV_FLAGS) - return IAVF_PRIV_FLAGS_STR_LEN; else return -EINVAL; } @@ -384,24 +359,6 @@ static void iavf_get_ethtool_stats(struct net_device *netdev, rcu_read_unlock(); } -/** - * iavf_get_priv_flag_strings - Get private flag strings - * @netdev: network interface device structure - * @data: buffer for string data - * - * Builds the private flags string table - **/ -static void iavf_get_priv_flag_strings(struct net_device *netdev, u8 *data) -{ - unsigned int i; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) { - snprintf(data, ETH_GSTRING_LEN, "%s", - iavf_gstrings_priv_flags[i].flag_string); - data += ETH_GSTRING_LEN; - } -} - /** * iavf_get_stat_strings - Get stat strings * @netdev: network interface device structure @@ -440,105 +397,11 @@ static void iavf_get_strings(struct net_device *netdev, u32 sset, u8 *data) case ETH_SS_STATS: iavf_get_stat_strings(netdev, data); break; - case ETH_SS_PRIV_FLAGS: - iavf_get_priv_flag_strings(netdev, data); - break; default: break; } } -/** - * iavf_get_priv_flags - report device private flags - * @netdev: network interface device structure - * - * The get string set count and the string set should be matched for each - * flag returned. Add new strings for each flag to the iavf_gstrings_priv_flags - * array. - * - * Returns a u32 bitmap of flags. - **/ -static u32 iavf_get_priv_flags(struct net_device *netdev) -{ - struct iavf_adapter *adapter = netdev_priv(netdev); - u32 i, ret_flags = 0; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) { - const struct iavf_priv_flags *priv_flags; - - priv_flags = &iavf_gstrings_priv_flags[i]; - - if (priv_flags->flag & adapter->flags) - ret_flags |= BIT(i); - } - - return ret_flags; -} - -/** - * iavf_set_priv_flags - set private flags - * @netdev: network interface device structure - * @flags: bit flags to be set - **/ -static int iavf_set_priv_flags(struct net_device *netdev, u32 flags) -{ - struct iavf_adapter *adapter = netdev_priv(netdev); - u32 orig_flags, new_flags, changed_flags; - u32 i; - - orig_flags = READ_ONCE(adapter->flags); - new_flags = orig_flags; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) { - const struct iavf_priv_flags *priv_flags; - - priv_flags = &iavf_gstrings_priv_flags[i]; - - if (flags & BIT(i)) - new_flags |= priv_flags->flag; - else - new_flags &= ~(priv_flags->flag); - - if (priv_flags->read_only && - ((orig_flags ^ new_flags) & ~BIT(i))) - return -EOPNOTSUPP; - } - - /* Before we finalize any flag changes, any checks which we need to - * perform to determine if the new flags will be supported should go - * here... - */ - - /* Compare and exchange the new flags into place. If we failed, that - * is if cmpxchg returns anything but the old value, this means - * something else must have modified the flags variable since we - * copied it. 
We'll just punt with an error and log something in the - * message buffer. - */ - if (cmpxchg(&adapter->flags, orig_flags, new_flags) != orig_flags) { - dev_warn(&adapter->pdev->dev, - "Unable to update adapter->flags as it was modified by another thread...\n"); - return -EAGAIN; - } - - changed_flags = orig_flags ^ new_flags; - - /* Process any additional changes needed as a result of flag changes. - * The changed_flags value reflects the list of bits that were changed - * in the code above. - */ - - /* issue a reset to force legacy-rx change to take effect */ - if (changed_flags & IAVF_FLAG_LEGACY_RX) { - if (netif_running(netdev)) { - adapter->flags |= IAVF_FLAG_RESET_NEEDED; - queue_work(adapter->wq, &adapter->reset_task); - } - } - - return 0; -} - /** * iavf_get_msglevel - Get debug message level * @netdev: network interface device structure @@ -584,7 +447,6 @@ static void iavf_get_drvinfo(struct net_device *netdev, strscpy(drvinfo->driver, iavf_driver_name, 32); strscpy(drvinfo->fw_version, "N/A", 4); strscpy(drvinfo->bus_info, pci_name(adapter->pdev), 32); - drvinfo->n_priv_flags = IAVF_PRIV_FLAGS_STR_LEN; } /** @@ -1969,8 +1831,6 @@ static const struct ethtool_ops iavf_ethtool_ops = { .get_strings = iavf_get_strings, .get_ethtool_stats = iavf_get_ethtool_stats, .get_sset_count = iavf_get_sset_count, - .get_priv_flags = iavf_get_priv_flags, - .set_priv_flags = iavf_set_priv_flags, .get_msglevel = iavf_get_msglevel, .set_msglevel = iavf_set_msglevel, .get_coalesce = iavf_get_coalesce, diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index c17e909d3ff0..a5a6c9861a93 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -713,9 +713,7 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) struct iavf_hw *hw = &adapter->hw; int i; - /* Legacy Rx will always default to a 2048 buffer size. */ -#if (PAGE_SIZE < 8192) - if (!(adapter->flags & IAVF_FLAG_LEGACY_RX)) { + if (PAGE_SIZE < 8192) { struct net_device *netdev = adapter->netdev; /* For jumbo frames on systems with 4K pages we have to use @@ -732,16 +730,10 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) (netdev->mtu <= ETH_DATA_LEN)) rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; } -#endif for (i = 0; i < adapter->num_active_queues; i++) { adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i); adapter->rx_rings[i].rx_buf_len = rx_buf_len; - - if (adapter->flags & IAVF_FLAG_LEGACY_RX) - clear_ring_build_skb_enabled(&adapter->rx_rings[i]); - else - set_ring_build_skb_enabled(&adapter->rx_rings[i]); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index a83b96e9b6fc..a7121dc5c32b 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -824,17 +824,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) writel(val, rx_ring->tail); } -/** - * iavf_rx_offset - Return expected offset into page to access data - * @rx_ring: Ring we are requesting offset of - * - * Returns the offset value for ring into the data buffer. - */ -static inline unsigned int iavf_rx_offset(struct iavf_ring *rx_ring) -{ - return ring_uses_build_skb(rx_ring) ? 
IAVF_SKB_PAD : 0; -} - /** * iavf_alloc_mapped_page - recycle or make a new page * @rx_ring: ring to use @@ -879,7 +868,7 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, bi->dma = dma; bi->page = page; - bi->page_offset = iavf_rx_offset(rx_ring); + bi->page_offset = IAVF_SKB_PAD; /* initialize pagecnt_bias to 1 representing we fully own page */ bi->pagecnt_bias = 1; @@ -1220,7 +1209,7 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring, #if (PAGE_SIZE < 8192) unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; #else - unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring)); + unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); #endif if (!size) @@ -1268,71 +1257,6 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, return rx_buffer; } -/** - * iavf_construct_skb - Allocate skb and populate it - * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: rx buffer to pull data from - * @size: size of buffer to add to skb - * - * This function allocates an skb. It then populates it with the page - * data from the current receive descriptor, taking care to set up the - * skb correctly. - */ -static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer, - unsigned int size) -{ - void *va; -#if (PAGE_SIZE < 8192) - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(size); -#endif - unsigned int headlen; - struct sk_buff *skb; - - if (!rx_buffer) - return NULL; - /* prefetch first cache line of first page */ - va = page_address(rx_buffer->page) + rx_buffer->page_offset; - net_prefetch(va); - - /* allocate a skb to store the frags */ - skb = __napi_alloc_skb(&rx_ring->q_vector->napi, - IAVF_RX_HDR_SIZE, - GFP_ATOMIC | __GFP_NOWARN); - if (unlikely(!skb)) - return NULL; - - /* Determine available headroom for copy */ - headlen = size; - if (headlen > IAVF_RX_HDR_SIZE) - headlen = eth_get_headlen(skb->dev, va, IAVF_RX_HDR_SIZE); - - /* align pull length to size of long to optimize memcpy performance */ - memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); - - /* update all of the pointers */ - size -= headlen; - if (size) { - skb_add_rx_frag(skb, 0, rx_buffer->page, - rx_buffer->page_offset + headlen, - size, truesize); - - /* buffer is used by skb, update page_offset */ -#if (PAGE_SIZE < 8192) - rx_buffer->page_offset ^= truesize; -#else - rx_buffer->page_offset += truesize; -#endif - } else { - /* buffer is unused, reset bias back to rx_buffer */ - rx_buffer->pagecnt_bias++; - } - - return skb; -} - /** * iavf_build_skb - Build skb around an existing buffer * @rx_ring: Rx descriptor ring to transact packets on @@ -1505,10 +1429,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* retrieve a buffer from the ring */ if (skb) iavf_add_rx_frag(rx_ring, rx_buffer, skb, size); - else if (ring_uses_build_skb(rx_ring)) - skb = iavf_build_skb(rx_ring, rx_buffer, size); else - skb = iavf_construct_skb(rx_ring, rx_buffer, size); + skb = iavf_build_skb(rx_ring, rx_buffer, size); /* exit if we failed to retrieve a buffer */ if (!skb) { diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index 2624bf6d009e..234e189c1987 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -362,7 +362,8 @@ struct iavf_ring { u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) -#define 
IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1) +/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */ +/* BIT(2) is free */ #define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) #define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5) @@ -393,21 +394,6 @@ struct iavf_ring { */ } ____cacheline_internodealigned_in_smp; -static inline bool ring_uses_build_skb(struct iavf_ring *ring) -{ - return !!(ring->flags & IAVF_RXR_FLAGS_BUILD_SKB_ENABLED); -} - -static inline void set_ring_build_skb_enabled(struct iavf_ring *ring) -{ - ring->flags |= IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; -} - -static inline void clear_ring_build_skb_enabled(struct iavf_ring *ring) -{ - ring->flags &= ~IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; -} - #define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002 #define IAVF_ITR_ADAPTIVE_MIN_USECS 0x0002 #define IAVF_ITR_ADAPTIVE_MAX_USECS 0x007e diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 7c0578b5457b..fdddc3588487 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -290,8 +290,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter) return; /* Limit maximum frame size when jumbo frames is not enabled */ - if (!(adapter->flags & IAVF_FLAG_LEGACY_RX) && - (adapter->netdev->mtu <= ETH_DATA_LEN)) + if (adapter->netdev->mtu <= ETH_DATA_LEN) max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; vqci->vsi_id = adapter->vsi_res->vsi_id;

From patchwork Tue May 30 15:00:26 2023
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Date: Tue, 30 May 2023 17:00:26 +0200 Message-Id: <20230530150035.1943669-4-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com> References: <20230530150035.1943669-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Mailman-Original-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685459058; x=1716995058; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=I/RNAvtvrBB+AzR13KWnIbaQECJnCFb/ak6x4S/ksu0=; b=Ok/RmjkaofOcPNmGWIAtVDG34cJurjmrpEW5mDFzBdzn8xAK/fOBfIIZ qGktkb0hKIaUoWIplEvV6fnPT0VnKZHr3btGFPUjRLD0J50pcSu/VoySR xT7+iFHzM7svDu8blhSYhZwrxyPc1j9Eiu+Ffqtux9AKgtQBGnKJxIcVJ uXySupU2ZgE4k8TpS7XafsxbxaVv+OdILp//T+nGITjSb+ve72PjyKw4J rhnw+7C1lhMtNDxkKxNWsk/dVHZJaWvSLCaUtOVck9bOeFRZr0nfYW3Io 4qn8BXqhltDxatX+JN4+qx/9i5V0LvnYPIYeZQX3LC6YT1C7LWeUwCJXu Q==; X-Mailman-Original-Authentication-Results: smtp2.osuosl.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=Ok/Rmjka Subject: [Intel-wired-lan] [PATCH net-next v3 03/12] iavf: optimize Rx buffer allocation a bunch X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" The Rx hotpath code of IAVF is not well-optimized TBH. Before doing any further buffer model changes, shake it up a bit. Notably: 1. Cache more variables on the stack. DMA device, Rx page size, NTC -- these are the most common things used all throughout the hotpath, often in loops on each iteration. Instead of fetching (or even calculating, as with the page size) them from the ring all the time, cache them on the stack at the beginning of the NAPI polling callback. NTC will be written back at the end, the rest are used read-only, so no sync needed. 2. Don't move the recycled buffers around the ring. The idea of passing the page of the right-now-recycled-buffer to a different buffer, in this case, the first one that needs to be allocated, moreover, on each new frame, is fundamentally wrong. It involves a few o' fetches, branches and then writes (and one Rx buffer struct is at least 32 bytes) where they're completely unneeded, but gives no good -- the result is the same as if we'd recycle it inplace, at the same position where it was used. So drop this and let the main refilling function take care of all the buffers, which were processed and now need to be recycled/refilled. 3. Don't allocate with %GPF_ATOMIC on ifup. This involved introducing the @gfp parameter to a couple functions. Doesn't change anything for Rx -> softirq. 4. 1 budget unit == 1 descriptor, not skb. There could be underflow when receiving a lot of fragmented frames. If each of them would consist of 2 frags, it means that we'd process 64 descriptors at the point where we pass the 32th skb to the stack. But the driver would count that only as a half, which could make NAPI re-enable interrupts prematurely and create unnecessary CPU load. 5. Shortcut !size case. 
It's super rare, but possible -- for example, if the last buffer of the fragmented frame contained only FCS, which was then stripped by the HW. Instead of checking for size several times when processing, quickly reuse the buffer and jump to the skb fields part. 6. Refill the ring after finishing the polling loop. Previously, the loop wasn't starting a new iteration after the 64th desc, meaning that we were always leaving 16 buffers non-refilled until the next NAPI poll. It's better to refill them while they're still hot, so do that right after exiting the loop as well. For a full cycle of 64 descs, there will be 4 refills of 16 descs from now on. Function: add/remove: 4/2 grow/shrink: 0/5 up/down: 473/-647 (-174) + up to 2% performance. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 259 +++++++++----------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 3 +- 3 files changed, 114 insertions(+), 150 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index a5a6c9861a93..ade32aa1ed78 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1236,7 +1236,7 @@ static void iavf_configure(struct iavf_adapter *adapter) for (i = 0; i < adapter->num_active_queues; i++) { struct iavf_ring *ring = &adapter->rx_rings[i]; - iavf_alloc_rx_buffers(ring, IAVF_DESC_UNUSED(ring)); + iavf_alloc_rx_buffers(ring); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index a7121dc5c32b..fd08ce67380e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -736,7 +736,6 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) /* Zero out the descriptor ring */ memset(rx_ring->desc, 0, rx_ring->size); - rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; } @@ -792,7 +791,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) goto err; } - rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; @@ -812,9 +810,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) { rx_ring->next_to_use = val; - /* update next to alloc since we have filled the ring */ - rx_ring->next_to_alloc = val; - /* Force memory writes to complete before letting h/w * know there are new descriptors to fetch. (Only * applicable for weak-ordered memory model archs, @@ -828,12 +823,17 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) * iavf_alloc_mapped_page - recycle or make a new page * @rx_ring: ring to use * @bi: rx_buffer struct to modify + * @dev: device used for DMA mapping + * @order: page order to allocate + * @gfp: GFP mask to allocate page * * Returns true if the page was successfully allocated or * reused. 
**/ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *bi) + struct iavf_rx_buffer *bi, + struct device *dev, u32 order, + gfp_t gfp) { struct page *page = bi->page; dma_addr_t dma; @@ -845,23 +845,21 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, } /* alloc new page for storage */ - page = dev_alloc_pages(iavf_rx_pg_order(rx_ring)); + page = __dev_alloc_pages(gfp, order); if (unlikely(!page)) { rx_ring->rx_stats.alloc_page_failed++; return false; } /* map page for use */ - dma = dma_map_page_attrs(rx_ring->dev, page, 0, - iavf_rx_pg_size(rx_ring), - DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); + dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order, + DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); /* if mapping failed free memory back to system since * there isn't much point in holding memory we can't use */ - if (dma_mapping_error(rx_ring->dev, dma)) { - __free_pages(page, iavf_rx_pg_order(rx_ring)); + if (dma_mapping_error(dev, dma)) { + __free_pages(page, order); rx_ring->rx_stats.alloc_page_failed++; return false; } @@ -898,32 +896,36 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, } /** - * iavf_alloc_rx_buffers - Replace used receive buffers + * __iavf_alloc_rx_buffers - Replace used receive buffers * @rx_ring: ring to place buffers on - * @cleaned_count: number of buffers to replace + * @to_refill: number of buffers to replace + * @gfp: GFP mask to allocate pages * - * Returns false if all allocations were successful, true if any fail + * Returns 0 if all allocations were successful or the number of buffers left + * to refill in case of an allocation failure. **/ -bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count) +static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill, + gfp_t gfp) { - u16 ntu = rx_ring->next_to_use; + u32 order = iavf_rx_pg_order(rx_ring); + struct device *dev = rx_ring->dev; + u32 ntu = rx_ring->next_to_use; union iavf_rx_desc *rx_desc; struct iavf_rx_buffer *bi; /* do nothing if no valid netdev defined */ - if (!rx_ring->netdev || !cleaned_count) - return false; + if (unlikely(!rx_ring->netdev || !to_refill)) + return 0; rx_desc = IAVF_RX_DESC(rx_ring, ntu); bi = &rx_ring->rx_bi[ntu]; do { - if (!iavf_alloc_mapped_page(rx_ring, bi)) - goto no_buffers; + if (!iavf_alloc_mapped_page(rx_ring, bi, dev, order, gfp)) + break; /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(rx_ring->dev, bi->dma, - bi->page_offset, + dma_sync_single_range_for_device(dev, bi->dma, bi->page_offset, rx_ring->rx_buf_len, DMA_FROM_DEVICE); @@ -943,23 +945,17 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count) /* clear the status bits for the next_to_use descriptor */ rx_desc->wb.qword1.status_error_len = 0; - - cleaned_count--; - } while (cleaned_count); + } while (--to_refill); if (rx_ring->next_to_use != ntu) iavf_release_rx_desc(rx_ring, ntu); - return false; - -no_buffers: - if (rx_ring->next_to_use != ntu) - iavf_release_rx_desc(rx_ring, ntu); + return to_refill; +} - /* make sure to come back via polling to try again after - * allocation failure - */ - return true; +void iavf_alloc_rx_buffers(struct iavf_ring *rxr) +{ + __iavf_alloc_rx_buffers(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL); } /** @@ -1104,32 +1100,6 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb) return false; } -/** - * iavf_reuse_rx_page - page flip buffer and store it back on the ring - * @rx_ring: rx descriptor ring to store buffers on - 
* @old_buff: donor buffer to have page reused - * - * Synchronizes page for reuse by the adapter - **/ -static void iavf_reuse_rx_page(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *old_buff) -{ - struct iavf_rx_buffer *new_buff; - u16 nta = rx_ring->next_to_alloc; - - new_buff = &rx_ring->rx_bi[nta]; - - /* update, and store next to alloc */ - nta++; - rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0; - - /* transfer page from old buffer to new buffer */ - new_buff->dma = old_buff->dma; - new_buff->page = old_buff->page; - new_buff->page_offset = old_buff->page_offset; - new_buff->pagecnt_bias = old_buff->pagecnt_bias; -} - /** * iavf_can_reuse_rx_page - Determine if this page can be reused by * the adapter for another receive @@ -1191,30 +1161,26 @@ static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer) /** * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff - * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: buffer containing page to add * @skb: sk_buff to place the data into + * @rx_buffer: buffer containing page to add * @size: packet length from rx_desc + * @pg_size: Rx buffer page size * * This function will add the data contained in rx_buffer->page to the skb. * It will just attach the page as a frag to the skb. * * The function will then update the page offset. **/ -static void iavf_add_rx_frag(struct iavf_ring *rx_ring, +static void iavf_add_rx_frag(struct sk_buff *skb, struct iavf_rx_buffer *rx_buffer, - struct sk_buff *skb, - unsigned int size) + u32 size, u32 pg_size) { #if (PAGE_SIZE < 8192) - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; + unsigned int truesize = pg_size / 2; #else unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); #endif - if (!size) - return; - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page, rx_buffer->page_offset, size, truesize); @@ -1224,63 +1190,47 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring, #else rx_buffer->page_offset += truesize; #endif + + /* We have pulled a buffer for use, so decrement pagecnt_bias */ + rx_buffer->pagecnt_bias--; } /** - * iavf_get_rx_buffer - Fetch Rx buffer and synchronize data for use - * @rx_ring: rx descriptor ring to transact packets on - * @size: size of buffer to add to skb + * iavf_sync_rx_buffer - Synchronize received data for use + * @dev: device used for DMA mapping + * @buf: Rx buffer containing the data + * @size: size of the received data * - * This function will pull an Rx buffer from the ring and synchronize it - * for use by the CPU. + * This function will synchronize the Rx buffer for use by the CPU. 
*/ -static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, - const unsigned int size) +static void iavf_sync_rx_buffer(struct device *dev, struct iavf_rx_buffer *buf, + u32 size) { - struct iavf_rx_buffer *rx_buffer; - - rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; - prefetchw(rx_buffer->page); - if (!size) - return rx_buffer; - - /* we are reusing so sync this buffer for CPU use */ - dma_sync_single_range_for_cpu(rx_ring->dev, - rx_buffer->dma, - rx_buffer->page_offset, - size, + dma_sync_single_range_for_cpu(dev, buf->dma, buf->page_offset, size, DMA_FROM_DEVICE); - - /* We have pulled a buffer for use, so decrement pagecnt_bias */ - rx_buffer->pagecnt_bias--; - - return rx_buffer; } /** * iavf_build_skb - Build skb around an existing buffer - * @rx_ring: Rx descriptor ring to transact packets on - * @rx_buffer: Rx buffer to pull data from - * @size: size of buffer to add to skb + * @rx_buffer: Rx buffer with the data + * @size: size of the data + * @pg_size: size of the Rx page * * This function builds an skb around an existing Rx buffer, taking care * to set up the skb correctly and avoid any memcpy overhead. */ -static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer, - unsigned int size) +static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer, + u32 size, u32 pg_size) { void *va; #if (PAGE_SIZE < 8192) - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; + unsigned int truesize = pg_size / 2; #else unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + SKB_DATA_ALIGN(IAVF_SKB_PAD + size); #endif struct sk_buff *skb; - if (!rx_buffer || !size) - return NULL; /* prefetch first cache line of first page */ va = page_address(rx_buffer->page) + rx_buffer->page_offset; net_prefetch(va); @@ -1301,36 +1251,33 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring, rx_buffer->page_offset += truesize; #endif + rx_buffer->pagecnt_bias--; + return skb; } /** - * iavf_put_rx_buffer - Clean up used buffer and either recycle or free + * iavf_put_rx_buffer - Recycle or free used buffer * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: rx buffer to pull data from + * @dev: device used for DMA mapping + * @rx_buffer: Rx buffer to handle + * @pg_size: Rx page size * - * This function will clean up the contents of the rx_buffer. It will - * either recycle the buffer or unmap it and free the associated resources. + * Either recycle the buffer if possible or unmap and free the page. 
*/ -static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer) +static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, struct device *dev, + struct iavf_rx_buffer *rx_buffer, u32 pg_size) { - if (!rx_buffer) - return; - if (iavf_can_reuse_rx_page(rx_buffer)) { - /* hand second half of page back to the ring */ - iavf_reuse_rx_page(rx_ring, rx_buffer); rx_ring->rx_stats.page_reuse_count++; - } else { - /* we are not reusing the buffer so unmap it */ - dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, - iavf_rx_pg_size(rx_ring), - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - __page_frag_cache_drain(rx_buffer->page, - rx_buffer->pagecnt_bias); + return; } + /* we are not reusing the buffer so unmap it */ + dma_unmap_page_attrs(dev, rx_buffer->dma, pg_size, + DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); + __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); + /* clear contents of buffer_info */ rx_buffer->page = NULL; } @@ -1350,14 +1297,6 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring, union iavf_rx_desc *rx_desc, struct sk_buff *skb) { - u32 ntc = rx_ring->next_to_clean + 1; - - /* fetch, update, and store next to clean */ - ntc = (ntc < rx_ring->count) ? ntc : 0; - rx_ring->next_to_clean = ntc; - - prefetch(IAVF_RX_DESC(rx_ring, ntc)); - /* if we are the last buffer then there is nothing else to do */ #define IAVF_RXD_EOF BIT(IAVF_RX_DESC_STATUS_EOF_SHIFT) if (likely(iavf_test_staterr(rx_desc, IAVF_RXD_EOF))) @@ -1383,11 +1322,16 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring, static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) { unsigned int total_rx_bytes = 0, total_rx_packets = 0; + const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN; + u32 to_refill = IAVF_DESC_UNUSED(rx_ring); + u32 pg_size = iavf_rx_pg_size(rx_ring); struct sk_buff *skb = rx_ring->skb; - u16 cleaned_count = IAVF_DESC_UNUSED(rx_ring); - bool failure = false; + struct device *dev = rx_ring->dev; + u32 ntc = rx_ring->next_to_clean; + u32 ring_size = rx_ring->count; + u32 cleaned_count = 0; - while (likely(total_rx_packets < (unsigned int)budget)) { + while (likely(cleaned_count < budget)) { struct iavf_rx_buffer *rx_buffer; union iavf_rx_desc *rx_desc; unsigned int size; @@ -1396,13 +1340,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) u64 qword; /* return some buffers to hardware, one at a time is too slow */ - if (cleaned_count >= IAVF_RX_BUFFER_WRITE) { - failure = failure || - iavf_alloc_rx_buffers(rx_ring, cleaned_count); - cleaned_count = 0; - } + if (to_refill >= IAVF_RX_BUFFER_WRITE) + to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, + gfp); - rx_desc = IAVF_RX_DESC(rx_ring, rx_ring->next_to_clean); + rx_desc = IAVF_RX_DESC(rx_ring, ntc); /* status_error_len will always be zero for unused descriptors * because it's cleared in cleanup, and overlaps with hdr_addr @@ -1424,24 +1366,38 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) IAVF_RXD_QW1_LENGTH_PBUF_SHIFT; iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb); - rx_buffer = iavf_get_rx_buffer(rx_ring, size); + rx_buffer = &rx_ring->rx_bi[ntc]; + + /* Very rare, but possible case. The most common reason: + * the last fragment contained FCS only, which was then + * stripped by the HW. 
+ */ + if (unlikely(!size)) + goto skip_data; + + iavf_sync_rx_buffer(dev, rx_buffer, size); /* retrieve a buffer from the ring */ if (skb) - iavf_add_rx_frag(rx_ring, rx_buffer, skb, size); + iavf_add_rx_frag(skb, rx_buffer, size, pg_size); else - skb = iavf_build_skb(rx_ring, rx_buffer, size); + skb = iavf_build_skb(rx_buffer, size, pg_size); /* exit if we failed to retrieve a buffer */ if (!skb) { rx_ring->rx_stats.alloc_buff_failed++; - if (rx_buffer && size) - rx_buffer->pagecnt_bias++; break; } - iavf_put_rx_buffer(rx_ring, rx_buffer); +skip_data: + iavf_put_rx_buffer(rx_ring, dev, rx_buffer, pg_size); + cleaned_count++; + to_refill++; + if (unlikely(++ntc == ring_size)) + ntc = 0; + + prefetch(IAVF_RX_DESC(rx_ring, ntc)); if (iavf_is_non_eop(rx_ring, rx_desc, skb)) continue; @@ -1488,8 +1444,18 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) total_rx_packets++; } + rx_ring->next_to_clean = ntc; rx_ring->skb = skb; + if (to_refill >= IAVF_RX_BUFFER_WRITE) { + to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, gfp); + /* guarantee a trip back through this routine if there was + * a failure + */ + if (unlikely(to_refill)) + cleaned_count = budget; + } + u64_stats_update_begin(&rx_ring->syncp); rx_ring->stats.packets += total_rx_packets; rx_ring->stats.bytes += total_rx_bytes; @@ -1497,8 +1463,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) rx_ring->q_vector->rx.total_packets += total_rx_packets; rx_ring->q_vector->rx.total_bytes += total_rx_bytes; - /* guarantee a trip back through this routine if there was a failure */ - return failure ? budget : (int)total_rx_packets; + return cleaned_count; } static inline u32 iavf_buildreg_itr(const int type, u16 itr) diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index 234e189c1987..9c6661a6edf2 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -383,7 +383,6 @@ struct iavf_ring { struct iavf_q_vector *q_vector; /* Backreference to associated vector */ struct rcu_head rcu; /* to avoid race on free */ - u16 next_to_alloc; struct sk_buff *skb; /* When iavf_clean_rx_ring_irq() must * return before it sees the EOP for * the current packet, we save that skb @@ -426,7 +425,7 @@ static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring) #define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring)) -bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count); +void iavf_alloc_rx_buffers(struct iavf_ring *rxr); netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev); void iavf_clean_tx_ring(struct iavf_ring *tx_ring); void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
From patchwork Tue May 30 15:00:27 2023
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Date: Tue, 30 May 2023 17:00:27 +0200 Message-Id: <20230530150035.1943669-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com> References: <20230530150035.1943669-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Mailman-Original-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685459062; x=1716995062; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=QfdoGFlJ8yMDkA8/gAo9Nh/W8B7hE6+yXQ8lf/2FX3U=; b=n+IGZHrt+bJyFZoPTeFSuEyb38+nNF7CzAoYEcil9RxmYTvipbtBJYUk qXl21hWeHvC4V/HWzL1ME5g94mr/BngBYRX1CiVzeBM5gblfkIpcXqKze egWO0W1Ko8rB/U7okeHSDFs1PmtWNuuhcZMcUW5cq9b9BoD+p38p6KQZx bSrUsKM040ZD1IHjJWzRrsXzYd6bHzCgm3ZBMUCW3jh3Ox1jh9Tf6hj4u WP9xLc6WrPDZgiHqBl6oD4adu1MFK57b1o3oRJWFRUXeB3ZmIS6JOics1 cAao/dkEQk7SAAQU2Mh6ZldCMyApNo/LXa1nnQ84vxplefpGbdLpsfMDw Q==; X-Mailman-Original-Authentication-Results: smtp2.osuosl.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=n+IGZHrt Subject: [Intel-wired-lan] [PATCH net-next v3 04/12] iavf: remove page splitting/recycling X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" As an intermediate step, remove all page splitting/recyclig code. Just always allocate a new page and don't touch its refcount, so that it gets freed by the core stack later. The change allows to greatly simplify certain parts of the code: Function: add/remove: 2/3 grow/shrink: 0/5 up/down: 543/-963 (-420) &iavf_rx_buf can even now retire in favor of just storing an array of pages used for Rx. Their DMA addresses can be stored in page::dma_addr -- use Page Pool's function for that. No surprise perf loses up to 30% here, but that regression will go away once PP lands. 
Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 280 ++++++-------------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 17 +- 3 files changed, 88 insertions(+), 211 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index ade32aa1ed78..2d00be69fcde 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1236,7 +1236,7 @@ static void iavf_configure(struct iavf_adapter *adapter) for (i = 0; i < adapter->num_active_queues; i++) { struct iavf_ring *ring = &adapter->rx_rings[i]; - iavf_alloc_rx_buffers(ring); + iavf_alloc_rx_pages(ring); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index fd08ce67380e..10aaa18e467c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -3,6 +3,7 @@ #include #include +#include <net/page_pool.h> #include "iavf.h" #include "iavf_trace.h" @@ -690,11 +691,10 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring) **/ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) { - unsigned long bi_size; u16 i; /* ring already cleared, nothing to do */ - if (!rx_ring->rx_bi) + if (!rx_ring->rx_pages) return; if (rx_ring->skb) { @@ -704,38 +704,30 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) /* Free all the Rx ring sk_buffs */ for (i = 0; i < rx_ring->count; i++) { - struct iavf_rx_buffer *rx_bi = &rx_ring->rx_bi[i]; + struct page *page = rx_ring->rx_pages[i]; + dma_addr_t dma; - if (!rx_bi->page) + if (!page) continue; + dma = page_pool_get_dma_addr(page); + /* Invalidate cache lines that may have been written to by * device so that we avoid corrupting memory.
*/ - dma_sync_single_range_for_cpu(rx_ring->dev, - rx_bi->dma, - rx_bi->page_offset, + dma_sync_single_range_for_cpu(rx_ring->dev, dma, IAVF_SKB_PAD, rx_ring->rx_buf_len, DMA_FROM_DEVICE); /* free resources associated with mapping */ - dma_unmap_page_attrs(rx_ring->dev, rx_bi->dma, + dma_unmap_page_attrs(rx_ring->dev, dma, iavf_rx_pg_size(rx_ring), DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - __page_frag_cache_drain(rx_bi->page, rx_bi->pagecnt_bias); - - rx_bi->page = NULL; - rx_bi->page_offset = 0; + __free_pages(page, iavf_rx_pg_order(rx_ring)); } - bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count; - memset(rx_ring->rx_bi, 0, bi_size); - - /* Zero out the descriptor ring */ - memset(rx_ring->desc, 0, rx_ring->size); - rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; @@ -749,8 +741,8 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) void iavf_free_rx_resources(struct iavf_ring *rx_ring) { iavf_clean_rx_ring(rx_ring); - kfree(rx_ring->rx_bi); - rx_ring->rx_bi = NULL; + kfree(rx_ring->rx_pages); + rx_ring->rx_pages = NULL; if (rx_ring->desc) { dma_free_coherent(rx_ring->dev, rx_ring->size, @@ -768,14 +760,13 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring) int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) { struct device *dev = rx_ring->dev; - int bi_size; /* warn if we are about to overwrite the pointer */ - WARN_ON(rx_ring->rx_bi); - bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count; - rx_ring->rx_bi = kzalloc(bi_size, GFP_KERNEL); - if (!rx_ring->rx_bi) - goto err; + WARN_ON(rx_ring->rx_pages); + rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages), + GFP_KERNEL); + if (!rx_ring->rx_pages) + return -ENOMEM; u64_stats_init(&rx_ring->syncp); @@ -796,8 +787,9 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) return 0; err: - kfree(rx_ring->rx_bi); - rx_ring->rx_bi = NULL; + kfree(rx_ring->rx_pages); + rx_ring->rx_pages = NULL; + return -ENOMEM; } @@ -820,36 +812,23 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) } /** - * iavf_alloc_mapped_page - recycle or make a new page - * @rx_ring: ring to use - * @bi: rx_buffer struct to modify + * iavf_alloc_mapped_page - allocate and map a new page * @dev: device used for DMA mapping * @order: page order to allocate * @gfp: GFP mask to allocate page * - * Returns true if the page was successfully allocated or - * reused. + * Returns a new &page if it was successfully allocated, %NULL otherwise.
**/ -static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *bi, - struct device *dev, u32 order, - gfp_t gfp) +static struct page *iavf_alloc_mapped_page(struct device *dev, u32 order, + gfp_t gfp) { - struct page *page = bi->page; + struct page *page; dma_addr_t dma; - /* since we are recycling buffers we should seldom need to alloc */ - if (likely(page)) { - rx_ring->rx_stats.page_reuse_count++; - return true; - } - /* alloc new page for storage */ page = __dev_alloc_pages(gfp, order); - if (unlikely(!page)) { - rx_ring->rx_stats.alloc_page_failed++; - return false; - } + if (unlikely(!page)) + return NULL; /* map page for use */ dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order, @@ -860,18 +839,12 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, */ if (dma_mapping_error(dev, dma)) { __free_pages(page, order); - rx_ring->rx_stats.alloc_page_failed++; - return false; + return NULL; } - bi->dma = dma; - bi->page = page; - bi->page_offset = IAVF_SKB_PAD; - - /* initialize pagecnt_bias to 1 representing we fully own page */ - bi->pagecnt_bias = 1; + page_pool_set_dma_addr(page, dma); - return true; + return page; } /** @@ -896,7 +869,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, } /** - * __iavf_alloc_rx_buffers - Replace used receive buffers + * __iavf_alloc_rx_pages - Replace used receive pages * @rx_ring: ring to place buffers on * @to_refill: number of buffers to replace * @gfp: GFP mask to allocate pages @@ -904,42 +877,47 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, * Returns 0 if all allocations were successful or the number of buffers left * to refill in case of an allocation failure. **/ -static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill, - gfp_t gfp) +static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, + gfp_t gfp) { u32 order = iavf_rx_pg_order(rx_ring); struct device *dev = rx_ring->dev; u32 ntu = rx_ring->next_to_use; union iavf_rx_desc *rx_desc; - struct iavf_rx_buffer *bi; /* do nothing if no valid netdev defined */ if (unlikely(!rx_ring->netdev || !to_refill)) return 0; rx_desc = IAVF_RX_DESC(rx_ring, ntu); - bi = &rx_ring->rx_bi[ntu]; do { - if (!iavf_alloc_mapped_page(rx_ring, bi, dev, order, gfp)) + struct page *page; + dma_addr_t dma; + + page = iavf_alloc_mapped_page(dev, order, gfp); + if (!page) { + rx_ring->rx_stats.alloc_page_failed++; break; + } + + rx_ring->rx_pages[ntu] = page; + dma = page_pool_get_dma_addr(page); /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(dev, bi->dma, bi->page_offset, + dma_sync_single_range_for_device(dev, dma, IAVF_SKB_PAD, rx_ring->rx_buf_len, DMA_FROM_DEVICE); /* Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. 
*/ - rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset); + rx_desc->read.pkt_addr = cpu_to_le64(dma + IAVF_SKB_PAD); rx_desc++; - bi++; ntu++; if (unlikely(ntu == rx_ring->count)) { rx_desc = IAVF_RX_DESC(rx_ring, 0); - bi = rx_ring->rx_bi; ntu = 0; } @@ -953,9 +931,9 @@ static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill, return to_refill; } -void iavf_alloc_rx_buffers(struct iavf_ring *rxr) +void iavf_alloc_rx_pages(struct iavf_ring *rxr) { - __iavf_alloc_rx_buffers(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL); + __iavf_alloc_rx_pages(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL); } /** @@ -1100,80 +1078,20 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb) return false; } -/** - * iavf_can_reuse_rx_page - Determine if this page can be reused by - * the adapter for another receive - * - * @rx_buffer: buffer containing the page - * - * If page is reusable, rx_buffer->page_offset is adjusted to point to - * an unused region in the page. - * - * For small pages, @truesize will be a constant value, half the size - * of the memory at page. We'll attempt to alternate between high and - * low halves of the page, with one half ready for use by the hardware - * and the other half being consumed by the stack. We use the page - * ref count to determine whether the stack has finished consuming the - * portion of this page that was passed up with a previous packet. If - * the page ref count is >1, we'll assume the "other" half page is - * still busy, and this page cannot be reused. - * - * For larger pages, @truesize will be the actual space used by the - * received packet (adjusted upward to an even multiple of the cache - * line size). This will advance through the page by the amount - * actually consumed by the received packets while there is still - * space for a buffer. Each region of larger pages will be used at - * most once, after which the page will not be reused. - * - * In either case, if the page is reusable its refcount is increased. - **/ -static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer) -{ - unsigned int pagecnt_bias = rx_buffer->pagecnt_bias; - struct page *page = rx_buffer->page; - - /* Is any reuse possible? */ - if (!dev_page_is_reusable(page)) - return false; - -#if (PAGE_SIZE < 8192) - /* if we are only owner of page we can reuse it */ - if (unlikely((page_count(page) - pagecnt_bias) > 1)) - return false; -#else -#define IAVF_LAST_OFFSET \ - (SKB_WITH_OVERHEAD(PAGE_SIZE) - IAVF_RXBUFFER_2048) - if (rx_buffer->page_offset > IAVF_LAST_OFFSET) - return false; -#endif - - /* If we have drained the page fragment pool we need to update - * the pagecnt_bias and page count so that we fully restock the - * number of references the driver holds. - */ - if (unlikely(!pagecnt_bias)) { - page_ref_add(page, USHRT_MAX); - rx_buffer->pagecnt_bias = USHRT_MAX; - } - - return true; -} - /** * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff * @skb: sk_buff to place the data into - * @rx_buffer: buffer containing page to add + * @page: page containing data to add * @size: packet length from rx_desc * @pg_size: Rx buffer page size * - * This function will add the data contained in rx_buffer->page to the skb. + * This function will add the data contained in page to the skb. * It will just attach the page as a frag to the skb. * * The function will then update the page offset. 
**/ -static void iavf_add_rx_frag(struct sk_buff *skb, - struct iavf_rx_buffer *rx_buffer, - u32 size, u32 pg_size) +static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size, + u32 pg_size) { #if (PAGE_SIZE < 8192) unsigned int truesize = pg_size / 2; @@ -1181,46 +1099,34 @@ static void iavf_add_rx_frag(struct sk_buff *skb, unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); #endif - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page, - rx_buffer->page_offset, size, truesize); - - /* page is being used so we must update the page offset */ -#if (PAGE_SIZE < 8192) - rx_buffer->page_offset ^= truesize; -#else - rx_buffer->page_offset += truesize; -#endif - - /* We have pulled a buffer for use, so decrement pagecnt_bias */ - rx_buffer->pagecnt_bias--; + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, IAVF_SKB_PAD, + size, truesize); } /** - * iavf_sync_rx_buffer - Synchronize received data for use + * iavf_sync_rx_page - Synchronize received data for use * @dev: device used for DMA mapping - * @buf: Rx buffer containing the data + * @page: Rx page containing the data * @size: size of the received data * * This function will synchronize the Rx buffer for use by the CPU. */ -static void iavf_sync_rx_buffer(struct device *dev, struct iavf_rx_buffer *buf, - u32 size) +static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size) { - dma_sync_single_range_for_cpu(dev, buf->dma, buf->page_offset, size, - DMA_FROM_DEVICE); + dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page), + IAVF_SKB_PAD, size, DMA_FROM_DEVICE); } /** * iavf_build_skb - Build skb around an existing buffer - * @rx_buffer: Rx buffer with the data + * @page: Rx page with the data * @size: size of the data * @pg_size: size of the Rx page * * This function builds an skb around an existing Rx buffer, taking care * to set up the skb correctly and avoid any memcpy overhead. */ -static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer, - u32 size, u32 pg_size) +static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size) { void *va; #if (PAGE_SIZE < 8192) @@ -1232,11 +1138,11 @@ static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer, struct sk_buff *skb; /* prefetch first cache line of first page */ - va = page_address(rx_buffer->page) + rx_buffer->page_offset; - net_prefetch(va); + va = page_address(page); + net_prefetch(va + IAVF_SKB_PAD); /* build an skb around the page buffer */ - skb = napi_build_skb(va - IAVF_SKB_PAD, truesize); + skb = napi_build_skb(va, truesize); if (unlikely(!skb)) return NULL; @@ -1244,42 +1150,21 @@ static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer, skb_reserve(skb, IAVF_SKB_PAD); __skb_put(skb, size); - /* buffer is used by skb, update page_offset */ -#if (PAGE_SIZE < 8192) - rx_buffer->page_offset ^= truesize; -#else - rx_buffer->page_offset += truesize; -#endif - - rx_buffer->pagecnt_bias--; - return skb; } /** - * iavf_put_rx_buffer - Recycle or free used buffer - * @rx_ring: rx descriptor ring to transact packets on + * iavf_unmap_rx_page - Unmap used page * @dev: device used for DMA mapping - * @rx_buffer: Rx buffer to handle + * @page: page to release * @pg_size: Rx page size - * - * Either recycle the buffer if possible or unmap and free the page.
*/ -static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, struct device *dev, - struct iavf_rx_buffer *rx_buffer, u32 pg_size) +static void iavf_unmap_rx_page(struct device *dev, struct page *page, + u32 pg_size) { - if (iavf_can_reuse_rx_page(rx_buffer)) { - rx_ring->rx_stats.page_reuse_count++; - return; - } - - /* we are not reusing the buffer so unmap it */ - dma_unmap_page_attrs(dev, rx_buffer->dma, pg_size, + dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), pg_size, DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); - - /* clear contents of buffer_info */ - rx_buffer->page = NULL; + page_pool_set_dma_addr(page, 0); } /** @@ -1332,8 +1217,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) u32 cleaned_count = 0; while (likely(cleaned_count < budget)) { - struct iavf_rx_buffer *rx_buffer; union iavf_rx_desc *rx_desc; + struct page *page; unsigned int size; u16 vlan_tag = 0; u8 rx_ptype; @@ -1341,8 +1226,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* return some buffers to hardware, one at a time is too slow */ if (to_refill >= IAVF_RX_BUFFER_WRITE) - to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, - gfp); + to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill, + gfp); rx_desc = IAVF_RX_DESC(rx_ring, ntc); @@ -1366,32 +1251,37 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) IAVF_RXD_QW1_LENGTH_PBUF_SHIFT; iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb); - rx_buffer = &rx_ring->rx_bi[ntc]; + + page = rx_ring->rx_pages[ntc]; + rx_ring->rx_pages[ntc] = NULL; /* Very rare, but possible case. The most common reason: * the last fragment contained FCS only, which was then * stripped by the HW. */ - if (unlikely(!size)) + if (unlikely(!size)) { + iavf_unmap_rx_page(dev, page, pg_size); + __free_pages(page, get_order(pg_size)); goto skip_data; + } - iavf_sync_rx_buffer(dev, rx_buffer, size); + iavf_sync_rx_page(dev, page, size); + iavf_unmap_rx_page(dev, page, pg_size); /* retrieve a buffer from the ring */ if (skb) - iavf_add_rx_frag(skb, rx_buffer, size, pg_size); + iavf_add_rx_frag(skb, page, size, pg_size); else - skb = iavf_build_skb(rx_buffer, size, pg_size); + skb = iavf_build_skb(page, size, pg_size); /* exit if we failed to retrieve a buffer */ if (!skb) { + __free_pages(page, get_order(pg_size)); rx_ring->rx_stats.alloc_buff_failed++; break; } skip_data: - iavf_put_rx_buffer(rx_ring, dev, rx_buffer, pg_size); - cleaned_count++; to_refill++; if (unlikely(++ntc == ring_size)) @@ -1448,7 +1338,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) rx_ring->skb = skb; if (to_refill >= IAVF_RX_BUFFER_WRITE) { - to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, gfp); + to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill, gfp); /* guarantee a trip back through this routine if there was * a failure */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index 9c6661a6edf2..c09ac580fe84 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -272,17 +272,6 @@ struct iavf_tx_buffer { u32 tx_flags; }; -struct iavf_rx_buffer { - dma_addr_t dma; - struct page *page; -#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536) - __u32 page_offset; -#else - __u16 page_offset; -#endif - __u16 pagecnt_bias; -}; - struct iavf_queue_stats { u64 packets; u64 bytes; @@ -302,8 +291,6 @@ struct iavf_rx_queue_stats { u64 non_eop_descs; u64 
alloc_page_failed; u64 alloc_buff_failed; - u64 page_reuse_count; - u64 realloc_count; }; enum iavf_ring_state_t { @@ -331,7 +318,7 @@ struct iavf_ring { struct net_device *netdev; /* netdev ring maps to */ union { struct iavf_tx_buffer *tx_bi; - struct iavf_rx_buffer *rx_bi; + struct page **rx_pages; }; DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS); u16 queue_index; /* Queue number of ring */ @@ -425,7 +412,7 @@ static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring) #define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring)) -void iavf_alloc_rx_buffers(struct iavf_ring *rxr); +void iavf_alloc_rx_pages(struct iavf_ring *rxr); netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev); void iavf_clean_tx_ring(struct iavf_ring *tx_ring); void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
From patchwork Tue May 30 15:00:28 2023
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Paul Menzel, Jesper Dangaard Brouer, Larysa Zaremba, netdev@vger.kernel.org, Ilias Apalodimas, linux-kernel@vger.kernel.org, Michal Kubiak, intel-wired-lan@lists.osuosl.org, Christoph Hellwig, Magnus Karlsson
Date: Tue, 30 May 2023 17:00:28 +0200
Message-Id: <20230530150035.1943669-6-aleksander.lobakin@intel.com>
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v3 05/12] iavf: always use a full order-0 page

The current scheme of trying to pick the smallest buffer possible for the current MTU in order to flip/split pages is not very optimal. For example, on the default MTU of 1500 it gives only 192 bytes of headroom, while XDP may require up to 258. It also involves unnecessary code complication, which is sometimes hard to follow. As page split is no more, always allocate order-0 pages. This optimizes performance a bit and drops some bytes off the object code.
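The headroom and sizing figures in this message can be cross-checked with some back-of-the-envelope arithmetic (a sketch assuming x86_64 defaults, where NET_SKB_PAD + NET_IP_ALIGN = 64 and SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) = 320; other arches/configs will differ):

/*
 * Old scheme, 2K half-page buffer carrying a 1536-byte data area:
 *   2048 - 1536 - 320 (skb_shared_info) = 192 bytes of headroom,
 *   short of XDP_PACKET_HEADROOM (256) + NET_IP_ALIGN (2) = 258.
 *
 * New scheme, one full order-0 page:
 *   4096 - 64 (headroom) - 320 (skb_shared_info tailroom) = 3712,
 *   the HW buffer size (already a multiple of the required 128);
 *   3712 - 26 (ETH_HLEN 14 + two VLAN tags 8 + ETH_FCS_LEN 4)
 *        = 3686, the max MTU w/o fragments.
 *
 * With S/G, the HW frame limit is 16382 bytes; minus the same 26 bytes
 * of L2 overhead, that yields the MTU of 16356 quoted below.
 */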
Next, always pick the maximum buffer length available for this %PAGE_SIZE to set it up in the hardware. This means it now becomes a constant value, which also has a positive impact. On x86_64 this means (without XDP):

4096 page
  64 head, 320 tail
3712 HW buffer size
3686 max MTU w/o frags

Previously, the maximum MTU w/o splitting a frame into several buffers was 3046. The increased buffer size allows us to reach the maximum frame size w/ frags supported by HW: 16382 bytes (MTU 16356). Reflect it in the netdev config as well. Relying on the max single buffer size when calculating MTU was not correct. Move around a couple of fields in &iavf_ring after the ::rx_buf_len removal to reduce holes and improve cache locality. Instead of providing the Rx definitions exclusively for IAVF, do that in the libie header, as they can and will be reused by the rest of the drivers. Non-PP drivers could still use at least some of them and drop a couple of copied lines.

Function: add/remove: 0/0 grow/shrink: 3/9 up/down: 18/-265 (-247)

+ it even reclaims half a percent of performance, nice.

Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/iavf/iavf_main.c | 32 +----- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 96 +++++++---------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 100 +----------------- drivers/net/ethernet/intel/iavf/iavf_type.h | 2 - .../net/ethernet/intel/iavf/iavf_virtchnl.c | 15 +-- include/linux/net/intel/libie/rx.h | 35 ++++++ 6 files changed, 85 insertions(+), 195 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 2d00be69fcde..120bb6a09ceb 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include <linux/net/intel/libie/rx.h> + #include "iavf.h" #include "iavf_prototype.h" #include "iavf_client.h" @@ -709,32 +711,10 @@ static void iavf_configure_tx(struct iavf_adapter *adapter) **/ static void iavf_configure_rx(struct iavf_adapter *adapter) { - unsigned int rx_buf_len = IAVF_RXBUFFER_2048; struct iavf_hw *hw = &adapter->hw; - int i; - - if (PAGE_SIZE < 8192) { - struct net_device *netdev = adapter->netdev; - /* For jumbo frames on systems with 4K pages we have to use - * an order 1 page, so we might as well increase the size - * of our Rx buffer to make better use of the available space - */ - rx_buf_len = IAVF_RXBUFFER_3072; - - /* We use a 1536 buffer size for configurations with - * standard Ethernet mtu. On x86 this gives us enough room - * for shared info and 192 bytes of padding.
- */ - if (!IAVF_2K_TOO_SMALL_WITH_PADDING && - (netdev->mtu <= ETH_DATA_LEN)) - rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; - } - - for (i = 0; i < adapter->num_active_queues; i++) { + for (u32 i = 0; i < adapter->num_active_queues; i++) adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i); - adapter->rx_rings[i].rx_buf_len = rx_buf_len; - } } /** @@ -2577,11 +2557,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) netdev->netdev_ops = &iavf_netdev_ops; iavf_set_ethtool_ops(netdev); - netdev->watchdog_timeo = 5 * HZ; - - /* MTU range: 68 - 9710 */ - netdev->min_mtu = ETH_MIN_MTU; - netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD; + netdev->max_mtu = LIBIE_MAX_MTU; if (!is_valid_ether_addr(adapter->hw.mac.addr)) { dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n", diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 10aaa18e467c..c33a3d681c83 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -302,7 +302,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi, ((j / WB_STRIDE) == 0) && (j > 0) && !test_bit(__IAVF_VSI_DOWN, vsi->state) && (IAVF_DESC_UNUSED(tx_ring) != tx_ring->count)) - tx_ring->arm_wb = true; + tx_ring->flags |= IAVF_TXRX_FLAGS_ARM_WB; } /* notify netdev of completed buffers */ @@ -715,17 +715,16 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) /* Invalidate cache lines that may have been written to by * device so that we avoid corrupting memory. */ - dma_sync_single_range_for_cpu(rx_ring->dev, dma, IAVF_SKB_PAD, - rx_ring->rx_buf_len, + dma_sync_single_range_for_cpu(rx_ring->dev, dma, + LIBIE_SKB_HEADROOM, + LIBIE_RX_BUF_LEN, DMA_FROM_DEVICE); /* free resources associated with mapping */ - dma_unmap_page_attrs(rx_ring->dev, dma, - iavf_rx_pg_size(rx_ring), - DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); + dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE, + DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - __free_pages(page, iavf_rx_pg_order(rx_ring)); + __free_page(page); } rx_ring->next_to_clean = 0; @@ -814,31 +813,29 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) /** * iavf_alloc_mapped_page - allocate and map a new page * @dev: device used for DMA mapping - * @order: page order to allocate * @gfp: GFP mask to allocate page * * Returns a new &page if it was successfully allocated, %NULL otherwise.
**/ -static struct page *iavf_alloc_mapped_page(struct device *dev, u32 order, - gfp_t gfp) +static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp) { struct page *page; dma_addr_t dma; /* alloc new page for storage */ - page = __dev_alloc_pages(gfp, order); + page = __dev_alloc_page(gfp); if (unlikely(!page)) return NULL; /* map page for use */ - dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order, - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); + dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE, + IAVF_RX_DMA_ATTR); /* if mapping failed free memory back to system since * there isn't much point in holding memory we can't use */ if (dma_mapping_error(dev, dma)) { - __free_pages(page, order); + __free_page(page); return NULL; } @@ -880,7 +877,6 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, gfp_t gfp) { - u32 order = iavf_rx_pg_order(rx_ring); struct device *dev = rx_ring->dev; u32 ntu = rx_ring->next_to_use; union iavf_rx_desc *rx_desc; @@ -895,7 +891,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, struct page *page; dma_addr_t dma; - page = iavf_alloc_mapped_page(dev, order, gfp); + page = iavf_alloc_mapped_page(dev, gfp); if (!page) { rx_ring->rx_stats.alloc_page_failed++; break; @@ -905,14 +901,14 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, dma = page_pool_get_dma_addr(page); /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(dev, dma, IAVF_SKB_PAD, - rx_ring->rx_buf_len, + dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM, + LIBIE_RX_BUF_LEN, DMA_FROM_DEVICE); /* Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. */ - rx_desc->read.pkt_addr = cpu_to_le64(dma + IAVF_SKB_PAD); + rx_desc->read.pkt_addr = cpu_to_le64(dma + LIBIE_SKB_HEADROOM); rx_desc++; ntu++; @@ -1083,24 +1079,16 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb) * @skb: sk_buff to place the data into * @page: page containing data to add * @size: packet length from rx_desc - * @pg_size: Rx buffer page size * * This function will add the data contained in page to the skb. * It will just attach the page as a frag to the skb. * * The function will then update the page offset. **/ -static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size, - u32 pg_size) +static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size) { -#if (PAGE_SIZE < 8192) - unsigned int truesize = pg_size / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); -#endif - - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, IAVF_SKB_PAD, - size, truesize); + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, + LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE); } /** @@ -1114,40 +1102,34 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size, static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size) { dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page), - IAVF_SKB_PAD, size, DMA_FROM_DEVICE); + LIBIE_SKB_HEADROOM, size, + DMA_FROM_DEVICE); } /** * iavf_build_skb - Build skb around an existing buffer * @page: Rx page with the data * @size: size of the data - * @pg_size: size of the Rx page * * This function builds an skb around an existing Rx buffer, taking care * to set up the skb correctly and avoid any memcpy overhead.
*/ -static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size) +static struct sk_buff *iavf_build_skb(struct page *page, u32 size) { - void *va; -#if (PAGE_SIZE < 8192) - unsigned int truesize = pg_size / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + - SKB_DATA_ALIGN(IAVF_SKB_PAD + size); -#endif struct sk_buff *skb; + void *va; /* prefetch first cache line of first page */ va = page_address(page); - net_prefetch(va + IAVF_SKB_PAD); + net_prefetch(va + LIBIE_SKB_HEADROOM); /* build an skb around the page buffer */ - skb = napi_build_skb(va, truesize); + skb = napi_build_skb(va, LIBIE_RX_TRUESIZE); if (unlikely(!skb)) return NULL; /* update pointers within the skb to store the data */ - skb_reserve(skb, IAVF_SKB_PAD); + skb_reserve(skb, LIBIE_SKB_HEADROOM); __skb_put(skb, size); return skb; @@ -1157,13 +1139,12 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size) * iavf_unmap_rx_page - Unmap used page * @dev: device used for DMA mapping * @page: page to release - * @pg_size: Rx page size */ -static void iavf_unmap_rx_page(struct device *dev, struct page *page, - u32 pg_size) +static void iavf_unmap_rx_page(struct device *dev, struct page *page) { - dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), pg_size, - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); + dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), + LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE, + IAVF_RX_DMA_ATTR); page_pool_set_dma_addr(page, 0); } @@ -1209,7 +1190,6 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) unsigned int total_rx_bytes = 0, total_rx_packets = 0; const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN; u32 to_refill = IAVF_DESC_UNUSED(rx_ring); - u32 pg_size = iavf_rx_pg_size(rx_ring); struct sk_buff *skb = rx_ring->skb; struct device *dev = rx_ring->dev; u32 ntc = rx_ring->next_to_clean; @@ -1260,23 +1240,23 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) * stripped by the HW. 
*/ if (unlikely(!size)) { - iavf_unmap_rx_page(dev, page, pg_size); - __free_pages(page, get_order(pg_size)); + iavf_unmap_rx_page(dev, page); + __free_page(page); goto skip_data; } iavf_sync_rx_page(dev, page, size); - iavf_unmap_rx_page(dev, page, pg_size); + iavf_unmap_rx_page(dev, page); /* retrieve a buffer from the ring */ if (skb) - iavf_add_rx_frag(skb, page, size, pg_size); + iavf_add_rx_frag(skb, page, size); else - skb = iavf_build_skb(page, size, pg_size); + skb = iavf_build_skb(page, size); /* exit if we failed to retrieve a buffer */ if (!skb) { - __free_pages(page, get_order(pg_size)); + __free_page(page); rx_ring->rx_stats.alloc_buff_failed++; break; } @@ -1486,8 +1466,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget) clean_complete = false; continue; } - arm_wb |= ring->arm_wb; - ring->arm_wb = false; + arm_wb |= !!(ring->flags & IAVF_TXRX_FLAGS_ARM_WB); + ring->flags &= ~IAVF_TXRX_FLAGS_ARM_WB; } /* Handle case where we are called by netpoll with a budget of 0 */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index c09ac580fe84..1421e90c7c4e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -81,79 +81,11 @@ enum iavf_dyn_idx_t { BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \ BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP)) -/* Supported Rx Buffer Sizes (a multiple of 128) */ -#define IAVF_RXBUFFER_256 256 -#define IAVF_RXBUFFER_1536 1536 /* 128B aligned standard Ethernet frame */ -#define IAVF_RXBUFFER_2048 2048 -#define IAVF_RXBUFFER_3072 3072 /* Used for large frames w/ padding */ -#define IAVF_MAX_RXBUFFER 9728 /* largest size for single descriptor */ - -/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we - * reserve 2 more, and skb_shared_info adds an additional 384 bytes more, - * this adds up to 512 bytes of extra data meaning the smallest allocation - * we could have is 1K. - * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab) - * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab) - */ -#define IAVF_RX_HDR_SIZE IAVF_RXBUFFER_256 -#define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) #define iavf_rx_desc iavf_32byte_rx_desc #define IAVF_RX_DMA_ATTR \ (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) -/* Attempt to maximize the headroom available for incoming frames. We - * use a 2K buffer for receives and need 1536/1534 to store the data for - * the frame. This leaves us with 512 bytes of room. From that we need - * to deduct the space needed for the shared info and the padding needed - * to IP align the frame. - * - * Note: For cache line sizes 256 or larger this value is going to end - * up negative. In these cases we should fall back to the legacy - * receive path. - */ -#if (PAGE_SIZE < 8192) -#define IAVF_2K_TOO_SMALL_WITH_PADDING \ -((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048)) - -static inline int iavf_compute_pad(int rx_buf_len) -{ - int page_size, pad_size; - - page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2); - pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len; - - return pad_size; -} - -static inline int iavf_skb_pad(void) -{ - int rx_buf_len; - - /* If a 2K buffer cannot handle a standard Ethernet frame then - * optimize padding for a 3K buffer instead of a 1.5K buffer. - * - * For a 3K buffer we need to add enough padding to allow for - * tailroom due to NET_IP_ALIGN possibly shifting us out of - * cache-line alignment. 
- */ - if (IAVF_2K_TOO_SMALL_WITH_PADDING) - rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN); - else - rx_buf_len = IAVF_RXBUFFER_1536; - - /* if needed make room for NET_IP_ALIGN */ - rx_buf_len -= NET_IP_ALIGN; - - return iavf_compute_pad(rx_buf_len); -} - -#define IAVF_SKB_PAD iavf_skb_pad() -#else -#define IAVF_2K_TOO_SMALL_WITH_PADDING false -#define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) -#endif - /** * iavf_test_staterr - tests bits in Rx descriptor status and error fields * @rx_desc: pointer to receive descriptor (in le64 format) @@ -293,12 +225,6 @@ struct iavf_rx_queue_stats { u64 alloc_buff_failed; }; -enum iavf_ring_state_t { - __IAVF_TX_FDIR_INIT_DONE, - __IAVF_TX_XPS_INIT_DONE, - __IAVF_RING_STATE_NBITS /* must be last */ -}; - /* some useful defines for virtchannel interface, which * is the only remaining user of header split */ @@ -320,10 +246,9 @@ struct iavf_ring { struct iavf_tx_buffer *tx_bi; struct page **rx_pages; }; - DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS); + u8 __iomem *tail; u16 queue_index; /* Queue number of ring */ u8 dcb_tc; /* Traffic class of ring */ - u8 __iomem *tail; /* high bit set means dynamic, use accessors routines to read/write. * hardware only supports 2us resolution for the ITR registers. @@ -332,24 +257,16 @@ struct iavf_ring { */ u16 itr_setting; - u16 count; /* Number of descriptors */ u16 reg_idx; /* HW register index of the ring */ - u16 rx_buf_len; + u16 count; /* Number of descriptors */ /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; - u8 atr_sample_rate; - u8 atr_count; - - bool ring_active; /* is ring online or not */ - bool arm_wb; /* do something to arm write back */ - u8 packet_stride; - u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) -/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */ +#define IAVF_TXRX_FLAGS_ARM_WB BIT(1) /* BIT(2) is free */ #define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) @@ -401,17 +318,6 @@ struct iavf_ring_container { #define iavf_for_each_ring(pos, head) \ for (pos = (head).ring; pos != NULL; pos = pos->next) -static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring) -{ -#if (PAGE_SIZE < 8192) - if (ring->rx_buf_len > (PAGE_SIZE / 2)) - return 1; -#endif - return 0; -} - -#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring)) - void iavf_alloc_rx_pages(struct iavf_ring *rxr); netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev); void iavf_clean_tx_ring(struct iavf_ring *tx_ring); diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h index 3030ba330326..bb90d8f3ad7e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_type.h +++ b/drivers/net/ethernet/intel/iavf/iavf_type.h @@ -10,8 +10,6 @@ #include "iavf_adminq.h" #include "iavf_devids.h" -#define IAVF_RXQ_CTX_DBUFF_SHIFT 7 - /* IAVF_MASK is a macro used on 32 bit registers */ #define IAVF_MASK(mask, shift) ((u32)(mask) << (shift)) diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index fdddc3588487..c726d0c91cd8 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. 
*/ +#include <linux/net/intel/libie/rx.h> + #include "iavf.h" #include "iavf_prototype.h" #include "iavf_client.h" @@ -269,13 +271,12 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter) void iavf_configure_queues(struct iavf_adapter *adapter) { struct virtchnl_vsi_queue_config_info *vqci; - int i, max_frame = adapter->vf_res->max_mtu; + u32 i, max_frame = adapter->vf_res->max_mtu; int pairs = adapter->num_active_queues; struct virtchnl_queue_pair_info *vqpi; size_t len; - if (max_frame > IAVF_MAX_RXBUFFER || !max_frame) - max_frame = IAVF_MAX_RXBUFFER; + max_frame = min_not_zero(max_frame, LIBIE_MAX_RX_FRM_LEN); if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { /* bail because we already have a command pending */ @@ -289,10 +290,6 @@ void iavf_configure_queues(struct iavf_adapter *adapter) if (!vqci) return; - /* Limit maximum frame size when jumbo frames is not enabled */ - if (adapter->netdev->mtu <= ETH_DATA_LEN) - max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; - vqci->vsi_id = adapter->vsi_res->vsi_id; vqci->num_queue_pairs = pairs; vqpi = vqci->qpair; @@ -309,9 +306,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter) vqpi->rxq.ring_len = adapter->rx_rings[i].count; vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma; vqpi->rxq.max_pkt_size = max_frame; - vqpi->rxq.databuffer_size = - ALIGN(adapter->rx_rings[i].rx_buf_len, - BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT)); + vqpi->rxq.databuffer_size = LIBIE_RX_BUF_LEN; vqpi++; } diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h index 58bd0f35d025..3e8d0d5206e1 100644 --- a/include/linux/net/intel/libie/rx.h +++ b/include/linux/net/intel/libie/rx.h @@ -4,6 +4,7 @@ #ifndef __LIBIE_RX_H #define __LIBIE_RX_H +#include #include /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed @@ -125,4 +126,38 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash, skb_set_hash(skb, hash, parsed.payload_layer); } +/* Rx MTU/buffer/truesize helpers. Mostly pure software-side; HW-defined values + * are valid for all Intel HW. + */ + +/* Space reserved in front of each frame */ +#define LIBIE_SKB_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN) +/* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */ +#define LIBIE_RX_LL_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN) + +/* Truesize: total space wasted on each frame.
+
+/* Truesize: total space wasted on each frame. Always use order-0 pages */
+#define LIBIE_RX_PAGE_ORDER	0
+#define LIBIE_RX_TRUESIZE	(PAGE_SIZE << LIBIE_RX_PAGE_ORDER)
+/* Rx buffer size config is a multiple of 128 */
+#define LIBIE_RX_BUF_LEN_ALIGN	128
+/* HW-writeable space in one buffer: truesize - headroom/tailroom,
+ * HW-aligned
+ */
+#define __LIBIE_RX_BUF_LEN \
+	ALIGN_DOWN(SKB_MAX_ORDER(LIBIE_SKB_HEADROOM, LIBIE_RX_PAGE_ORDER), \
+		   LIBIE_RX_BUF_LEN_ALIGN)
+/* The largest size for a single descriptor as per HW */
+#define LIBIE_MAX_RX_BUF_LEN	9728U
+/* "True" HW-writeable space: minimum from SW and HW values */
+#define LIBIE_RX_BUF_LEN	min_t(u32, __LIBIE_RX_BUF_LEN, \
+				      LIBIE_MAX_RX_BUF_LEN)
+
+/* The maximum frame size as per HW (S/G) */
+#define __LIBIE_MAX_RX_FRM_LEN	16382U
+/* At the same time, HW can chain up to 5 Rx descriptors */
+#define LIBIE_MAX_RX_FRM_LEN	min_t(u32, __LIBIE_MAX_RX_FRM_LEN, \
+				      LIBIE_RX_BUF_LEN * 5)
+/* Maximum frame size minus LL overhead */
+#define LIBIE_MAX_MTU		(LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
+
 #endif /* __LIBIE_RX_H */
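To see what these macros resolve to in practice, here is a minimal userspace sketch of the same arithmetic. The NET_SKB_PAD, NET_IP_ALIGN and skb_shared_info overhead figures are assumptions for a typical x86_64 configuration, not values taken from the patch:

#include <stdio.h>

#define PAGE_SZ		4096u	/* assumption: 4 KiB pages, order-0 */
#define SKB_PAD		64u	/* assumption: NET_SKB_PAD == one cacheline */
#define IP_ALIGN	2u	/* NET_IP_ALIGN on x86_64 */
#define SHINFO		320u	/* assumption: aligned skb_shared_info size */

int main(void)
{
	unsigned int headroom = SKB_PAD + IP_ALIGN;		/* 66 */
	unsigned int writeable = PAGE_SZ - headroom - SHINFO;	/* 3710 */
	unsigned int buf_len = writeable & ~(128u - 1);		/* ALIGN_DOWN -> 3584 */
	unsigned int frm_len = 5 * buf_len;			/* 17920 */

	if (frm_len > 16382u)	/* __LIBIE_MAX_RX_FRM_LEN */
		frm_len = 16382u;

	printf("LIBIE_RX_BUF_LEN     = %u\n", buf_len < 9728u ? buf_len : 9728u);
	printf("LIBIE_MAX_RX_FRM_LEN = %u\n", frm_len);
	printf("LIBIE_MAX_MTU        = %u\n", frm_len - 26u);	/* LL overhead: 14 + 8 + 4 */
	return 0;
}

Under these assumptions, one order-0 page exposes 3584 HW-writeable bytes, and a 5-buffer chain is capped by the 16382-byte HW frame limit rather than by the buffers themselves.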
From patchwork Tue May 30 15:00:29 2023
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Date: Tue, 30 May 2023 17:00:29 +0200
Message-Id: <20230530150035.1943669-7-aleksander.lobakin@intel.com>
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v3 06/12] net: skbuff: don't include <net/page_pool.h> into <linux/skbuff.h>
Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson

Currently, touching <net/page_pool.h> triggers a rebuild of more than half of the kernel. That's because it's included in <linux/skbuff.h>. In 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling"), Matteo included it to be able to call a couple of functions defined there.
Then, in 57f05bc2ab24 ("page_pool: keep pp info as long as page pool owns the page"), one of those calls was removed, so only one is left: the call to page_pool_return_skb_page() in napi_frag_unref(). The function is external and doesn't have any dependencies. Having an include of the very niche page_pool.h only for that looks like overkill. Instead, move the declaration of that function to skbuff.h itself, with a small comment that it's a special guest and should not be touched. Now, after a few include fixes in the drivers, touching page_pool.h only triggers a rebuild of the drivers using it and a couple of core networking files.

Signed-off-by: Alexander Lobakin
Reviewed-by: Alexander Duyck
---
 drivers/net/ethernet/engleder/tsnep_main.c               | 1 +
 drivers/net/ethernet/freescale/fec_main.c                | 1 +
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 1 +
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c     | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c      | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c         | 1 +
 drivers/net/wireless/mediatek/mt76/mt76.h                | 1 +
 include/linux/skbuff.h                                   | 4 +++-
 include/net/page_pool.h                                  | 2 --
 9 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index 84751bb303a6..6222aaa5157f 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
 #include
 
 #define TSNEP_RX_OFFSET (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 632bb4d589d7..0731af62d7dd 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
 #include
 #include
 #include
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index a79cb680bb23..22cb3973f977 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -7,6 +7,7 @@
 #include
 #include
+#include <net/page_pool.h>
 #include
 
 #include "otx2_reg.h"
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index db3fcab1c8cd..93f0c8e3ce91 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 9c94807097cb..3235a3a4ed08 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -6,6 +6,7 @@
 #include "en/port.h"
 #include "en_accel/en_accel.h"
 #include "en_accel/ipsec.h"
+#include <net/page_pool.h>
 #include
 
 static u8 mlx5e_mpwrq_min_page_shift(struct mlx5_core_dev *mdev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index f0e6095809fa..1bd91bc09eb8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -35,6 +35,7 @@
 #include "en/xdp.h"
 #include "en/params.h"
 #include
+#include <net/page_pool.h>
 
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
 {
diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
index 6b07b8fafec2..95c16f11d156 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
 
 #include "util.h"
 #include "testmode.h"
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 5951904413ab..6d5eee932b95 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -32,7 +32,6 @@
 #include
 #include
 #include
-#include <net/page_pool.h>
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include
 #endif
@@ -3422,6 +3421,9 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 	__skb_frag_ref(&skb_shinfo(skb)->frags[f]);
 }
 
+/* Internal from net/core/page_pool.c, do not use in drivers directly */
+bool page_pool_return_skb_page(struct page *page, bool napi_safe);
+
 static inline void napi_frag_unref(skb_frag_t *frag, bool recycle,
 				   bool napi_safe)
 {
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 126f9e294389..2a9ce2aa6eb2 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -240,8 +240,6 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
-bool page_pool_return_skb_page(struct page *page, bool napi_safe);
-
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 struct xdp_mem_info;
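The practical consequence for driver code: anything calling the Page Pool API must now pull the header in explicitly instead of relying on skbuff.h. A minimal hypothetical example (the function name is illustrative only, not from the patch):

#include <linux/skbuff.h>
#include <net/page_pool.h>	/* no longer implied by skbuff.h */

static struct page *my_rx_alloc(struct page_pool *pool)
{
	/* allocation helper provided by page_pool.h */
	return page_pool_dev_alloc_pages(pool);
}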
From patchwork Tue May 30 15:00:30 2023
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Date: Tue, 30 May 2023 17:00:30 +0200 Message-Id: <20230530150035.1943669-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com> References: <20230530150035.1943669-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Mailman-Original-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685459074; x=1716995074; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Qc32wr3Sot3sTRrntPIfsglvVZOOpvIVVghP/EIoq2w=; b=k5i1Sb67MeSrFNE8E8jr6Jty4T/qrKYUZY0vLxgQ4gLS6nfQvy0GokFs d5A0N5RE1aiL7H3d+FRQS13dVV7VpkhSxFJFTCfL1sOzJpxQ5wQEukHGr mAwj/ZXidiOJYLaFYef0EvwWOwXOklmnuEchHJ6ga8wCX4qdxUzIM0a9E bcXY9+PJ2zPwXx6Otremp/VQbMi4mXFUs7XqADGKadhrmMYrBi2gk6wOg w8TMgoj7TGEUgV86gjf6Lc8qXht5Itayn0DuK5WYKEToqtYt2BIlXg/WO UFUEdsIg0SrNrGld4Ff6BJhCRVeB6FevAIYAxQstqmtpMfANu6HJoulZv w==; X-Mailman-Original-Authentication-Results: smtp2.osuosl.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=k5i1Sb67 Subject: [Intel-wired-lan] [PATCH net-next v3 07/12] net: page_pool: avoid calling no-op externals when possible X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Turned out page_pool_put{,_full}_page() can burn quite a bunch of cycles even when on DMA-coherent platforms (like x86) with no active IOMMU or swiotlb, just for the call ladder. Indeed, it's page_pool_put_page() page_pool_put_defragged_page() <- external __page_pool_put_page() page_pool_dma_sync_for_device() <- non-inline dma_sync_single_range_for_device() dma_sync_single_for_device() <- external dma_direct_sync_single_for_device() dev_is_dma_coherent() <- exit For the inline functions, no guarantees the compiler won't uninline them (they're clearly not one-liners and sometimes compilers uninline even 2 + 2). The first external call is necessary, but the rest 2+ are done for nothing each time, plus a bunch of checks here and there. Since Page Pool mappings are long-term and for one "device + addr" pair dma_need_sync() will always return the same value (basically, whether it belongs to an swiotlb pool), addresses can be tested once right after they're obtained and the result can be reused until the page is unmapped. Define new PP flag, which will mean "do DMA syncs for device, but only when needed" and turn it on by default when the driver asks to sync pages. When a page is mapped, check whether it needs syncs and if so, replace that "sync when needed" back to "always do syncs" globally for the whole pool (better safe than sorry). As long as a pool has no pages requiring DMA syncs, this cuts off a good piece of calls and checks. On my x86_64, this gives from 2% to 5% performance benefit with no negative impact for cases when IOMMU is on and the shortcut can't be used. 
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool.h |  3 +++
 net/core/page_pool.c    | 10 ++++++++++
 2 files changed, 13 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 2a9ce2aa6eb2..ee895376270e 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -46,6 +46,9 @@
  * device driver responsibility
  */
 #define PP_FLAG_PAGE_FRAG	BIT(2) /* for page frag feature */
+#define PP_FLAG_DMA_MAYBE_SYNC	BIT(3)	/* Internal, should not be used in
+					 * drivers
+					 */
 #define PP_FLAG_ALL		(PP_FLAG_DMA_MAP |\
 				 PP_FLAG_DMA_SYNC_DEV |\
 				 PP_FLAG_PAGE_FRAG)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a3e12a61d456..102b5e3718c2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -198,6 +198,10 @@ static int page_pool_init(struct page_pool *pool,
 		/* pool->p.offset has to be set according to the address
 		 * offset used by the DMA engine to start copying rx data
 		 */
+
+		/* Try to avoid calling no-op syncs */
+		pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
+		pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
 	}
 
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
@@ -346,6 +350,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 
 	page_pool_set_dma_addr(page, dma);
 
+	if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
+	    dma_need_sync(pool->p.dev, dma)) {
+		pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
+		pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
+	}
+
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
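Driver-facing behavior is unchanged by this: a pool created with PP_FLAG_DMA_SYNC_DEV is set up exactly as before, and the PP_FLAG_DMA_MAYBE_SYNC downgrade/upgrade happens entirely inside page_pool_init() and page_pool_dma_map(). A hedged sketch of such a setup (the sizes and the device pointer are assumptions, not taken from the patch):

#include <net/page_pool.h>

static struct page_pool *create_rx_pool(struct device *dev)
{
	struct page_pool_params pp = {
		/* ask for syncs as usual; the pool turns them into no-ops
		 * internally when dma_need_sync() says they aren't needed
		 */
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 512,		/* assumption: Rx ring length */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	return page_pool_create(&pp);
}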
From patchwork Tue May 30 15:00:31 2023
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Date: Tue, 30 May 2023 17:00:31 +0200 Message-Id: <20230530150035.1943669-9-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com> References: <20230530150035.1943669-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Mailman-Original-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685459076; x=1716995076; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eLJOSFdKG61xS8RaVR7iDFns16OyB/17swgN14K+JTY=; b=DGzKWUUGoSgYKwspNMqO4zX5eJCHGgiGsXQnPwOmrXwB43vruCcPVlEq XrYKfwuCvSG/XSVac4Z4sk2pTq/C3YE00FIyRCL+quxzyuLkaaGwl9MAA k1toS/uQ1xUckAfzSXjpDdp6fDuNtqQ4c+DPFCwscmto3lWpxI4pD6MXE UxA5RGNh14LT6frbZYNCDn73p43JKovOj5OJNg6UqtdVZq83MrndHB0h+ a75dFD2bRcSmGE4fGzNGw701jzbvTWUAu7kenq8feIiNlxjrl74WsGTbp Oo6g83zgw5HZjC+EYZtNL6EcwdqrqNHsNYOVe7J57v8TKW0QmYL4J1y1h g==; X-Mailman-Original-Authentication-Results: smtp2.osuosl.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=DGzKWUUG Subject: [Intel-wired-lan] [PATCH net-next v3 08/12] net: page_pool: add DMA-sync-for-CPU inline helpers X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Each driver is responsible for syncing buffers written by HW for CPU before accessing them. Almost each PP-enabled driver uses the same pattern, which could be shorthanded into a static inline to make driver code a little bit more compact. Introduce a couple such functions. The first one takes the actual size of the data written by HW and is the main one to be used on Rx. The second does the same, but only if the PP performs DMA synchronizations at all. The last one picks max_len from the PP params and is designed for more extreme cases when the size is unknown, but the buffer still needs to be synced. Also constify pointer arguments of page_pool_get_dma_dir() and page_pool_get_dma_addr() to give a bit more room for optimization, as both of them are read-only. Signed-off-by: Alexander Lobakin --- include/net/page_pool.h | 64 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 60 insertions(+), 4 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index ee895376270e..361c7fdd718c 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -32,7 +32,7 @@ #include /* Needed by ptr_ring */ #include -#include +#include #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA * map/unmap @@ -237,8 +237,8 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, /* get the stored dma direction. 
A driver might decide to treat this locally and * avoid the extra cache line from page_pool to determine the direction */ -static -inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool) +static inline enum dma_data_direction +page_pool_get_dma_dir(const struct page_pool *pool) { return pool->p.dma_dir; } @@ -361,7 +361,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool, #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \ (sizeof(dma_addr_t) > sizeof(unsigned long)) -static inline dma_addr_t page_pool_get_dma_addr(struct page *page) +static inline dma_addr_t page_pool_get_dma_addr(const struct page *page) { dma_addr_t ret = page->dma_addr; @@ -378,6 +378,62 @@ static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr) page->dma_addr_upper = upper_32_bits(addr); } +/** + * __page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW + * @pool: page_pool which this page belongs to + * @page: page to sync + * @dma_sync_size: size of the data written to the page + * + * Can be used as a shorthand to sync Rx pages before accessing them in the + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP. + * Note that this version performs DMA sync unconditionally, even if the + * associated PP doesn't perform sync-for-device. Consider the non-underscored + * version first if unsure. + */ +static inline void __page_pool_dma_sync_for_cpu(const struct page_pool *pool, + const struct page *page, + u32 dma_sync_size) +{ + dma_sync_single_range_for_cpu(pool->p.dev, + page_pool_get_dma_addr(page), + pool->p.offset, dma_sync_size, + page_pool_get_dma_dir(pool)); +} + +/** + * page_pool_dma_sync_for_cpu - sync Rx page for CPU if needed + * @pool: page_pool which this page belongs to + * @page: page to sync + * @dma_sync_size: size of the data written to the page + * + * Performs DMA sync for CPU, but *only* when: + * 1) page_pool was created with %PP_FLAG_DMA_SYNC_DEV to manage DMA syncs; + * 2) AND sync shortcut is not available (IOMMU, swiotlb, non-coherent DMA, ...) + */ +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool, + const struct page *page, + u32 dma_sync_size) +{ + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + __page_pool_dma_sync_for_cpu(pool, page, dma_sync_size); +} + +/** + * page_pool_dma_sync_full_for_cpu - sync full Rx page for CPU (if needed) + * @pool: page_pool which this page belongs to + * @page: page to sync + * + * Performs sync for the entire length exposed to hardware. Can be used on + * DMA errors or before freeing the page, when it's unknown whether the HW + * touched the buffer. 
+ */
+static inline void
+page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,
+				const struct page *page)
+{
+	page_pool_dma_sync_for_cpu(pool, page, pool->p.max_len);
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
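A sketch of the intended Rx-path usage of the conditional helper (the function and the offset handling are illustrative, not from the series):

#include <net/page_pool.h>

/* Sync only the bytes HW actually wrote, and only when the pool
 * really performs syncs (PP_FLAG_DMA_SYNC_DEV); a no-op otherwise.
 */
static void *rx_frame_data(struct page_pool *pool, struct page *page,
			   u32 headroom, u32 len)
{
	page_pool_dma_sync_for_cpu(pool, page, len);

	return page_address(page) + headroom;
}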
From patchwork Tue May 30 15:00:32 2023
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Date: Tue, 30 May 2023 17:00:32 +0200
Message-Id: <20230530150035.1943669-10-aleksander.lobakin@intel.com>
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v3 09/12] iavf: switch to Page Pool
Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson

Now that the IAVF driver simply uses dev_alloc_page() + free_page() with no custom recycling logic and one whole page per frame, it can easily be switched to using the Page Pool API instead. Introduce libie_rx_page_pool_create(), a wrapper for creating a PP with the default libie settings applicable to all Intel hardware, and replace the alloc/free calls with the corresponding PP functions, including the newly added sync-for-CPU helpers. Use skb_mark_for_recycle() to bring back the recycling and restore the initial performance.

Among the important object code changes, it's worth mentioning that __iavf_alloc_rx_pages() is now inlined due to its greatly reduced size. The resulting driver is on par with the pre-series code and 1-2% slower than the "optimized" version right before the recycling removal. But the number of LoCs and object code bytes slaughtered is much more important here, not to mention that there's still vast room for optimization and improvements.
Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/Kconfig | 1 + drivers/net/ethernet/intel/iavf/iavf_txrx.c | 126 +++++--------------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 8 +- drivers/net/ethernet/intel/libie/rx.c | 28 +++++ include/linux/net/intel/libie/rx.h | 5 +- 5 files changed, 69 insertions(+), 99 deletions(-) diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index cec4a938fbd0..a368afc42b8d 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -86,6 +86,7 @@ config E1000E_HWTS config LIBIE tristate + select PAGE_POOL help libie (Intel Ethernet library) is a common library containing routines shared by several Intel Ethernet drivers. diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index c33a3d681c83..1de67a70f045 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -3,7 +3,6 @@ #include #include -#include #include "iavf.h" #include "iavf_trace.h" @@ -691,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring) **/ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) { - u16 i; - /* ring already cleared, nothing to do */ if (!rx_ring->rx_pages) return; @@ -703,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) } /* Free all the Rx ring sk_buffs */ - for (i = 0; i < rx_ring->count; i++) { + for (u32 i = 0; i < rx_ring->count; i++) { struct page *page = rx_ring->rx_pages[i]; - dma_addr_t dma; if (!page) continue; - dma = page_pool_get_dma_addr(page); - /* Invalidate cache lines that may have been written to by * device so that we avoid corrupting memory. */ - dma_sync_single_range_for_cpu(rx_ring->dev, dma, - LIBIE_SKB_HEADROOM, - LIBIE_RX_BUF_LEN, - DMA_FROM_DEVICE); - - /* free resources associated with mapping */ - dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE, - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - - __free_page(page); + page_pool_dma_sync_full_for_cpu(rx_ring->pool, page); + page_pool_put_full_page(rx_ring->pool, page, false); } rx_ring->next_to_clean = 0; @@ -739,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring) **/ void iavf_free_rx_resources(struct iavf_ring *rx_ring) { + struct device *dev = rx_ring->pool->p.dev; + iavf_clean_rx_ring(rx_ring); kfree(rx_ring->rx_pages); rx_ring->rx_pages = NULL; + page_pool_destroy(rx_ring->pool); + rx_ring->dev = dev; + if (rx_ring->desc) { dma_free_coherent(rx_ring->dev, rx_ring->size, rx_ring->desc, rx_ring->dma); @@ -759,13 +750,15 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring) int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) { struct device *dev = rx_ring->dev; + struct page_pool *pool; + int ret = -ENOMEM; /* warn if we are about to overwrite the pointer */ WARN_ON(rx_ring->rx_pages); rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages), GFP_KERNEL); if (!rx_ring->rx_pages) - return -ENOMEM; + return ret; u64_stats_init(&rx_ring->syncp); @@ -781,15 +774,27 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) goto err; } + pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi, + rx_ring->count); + if (IS_ERR(pool)) { + ret = PTR_ERR(pool); + goto err_free_dma; + } + + rx_ring->pool = pool; + rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; return 0; + +err_free_dma: + dma_free_coherent(dev, rx_ring->size, rx_ring->desc, rx_ring->dma); err: kfree(rx_ring->rx_pages); rx_ring->rx_pages = NULL; - return -ENOMEM; + return ret; } /** @@ 
-810,40 +815,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) writel(val, rx_ring->tail); } -/** - * iavf_alloc_mapped_page - allocate and map a new page - * @dev: device used for DMA mapping - * @gfp: GFP mask to allocate page - * - * Returns a new &page if the it was successfully allocated, %NULL otherwise. - **/ -static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp) -{ - struct page *page; - dma_addr_t dma; - - /* alloc new page for storage */ - page = __dev_alloc_page(gfp); - if (unlikely(!page)) - return NULL; - - /* map page for use */ - dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); - - /* if mapping failed free memory back to system since - * there isn't much point in holding memory we can't use - */ - if (dma_mapping_error(dev, dma)) { - __free_page(page); - return NULL; - } - - page_pool_set_dma_addr(page, dma); - - return page; -} - /** * iavf_receive_skb - Send a completed packet up the stack * @rx_ring: rx ring in play @@ -877,7 +848,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, gfp_t gfp) { - struct device *dev = rx_ring->dev; + struct page_pool *pool = rx_ring->pool; u32 ntu = rx_ring->next_to_use; union iavf_rx_desc *rx_desc; @@ -891,7 +862,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, struct page *page; dma_addr_t dma; - page = iavf_alloc_mapped_page(dev, gfp); + page = page_pool_alloc_pages(pool, gfp); if (!page) { rx_ring->rx_stats.alloc_page_failed++; break; @@ -900,11 +871,6 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill, rx_ring->rx_pages[ntu] = page; dma = page_pool_get_dma_addr(page); - /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM, - LIBIE_RX_BUF_LEN, - DMA_FROM_DEVICE); - /* Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. */ @@ -1091,21 +1057,6 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size) LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE); } -/** - * iavf_sync_rx_page - Synchronize received data for use - * @dev: device used for DMA mapping - * @page: Rx page containing the data - * @size: size of the received data - * - * This function will synchronize the Rx buffer for use by the CPU. 
- */ -static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size) -{ - dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page), - LIBIE_SKB_HEADROOM, size, - DMA_FROM_DEVICE); -} - /** * iavf_build_skb - Build skb around an existing buffer * @page: Rx page to with the data @@ -1128,6 +1079,8 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size) if (unlikely(!skb)) return NULL; + skb_mark_for_recycle(skb); + /* update pointers within the skb to store the data */ skb_reserve(skb, LIBIE_SKB_HEADROOM); __skb_put(skb, size); @@ -1135,19 +1088,6 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size) return skb; } -/** - * iavf_unmap_rx_page - Unmap used page - * @dev: device used for DMA mapping - * @page: page to release - */ -static void iavf_unmap_rx_page(struct device *dev, struct page *page) -{ - dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), - LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); - page_pool_set_dma_addr(page, 0); -} - /** * iavf_is_non_eop - process handling of non-EOP buffers * @rx_ring: Rx ring being processed @@ -1190,8 +1130,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) unsigned int total_rx_bytes = 0, total_rx_packets = 0; const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN; u32 to_refill = IAVF_DESC_UNUSED(rx_ring); + struct page_pool *pool = rx_ring->pool; struct sk_buff *skb = rx_ring->skb; - struct device *dev = rx_ring->dev; u32 ntc = rx_ring->next_to_clean; u32 ring_size = rx_ring->count; u32 cleaned_count = 0; @@ -1240,13 +1180,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) * stripped by the HW. */ if (unlikely(!size)) { - iavf_unmap_rx_page(dev, page); - __free_page(page); + page_pool_recycle_direct(pool, page); goto skip_data; } - iavf_sync_rx_page(dev, page, size); - iavf_unmap_rx_page(dev, page); + page_pool_dma_sync_for_cpu(pool, page, size); /* retrieve a buffer from the ring */ if (skb) @@ -1256,7 +1194,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* exit if we failed to retrieve a buffer */ if (!skb) { - __free_page(page); + page_pool_put_page(pool, page, size, true); rx_ring->rx_stats.alloc_buff_failed++; break; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index 1421e90c7c4e..8fbe549ce6a5 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -83,9 +83,6 @@ enum iavf_dyn_idx_t { #define iavf_rx_desc iavf_32byte_rx_desc -#define IAVF_RX_DMA_ATTR \ - (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) - /** * iavf_test_staterr - tests bits in Rx descriptor status and error fields * @rx_desc: pointer to receive descriptor (in le64 format) @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats { struct iavf_ring { struct iavf_ring *next; /* pointer to next ring in q_vector */ void *desc; /* Descriptor ring memory */ - struct device *dev; /* Used for DMA mapping */ + union { + struct page_pool *pool; /* Used for Rx page management */ + struct device *dev; /* Used for DMA mapping on Tx */ + }; struct net_device *netdev; /* netdev ring maps to */ union { struct iavf_tx_buffer *tx_bi; diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c index f503476d8eef..d68eab76593c 100644 --- a/drivers/net/ethernet/intel/libie/rx.c +++ b/drivers/net/ethernet/intel/libie/rx.c @@ -105,6 +105,34 @@ const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = { }; 
EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);
 
+/* Page Pool */
+
+/**
+ * libie_rx_page_pool_create - create a PP with the default libie settings
+ * @napi: &napi_struct covering this PP (no usage outside its poll loops)
+ * @size: size of the PP, usually simply Rx queue len
+ *
+ * Returns &page_pool on success, cast -errno on failure.
+ */
+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+					    u32 size)
+{
+	const struct page_pool_params pp = {
+		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order		= LIBIE_RX_PAGE_ORDER,
+		.pool_size	= size,
+		.nid		= NUMA_NO_NODE,
+		.dev		= napi->dev->dev.parent,
+		.napi		= napi,
+		.dma_dir	= DMA_FROM_DEVICE,
+		.max_len	= LIBIE_RX_BUF_LEN,
+		.offset		= LIBIE_SKB_HEADROOM,
+	};
+
+	return page_pool_create(&pp);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
+
 MODULE_AUTHOR("Intel Corporation");
 MODULE_DESCRIPTION("Intel(R) Ethernet common library");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 3e8d0d5206e1..b86cadd281f1 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -5,7 +5,7 @@
 #define __LIBIE_RX_H
 
 #include
-#include
+#include
 
 /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
  * bitfield struct.
@@ -160,4 +160,7 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
 /* Maximum frame size minus LL overhead */
 #define LIBIE_MAX_MTU		(LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
 
+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+					    u32 size);
+
 #endif /* __LIBIE_RX_H */
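For reference, a hedged sketch of what a consumer of this helper looks like; the queue structure and field names are hypothetical, while the real iavf conversion is in the diff above:

#include <linux/err.h>
#include <linux/net/intel/libie/rx.h>

struct my_rxq {
	struct napi_struct *napi;	/* the queue vector's NAPI context */
	struct page_pool *pool;
	u32 count;			/* Rx ring length */
};

static int my_rxq_create_pool(struct my_rxq *rxq)
{
	struct page_pool *pool;

	pool = libie_rx_page_pool_create(rxq->napi, rxq->count);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	rxq->pool = pool;

	return 0;
}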
From patchwork Tue May 30 15:00:33 2023
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Date: Tue, 30 May 2023 17:00:33 +0200 Message-Id: <20230530150035.1943669-11-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com> References: <20230530150035.1943669-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Mailman-Original-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685459080; x=1716995080; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8rv9nlvNwZI0m5eeZfeqrro4xCO6aRYBd1RceFAklCo=; b=lwuvLRLIPQ8QDOHLhQepu8zvUSc/1L21sypSq9IpPRkIU/Y3jyf0kfAX HfW1YNfXXYAdDhUre/0bG2VqWMrFLJYmPaLtjAphSRf5TjHmq13i3fedI lv9bomAIOM5UHLJsaEf8annjRWTiXyBX8+MP0HMnDM+3rfmOmJbSj6moD TkGAOXrajHuDmlpB0vGofHTAfH45/ckX8NTFdl2gT/OX46pFBdFmqQMUE 1E5iMnlWk8/lr7fRXQqdGN9ASuZ1Fx+5GFvrcuJx1w4nlAc/6V1KYzm57 ITk7nJFnov/F2H3yJmpIW455YM2Q6pOoATYrizHtXCMfbPm9zTH7wmlYA A==; X-Mailman-Original-Authentication-Results: smtp2.osuosl.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=lwuvLRLI Subject: [Intel-wired-lan] [PATCH net-next v3 10/12] libie: add common queue stats X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paul Menzel , Jesper Dangaard Brouer , Larysa Zaremba , netdev@vger.kernel.org, Ilias Apalodimas , linux-kernel@vger.kernel.org, Michal Kubiak , intel-wired-lan@lists.osuosl.org, Christoph Hellwig , Magnus Karlsson Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Next stop, per-queue private stats. They have only subtle differences from driver to driver and can easily be resolved. Define common structures, inline helpers and Ethtool helpers to collect, update and export the statistics. Use u64_stats_t right from the start, as well as the corresponding helpers to ensure tear-free operations. For the NAPI parts of both Rx and Tx, also define small onstack containers to update them in polling loops and then sync the actual containers once a loop ends. The drivers will be switched to use this API later on a per-driver basis, along with conversion to PP. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libie/Makefile | 1 + drivers/net/ethernet/intel/libie/stats.c | 119 ++++++++++++++ include/linux/net/intel/libie/stats.h | 179 ++++++++++++++++++++++ 3 files changed, 299 insertions(+) create mode 100644 drivers/net/ethernet/intel/libie/stats.c create mode 100644 include/linux/net/intel/libie/stats.h diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile index 95e81d09b474..76f32253481b 100644 --- a/drivers/net/ethernet/intel/libie/Makefile +++ b/drivers/net/ethernet/intel/libie/Makefile @@ -4,3 +4,4 @@ obj-$(CONFIG_LIBIE) += libie.o libie-objs += rx.o +libie-objs += stats.o diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c new file mode 100644 index 000000000000..61456842a362 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/stats.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Intel Corporation. 
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libie/Makefile |   1 +
 drivers/net/ethernet/intel/libie/stats.c  | 119 ++++++++++++++
 include/linux/net/intel/libie/stats.h     | 179 ++++++++++++++++++++++
 3 files changed, 299 insertions(+)
 create mode 100644 drivers/net/ethernet/intel/libie/stats.c
 create mode 100644 include/linux/net/intel/libie/stats.h

diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile
index 95e81d09b474..76f32253481b 100644
--- a/drivers/net/ethernet/intel/libie/Makefile
+++ b/drivers/net/ethernet/intel/libie/Makefile
@@ -4,3 +4,4 @@
 obj-$(CONFIG_LIBIE) += libie.o
 
 libie-objs += rx.o
+libie-objs += stats.o
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
new file mode 100644
index 000000000000..61456842a362
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation. */
+
+#include <linux/ethtool.h>
+#include <linux/net/intel/libie/stats.h>
+
+/* Rx per-queue stats */
+
+static const char * const libie_rq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_RQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
+
+/**
+ * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
+ *
+ * Returns the number of per-queue Rx stats supported by the library.
+ */
+u32 libie_rq_stats_get_sset_count(void)
+{
+	return LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_rq_stats_get_strings - get the name strings of Ethtool RQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: RQ number to print in the prefix
+ */
+void libie_rq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "rq%u_%s", qid, libie_rq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
+
+/**
+ * libie_rq_stats_get_data - get the RQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: RQ stats container from the queue
+ */
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+{
+	u64 sarr[LIBIE_RQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_data, LIBIE);
+
+/* Tx per-queue stats */
+
+static const char * const libie_sq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_SQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_SQ_STATS_NUM	ARRAY_SIZE(libie_sq_stats_str)
+
+/**
+ * libie_sq_stats_get_sset_count - get the number of Ethtool SQ stats provided
+ *
+ * Returns the number of per-queue Tx stats supported by the library.
+u32 libie_sq_stats_get_sset_count(void)
+{
+	return LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_sq_stats_get_strings - get the name strings of Ethtool SQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: SQ number to print in the prefix
+ */
+void libie_sq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "sq%u_%s", qid, libie_sq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_strings, LIBIE);
+
+/**
+ * libie_sq_stats_get_data - get the SQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: SQ stats container from the queue
+ */
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats)
+{
+	u64 sarr[LIBIE_SQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_data, LIBIE);
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
new file mode 100644
index 000000000000..dbbc98bbd3a7
--- /dev/null
+++ b/include/linux/net/intel/libie/stats.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef __LIBIE_STATS_H
+#define __LIBIE_STATS_H
+
+#include <linux/u64_stats_sync.h>
+
+/* Common */
+
+/* Use 32-byte alignment to reduce false sharing */
+#define __libie_stats_aligned	__aligned(4 * sizeof(u64_stats_t))
+
+/**
+ * libie_stats_add - update one structure counter from a local struct
+ * @qs: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @ss: local/onstack stats structure
+ * @f: name of the field to update
+ *
+ * If a local/onstack stats structure is used to collect statistics during
+ * hotpath loops, this macro can be used to shorthand updates, given that
+ * the fields have the same name.
+ * Must be guarded with u64_stats_update_{begin,end}().
+ */
+#define libie_stats_add(qs, ss, f)			\
+	u64_stats_add(&(qs)->f, (ss)->f)
+
+/**
+ * __libie_stats_inc_one - safely increment one stats structure counter
+ * @s: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @f: name of the field to increment
+ * @n: name of the temporary variable, result of __UNIQUE_ID()
+ *
+ * To be used on exception or slow paths -- allocation fails, queue stops
+ * etc.
+ */
+#define __libie_stats_inc_one(s, f, n) ({		\
+	typeof(*(s)) *n = (s);				\
+							\
+	u64_stats_update_begin(&n->syncp);		\
+	u64_stats_inc(&n->f);				\
+	u64_stats_update_end(&n->syncp);		\
+})
+#define libie_stats_inc_one(s, f)			\
+	__libie_stats_inc_one(s, f, __UNIQUE_ID(qs_))
+
+/* Rx per-queue stats:
+ * packets: packets received on this queue
+ * bytes: bytes received on this queue
+ * fragments: number of processed descriptors carrying only a fragment
+ * alloc_page_fail: number of Rx page allocation fails
+ * build_skb_fail: number of build_skb() fails
+ */
+
+#define DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
+	act(packets)					\
+	act(bytes)					\
+	act(fragments)
+
+#define DECLARE_LIBIE_RQ_FAIL_STATS(act)		\
+	act(alloc_page_fail)				\
+	act(build_skb_fail)
+
+#define DECLARE_LIBIE_RQ_STATS(act)			\
+	DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
+	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+
+struct libie_rq_stats {
+	struct u64_stats_sync	syncp;
+
+	union {
+		struct {
+#define act(s)	u64_stats_t	s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+			DECLARE_LIBIE_RQ_FAIL_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+/* Rx stats that are modified frequently during the NAPI polling loop,
+ * to be synced with the queue stats once the loop finishes.
+ */
+struct libie_rq_onstack_stats {
+	union {
+		struct {
+#define act(s)	u32	s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u32, raw);
+	};
+};
+
+/**
+ * libie_rq_napi_stats_add - add onstack Rx stats to the queue container
+ * @qs: Rx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_rq_napi_stats_add(struct libie_rq_stats *qs,
+			const struct libie_rq_onstack_stats *ss)
+{
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	libie_stats_add(qs, ss, fragments);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_rq_stats_get_sset_count(void);
+void libie_rq_stats_get_strings(u8 **data, u32 qid);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+
+/* Tx per-queue stats:
+ * packets: packets sent from this queue
+ * bytes: bytes sent from this queue
+ * busy: number of xmit failures due to the ring being full
+ * stops: number of times the queue was stopped by the driver
+ * restarts: number of times the queue was restarted after being stopped
+ * linearized: number of skbs linearized due to HW limits
+ */
+
+#define DECLARE_LIBIE_SQ_NAPI_STATS(act)		\
+	act(packets)					\
+	act(bytes)
+
+#define DECLARE_LIBIE_SQ_XMIT_STATS(act)		\
+	act(busy)					\
+	act(stops)					\
+	act(restarts)					\
+	act(linearized)
+
+#define DECLARE_LIBIE_SQ_STATS(act)			\
+	DECLARE_LIBIE_SQ_NAPI_STATS(act)		\
+	DECLARE_LIBIE_SQ_XMIT_STATS(act)
+
+struct libie_sq_stats {
+	struct u64_stats_sync	syncp;
+
+	union {
+		struct {
+#define act(s)	u64_stats_t	s;
+			DECLARE_LIBIE_SQ_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+struct libie_sq_onstack_stats {
+#define act(s)	u32	s;
+	DECLARE_LIBIE_SQ_NAPI_STATS(act);
+#undef act
+};
+
+/**
+ * libie_sq_napi_stats_add - add onstack Tx stats to the queue container
+ * @qs: Tx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_sq_napi_stats_add(struct libie_sq_stats *qs,
+			const struct libie_sq_onstack_stats *ss)
+{
+	if (unlikely(!ss->packets))
+		return;
+
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_sq_stats_get_sset_count(void);
+void libie_sq_stats_get_strings(u8 **data, u32 qid);
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats);
+
+#endif /* __LIBIE_STATS_H */

From patchwork Tue May 30 15:00:34 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1787696
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Date: Tue, 30 May 2023 17:00:34 +0200
Message-Id: <20230530150035.1943669-12-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Subject: [Intel-wired-lan] [PATCH net-next v3 11/12] libie: add per-queue Page Pool stats
Cc: Paul Menzel, Jesper Dangaard Brouer, Larysa Zaremba, netdev@vger.kernel.org, Ilias Apalodimas, linux-kernel@vger.kernel.org, Michal Kubiak, intel-wired-lan@lists.osuosl.org, Christoph Hellwig, Magnus Karlsson

Expand the libie generic per-queue stats with the generic Page Pool
stats provided by the API itself when CONFIG_PAGE_POOL_STATS is
enabled. When it's not, there are no such fields in the stats
structure, so no space is wasted.

They are also a bit special in terms of how they are obtained.
A &page_pool accumulates statistics until it's destroyed, which happens
on ifdown. So, in order to not lose any statistics, fetch the stats and
store them in the queue container before destroying the pool. This
container survives ifups/downs, so it effectively stores the statistics
accumulated since the very first pool was allocated on this queue. When
the stats need to be exported, first take the numbers from this
container and then add the "live" numbers -- the ones that the
currently active pool returns. The resulting values always represent
the actual device-lifetime stats.

There's a cast from &page_pool_stats to `u64 *` in a couple of
functions, but it is guarded with static asserts to make sure it's
safe. FWIW, it saves a lot of object code.
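The scheme boils down to the following (a minimal plain-C sketch with
made-up names; the real code uses &page_pool_stats,
libie_rq_stats_sync_pp() and libie_rq_stats_get_pp()):

#include <stdio.h>

struct pool_stats {
	unsigned long long alloc_fast;
	unsigned long long alloc_slow;
};

/* Stand-in for the queue stats container: survives ifup/ifdown cycles */
static struct pool_stats container;

/* Stand-in for libie_rq_stats_sync_pp(): called right before a pool is
 * destroyed on ifdown, so nothing accumulated so far is lost.
 */
static void sync_pool_stats(const struct pool_stats *live)
{
	container.alloc_fast += live->alloc_fast;
	container.alloc_slow += live->alloc_slow;
}

/* Stand-in for the export path: stored totals from all previously
 * destroyed pools plus the "live" numbers from the currently active one.
 */
static void get_pool_stats(const struct pool_stats *live,
			   struct pool_stats *out)
{
	*out = container;

	if (live) {
		out->alloc_fast += live->alloc_fast;
		out->alloc_slow += live->alloc_slow;
	}
}

int main(void)
{
	struct pool_stats live = { .alloc_fast = 10, .alloc_slow = 2 };
	struct pool_stats out;

	sync_pool_stats(&live);			/* ifdown: pool destroyed */
	live = (struct pool_stats){ 0 };	/* ifup: fresh pool */
	live.alloc_fast = 5;

	get_pool_stats(&live, &out);
	printf("%llu\n", out.alloc_fast);	/* 15: device-lifetime total */

	return 0;
}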
Reviewed-by: Paul Menzel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libie/internal.h | 23 +++++++
 drivers/net/ethernet/intel/libie/rx.c       | 20 ++++++
 drivers/net/ethernet/intel/libie/stats.c    | 73 ++++++++++++++-
 include/linux/net/intel/libie/rx.h          |  4 ++
 include/linux/net/intel/libie/stats.h       | 39 ++++++++-
 5 files changed, 156 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/libie/internal.h

diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
new file mode 100644
index 000000000000..083398dc37c6
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/internal.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* libie internal declarations not to be used in drivers.
+ *
+ * Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_INTERNAL_H
+#define __LIBIE_INTERNAL_H
+
+struct libie_rq_stats;
+struct page_pool;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+			    struct page_pool *pool);
+#else
+static inline void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+					  struct page_pool *pool)
+{
+}
+#endif
+
+#endif /* __LIBIE_INTERNAL_H */
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index d68eab76593c..128f134aca6d 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -3,6 +3,8 @@
 
 #include <linux/net/intel/libie/rx.h>
 
+#include "internal.h"
+
 /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
  * bitfield struct.
  */
@@ -133,6 +135,24 @@ struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
 }
 EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
 
+/**
+ * libie_rx_page_pool_destroy - destroy a &page_pool created by libie
+ * @pool: pool to destroy
+ * @stats: RQ stats from the ring (or %NULL to skip updating PP stats)
+ *
+ * As the stats usually have the same lifetime as the device, but PP is
+ * usually created/destroyed on ifup/ifdown, in order to not lose the stats
+ * accumulated during the last ifup, the PP stats need to be added to the
+ * driver stats container. Then the PP gets destroyed.
+ */
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+				struct libie_rq_stats *stats)
+{
+	libie_rq_stats_sync_pp(stats, pool);
+	page_pool_destroy(pool);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_destroy, LIBIE);
+
 MODULE_AUTHOR("Intel Corporation");
 MODULE_DESCRIPTION("Intel(R) Ethernet common library");
 MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
index 61456842a362..71c7ce14edca 100644
--- a/drivers/net/ethernet/intel/libie/stats.c
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -3,6 +3,9 @@
 
 #include <linux/ethtool.h>
 #include <linux/net/intel/libie/stats.h>
+#include <net/page_pool.h>
+
+#include "internal.h"
 
 /* Rx per-queue stats */
 
@@ -14,6 +17,70 @@ static const char * const libie_rq_stats_str[] = {
 
 #define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/**
+ * libie_rq_stats_get_pp - get the current stats from a &page_pool
+ * @sarr: local array to add stats to
+ * @pool: pool to get the stats from
+ *
+ * Adds the current "live" stats from an online PP to the stats read from
+ * the RQ container, so that the actual totals will be returned.
+ */
+static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+	struct page_pool_stats *pps;
+	/* Used only to calculate pos below */
+	struct libie_rq_stats tmp;
+	u32 pos;
+
+	/* Validate that the libie PP stats array can be cast <-> PP struct */
+	static_assert(sizeof(tmp.pp) == sizeof(*pps));
+
+	if (!pool)
+		return;
+
+	/* Position of the first Page Pool stats field */
+	pos = (u64_stats_t *)&tmp.pp - tmp.raw;
+	pps = (typeof(pps))&sarr[pos];
+
+	page_pool_get_stats(pool, pps);
+}
+
+/**
+ * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
+ * @stats: stats structure to update
+ * @pool: pool to read the stats from
+ *
+ * Called by libie_rx_page_pool_destroy() to save the stats before destroying
+ * the pool.
+ */
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+			    struct page_pool *pool)
+{
+	u64_stats_t *qarr = (u64_stats_t *)&stats->pp;
+	struct page_pool_stats pps = { };
+	u64 *sarr = (u64 *)&pps;
+
+	if (!stats)
+		return;
+
+	page_pool_get_stats(pool, &pps);
+
+	u64_stats_update_begin(&stats->syncp);
+
+	for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
+		u64_stats_add(&qarr[i], sarr[i]);
+
+	u64_stats_update_end(&stats->syncp);
+}
+#else
+static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+}
+
+/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
+#endif
+
 /**
  * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
  *
@@ -41,8 +108,10 @@ EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
  * libie_rq_stats_get_data - get the RQ stats in Ethtool format
  * @data: reference to the cursor pointing to the output array
  * @stats: RQ stats container from the queue
+ * @pool: &page_pool from the queue (%NULL to ignore PP "live" stats)
  */
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+			     struct page_pool *pool)
 {
 	u64 sarr[LIBIE_RQ_STATS_NUM];
 	u32 start;
@@ -54,6 +123,8 @@ void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
 			sarr[i] = u64_stats_read(&stats->raw[i]);
 	} while (u64_stats_fetch_retry(&stats->syncp, start));
 
+	libie_rq_stats_get_pp(sarr, pool);
+
 	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
 		(*data)[i] += sarr[i];
 
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index b86cadd281f1..d401feb17928 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -160,7 +160,11 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
 /* Maximum frame size minus LL overhead */
 #define LIBIE_MAX_MTU	(LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
 
+struct libie_rq_stats;
+
 struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
 					    u32 size);
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+				struct libie_rq_stats *stats);
 
 #endif /* __LIBIE_RX_H */
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
index dbbc98bbd3a7..23ca0079a905 100644
--- a/include/linux/net/intel/libie/stats.h
+++ b/include/linux/net/intel/libie/stats.h
@@ -49,6 +49,17 @@
  * fragments: number of processed descriptors carrying only a fragment
  * alloc_page_fail: number of Rx page allocation fails
  * build_skb_fail: number of build_skb() fails
+ * pp_alloc_fast: pages taken from the cache or ring
+ * pp_alloc_slow: actual page allocations
+ * pp_alloc_slow_ho: non-order-0 page allocations
+ * pp_alloc_empty: number of times the pool was empty
+ * pp_alloc_refill: number of cache refills
+ * pp_alloc_waive: NUMA node mismatches during recycling
+ * pp_recycle_cached: direct recyclings into the cache
+ * pp_recycle_cache_full: number of times the cache was full
+ * pp_recycle_ring: recyclings into the ring
+ * pp_recycle_ring_full: number of times the ring was full
+ * pp_recycle_released_ref: pages released due to elevated refcnt
  */
 
 #define DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
@@ -60,9 +71,29 @@
 	act(alloc_page_fail)				\
 	act(build_skb_fail)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#define DECLARE_LIBIE_RQ_PP_STATS(act)			\
+	act(pp_alloc_fast)				\
+	act(pp_alloc_slow)				\
+	act(pp_alloc_slow_ho)				\
+	act(pp_alloc_empty)				\
+	act(pp_alloc_refill)				\
+	act(pp_alloc_waive)				\
+	act(pp_recycle_cached)				\
+	act(pp_recycle_cache_full)			\
+	act(pp_recycle_ring)				\
+	act(pp_recycle_ring_full)			\
+	act(pp_recycle_released_ref)
+#else
+#define DECLARE_LIBIE_RQ_PP_STATS(act)
+#endif
+
 #define DECLARE_LIBIE_RQ_STATS(act)			\
 	DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
-	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+	DECLARE_LIBIE_RQ_FAIL_STATS(act)		\
+	DECLARE_LIBIE_RQ_PP_STATS(act)
+
+struct page_pool;
 
 struct libie_rq_stats {
 	struct u64_stats_sync	syncp;
@@ -72,6 +103,9 @@ struct libie_rq_stats {
 #define act(s)	u64_stats_t	s;
 			DECLARE_LIBIE_RQ_NAPI_STATS(act);
 			DECLARE_LIBIE_RQ_FAIL_STATS(act);
+			struct_group(pp,
+				DECLARE_LIBIE_RQ_PP_STATS(act);
+			);
 #undef act
 		};
 		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
@@ -110,7 +144,8 @@ libie_rq_napi_stats_add(struct libie_rq_stats *qs,
 
 u32 libie_rq_stats_get_sset_count(void);
 void libie_rq_stats_get_strings(u8 **data, u32 qid);
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+			     struct page_pool *pool);
 
 /* Tx per-queue stats:
  * packets: packets sent from this queue

From patchwork Tue May 30 15:00:35 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1787697
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Date: Tue, 30 May 2023 17:00:35 +0200
Message-Id: <20230530150035.1943669-13-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Subject: [Intel-wired-lan] [PATCH net-next v3 12/12] iavf: switch queue stats to libie
Cc: Paul Menzel, Jesper Dangaard Brouer, Larysa Zaremba, netdev@vger.kernel.org, Ilias Apalodimas, linux-kernel@vger.kernel.org, Michal Kubiak, intel-wired-lan@lists.osuosl.org, Christoph Hellwig, Magnus Karlsson

iavf is pretty much ready for using the generic libie stats, so drop
all the custom code and just use the generic definitions. The only
thing missing is a counter of Tx queue stops: it's present in the
other drivers, so add it here as well. The rest is straightforward.

Two fields in the Tx stats struct didn't belong there: one has never
been used, so wipe it; the other (the previous-packet counter used for
stall detection) is moved to the queue structure. Also move around a
couple of fields in &iavf_ring to account for the stats structs'
alignment.
Signed-off-by: Alexander Lobakin
---
 .../net/ethernet/intel/iavf/iavf_ethtool.c    | 87 ++++------------
 drivers/net/ethernet/intel/iavf/iavf_main.c   |  2 +
 drivers/net/ethernet/intel/iavf/iavf_txrx.c   | 98 ++++++++++---------
 drivers/net/ethernet/intel/iavf/iavf_txrx.h   | 47 +++------
 4 files changed, 87 insertions(+), 147 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index de3050c02b6f..0dcf50d75f86 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -46,16 +46,6 @@ struct iavf_stats {
 	.stat_offset = offsetof(_type, _stat) \
 }
 
-/* Helper macro for defining some statistics related to queues */
-#define IAVF_QUEUE_STAT(_name, _stat) \
-	IAVF_STAT(struct iavf_ring, _name, _stat)
-
-/* Stats associated with a Tx or Rx ring */
-static const struct iavf_stats iavf_gstrings_queue_stats[] = {
-	IAVF_QUEUE_STAT("%s-%u.packets", stats.packets),
-	IAVF_QUEUE_STAT("%s-%u.bytes", stats.bytes),
-};
-
 /**
  * iavf_add_one_ethtool_stat - copy the stat into the supplied buffer
  * @data: location to store the stat value
@@ -141,43 +131,6 @@ __iavf_add_ethtool_stats(u64 **data, void *pointer,
 #define iavf_add_ethtool_stats(data, pointer, stats) \
 	__iavf_add_ethtool_stats(data, pointer, stats, ARRAY_SIZE(stats))
 
-/**
- * iavf_add_queue_stats - copy queue statistics into supplied buffer
- * @data: ethtool stats buffer
- * @ring: the ring to copy
- *
- * Queue statistics must be copied while protected by
- * u64_stats_fetch_begin, so we can't directly use iavf_add_ethtool_stats.
- * Assumes that queue stats are defined in iavf_gstrings_queue_stats. If the
- * ring pointer is null, zero out the queue stat values and update the data
- * pointer. Otherwise safely copy the stats from the ring into the supplied
- * buffer and update the data pointer when finished.
- *
- * This function expects to be called while under rcu_read_lock().
- **/
-static void
-iavf_add_queue_stats(u64 **data, struct iavf_ring *ring)
-{
-	const unsigned int size = ARRAY_SIZE(iavf_gstrings_queue_stats);
-	const struct iavf_stats *stats = iavf_gstrings_queue_stats;
-	unsigned int start;
-	unsigned int i;
-
-	/* To avoid invalid statistics values, ensure that we keep retrying
-	 * the copy until we get a consistent value according to
-	 * u64_stats_fetch_retry. But first, make sure our ring is
-	 * non-null before attempting to access its syncp.
-	 */
-	do {
-		start = !ring ? 0 : u64_stats_fetch_begin(&ring->syncp);
-		for (i = 0; i < size; i++)
-			iavf_add_one_ethtool_stat(&(*data)[i], ring, &stats[i]);
-	} while (ring && u64_stats_fetch_retry(&ring->syncp, start));
-
-	/* Once we successfully copy the stats in, update the data pointer */
-	*data += size;
-}
-
 /**
  * __iavf_add_stat_strings - copy stat strings into ethtool buffer
  * @p: ethtool supplied buffer
@@ -237,8 +190,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = {
 
 #define IAVF_STATS_LEN	ARRAY_SIZE(iavf_gstrings_stats)
 
-#define IAVF_QUEUE_STATS_LEN	ARRAY_SIZE(iavf_gstrings_queue_stats)
-
 /**
  * iavf_get_link_ksettings - Get Link Speed and Duplex settings
  * @netdev: network interface device structure
@@ -308,18 +259,22 @@
 **/
 static int iavf_get_sset_count(struct net_device *netdev, int sset)
 {
-	/* Report the maximum number queues, even if not every queue is
-	 * currently configured. Since allocation of queues is in pairs,
-	 * use netdev->real_num_tx_queues * 2. The real_num_tx_queues is set
-	 * at device creation and never changes.
-	 */
+	u32 num;
 
-	if (sset == ETH_SS_STATS)
-		return IAVF_STATS_LEN +
-			(IAVF_QUEUE_STATS_LEN * 2 *
-			 netdev->real_num_tx_queues);
-	else
+	switch (sset) {
+	case ETH_SS_STATS:
+		/* Per-queue */
+		num = libie_rq_stats_get_sset_count();
+		num += libie_sq_stats_get_sset_count();
+		num *= netdev->real_num_tx_queues;
+
+		/* Global */
+		num += IAVF_STATS_LEN;
+
+		return num;
+	default:
 		return -EINVAL;
+	}
 }
 
 /**
@@ -346,15 +301,15 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
 	 * it to iterate over rings' stats.
 	 */
 	for (i = 0; i < adapter->num_active_queues; i++) {
-		struct iavf_ring *ring;
+		const struct iavf_ring *ring;
 
 		/* Tx rings stats */
-		ring = &adapter->tx_rings[i];
-		iavf_add_queue_stats(&data, ring);
+		libie_sq_stats_get_data(&data, &adapter->tx_rings[i].sq_stats);
 
 		/* Rx rings stats */
 		ring = &adapter->rx_rings[i];
-		iavf_add_queue_stats(&data, ring);
+		libie_rq_stats_get_data(&data, &ring->rq_stats,
+					ring->rx_pages ? ring->pool : NULL);
 	}
 	rcu_read_unlock();
 }
@@ -376,10 +331,8 @@ static void iavf_get_stat_strings(struct net_device *netdev, u8 *data)
 	 * real_num_tx_queues for both Tx and Rx queues.
 	 */
 	for (i = 0; i < netdev->real_num_tx_queues; i++) {
-		iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
-				      "tx", i);
-		iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
-				      "rx", i);
+		libie_sq_stats_get_strings(&data, i);
+		libie_rq_stats_get_strings(&data, i);
 	}
 }
 
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 120bb6a09ceb..4a702795ddb3 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1581,6 +1581,7 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter)
 		tx_ring->itr_setting = IAVF_ITR_TX_DEF;
 		if (adapter->flags & IAVF_FLAG_WB_ON_ITR_CAPABLE)
 			tx_ring->flags |= IAVF_TXR_FLAGS_WB_ON_ITR;
+		u64_stats_init(&tx_ring->sq_stats.syncp);
 
 		rx_ring = &adapter->rx_rings[i];
 		rx_ring->queue_index = i;
@@ -1588,6 +1589,7 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter)
 		rx_ring->dev = &adapter->pdev->dev;
 		rx_ring->count = adapter->rx_desc_count;
 		rx_ring->itr_setting = IAVF_ITR_RX_DEF;
+		u64_stats_init(&rx_ring->rq_stats.syncp);
 	}
 
 	adapter->num_active_queues = num_active_queues;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 1de67a70f045..12308e7c9ec0 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -158,6 +158,9 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 	for (i = 0; i < vsi->back->num_active_queues; i++) {
 		tx_ring = &vsi->back->tx_rings[i];
 		if (tx_ring && tx_ring->desc) {
+			const struct libie_sq_stats *st = &tx_ring->sq_stats;
+			u32 start;
+
 			/* If packet counter has not changed the queue is
 			 * likely stalled, so force an interrupt for this
 			 * queue.
@@ -165,8 +168,13 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 			 * prev_pkt_ctr would be negative if there was no
 			 * pending work.
			 */
-			packets = tx_ring->stats.packets & INT_MAX;
-			if (tx_ring->tx_stats.prev_pkt_ctr == packets) {
+			do {
+				start = u64_stats_fetch_begin(&st->syncp);
+				packets = u64_stats_read(&st->packets) &
+					  INT_MAX;
+			} while (u64_stats_fetch_retry(&st->syncp, start));
+
+			if (tx_ring->prev_pkt_ctr == packets) {
 				iavf_force_wb(vsi, tx_ring->q_vector);
 				continue;
 			}
@@ -175,7 +183,7 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 			 * to iavf_get_tx_pending()
 			 */
 			smp_rmb();
-			tx_ring->tx_stats.prev_pkt_ctr =
+			tx_ring->prev_pkt_ctr =
 				iavf_get_tx_pending(tx_ring, true) ? packets : -1;
 		}
 	}
@@ -194,10 +202,10 @@
 static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 			      struct iavf_ring *tx_ring, int napi_budget)
 {
+	struct libie_sq_onstack_stats stats = { };
 	int i = tx_ring->next_to_clean;
 	struct iavf_tx_buffer *tx_buf;
 	struct iavf_tx_desc *tx_desc;
-	unsigned int total_bytes = 0, total_packets = 0;
 	unsigned int budget = IAVF_DEFAULT_IRQ_WORK;
 
 	tx_buf = &tx_ring->tx_bi[i];
@@ -224,8 +232,8 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		tx_buf->next_to_watch = NULL;
 
 		/* update the statistics for this packet */
-		total_bytes += tx_buf->bytecount;
-		total_packets += tx_buf->gso_segs;
+		stats.bytes += tx_buf->bytecount;
+		stats.packets += tx_buf->gso_segs;
 
 		/* free the skb */
 		napi_consume_skb(tx_buf->skb, napi_budget);
@@ -282,12 +290,9 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		i += tx_ring->count;
 	tx_ring->next_to_clean = i;
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->stats.bytes += total_bytes;
-	tx_ring->stats.packets += total_packets;
-	u64_stats_update_end(&tx_ring->syncp);
-	tx_ring->q_vector->tx.total_bytes += total_bytes;
-	tx_ring->q_vector->tx.total_packets += total_packets;
+	libie_sq_napi_stats_add(&tx_ring->sq_stats, &stats);
+	tx_ring->q_vector->tx.total_bytes += stats.bytes;
+	tx_ring->q_vector->tx.total_packets += stats.packets;
 
 	if (tx_ring->flags & IAVF_TXR_FLAGS_WB_ON_ITR) {
 		/* check to see if there are < 4 descriptors
@@ -306,10 +311,10 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 
 	/* notify netdev of completed buffers */
 	netdev_tx_completed_queue(txring_txq(tx_ring),
-				  total_packets, total_bytes);
+				  stats.packets, stats.bytes);
 
 #define TX_WAKE_THRESHOLD ((s16)(DESC_NEEDED * 2))
-	if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+	if (unlikely(stats.packets && netif_carrier_ok(tx_ring->netdev) &&
 		     (IAVF_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))) {
 		/* Make sure that anybody stopping the queue after this
 		 * sees the new next_to_clean.
@@ -320,7 +325,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		    !test_bit(__IAVF_VSI_DOWN, vsi->state)) {
 			netif_wake_subqueue(tx_ring->netdev,
 					    tx_ring->queue_index);
-			++tx_ring->tx_stats.restart_queue;
+			libie_stats_inc_one(&tx_ring->sq_stats, restarts);
 		}
 	}
 
@@ -675,7 +680,7 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
 
 	tx_ring->next_to_use = 0;
 	tx_ring->next_to_clean = 0;
-	tx_ring->tx_stats.prev_pkt_ctr = -1;
+	tx_ring->prev_pkt_ctr = -1;
 
 	return 0;
 
 err:
@@ -731,7 +736,7 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
 	kfree(rx_ring->rx_pages);
 	rx_ring->rx_pages = NULL;
 
-	page_pool_destroy(rx_ring->pool);
+	libie_rx_page_pool_destroy(rx_ring->pool, &rx_ring->rq_stats);
 	rx_ring->dev = dev;
 
 	if (rx_ring->desc) {
@@ -760,8 +765,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 	if (!rx_ring->rx_pages)
 		return ret;
 
-	u64_stats_init(&rx_ring->syncp);
-
 	/* Round up to nearest 4K */
 	rx_ring->size = rx_ring->count * sizeof(union iavf_32byte_rx_desc);
 	rx_ring->size = ALIGN(rx_ring->size, 4096);
@@ -863,10 +866,8 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
 		dma_addr_t dma;
 
 		page = page_pool_alloc_pages(pool, gfp);
-		if (!page) {
-			rx_ring->rx_stats.alloc_page_failed++;
+		if (!page)
 			break;
-		}
 
 		rx_ring->rx_pages[ntu] = page;
 		dma = page_pool_get_dma_addr(page);
@@ -1090,25 +1091,23 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
 
 /**
  * iavf_is_non_eop - process handling of non-EOP buffers
- * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
+ * @stats: NAPI poll local stats to update
  *
  * This function updates next to clean. If the buffer is an EOP buffer
  * this function exits returning false, otherwise it will place the
 * sk_buff in the next buffer to be chained and return true indicating
 * that this is in fact a non-EOP buffer.
 **/
-static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
-			    union iavf_rx_desc *rx_desc,
-			    struct sk_buff *skb)
+static bool iavf_is_non_eop(union iavf_rx_desc *rx_desc,
+			    struct libie_rq_onstack_stats *stats)
 {
 	/* if we are the last buffer then there is nothing else to do */
 #define IAVF_RXD_EOF BIT(IAVF_RX_DESC_STATUS_EOF_SHIFT)
 	if (likely(iavf_test_staterr(rx_desc, IAVF_RXD_EOF)))
 		return false;
 
-	rx_ring->rx_stats.non_eop_descs++;
+	stats->fragments++;
 
 	return true;
 }
@@ -1127,8 +1126,8 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
 **/
 static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
+	struct libie_rq_onstack_stats stats = { };
 	u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
 	struct page_pool *pool = rx_ring->pool;
 	struct sk_buff *skb = rx_ring->skb;
@@ -1145,9 +1144,13 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		u64 qword;
 
 		/* return some buffers to hardware, one at a time is too slow */
-		if (to_refill >= IAVF_RX_BUFFER_WRITE)
+		if (to_refill >= IAVF_RX_BUFFER_WRITE) {
 			to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill,
 							  gfp);
+			if (unlikely(to_refill))
+				libie_stats_inc_one(&rx_ring->rq_stats,
+						    alloc_page_fail);
+		}
 
 		rx_desc = IAVF_RX_DESC(rx_ring, ntc);
 
@@ -1195,7 +1198,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			page_pool_put_page(pool, page, size, true);
-			rx_ring->rx_stats.alloc_buff_failed++;
+			libie_stats_inc_one(&rx_ring->rq_stats,
+					    build_skb_fail);
 			break;
 		}
 
@@ -1207,7 +1211,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 
 		prefetch(IAVF_RX_DESC(rx_ring, ntc));
 
-		if (iavf_is_non_eop(rx_ring, rx_desc, skb))
+		if (iavf_is_non_eop(rx_desc, &stats))
 			continue;
 
 		/* ERR_MASK will only have valid bits if EOP set, and
@@ -1227,7 +1231,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		}
 
 		/* probably a little skewed due to removing CRC */
-		total_rx_bytes += skb->len;
+		stats.bytes += skb->len;
 
 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
 		rx_ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >>
@@ -1249,7 +1253,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		skb = NULL;
 
 		/* update budget accounting */
-		total_rx_packets++;
+		stats.packets++;
 	}
 
 	rx_ring->next_to_clean = ntc;
@@ -1260,16 +1264,16 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		/* guarantee a trip back through this routine if there was
 		 * a failure
 		 */
-		if (unlikely(to_refill))
+		if (unlikely(to_refill)) {
+			libie_stats_inc_one(&rx_ring->rq_stats,
+					    alloc_page_fail);
 			cleaned_count = budget;
+		}
 	}
 
-	u64_stats_update_begin(&rx_ring->syncp);
-	rx_ring->stats.packets += total_rx_packets;
-	rx_ring->stats.bytes += total_rx_bytes;
-	u64_stats_update_end(&rx_ring->syncp);
-	rx_ring->q_vector->rx.total_packets += total_rx_packets;
-	rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+	libie_rq_napi_stats_add(&rx_ring->rq_stats, &stats);
+	rx_ring->q_vector->rx.total_packets += stats.packets;
+	rx_ring->q_vector->rx.total_bytes += stats.bytes;
 
 	return cleaned_count;
 }
@@ -1448,10 +1452,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget)
 		return budget - 1;
 	}
 tx_only:
-	if (arm_wb) {
-		q_vector->tx.ring[0].tx_stats.tx_force_wb++;
+	if (arm_wb)
 		iavf_enable_wb_on_itr(vsi, q_vector);
-	}
 	return budget;
 }
 
@@ -1910,6 +1912,7 @@ bool __iavf_chk_linearize(struct sk_buff *skb)
 int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
 {
 	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	libie_stats_inc_one(&tx_ring->sq_stats, stops);
 
 	/* Memory barrier before checking head and tail */
 	smp_mb();
@@ -1919,7 +1922,8 @@ int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
 
 	/* A reprieve! - use start_queue because it doesn't call schedule */
 	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
-	++tx_ring->tx_stats.restart_queue;
+	libie_stats_inc_one(&tx_ring->sq_stats, restarts);
+
 	return 0;
 }
 
@@ -2100,7 +2104,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
 			return NETDEV_TX_OK;
 		}
 		count = iavf_txd_use_count(skb->len);
-		tx_ring->tx_stats.tx_linearize++;
+		libie_stats_inc_one(&tx_ring->sq_stats, linearized);
 	}
 
 	/* need: 1 descriptor per page * PAGE_SIZE/IAVF_MAX_DATA_PER_TXD,
@@ -2110,7 +2114,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
 	 * otherwise try next time
 	 */
 	if (iavf_maybe_stop_tx(tx_ring, count + 4 + 1)) {
-		tx_ring->tx_stats.tx_busy++;
+		libie_stats_inc_one(&tx_ring->sq_stats, busy);
 		return NETDEV_TX_BUSY;
 	}
 
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 8fbe549ce6a5..64c93d6fa54d 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -4,6 +4,8 @@
 #ifndef _IAVF_TXRX_H_
 #define _IAVF_TXRX_H_
 
+#include <linux/net/intel/libie/stats.h>
+
 /* Interrupt Throttling and Rate Limiting Goodies */
 
 #define IAVF_DEFAULT_IRQ_WORK      256
@@ -201,27 +203,6 @@ struct iavf_tx_buffer {
 	u32 tx_flags;
 };
 
-struct iavf_queue_stats {
-	u64 packets;
-	u64 bytes;
-};
-
-struct iavf_tx_queue_stats {
-	u64 restart_queue;
-	u64 tx_busy;
-	u64 tx_done_old;
-	u64 tx_linearize;
-	u64 tx_force_wb;
-	int prev_pkt_ctr;
-	u64 tx_lost_interrupt;
-};
-
-struct iavf_rx_queue_stats {
-	u64 non_eop_descs;
-	u64 alloc_page_failed;
-	u64 alloc_buff_failed;
-};
-
 /* some useful defines for virtchannel interface, which
  * is the only remaining user of header split
  */
@@ -272,21 +253,9 @@ struct iavf_ring {
 #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2	BIT(4)
 #define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2	BIT(5)
 
-	/* stats structs */
-	struct iavf_queue_stats stats;
-	struct u64_stats_sync syncp;
-	union {
-		struct iavf_tx_queue_stats tx_stats;
-		struct iavf_rx_queue_stats rx_stats;
-	};
-
-	unsigned int size;		/* length of descriptor ring in bytes */
-	dma_addr_t dma;			/* physical address of ring */
-
 	struct iavf_vsi *vsi;		/* Backreference to associated VSI */
 	struct iavf_q_vector *q_vector;	/* Backreference to associated vector */
 
-	struct rcu_head rcu;		/* to avoid race on free */
 	struct sk_buff *skb;		/* When iavf_clean_rx_ring_irq() must
 					 * return before it sees the EOP for
 					 * the current packet, we save that skb
@@ -295,6 +264,18 @@ struct iavf_ring {
 					 * iavf_clean_rx_ring_irq() is called
 					 * for this ring.
 					 */
+
+	/* stats structs */
+	union {
+		struct libie_sq_stats sq_stats;
+		struct libie_rq_stats rq_stats;
+	};
+
+	int prev_pkt_ctr;		/* For stall detection */
+	unsigned int size;		/* length of descriptor ring in bytes */
+	dma_addr_t dma;			/* physical address of ring */
+
+	struct rcu_head rcu;		/* to avoid race on free */
 } ____cacheline_internodealigned_in_smp;
 
 #define IAVF_ITR_ADAPTIVE_MIN_INC	0x0002