From patchwork Tue Aug 6 13:12:32 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1969553
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: "Jose E. Marchesi", Przemek Kitszel, Joshua Hay,
    linux-kernel@vger.kernel.org, Alexander Lobakin, Eric Dumazet,
    Michal Kubiak, Tony Nguyen, nex.sw.ncis.osdt.itp.upstreaming@intel.com,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"
Date: Tue, 6 Aug 2024 15:12:32 +0200
Message-ID: <20240806131240.800259-2-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 1/9] unroll: add generic loop unroll helpers

There are cases when we need to explicitly unroll loops: for example,
cache operations or filling DMA descriptors at very high speeds.
Add compiler-specific attribute macros to give the compiler a hint
that we'd like to unroll a loop.

Example usage:

	#define UNROLL_BATCH 8

	unrolled_count(UNROLL_BATCH)
	for (u32 i = 0; i < UNROLL_BATCH; i++)
		op(priv, i);

Note that the compilers sometimes won't unroll loops if they think this
would result in worse optimization and performance than without
unrolling, and that the unroll attributes are available only starting
with GCC 8. For older compiler versions, no hints/attributes are
applied.
For better unrolling/parallelization, don't have any variables that
interfere between iterations except for the iterator itself.

Co-developed-by: Jose E. Marchesi # pragmas
Signed-off-by: Jose E. Marchesi
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 include/linux/unroll.h | 50 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
 create mode 100644 include/linux/unroll.h

diff --git a/include/linux/unroll.h b/include/linux/unroll.h
new file mode 100644
index 000000000000..e305d155faa6
--- /dev/null
+++ b/include/linux/unroll.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef _LINUX_UNROLL_H
+#define _LINUX_UNROLL_H
+
+#ifdef CONFIG_CC_IS_CLANG
+#define __pick_unrolled(x, y)		_Pragma(#x)
+#elif CONFIG_GCC_VERSION >= 80000
+#define __pick_unrolled(x, y)		_Pragma(#y)
+#else
+#define __pick_unrolled(x, y)		/* not supported */
+#endif
+
+/**
+ * unrolled - loop attributes to ask the compiler to unroll it
+ *
+ * Usage:
+ *
+ *	#define BATCH 4
+ *
+ *	unrolled_count(BATCH)
+ *	for (u32 i = 0; i < BATCH; i++)
+ *		// loop body without cross-iteration dependencies
+ *
+ * This is only a hint and the compiler is free to disable unrolling if it
+ * thinks the count is suboptimal and may hurt performance and/or hugely
+ * increase object code size.
+ * Not having any cross-iteration dependencies (i.e. when iter x + 1 depends
+ * on what iter x will do with variables) is not a strict requirement, but
+ * provides best performance and object code size.
+ * Available only on Clang and GCC 8.x onwards.
+ */
+
+/* Ask the compiler to pick an optimal unroll count, Clang only */
+#define unrolled \
+	__pick_unrolled(clang loop unroll(enable), /* nothing */)
+
+/* Unroll each @n iterations of a loop */
+#define unrolled_count(n) \
+	__pick_unrolled(clang loop unroll_count(n), GCC unroll n)
+
+/* Unroll the whole loop */
+#define unrolled_full \
+	__pick_unrolled(clang loop unroll(full), GCC unroll 65534)
+
+/* Never unroll a loop */
+#define unrolled_none \
+	__pick_unrolled(clang loop unroll(disable), GCC unroll 1)
+
+#endif /* _LINUX_UNROLL_H */
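
For illustration, a minimal sketch of the intended pattern (the
descriptor layout and the fill function are hypothetical; only
unrolled_count() comes from this patch):

	#include <linux/unroll.h>

	/* Hypothetical 8-byte Rx descriptor */
	struct rx_desc {
		__le64	addr;
	};

	#define RX_FILL_BATCH	8

	/* Each iteration touches only descs[i]/dma[i], i.e. there are no
	 * cross-iteration dependencies, so the compiler is free to emit
	 * RX_FILL_BATCH independent stores.
	 */
	static void rx_fill_batch(struct rx_desc *descs,
				  const dma_addr_t *dma)
	{
		unrolled_count(RX_FILL_BATCH)
		for (u32 i = 0; i < RX_FILL_BATCH; i++)
			descs[i].addr = cpu_to_le64(dma[i]);
	}
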
From patchwork Tue Aug 6 13:12:33 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1969555
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: Przemek Kitszel, Joshua Hay, linux-kernel@vger.kernel.org,
    Alexander Lobakin, Eric Dumazet, Michal Kubiak, Tony Nguyen,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, Jakub Kicinski,
    netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"
Date: Tue, 6 Aug 2024 15:12:33 +0200
Message-ID: <20240806131240.800259-3-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 2/9] libeth: add common queue stats

Define common structures, inline helpers and Ethtool helpers to
collect, update and export the statistics (RQ, SQ, XDPSQ). Use
u64_stats_t right from the start, as well as the corresponding
helpers, to ensure tear-free operations. For the NAPI parts of both
Rx and Tx, also define small onstack containers to update them in
polling loops and then sync the actual containers once a loop ends.
In order to implement fully generic Netlink per-queue stats callbacks,
&libeth_netdev_priv is introduced; it is required to be embedded at
the start of the driver's netdev_priv structure.

Note on the stats types:

* onstack stats are u32 and are local to one NAPI loop or sending
  function. At the end of processing, they get added to the "live"
  stats below;
* "live" stats are u64_stats_t and reflect the current session
  (interface up) only (Netlink queue stats). When an ifdown occurs,
  they get added to the "base" stats below;
* "base" stats are u64 guarded by a mutex; they survive ifdowns and
  don't get updated while the interface is up. This corresponds to
  the Netlink base stats.

Drivers are responsible for filling the onstack stats and calling the
stack -> live update functions; the base stats are internal to libeth.
See the sketch below for the intended flow.
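
A minimal sketch of the onstack -> live flow (the ring structure and
the clean-one helper are hypothetical; the libeth_rq_* entities come
from this patch):

	#include <net/libeth/stats.h>

	/* Hypothetical Rx ring with the embedded "live" stats container */
	struct my_rq {
		struct libeth_rq_stats	stats;
		/* ... descriptors, buffers etc. ... */
	};

	static int my_rq_poll(struct my_rq *rq, int budget)
	{
		struct libeth_rq_napi_stats ss = { };
		int done;

		for (done = 0; done < budget; done++) {
			u32 len = my_rq_clean_one(rq);	/* hypothetical */

			if (!len)
				break;

			/* plain u32 increments only in the hotpath */
			ss.packets++;
			ss.bytes += len;
		}

		/* one tear-free sync of the whole "napi" group at the end */
		libeth_rq_napi_stats_add(&rq->stats, &ss);

		return done;
	}
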
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libeth/Makefile |   4 +-
 include/net/libeth/types.h                 | 247 ++++++++++++++
 drivers/net/ethernet/intel/libeth/priv.h   |  21 ++
 include/net/libeth/netdev.h                |  31 ++
 include/net/libeth/stats.h                 | 141 ++++++++
 drivers/net/ethernet/intel/libeth/netdev.c | 157 +++++++++
 drivers/net/ethernet/intel/libeth/rx.c     |   5 -
 drivers/net/ethernet/intel/libeth/stats.c  | 360 +++++++++++++++++++++
 8 files changed, 960 insertions(+), 6 deletions(-)
 create mode 100644 include/net/libeth/types.h
 create mode 100644 drivers/net/ethernet/intel/libeth/priv.h
 create mode 100644 include/net/libeth/netdev.h
 create mode 100644 include/net/libeth/stats.h
 create mode 100644 drivers/net/ethernet/intel/libeth/netdev.c
 create mode 100644 drivers/net/ethernet/intel/libeth/stats.c

diff --git a/drivers/net/ethernet/intel/libeth/Makefile b/drivers/net/ethernet/intel/libeth/Makefile
index 52492b081132..b30a2804554f 100644
--- a/drivers/net/ethernet/intel/libeth/Makefile
+++ b/drivers/net/ethernet/intel/libeth/Makefile
@@ -3,4 +3,6 @@
 
 obj-$(CONFIG_LIBETH) += libeth.o
 
-libeth-y := rx.o
+libeth-y += netdev.o
+libeth-y += rx.o
+libeth-y += stats.o

diff --git a/include/net/libeth/types.h b/include/net/libeth/types.h
new file mode 100644
index 000000000000..d7adc637408b
--- /dev/null
+++ b/include/net/libeth/types.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_TYPES_H
+#define __LIBETH_TYPES_H
+
+#include <linux/u64_stats_sync.h>
+
+/**
+ * struct libeth_netdev_priv - libeth netdev private structure
+ * @curr_xdpsqs: current number of XDPSQs in use
+ * @max_xdpsqs: maximum number of XDPSQs this netdev has
+ * @last_rqs: number of RQs last time Ethtool stats were requested
+ * @last_sqs: number of SQs last time Ethtool stats were requested
+ * @last_xdpsqs: number of XDPSQs last time Ethtool stats were requested
+ * @base_rqs: per-queue RQ stats containers with the netdev lifetime
+ * @base_sqs: per-queue SQ stats containers with the netdev lifetime
+ * @base_xdpsqs: per-queue XDPSQ stats containers with the netdev lifetime
+ * @live_rqs: pointers to the current driver's embedded RQ stats
+ * @live_sqs: pointers to the current driver's embedded SQ stats
+ * @live_xdpsqs: pointers to the current driver's embedded XDPSQ stats
+ *
+ * The structure must be placed strictly at the beginning of the driver's
+ * netdev private structure if it uses libeth generic stats, as libeth uses
+ * netdev_priv() to access it. The structure is private to libeth and
+ * shouldn't be accessed from drivers directly.
+ */
+struct libeth_netdev_priv {
+	u32				curr_xdpsqs;
+	u32				max_xdpsqs;
+
+	u16				last_rqs;
+	u16				last_sqs;
+	u16				last_xdpsqs;
+
+	struct libeth_rq_base_stats	*base_rqs;
+	struct libeth_sq_base_stats	*base_sqs;
+	struct libeth_xdpsq_base_stats	*base_xdpsqs;
+
+	const struct libeth_rq_stats	**live_rqs;
+	const struct libeth_sq_stats	**live_sqs;
+	const struct libeth_xdpsq_stats	**live_xdpsqs;
+
+	/* Driver private data, ____cacheline_aligned */
+} ____cacheline_aligned;
+
+/**
+ * libeth_netdev_priv_assert - assert the layout of driver's netdev priv struct
+ * @t: typeof() of driver's netdev private structure
+ * @f: name of the embedded &libeth_netdev_priv inside @t
+ *
+ * Make sure &libeth_netdev_priv is placed strictly at the beginning of
+ * driver's private structure, so that libeth can use netdev_priv() to
+ * access it.
+ * To be called right after driver's netdev private struct declaration.
+ */
+#define libeth_netdev_priv_assert(t, f)					     \
+	static_assert(__same_type(struct libeth_netdev_priv,		     \
+				  typeof_member(t, f)) && !offsetof(t, f))
+
+/* Stats. '[NL]' means it's exported to the Netlink per-queue stats */
+
+/* Use 32-byte alignment to reduce false sharing. The first ~4 fields usually
+ * are the hottest and the stats update helpers are unrolled by this count.
+ */
+#define __libeth_stats_aligned						     \
+	__aligned(__cmp(min, 4 * sizeof(u64_stats_t), SMP_CACHE_BYTES))
+
+/* Align queue stats counters naturally in case they aren't */
+#define __libeth_u64_stats_t						     \
+	u64_stats_t __aligned(sizeof(u64_stats_t))
+
+#define ___live(s)			__libeth_u64_stats_t s;
+
+/* Rx per-queue stats:
+ *
+ * napi: "hot" counters, updated in bulks from NAPI polling loops:
+ * bytes: bytes received on this queue [NL]
+ * packets: packets received on this queue [NL]
+ * fragments: number of processed descriptors carrying only a fragment
+ * csum_unnecessary: number of frames the device checked the checksum for [NL]
+ * hsplit: number of frames the device performed the header split for
+ * hsplit_linear: number of frames placed entirely to the header buffer
+ * hw_gro_packets: number of frames the device did HW GRO for [NL]
+ * hw_gro_bytes: bytes for all HW GROed frames [NL]
+ *
+ * fail: "slow"/error counters, incremented by one when occurred:
+ * alloc_fail: number of FQE (Rx buffer) allocation fails [NL]
+ * dma_errs: number of hardware Rx DMA errors
+ * csum_none: number of frames the device didn't check the checksum for [NL]
+ * csum_bad: number of frames with invalid checksum [NL]
+ * hsplit_errs: number of header split errors (header buffer overflows etc.)
+ * build_fail: number of napi_build_skb() fails
+ *
+ * &libeth_rq_stats must be embedded into the corresponding queue structure.
+ */
+
+#define LIBETH_DECLARE_RQ_NAPI_STATS(act)	\
+	act(bytes)				\
+	act(packets)				\
+	act(fragments)				\
+	act(csum_unnecessary)			\
+	act(hsplit)				\
+	act(hsplit_linear)			\
+	act(hw_gro_packets)			\
+	act(hw_gro_bytes)
+
+#define LIBETH_DECLARE_RQ_FAIL_STATS(act)	\
+	act(alloc_fail)				\
+	act(dma_errs)				\
+	act(csum_none)				\
+	act(csum_bad)				\
+	act(hsplit_errs)			\
+	act(build_fail)
+
+#define LIBETH_DECLARE_RQ_STATS(act)		\
+	LIBETH_DECLARE_RQ_NAPI_STATS(act)	\
+	LIBETH_DECLARE_RQ_FAIL_STATS(act)
+
+struct libeth_rq_stats {
+	struct u64_stats_sync		syncp;
+
+	union {
+		struct {
+			struct_group(napi,
+				LIBETH_DECLARE_RQ_NAPI_STATS(___live);
+			);
+			LIBETH_DECLARE_RQ_FAIL_STATS(___live);
+		};
+		DECLARE_FLEX_ARRAY(__libeth_u64_stats_t, raw);
+	};
+} __libeth_stats_aligned;
+
+/* Tx per-queue stats:
+ *
+ * napi: "hot" counters, updated in bulks from NAPI polling loops:
+ * bytes: bytes sent from this queue [NL]
+ * packets: packets sent from this queue [NL]
+ *
+ * xmit: "hot" counters, updated in bulks from ::ndo_start_xmit():
+ * fragments: number of descriptors carrying only a fragment
+ * csum_none: number of frames sent w/o checksum offload [NL]
+ * needs_csum: number of frames sent with checksum offload [NL]
+ * hw_gso_packets: number of frames sent with segmentation offload [NL]
+ * tso: number of frames sent with TCP segmentation offload
+ * uso: number of frames sent with UDP L4 segmentation offload
+ * hw_gso_bytes: total bytes for HW GSOed frames [NL]
+ *
+ * fail: "slow"/error counters, incremented by one when occurred:
+ * linearized: number of non-linear skbs linearized due to HW limits
+ * dma_map_errs: number of DMA mapping errors
+ * drops: number of skbs dropped by ::ndo_start_xmit() [NL]
+ * busy: number of xmit failures due to the queue being full
+ * stop: number of times the queue was stopped by the driver [NL]
+ * wake: number of times the queue was started after being stopped [NL]
+ *
+ * &libeth_sq_stats must be embedded into the corresponding queue structure.
+ */
+
+#define LIBETH_DECLARE_SQ_NAPI_STATS(act)	\
+	act(bytes)				\
+	act(packets)
+
+#define LIBETH_DECLARE_SQ_XMIT_STATS(act)	\
+	act(fragments)				\
+	act(csum_none)				\
+	act(needs_csum)				\
+	act(hw_gso_packets)			\
+	act(tso)				\
+	act(uso)				\
+	act(hw_gso_bytes)
+
+#define LIBETH_DECLARE_SQ_FAIL_STATS(act)	\
+	act(linearized)				\
+	act(dma_map_errs)			\
+	act(drops)				\
+	act(busy)				\
+	act(stop)				\
+	act(wake)
+
+#define LIBETH_DECLARE_SQ_STATS(act)		\
+	LIBETH_DECLARE_SQ_NAPI_STATS(act)	\
+	LIBETH_DECLARE_SQ_XMIT_STATS(act)	\
+	LIBETH_DECLARE_SQ_FAIL_STATS(act)
+
+struct libeth_sq_stats {
+	struct u64_stats_sync		syncp;
+
+	union {
+		struct {
+			struct_group(napi,
+				LIBETH_DECLARE_SQ_NAPI_STATS(___live);
+			);
+			struct_group(xmit,
+				LIBETH_DECLARE_SQ_XMIT_STATS(___live);
+			);
+			LIBETH_DECLARE_SQ_FAIL_STATS(___live);
+		};
+		DECLARE_FLEX_ARRAY(__libeth_u64_stats_t, raw);
+	};
+} __libeth_stats_aligned;
+
+/* XDP Tx per-queue stats:
+ *
+ * napi: "hot" counters, updated in bulks from NAPI polling loops:
+ * bytes: bytes sent from this queue
+ * packets: packets sent from this queue
+ * fragments: number of descriptors carrying only a fragment
+ *
+ * fail: "slow"/error counters, incremented by one when occurred:
+ * dma_map_errs: number of DMA mapping errors
+ * drops: number of frags dropped due to the queue being full
+ * busy: number of xmit failures due to the queue being full
+ *
+ * &libeth_xdpsq_stats must be embedded into the corresponding queue structure.
+ */
+
+#define LIBETH_DECLARE_XDPSQ_NAPI_STATS(act)	\
+	LIBETH_DECLARE_SQ_NAPI_STATS(act)	\
+	act(fragments)
+
+#define LIBETH_DECLARE_XDPSQ_FAIL_STATS(act)	\
+	act(dma_map_errs)			\
+	act(drops)				\
+	act(busy)
+
+#define LIBETH_DECLARE_XDPSQ_STATS(act)		\
+	LIBETH_DECLARE_XDPSQ_NAPI_STATS(act)	\
+	LIBETH_DECLARE_XDPSQ_FAIL_STATS(act)
+
+struct libeth_xdpsq_stats {
+	struct u64_stats_sync		syncp;
+
+	union {
+		struct {
+			struct_group(napi,
+				LIBETH_DECLARE_XDPSQ_NAPI_STATS(___live);
+			);
+			LIBETH_DECLARE_XDPSQ_FAIL_STATS(___live);
+		};
+		DECLARE_FLEX_ARRAY(__libeth_u64_stats_t, raw);
+	};
+} __libeth_stats_aligned;
+
+#undef ___live
+
+#endif /* __LIBETH_TYPES_H */

diff --git a/drivers/net/ethernet/intel/libeth/priv.h b/drivers/net/ethernet/intel/libeth/priv.h
new file mode 100644
index 000000000000..6455aab0311c
--- /dev/null
+++ b/drivers/net/ethernet/intel/libeth/priv.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_PRIV_H
+#define __LIBETH_PRIV_H
+
+#include <linux/types.h>
+
+/* Stats */
+
+struct net_device;
+
+bool libeth_stats_init_priv(struct net_device *dev, u32 rqs, u32 sqs,
+			    u32 xdpsqs);
+void libeth_stats_free_priv(const struct net_device *dev);
+
+int libeth_stats_get_sset_count(struct net_device *dev);
+void libeth_stats_get_strings(struct net_device *dev, u8 *data);
+void libeth_stats_get_data(struct net_device *dev, u64 *data);
+
+#endif /* __LIBETH_PRIV_H */

diff --git a/include/net/libeth/netdev.h b/include/net/libeth/netdev.h
new file mode 100644
index 000000000000..22a07f0b16d7
--- /dev/null
+++ b/include/net/libeth/netdev.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_NETDEV_H
+#define __LIBETH_NETDEV_H
+
+#include <linux/netdevice.h>
+
+struct ethtool_stats;
+
+struct net_device *__libeth_netdev_alloc(u32 priv, u32 rqs, u32 sqs,
+					 u32 xdpsqs);
+void libeth_netdev_free(struct net_device *dev);
+
+int __libeth_set_real_num_queues(struct net_device *dev, u32 rqs, u32 sqs,
+				 u32 xdpsqs);
+
+#define libeth_netdev_alloc(priv, rqs, sqs, ...)			     \
+	__libeth_netdev_alloc(priv, rqs, sqs, (__VA_ARGS__ + 0))
+#define libeth_set_real_num_queues(dev, rqs, sqs, ...)			     \
+	__libeth_set_real_num_queues(dev, rqs, sqs, (__VA_ARGS__ + 0))
+
+/* Ethtool */
+
+int libeth_ethtool_get_sset_count(struct net_device *dev, int sset);
+void libeth_ethtool_get_strings(struct net_device *dev, u32 sset, u8 *data);
+void libeth_ethtool_get_stats(struct net_device *dev,
+			      struct ethtool_stats *stats,
+			      u64 *data);
+
+#endif /* __LIBETH_NETDEV_H */
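
Between the two driver-facing headers above and the stats API below, a
short sketch of the intended embedding (struct my_priv and my_alloc()
are hypothetical; the libeth entities come from this patch):

	#include <net/libeth/netdev.h>
	#include <net/libeth/types.h>

	/* &libeth_netdev_priv must be the very first member; the assert
	 * below catches any violation at build time.
	 */
	struct my_priv {
		struct libeth_netdev_priv	base;
		void __iomem			*hw_addr;
	};
	libeth_netdev_priv_assert(struct my_priv, base);

	static struct net_device *my_alloc(u32 rqs, u32 sqs, u32 xdpsqs)
	{
		/* the last argument may be omitted when there are no XDPSQs */
		return libeth_netdev_alloc(sizeof(struct my_priv), rqs, sqs,
					   xdpsqs);
	}
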
diff --git a/include/net/libeth/stats.h b/include/net/libeth/stats.h
new file mode 100644
index 000000000000..fe7fed543d33
--- /dev/null
+++ b/include/net/libeth/stats.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_STATS_H
+#define __LIBETH_STATS_H
+
+#include <linux/skbuff.h>
+#include <linux/unroll.h>
+
+#include <net/libeth/types.h>
+
+/* Common */
+
+/**
+ * libeth_stats_inc_one - safely increment one stats structure counter
+ * @s: queue stats structure to update (&libeth_rq_stats etc.)
+ * @f: name of the field to increment
+ *
+ * To be used on exception or slow paths -- allocation fails, queue stops etc.
+ */
+#define libeth_stats_inc_one(s, f)					     \
+	__libeth_stats_inc_one(s, f, __UNIQUE_ID(qs_))
+#define __libeth_stats_inc_one(s, f, n) do {				     \
+	typeof(*(s)) *n = (s);						     \
+									     \
+	u64_stats_update_begin(&n->syncp);				     \
+	u64_stats_inc(&n->f);						     \
+	u64_stats_update_end(&n->syncp);				     \
+} while (0)
+
+/**
+ * libeth_stats_add_frags - update the frags counter if needed
+ * @s: onstack stats structure to update (&libeth_rq_napi_stats etc.)
+ * @frags: number of frags processed
+ *
+ * Update the frags counter if @frags > 1, do nothing for non-SG frames.
+ */
+#define libeth_stats_add_frags(s, frags)				     \
+	__libeth_stats_add_frags(s, frags, __UNIQUE_ID(frags_))
+#define __libeth_stats_add_frags(s, frags, uf) do {			     \
+	u32 uf = (frags);						     \
+									     \
+	if (uf > 1)							     \
+		(s)->fragments += uf;					     \
+} while (0)
+
+#define ___libeth_stats_add(qs, ss, group, uq, us, ur) do {		     \
+	typeof(*(qs)) *uq = (qs);					     \
+	u64_stats_t *ur = (typeof(ur))&uq->group;			     \
+	typeof(*(ss)) *us = (ss);					     \
+									     \
+	static_assert(sizeof(uq->group) == sizeof(*us) * 2);		     \
+	u64_stats_update_begin(&uq->syncp);				     \
+									     \
+	unrolled_count(__alignof(*uq) / sizeof(*uq->raw))		     \
+	for (u32 i = 0; i < sizeof(*us) / sizeof(*us->raw); i++)	     \
+		u64_stats_add(&ur[i], us->raw[i]);			     \
+									     \
+	u64_stats_update_end(&uq->syncp);				     \
+} while (0)
+#define __libeth_stats_add(qs, ss, group)				     \
+	___libeth_stats_add(qs, ss, group, __UNIQUE_ID(qs_),		     \
+			    __UNIQUE_ID(ss_), __UNIQUE_ID(raw_))
+
+/* The following barely readable compression block defines the following
+ * entities to be used in drivers:
+ *
+ * &libeth_rq_napi_stats - onstack stats container for RQ NAPI polling
+ * libeth_rq_napi_stats_add() - add RQ onstack stats to the queue container
+ * &libeth_sq_napi_stats - onstack stats container for SQ completion polling
+ * libeth_sq_napi_stats_add() - add SQ onstack stats to the queue container
+ * &libeth_sq_xmit_stats - onstack stats container for ::ndo_start_xmit()
+ * libeth_sq_xmit_stats_add() - add SQ xmit stats to the queue container
+ * &libeth_xdpsq_napi_stats - onstack stats container for XDPSQ polling
+ * libeth_xdpsq_napi_stats_add() - add XDPSQ stats to the queue container
+ *
+ * During the NAPI poll loop or any other hot function, the "hot" counters
+ * get updated on the stack only. Then at the end, the corresponding _add()
+ * is called to quickly add them to the stats container embedded into the
+ * queue structure using __libeth_stats_add().
+ * The onstack counters are of type u32, thus it is assumed that one
+ * polling/sending cycle can't go above ``U32_MAX`` for any of them.
+ */
+
+#define ___stack(s)			u32 s;
+
+#define LIBETH_STATS_DEFINE_STACK(pfx, PFX, type, TYPE)			     \
+struct libeth_##pfx##_##type##_stats {					     \
+	union {								     \
+		struct {						     \
+			LIBETH_DECLARE_##PFX##_##TYPE##_STATS(___stack);     \
+		};							     \
+		DECLARE_FLEX_ARRAY(u32, raw);				     \
+	};								     \
+};									     \
+									     \
+static inline void							     \
+libeth_##pfx##_##type##_stats_add(struct libeth_##pfx##_stats *qs,	     \
+				  const struct libeth_##pfx##_##type##_stats \
+				  *ss)					     \
+{									     \
+	__libeth_stats_add(qs, ss, type);				     \
+}
+
+#define LIBETH_STATS_DECLARE_HELPERS(pfx)				     \
+void libeth_##pfx##_stats_init(const struct net_device *dev,		     \
+			       struct libeth_##pfx##_stats *stats,	     \
+			       u32 qid);				     \
+void libeth_##pfx##_stats_deinit(const struct net_device *dev, u32 qid)
+
+LIBETH_STATS_DEFINE_STACK(rq, RQ, napi, NAPI);
+LIBETH_STATS_DECLARE_HELPERS(rq);
+
+LIBETH_STATS_DEFINE_STACK(sq, SQ, napi, NAPI);
+LIBETH_STATS_DEFINE_STACK(sq, SQ, xmit, XMIT);
+
+/**
+ * libeth_sq_xmit_stats_csum - convert skb csum status to the SQ xmit stats
+ * @ss: onstack SQ xmit stats to increment
+ * @skb: &sk_buff to process stats for
+ *
+ * To be called from ::ndo_start_xmit() to account whether checksum offload
+ * was enabled when sending this @skb.
+ */
+static inline void libeth_sq_xmit_stats_csum(struct libeth_sq_xmit_stats *ss,
+					     const struct sk_buff *skb)
+{
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		ss->needs_csum++;
+	else
+		ss->csum_none++;
+}
+
+LIBETH_STATS_DECLARE_HELPERS(sq);
+
+LIBETH_STATS_DEFINE_STACK(xdpsq, XDPSQ, napi, NAPI);
+LIBETH_STATS_DECLARE_HELPERS(xdpsq);
+
+#undef ___stack
+
+#endif /* __LIBETH_STATS_H */

diff --git a/drivers/net/ethernet/intel/libeth/netdev.c b/drivers/net/ethernet/intel/libeth/netdev.c
new file mode 100644
index 000000000000..6115472b3bb6
--- /dev/null
+++ b/drivers/net/ethernet/intel/libeth/netdev.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 Intel Corporation */
+
+#include <linux/ethtool.h>
+#include <linux/netdevice.h>
+
+#include <net/libeth/netdev.h>
+#include <net/libeth/types.h>
+
+#include "priv.h"
+
+/**
+ * __libeth_netdev_alloc - allocate a &net_device with libeth generic stats
+ * @priv: sizeof() of the private structure with embedded &libeth_netdev_priv
+ * @rqs: maximum number of Rx queues to be used
+ * @sqs: maximum number of Tx queues to be used
+ * @xdpsqs: maximum number of XDP Tx queues to be used
+ *
+ * Allocates a new &net_device and initializes the embedded
+ * &libeth_netdev_priv and the libeth generic stats for it.
+ * Use the non-underscored wrapper in drivers instead.
+ *
+ * Return: new &net_device on success, %NULL on error.
+ */
+struct net_device *__libeth_netdev_alloc(u32 priv, u32 rqs, u32 sqs,
+					 u32 xdpsqs)
+{
+	struct net_device *dev;
+
+	dev = alloc_etherdev_mqs(priv, sqs, rqs);
+	if (!dev)
+		return NULL;
+
+	if (!libeth_stats_init_priv(dev, rqs, sqs, xdpsqs))
+		goto err_netdev;
+
+	return dev;
+
+err_netdev:
+	free_netdev(dev);
+
+	return NULL;
+}
+EXPORT_SYMBOL_NS_GPL(__libeth_netdev_alloc, LIBETH);
+
+/**
+ * libeth_netdev_free - free a &net_device with libeth generic stats
+ * @dev: &net_device to free
+ *
+ * Deinitializes and frees the embedded &libeth_netdev_priv and the netdev
+ * itself, to be used if @dev was allocated using libeth_netdev_alloc().
+ */
+void libeth_netdev_free(struct net_device *dev)
+{
+	libeth_stats_free_priv(dev);
+	free_netdev(dev);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_netdev_free, LIBETH);
+
+/**
+ * __libeth_set_real_num_queues - set the actual number of queues in use
+ * @dev: &net_device to configure
+ * @rqs: actual number of Rx queues
+ * @sqs: actual number of Tx queues
+ * @xdpsqs: actual number of XDP Tx queues
+ *
+ * Sets the actual number of queues in use, to be called on ifup for netdevs
+ * allocated via libeth_netdev_alloc().
+ * Use the non-underscored wrapper in drivers instead.
+ *
+ * Return: %0 on success, -errno on error.
+ */
+int __libeth_set_real_num_queues(struct net_device *dev, u32 rqs, u32 sqs,
+				 u32 xdpsqs)
+{
+	struct libeth_netdev_priv *priv = netdev_priv(dev);
+	int ret;
+
+	ret = netif_set_real_num_rx_queues(dev, rqs);
+	if (ret)
+		return ret;
+
+	ret = netif_set_real_num_tx_queues(dev, sqs);
+	if (ret)
+		return ret;
+
+	priv->curr_xdpsqs = xdpsqs;
+
+	return 0;
+}
+EXPORT_SYMBOL_NS_GPL(__libeth_set_real_num_queues, LIBETH);
+
+/* Ethtool */
+
+/**
+ * libeth_ethtool_get_sset_count - get the number of libeth generic stats
+ * @dev: libeth-driven &net_device
+ * @sset: ``ETH_SS_STATS`` only, for compatibility with Ethtool callbacks
+ *
+ * Can be used directly in &ethtool_ops if the driver doesn't have HW-specific
+ * stats or called from the corresponding driver callback.
+ *
+ * Return: the number of stats/stringsets.
+ */
+int libeth_ethtool_get_sset_count(struct net_device *dev, int sset)
+{
+	if (sset != ETH_SS_STATS)
+		return 0;
+
+	return libeth_stats_get_sset_count(dev);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_ethtool_get_sset_count, LIBETH);
+
+/**
+ * libeth_ethtool_get_strings - get libeth generic stats strings/names
+ * @dev: libeth-driven &net_device
+ * @sset: ``ETH_SS_STATS`` only, for compatibility with Ethtool callbacks
+ * @data: container to fill with the stats names
+ *
+ * Can be used directly in &ethtool_ops if the driver doesn't have HW-specific
+ * stats or called from the corresponding driver callback.
+ * Note that the function doesn't advance the @data pointer, so it's better to
+ * call it at the end to avoid code complication.
+ */
+void libeth_ethtool_get_strings(struct net_device *dev, u32 sset, u8 *data)
+{
+	if (sset != ETH_SS_STATS)
+		return;
+
+	libeth_stats_get_strings(dev, data);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_ethtool_get_strings, LIBETH);
+
+/**
+ * libeth_ethtool_get_stats - get libeth generic stats counters
+ * @dev: libeth-driven &net_device
+ * @stats: unused, for compatibility with Ethtool callbacks
+ * @data: container to fill with the stats counters
+ *
+ * Can be used directly in &ethtool_ops if the driver doesn't have HW-specific
+ * stats or called from the corresponding driver callback.
+ * Note that the function doesn't advance the @data pointer, so it's better to
+ * call it at the end to avoid code complication. Anyhow, the order must be
+ * the same as in the ::get_strings() implementation.
+ */
+void libeth_ethtool_get_stats(struct net_device *dev,
+			      struct ethtool_stats *stats,
+			      u64 *data)
+{
+	libeth_stats_get_data(dev, data);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_ethtool_get_stats, LIBETH);
+
+/* Module */
+
+MODULE_DESCRIPTION("Common Ethernet library");
+MODULE_LICENSE("GPL");

diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index f20926669318..d31779bbfccd 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -252,8 +252,3 @@ void libeth_rx_pt_gen_hash_type(struct libeth_rx_pt *pt)
 	pt->hash_type |= libeth_rx_pt_xdp_pl[pt->payload_layer];
 }
 EXPORT_SYMBOL_NS_GPL(libeth_rx_pt_gen_hash_type, LIBETH);
-
-/* Module */
-
-MODULE_DESCRIPTION("Common Ethernet library");
-MODULE_LICENSE("GPL");

diff --git a/drivers/net/ethernet/intel/libeth/stats.c b/drivers/net/ethernet/intel/libeth/stats.c
new file mode 100644
index 000000000000..61618aa71cbf
--- /dev/null
+++ b/drivers/net/ethernet/intel/libeth/stats.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 Intel Corporation */
+
+#include <linux/ethtool.h>
+
+#include <net/libeth/stats.h>
+#include <net/netdev_queues.h>
+
+#include "priv.h"
+
+/* Common */
+
+static void libeth_stats_sync(u64 *base, u64 *sarr,
+			      const struct u64_stats_sync *syncp,
+			      const u64_stats_t *raw, u32 num)
+{
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(syncp);
+		for (u32 i = 0; i < num; i++)
+			sarr[i] = u64_stats_read(&raw[i]);
+	} while (u64_stats_fetch_retry(syncp, start));
+
+	for (u32 i = 0; i < num; i++)
+		base[i] += sarr[i];
+}
+
+static void __libeth_stats_get_strings(u8 **data, u32 qid, const char *pfx,
+				       const char * const *str, u32 num)
+{
+	for (u32 i = 0; i < num; i++)
+		ethtool_sprintf(data, "%s%u_%s", pfx, qid, str[i]);
+}
+
+/* The following barely readable compression block defines, amidst others,
+ * exported libeth_{rq,sq,xdpsq}_stats_{,de}init() which must be called for
+ * each stats container embedded in a queue structure on ifup/ifdown
+ * correspondingly. Note that the @qid it takes is the networking stack
+ * queue ID, not a driver/device internal one.
+ */
+
+#define ___base(s)			aligned_u64 s;
+#define ___string(s)			__stringify(s),
+
+#define LIBETH_STATS_DEFINE_HELPERS(pfx, PFX)				     \
+struct libeth_##pfx##_base_stats {					     \
+	struct mutex			lock;				     \
+									     \
+	union {								     \
+		struct {						     \
+			LIBETH_DECLARE_##PFX##_STATS(___base);		     \
+		};							     \
+		DECLARE_FLEX_ARRAY(aligned_u64, raw);			     \
+	};								     \
+};									     \
+static const char * const libeth_##pfx##_stats_str[] = {		     \
+	LIBETH_DECLARE_##PFX##_STATS(___string)				     \
+};									     \
+static const u32 LIBETH_##PFX##_STATS_NUM =				     \
+	ARRAY_SIZE(libeth_##pfx##_stats_str);				     \
+									     \
+static void libeth_##pfx##_stats_sync(u64 *base,			     \
+				      const struct libeth_##pfx##_stats *qs) \
+{									     \
+	u64 sarr[ARRAY_SIZE(libeth_##pfx##_stats_str)];			     \
+									     \
+	if (qs)								     \
+		libeth_stats_sync(base, sarr, &qs->syncp, qs->raw,	     \
+				  LIBETH_##PFX##_STATS_NUM);		     \
+}									     \
+									     \
+void libeth_##pfx##_stats_init(const struct net_device *dev,		     \
+			       struct libeth_##pfx##_stats *stats,	     \
+			       u32 qid)					     \
+{									     \
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);	     \
+									     \
+	memset(stats, 0, sizeof(*stats));				     \
+	u64_stats_init(&stats->syncp);					     \
+									     \
+	mutex_init(&priv->base_##pfx##s[qid].lock);			     \
+	WRITE_ONCE(priv->live_##pfx##s[qid], stats);			     \
+}									     \
+EXPORT_SYMBOL_NS_GPL(libeth_##pfx##_stats_init, LIBETH);		     \
+									     \
+void libeth_##pfx##_stats_deinit(const struct net_device *dev, u32 qid)     \
+{									     \
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);	     \
+	struct libeth_##pfx##_base_stats *base = &priv->base_##pfx##s[qid]; \
+									     \
+	mutex_lock(&base->lock);					     \
+	libeth_##pfx##_stats_sync(base->raw,				     \
+				  READ_ONCE(priv->live_##pfx##s[qid]));	     \
+	mutex_unlock(&base->lock);					     \
+									     \
+	WRITE_ONCE(priv->live_##pfx##s[qid], NULL);			     \
+}									     \
+EXPORT_SYMBOL_NS_GPL(libeth_##pfx##_stats_deinit, LIBETH);		     \
+									     \
+static void libeth_##pfx##_stats_get_strings(u8 **data, u32 num)	     \
+{									     \
+	for (u32 i = 0; i < num; i++)					     \
+		__libeth_stats_get_strings(data, i, #pfx,		     \
+					   libeth_##pfx##_stats_str,	     \
+					   LIBETH_##PFX##_STATS_NUM);	     \
+}									     \
+									     \
+static void								     \
+__libeth_##pfx##_stats_get_data(u64 **data,				     \
+				struct libeth_##pfx##_base_stats *base,	     \
+				const struct libeth_##pfx##_stats *qs)	     \
+{									     \
+	mutex_lock(&base->lock);					     \
+	memcpy(*data, base->raw,					     \
+	       LIBETH_##PFX##_STATS_NUM * sizeof(*base->raw));		     \
+	mutex_unlock(&base->lock);					     \
+									     \
+	libeth_##pfx##_stats_sync(*data, qs);				     \
+	*data += LIBETH_##PFX##_STATS_NUM;				     \
+}									     \
+									     \
+static void								     \
+libeth_##pfx##_stats_get_data(u64 **data,				     \
+			      const struct libeth_netdev_priv *priv)	     \
+{									     \
+	for (u32 i = 0; i < priv->last_##pfx##s; i++) {			     \
+		const struct libeth_##pfx##_stats *qs;			     \
+									     \
+		qs = READ_ONCE(priv->live_##pfx##s[i]);			     \
+		__libeth_##pfx##_stats_get_data(data,			     \
+						&priv->base_##pfx##s[i],     \
+						qs);			     \
+	}								     \
+}
+
+LIBETH_STATS_DEFINE_HELPERS(rq, RQ);
+LIBETH_STATS_DEFINE_HELPERS(sq, SQ);
+LIBETH_STATS_DEFINE_HELPERS(xdpsq, XDPSQ);
+
+#undef ___base
+#undef ___string
+
+/* Netlink stats. Exported fields have the same names as in the NL structs */
+
+struct libeth_stats_export {
+	u16	li;
+	u16	gi;
+};
+
+#define LIBETH_STATS_EXPORT(lpfx, gpfx, field) {			     \
+	.li = (offsetof(struct libeth_##lpfx##_stats, field) -		     \
+	       offsetof(struct libeth_##lpfx##_stats, raw)) /		     \
+	      sizeof_field(struct libeth_##lpfx##_stats, field),	     \
+	.gi = offsetof(struct netdev_queue_stats_##gpfx, field) /	     \
+	      sizeof_field(struct netdev_queue_stats_##gpfx, field)	     \
+}
+#define LIBETH_RQ_STATS_EXPORT(field)	LIBETH_STATS_EXPORT(rq, rx, field)
+#define LIBETH_SQ_STATS_EXPORT(field)	LIBETH_STATS_EXPORT(sq, tx, field)
+
+static const struct libeth_stats_export libeth_rq_stats_export[] = {
+	LIBETH_RQ_STATS_EXPORT(bytes),
+	LIBETH_RQ_STATS_EXPORT(packets),
+	LIBETH_RQ_STATS_EXPORT(csum_unnecessary),
+	LIBETH_RQ_STATS_EXPORT(hw_gro_packets),
+	LIBETH_RQ_STATS_EXPORT(hw_gro_bytes),
+	LIBETH_RQ_STATS_EXPORT(alloc_fail),
+	LIBETH_RQ_STATS_EXPORT(csum_none),
+	LIBETH_RQ_STATS_EXPORT(csum_bad),
+};
+
+static const struct libeth_stats_export libeth_sq_stats_export[] = {
+	LIBETH_SQ_STATS_EXPORT(bytes),
+	LIBETH_SQ_STATS_EXPORT(packets),
+	LIBETH_SQ_STATS_EXPORT(csum_none),
+	LIBETH_SQ_STATS_EXPORT(needs_csum),
+	LIBETH_SQ_STATS_EXPORT(hw_gso_packets),
+	LIBETH_SQ_STATS_EXPORT(hw_gso_bytes),
+	LIBETH_SQ_STATS_EXPORT(stop),
+	LIBETH_SQ_STATS_EXPORT(wake),
+};
+
+#define libeth_stats_foreach_export(pfx, iter)				     \
+	for (const struct libeth_stats_export *iter =			     \
+		&libeth_##pfx##_stats_export[0];			     \
+	     iter < &libeth_##pfx##_stats_export[			     \
+		ARRAY_SIZE(libeth_##pfx##_stats_export)];		     \
+	     iter++)
+
+#define LIBETH_STATS_DEFINE_EXPORT(pfx, gpfx)				     \
+static void								     \
+libeth_get_queue_stats_##gpfx(struct net_device *dev, int idx,		     \
+			      struct netdev_queue_stats_##gpfx *stats)	     \
+{									     \
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);	     \
+	const struct libeth_##pfx##_stats *qs;				     \
+	u64 *raw = (u64 *)stats;					     \
+	u32 start;							     \
+									     \
+	qs = READ_ONCE(priv->live_##pfx##s[idx]);			     \
+	if (!qs)							     \
+		return;							     \
+									     \
+	do {								     \
+		start = u64_stats_fetch_begin(&qs->syncp);		     \
+									     \
+		libeth_stats_foreach_export(pfx, exp)			     \
+			raw[exp->gi] = u64_stats_read(&qs->raw[exp->li]);    \
+	} while (u64_stats_fetch_retry(&qs->syncp, start));		     \
+}									     \
+									     \
+static void								     \
+libeth_get_##pfx##_base_stats(const struct net_device *dev,		     \
+			      struct netdev_queue_stats_##gpfx *stats)	     \
+{									     \
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);	     \
+	u64 *raw = (u64 *)stats;					     \
+									     \
+	memset(stats, 0, sizeof(*stats));				     \
+									     \
+	for (u32 i = 0; i < dev->num_##gpfx##_queues; i++) {		     \
+		struct libeth_##pfx##_base_stats *base =		     \
+			&priv->base_##pfx##s[i];			     \
+									     \
+		mutex_lock(&base->lock);				     \
+									     \
+		libeth_stats_foreach_export(pfx, exp)			     \
+			raw[exp->gi] += base->raw[exp->li];		     \
+									     \
+		mutex_unlock(&base->lock);				     \
+	}								     \
+}
+
+LIBETH_STATS_DEFINE_EXPORT(rq, rx);
+LIBETH_STATS_DEFINE_EXPORT(sq, tx);
+
+static void libeth_get_base_stats(struct net_device *dev,
+				  struct netdev_queue_stats_rx *rx,
+				  struct netdev_queue_stats_tx *tx)
+{
+	libeth_get_rq_base_stats(dev, rx);
+	libeth_get_sq_base_stats(dev, tx);
+}
+
+static const struct netdev_stat_ops libeth_netdev_stat_ops = {
+	.get_base_stats		= libeth_get_base_stats,
+	.get_queue_stats_rx	= libeth_get_queue_stats_rx,
+	.get_queue_stats_tx	= libeth_get_queue_stats_tx,
+};
+
+/* Ethtool: base + live */
+
+int libeth_stats_get_sset_count(struct net_device *dev)
+{
+	struct libeth_netdev_priv *priv = netdev_priv(dev);
+
+	priv->last_rqs = dev->real_num_rx_queues;
+	priv->last_sqs = dev->real_num_tx_queues;
+	priv->last_xdpsqs = priv->curr_xdpsqs;
+
+	return priv->last_rqs * LIBETH_RQ_STATS_NUM +
+	       priv->last_sqs * LIBETH_SQ_STATS_NUM +
+	       priv->last_xdpsqs * LIBETH_XDPSQ_STATS_NUM;
+}
+
+void libeth_stats_get_strings(struct net_device *dev, u8 *data)
+{
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);
+
+	libeth_rq_stats_get_strings(&data, priv->last_rqs);
+	libeth_sq_stats_get_strings(&data, priv->last_sqs);
+	libeth_xdpsq_stats_get_strings(&data, priv->last_xdpsqs);
+}
+
+void libeth_stats_get_data(struct net_device *dev, u64 *data)
+{
+	struct libeth_netdev_priv *priv = netdev_priv(dev);
+
+	libeth_rq_stats_get_data(&data, priv);
+	libeth_sq_stats_get_data(&data, priv);
+	libeth_xdpsq_stats_get_data(&data, priv);
+
+	priv->last_rqs = 0;
+	priv->last_sqs = 0;
+	priv->last_xdpsqs = 0;
+}
+
+/* Private init.
+ * All the structures getting allocated here are slowpath, so NUMA-aware
+ * allocation is not required for them.
+ */
+
+bool libeth_stats_init_priv(struct net_device *dev, u32 rqs, u32 sqs,
+			    u32 xdpsqs)
+{
+	struct libeth_netdev_priv *priv = netdev_priv(dev);
+
+	priv->base_rqs = kvcalloc(rqs, sizeof(*priv->base_rqs), GFP_KERNEL);
+	if (!priv->base_rqs)
+		return false;
+
+	priv->live_rqs = kvcalloc(rqs, sizeof(*priv->live_rqs), GFP_KERNEL);
+	if (!priv->live_rqs)
+		goto err_base_rqs;
+
+	priv->base_sqs = kvcalloc(sqs, sizeof(*priv->base_sqs), GFP_KERNEL);
+	if (!priv->base_sqs)
+		goto err_live_rqs;
+
+	priv->live_sqs = kvcalloc(sqs, sizeof(*priv->live_sqs), GFP_KERNEL);
+	if (!priv->live_sqs)
+		goto err_base_sqs;
+
+	dev->stat_ops = &libeth_netdev_stat_ops;
+
+	if (!xdpsqs)
+		return true;
+
+	priv->base_xdpsqs = kvcalloc(xdpsqs, sizeof(*priv->base_xdpsqs),
+				     GFP_KERNEL);
+	if (!priv->base_xdpsqs)
+		goto err_live_sqs;
+
+	priv->live_xdpsqs = kvcalloc(xdpsqs, sizeof(*priv->live_xdpsqs),
+				     GFP_KERNEL);
+	if (!priv->live_xdpsqs)
+		goto err_base_xdpsqs;
+
+	priv->max_xdpsqs = xdpsqs;
+
+	return true;
+
+err_base_xdpsqs:
+	kvfree(priv->base_xdpsqs);
+err_live_sqs:
+	kvfree(priv->live_sqs);
+err_base_sqs:
+	kvfree(priv->base_sqs);
+err_live_rqs:
+	kvfree(priv->live_rqs);
+err_base_rqs:
+	kvfree(priv->base_rqs);
+
+	return false;
+}
+
+void libeth_stats_free_priv(const struct net_device *dev)
+{
+	const struct libeth_netdev_priv *priv = netdev_priv(dev);
+
+	kvfree(priv->base_rqs);
+	kvfree(priv->live_rqs);
+	kvfree(priv->base_sqs);
+	kvfree(priv->live_sqs);
+
+	if (!priv->max_xdpsqs)
+		return;
+
+	kvfree(priv->base_xdpsqs);
+	kvfree(priv->live_xdpsqs);
+}
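
To round out the patch, a sketch of the resulting driver-side glue
(all my_* names are hypothetical; struct my_rq is the hypothetical
ring from the earlier sketch):

	#include <net/libeth/netdev.h>
	#include <net/libeth/stats.h>

	/* With no HW-specific strings, the helpers can be plugged in as is */
	static const struct ethtool_ops my_ethtool_ops = {
		.get_sset_count		= libeth_ethtool_get_sset_count,
		.get_strings		= libeth_ethtool_get_strings,
		.get_ethtool_stats	= libeth_ethtool_get_stats,
	};

	static int my_up(struct net_device *dev, struct my_rq *rqs, u32 num)
	{
		int ret;

		ret = libeth_set_real_num_queues(dev, num, num, 0);
		if (ret)
			return ret;

		for (u32 i = 0; i < num; i++)
			/* @i is the stack queue ID, not a HW ring index */
			libeth_rq_stats_init(dev, &rqs[i].stats, i);

		return 0;
	}

	static void my_down(struct net_device *dev, u32 num)
	{
		/* syncs the live counters into the base containers */
		for (u32 i = 0; i < num; i++)
			libeth_rq_stats_deinit(dev, i);
	}
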
From patchwork Tue Aug 6 13:12:34 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 1969554
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:34 +0200
Message-ID: <20240806131240.800259-4-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 3/9] libie: add Tx buffer completion helpers
Cc: Przemek Kitszel, Joshua Hay, linux-kernel@vger.kernel.org,
    Alexander Lobakin, Eric Dumazet, Michal Kubiak, Tony Nguyen,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, Jakub Kicinski,
    netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"

Software-side Tx buffers for storing DMA addresses, frame sizes, skb
pointers etc. are pretty much generic and every driver defines them
the same way. The same can be said for software Tx completions -- the
same napi_consume_skb()s and all that...
Add a couple of simple wrappers for doing that to stop repeating the
old tale, at least within the Intel code. Drivers are free to use the
'priv' member at the end of the structure; see the sketch below.
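
A minimal sketch of the intended completion loop (the ring layout and
my_sq_clean() are hypothetical; the libeth entities come from this
patch):

	#include <net/libeth/tx.h>

	static void my_sq_clean(struct device *dev, struct libeth_sqe *ring,
				u32 ntc, u32 last, u32 cnt,
				struct libeth_sq_napi_stats *ss)
	{
		struct libeth_cq_pp cp = {
			.dev	= dev,
			.ss	= ss,
			.napi	= true,
		};

		/* libeth_tx_complete() unmaps/frees depending on the SQE
		 * type and resets it to LIBETH_SQE_EMPTY afterwards
		 */
		while (ntc != last) {
			libeth_tx_complete(&ring[ntc], &cp);
			ntc = (ntc + 1) % cnt;
		}
	}
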
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 include/net/libeth/tx.h | 127 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)
 create mode 100644 include/net/libeth/tx.h

diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h
new file mode 100644
index 000000000000..facfeca96858
--- /dev/null
+++ b/include/net/libeth/tx.h
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_TX_H
+#define __LIBETH_TX_H
+
+#include <linux/skbuff.h>
+
+/* Tx buffer completion */
+
+/**
+ * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion
+ * @LIBETH_SQE_EMPTY: unused/empty, no action required
+ * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required
+ * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree()
+ * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA
+ * @LIBETH_SQE_SKB: &sk_buff, unmap and napi_consume_skb(), update stats
+ */
+enum libeth_sqe_type {
+	LIBETH_SQE_EMPTY		= 0U,
+	LIBETH_SQE_CTX,
+	LIBETH_SQE_SLAB,
+	LIBETH_SQE_FRAG,
+	LIBETH_SQE_SKB,
+};
+
+/**
+ * struct libeth_sqe - represents a Send Queue Element / Tx buffer
+ * @type: type of the buffer, see the enum above
+ * @rs_idx: index of the last buffer from the batch this one was sent in
+ * @raw: slab buffer to free via kfree()
+ * @skb: &sk_buff to consume
+ * @dma: DMA address to unmap
+ * @len: length of the mapped region to unmap
+ * @nr_frags: number of frags in the frame this buffer belongs to
+ * @packets: number of physical packets sent for this frame
+ * @bytes: number of physical bytes sent for this frame
+ * @priv: driver-private scratchpad
+ */
+struct libeth_sqe {
+	enum libeth_sqe_type		type:32;
+	u32				rs_idx;
+
+	union {
+		void			*raw;
+		struct sk_buff		*skb;
+	};
+
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+
+	u32				nr_frags;
+	u32				packets;
+	u32				bytes;
+
+	unsigned long			priv;
+} __aligned_largest;
+
+/**
+ * LIBETH_SQE_CHECK_PRIV - check the driver's private SQE data
+ * @p: type or name of the object the driver wants to fit into &libeth_sqe
+ *
+ * Make sure the driver's private data fits into libeth_sqe::priv. To be used
+ * right after its declaration.
+ */
+#define LIBETH_SQE_CHECK_PRIV(p)					     \
+	static_assert(sizeof(p) <= sizeof_field(struct libeth_sqe, priv))
+
+/**
+ * struct libeth_cq_pp - completion queue poll params
+ * @dev: &device to perform DMA unmapping
+ * @ss: onstack NAPI stats to fill
+ * @napi: whether it's called from the NAPI context
+ *
+ * libeth uses this structure to access the objects needed for performing a
+ * full Tx completion operation without passing lots of arguments and
+ * changing the prototypes each time a new one is added.
+ */
+struct libeth_cq_pp {
+	struct device			*dev;
+	struct libeth_sq_napi_stats	*ss;
+
+	bool				napi;
+};
+
+/**
+ * libeth_tx_complete - perform Tx completion for one SQE
+ * @sqe: SQE to complete
+ * @cp: poll params
+ *
+ * Do Tx complete for all the types of buffers, incl. freeing, unmapping,
+ * updating the stats etc.
+ */
+static inline void libeth_tx_complete(struct libeth_sqe *sqe,
+				      const struct libeth_cq_pp *cp)
+{
+	switch (sqe->type) {
+	case LIBETH_SQE_EMPTY:
+		return;
+	case LIBETH_SQE_SKB:
+	case LIBETH_SQE_FRAG:
+	case LIBETH_SQE_SLAB:
+		dma_unmap_page(cp->dev, dma_unmap_addr(sqe, dma),
+			       dma_unmap_len(sqe, len), DMA_TO_DEVICE);
+		break;
+	default:
+		break;
+	}
+
+	switch (sqe->type) {
+	case LIBETH_SQE_SKB:
+		cp->ss->packets += sqe->packets;
+		cp->ss->bytes += sqe->bytes;
+
+		napi_consume_skb(sqe->skb, cp->napi);
+		break;
+	case LIBETH_SQE_SLAB:
+		kfree(sqe->raw);
+		break;
+	default:
+		break;
+	}
+
+	sqe->type = LIBETH_SQE_EMPTY;
+}
+
+#endif /* __LIBETH_TX_H */
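For illustration, here is how a driver might tie the pieces above together
on the transmit side; the my_ names are hypothetical, and patch 4/9 below
does the same thing for real with idpf's completion tag in ::priv.

	/* driver-private data in the ::priv scratchpad; the assert fails
	 * the build if the chosen type ever outgrows the field
	 */
	#define my_sqe_tag(sqe)		(*(u32 *)&(sqe)->priv)
	LIBETH_SQE_CHECK_PRIV(u32);

	static void my_xmit_fill(struct libeth_sqe *head,
				 struct libeth_sqe *frag,
				 struct sk_buff *skb, u32 tag)
	{
		/* state explicitly what the completion path must do */
		head->skb = skb;
		head->type = LIBETH_SQE_SKB;	/* unmap, consume skb, stats */
		frag->type = LIBETH_SQE_FRAG;	/* unmap only */

		my_sqe_tag(head) = tag;
	}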
From patchwork Tue Aug 6 13:12:35 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:35 +0200
Message-ID: <20240806131240.800259-5-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 4/9] idpf: convert to libie Tx buffer completion

&idpf_tx_buf is almost identical to the previous generations, as well as
the way it's handled. Moreover, relying on dma_unmap_addr() and !!buf->skb
instead of explicitly defining the buffer's type was never good.
Use the newly added libeth helpers to do it properly and reduce the copy-paste around the Tx code. Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 50 +---- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 82 +++----- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 195 ++++++------------ 3 files changed, 101 insertions(+), 226 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 6215dbee5546..fa87754c7340 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -131,7 +131,6 @@ do { \ (txq)->num_completions_pending - (txq)->complq->num_completions) #define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16 -#define IDPF_SPLITQ_TX_INVAL_COMPL_TAG -1 /* Adjust the generation for the completion tag and wrap if necessary */ #define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \ ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \ @@ -149,47 +148,7 @@ union idpf_tx_flex_desc { struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */ }; -/** - * struct idpf_tx_buf - * @next_to_watch: Next descriptor to clean - * @skb: Pointer to the skb - * @dma: DMA address - * @len: DMA length - * @bytecount: Number of bytes - * @gso_segs: Number of GSO segments - * @compl_tag: Splitq only, unique identifier for a buffer. Used to compare - * with completion tag returned in buffer completion event. - * Because the completion tag is expected to be the same in all - * data descriptors for a given packet, and a single packet can - * span multiple buffers, we need this field to track all - * buffers associated with this completion tag independently of - * the buf_id. The tag consists of a N bit buf_id and M upper - * order "generation bits". See compl_tag_bufid_m and - * compl_tag_gen_s in struct idpf_queue. We'll use a value of -1 - * to indicate the tag is not valid. - * @ctx_entry: Singleq only. Used to indicate the corresponding entry - * in the descriptor ring was used for a context descriptor and - * this buffer entry should be skipped. 
- */ -struct idpf_tx_buf { - void *next_to_watch; - struct sk_buff *skb; - DEFINE_DMA_UNMAP_ADDR(dma); - DEFINE_DMA_UNMAP_LEN(len); - unsigned int bytecount; - unsigned short gso_segs; - - union { - int compl_tag; - - bool ctx_entry; - }; -}; - -struct idpf_tx_stash { - struct hlist_node hlist; - struct idpf_tx_buf buf; -}; +#define idpf_tx_buf libeth_sqe /** * struct idpf_buf_lifo - LIFO for managing OOO completions @@ -496,10 +455,7 @@ struct idpf_tx_queue_stats { u64_stats_t dma_map_errs; }; -struct idpf_cleaned_stats { - u32 packets; - u32 bytes; -}; +#define idpf_cleaned_stats libeth_sq_napi_stats #define IDPF_ITR_DYNAMIC 1 #define IDPF_ITR_MAX 0x1FE0 @@ -688,7 +644,7 @@ struct idpf_tx_queue { void *desc_ring; }; - struct idpf_tx_buf *tx_buf; + struct libeth_sqe *tx_buf; struct idpf_txq_group *txq_grp; struct device *dev; void __iomem *tail; diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index fe64febf7436..98f26a4b835f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -2,6 +2,7 @@ /* Copyright (C) 2023 Intel Corporation */ #include +#include #include "idpf.h" @@ -224,6 +225,7 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, /* record length, and DMA address */ dma_unmap_len_set(tx_buf, len, size); dma_unmap_addr_set(tx_buf, dma, dma); + tx_buf->type = LIBETH_SQE_FRAG; /* align size to end of page */ max_data += -dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1); @@ -245,6 +247,8 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, i = 0; } + tx_q->tx_buf[i].type = LIBETH_SQE_EMPTY; + dma += max_data; size -= max_data; @@ -282,13 +286,13 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets, size, td_tag); - IDPF_SINGLEQ_BUMP_RING_IDX(tx_q, i); + first->type = LIBETH_SQE_SKB; + first->rs_idx = i; - /* set next_to_watch value indicating a packet is present */ - first->next_to_watch = tx_desc; + IDPF_SINGLEQ_BUMP_RING_IDX(tx_q, i); nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx); - netdev_tx_sent_queue(nq, first->bytecount); + netdev_tx_sent_queue(nq, first->bytes); idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more()); } @@ -306,8 +310,7 @@ idpf_tx_singleq_get_ctx_desc(struct idpf_tx_queue *txq) struct idpf_base_tx_ctx_desc *ctx_desc; int ntu = txq->next_to_use; - memset(&txq->tx_buf[ntu], 0, sizeof(struct idpf_tx_buf)); - txq->tx_buf[ntu].ctx_entry = true; + txq->tx_buf[ntu].type = LIBETH_SQE_CTX; ctx_desc = &txq->base_ctx[ntu]; @@ -396,11 +399,11 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, first->skb = skb; if (tso) { - first->gso_segs = offload.tso_segs; - first->bytecount = skb->len + ((first->gso_segs - 1) * offload.tso_hdr_len); + first->packets = offload.tso_segs; + first->bytes = skb->len + ((first->packets - 1) * offload.tso_hdr_len); } else { - first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN); - first->gso_segs = 1; + first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN); + first->packets = 1; } idpf_tx_singleq_map(tx_q, first, &offload); @@ -420,10 +423,15 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, int *cleaned) { - unsigned int total_bytes = 0, total_pkts = 0; + struct libeth_sq_napi_stats ss = { }; struct idpf_base_tx_desc *tx_desc; u32 budget = tx_q->clean_budget; s16 ntc = tx_q->next_to_clean; + struct libeth_cq_pp cp = { + .dev = 
tx_q->dev, + .ss = &ss, + .napi = napi_budget, + }; struct idpf_netdev_priv *np; struct idpf_tx_buf *tx_buf; struct netdev_queue *nq; @@ -441,47 +449,23 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, * such. We can skip this descriptor since there is no buffer * to clean. */ - if (tx_buf->ctx_entry) { - /* Clear this flag here to avoid stale flag values when - * this buffer is used for actual data in the future. - * There are cases where the tx_buf struct / the flags - * field will not be cleared before being reused. - */ - tx_buf->ctx_entry = false; + if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) { + tx_buf->type = LIBETH_SQE_EMPTY; goto fetch_next_txq_desc; } - /* if next_to_watch is not set then no work pending */ - eop_desc = (struct idpf_base_tx_desc *)tx_buf->next_to_watch; - if (!eop_desc) - break; - - /* prevent any other reads prior to eop_desc */ + /* prevent any other reads prior to type */ smp_rmb(); + eop_desc = &tx_q->base_tx[tx_buf->rs_idx]; + /* if the descriptor isn't done, no work yet to do */ if (!(eop_desc->qw1 & cpu_to_le64(IDPF_TX_DESC_DTYPE_DESC_DONE))) break; - /* clear next_to_watch to prevent false hangs */ - tx_buf->next_to_watch = NULL; - /* update the statistics for this packet */ - total_bytes += tx_buf->bytecount; - total_pkts += tx_buf->gso_segs; - - napi_consume_skb(tx_buf->skb, napi_budget); - - /* unmap skb header data */ - dma_unmap_single(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - - /* clear tx_buf data */ - tx_buf->skb = NULL; - dma_unmap_len_set(tx_buf, len, 0); + libeth_tx_complete(tx_buf, &cp); /* unmap remaining buffers */ while (tx_desc != eop_desc) { @@ -495,13 +479,7 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, } /* unmap any remaining paged data */ - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } + libeth_tx_complete(tx_buf, &cp); } /* update budget only if we did something */ @@ -521,11 +499,11 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, ntc += tx_q->desc_count; tx_q->next_to_clean = ntc; - *cleaned += total_pkts; + *cleaned += ss.packets; u64_stats_update_begin(&tx_q->stats_sync); - u64_stats_add(&tx_q->q_stats.packets, total_pkts); - u64_stats_add(&tx_q->q_stats.bytes, total_bytes); + u64_stats_add(&tx_q->q_stats.packets, ss.packets); + u64_stats_add(&tx_q->q_stats.bytes, ss.bytes); u64_stats_update_end(&tx_q->stats_sync); np = netdev_priv(tx_q->netdev); @@ -533,7 +511,7 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, dont_wake = np->state != __IDPF_VPORT_UP || !netif_carrier_ok(tx_q->netdev); - __netif_txq_completed_wake(nq, total_pkts, total_bytes, + __netif_txq_completed_wake(nq, ss.packets, ss.bytes, IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH, dont_wake); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 585c3dadd9bf..de35736143de 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -2,10 +2,19 @@ /* Copyright (C) 2023 Intel Corporation */ #include +#include #include "idpf.h" #include "idpf_virtchnl.h" +struct idpf_tx_stash { + struct hlist_node hlist; + struct libeth_sqe buf; +}; + +#define idpf_tx_buf_compl_tag(buf) (*(int *)&(buf)->priv) +LIBETH_SQE_CHECK_PRIV(int); + static bool 
idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs, unsigned int count); @@ -60,41 +69,18 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue) } } -/** - * idpf_tx_buf_rel - Release a Tx buffer - * @tx_q: the queue that owns the buffer - * @tx_buf: the buffer to free - */ -static void idpf_tx_buf_rel(struct idpf_tx_queue *tx_q, - struct idpf_tx_buf *tx_buf) -{ - if (tx_buf->skb) { - if (dma_unmap_len(tx_buf, len)) - dma_unmap_single(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dev_kfree_skb_any(tx_buf->skb); - } else if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - } - - tx_buf->next_to_watch = NULL; - tx_buf->skb = NULL; - tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; - dma_unmap_len_set(tx_buf, len, 0); -} - /** * idpf_tx_buf_rel_all - Free any empty Tx buffers * @txq: queue to be cleaned */ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) { + struct libeth_sq_napi_stats ss = { }; struct idpf_buf_lifo *buf_stack; + struct libeth_cq_pp cp = { + .dev = txq->dev, + .ss = &ss, + }; u16 i; /* Buffers already cleared, nothing to do */ @@ -103,7 +89,7 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) /* Free all the Tx buffer sk_buffs */ for (i = 0; i < txq->desc_count; i++) - idpf_tx_buf_rel(txq, &txq->tx_buf[i]); + libeth_tx_complete(&txq->tx_buf[i], &cp); kfree(txq->tx_buf); txq->tx_buf = NULL; @@ -203,10 +189,6 @@ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) if (!tx_q->tx_buf) return -ENOMEM; - /* Initialize tx_bufs with invalid completion tags */ - for (i = 0; i < tx_q->desc_count; i++) - tx_q->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; - if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) return 0; @@ -1655,37 +1637,6 @@ static void idpf_tx_handle_sw_marker(struct idpf_tx_queue *tx_q) wake_up(&vport->sw_marker_wq); } -/** - * idpf_tx_splitq_clean_hdr - Clean TX buffer resources for header portion of - * packet - * @tx_q: tx queue to clean buffer from - * @tx_buf: buffer to be cleaned - * @cleaned: pointer to stats struct to track cleaned packets/bytes - * @napi_budget: Used to determine if we are in netpoll - */ -static void idpf_tx_splitq_clean_hdr(struct idpf_tx_queue *tx_q, - struct idpf_tx_buf *tx_buf, - struct idpf_cleaned_stats *cleaned, - int napi_budget) -{ - napi_consume_skb(tx_buf->skb, napi_budget); - - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_single(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - - dma_unmap_len_set(tx_buf, len, 0); - } - - /* clear tx_buf data */ - tx_buf->skb = NULL; - - cleaned->bytes += tx_buf->bytecount; - cleaned->packets += tx_buf->gso_segs; -} - /** * idpf_tx_clean_stashed_bufs - clean bufs that were stored for * out of order completions @@ -1701,23 +1652,20 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq, { struct idpf_tx_stash *stash; struct hlist_node *tmp_buf; + struct libeth_cq_pp cp = { + .dev = txq->dev, + .ss = cleaned, + .napi = budget, + }; /* Buffer completion */ hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf, hlist, compl_tag) { - if (unlikely(stash->buf.compl_tag != (int)compl_tag)) + if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != + (int)compl_tag)) continue; - if (stash->buf.skb) { - idpf_tx_splitq_clean_hdr(txq, &stash->buf, cleaned, - budget); - } else if (dma_unmap_len(&stash->buf, len)) { - dma_unmap_page(txq->dev, - 
dma_unmap_addr(&stash->buf, dma), - dma_unmap_len(&stash->buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(&stash->buf, len, 0); - } + libeth_tx_complete(&stash->buf, &cp); /* Push shadow buf back onto stack */ idpf_buf_lifo_push(&txq->stash->buf_stack, stash); @@ -1737,8 +1685,7 @@ static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, { struct idpf_tx_stash *stash; - if (unlikely(!dma_unmap_addr(tx_buf, dma) && - !dma_unmap_len(tx_buf, len))) + if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) return 0; stash = idpf_buf_lifo_pop(&txq->stash->buf_stack); @@ -1751,20 +1698,18 @@ static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, /* Store buffer params in shadow buffer */ stash->buf.skb = tx_buf->skb; - stash->buf.bytecount = tx_buf->bytecount; - stash->buf.gso_segs = tx_buf->gso_segs; + stash->buf.bytes = tx_buf->bytes; + stash->buf.packets = tx_buf->packets; + stash->buf.type = tx_buf->type; dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma)); dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len)); - stash->buf.compl_tag = tx_buf->compl_tag; + idpf_tx_buf_compl_tag(&stash->buf) = idpf_tx_buf_compl_tag(tx_buf); /* Add buffer to buf_hash table to be freed later */ hash_add(txq->stash->sched_buf_hash, &stash->hlist, - stash->buf.compl_tag); - - memset(tx_buf, 0, sizeof(struct idpf_tx_buf)); + idpf_tx_buf_compl_tag(&stash->buf)); - /* Reinitialize buf_id portion of tag */ - tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; + tx_buf->type = LIBETH_SQE_EMPTY; return 0; } @@ -1806,6 +1751,11 @@ static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, union idpf_tx_flex_desc *next_pending_desc = NULL; union idpf_tx_flex_desc *tx_desc; s16 ntc = tx_q->next_to_clean; + struct libeth_cq_pp cp = { + .dev = tx_q->dev, + .ss = cleaned, + .napi = napi_budget, + }; struct idpf_tx_buf *tx_buf; tx_desc = &tx_q->flex_tx[ntc]; @@ -1821,13 +1771,10 @@ static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, * invalid completion tag since no buffer was used. We can * skip this descriptor since there is no buffer to clean. 
*/ - if (unlikely(tx_buf->compl_tag == IDPF_SPLITQ_TX_INVAL_COMPL_TAG)) + if (tx_buf->type <= LIBETH_SQE_CTX) goto fetch_next_txq_desc; - eop_desc = (union idpf_tx_flex_desc *)tx_buf->next_to_watch; - - /* clear next_to_watch to prevent false hangs */ - tx_buf->next_to_watch = NULL; + eop_desc = &tx_q->flex_tx[tx_buf->rs_idx]; if (descs_only) { if (idpf_stash_flow_sch_buffers(tx_q, tx_buf)) @@ -1844,8 +1791,7 @@ static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, } } } else { - idpf_tx_splitq_clean_hdr(tx_q, tx_buf, cleaned, - napi_budget); + libeth_tx_complete(tx_buf, &cp); /* unmap remaining buffers */ while (tx_desc != eop_desc) { @@ -1853,13 +1799,7 @@ static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, tx_desc, tx_buf); /* unmap any remaining paged data */ - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } + libeth_tx_complete(tx_buf, &cp); } } @@ -1901,24 +1841,20 @@ static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag, u16 idx = compl_tag & txq->compl_tag_bufid_m; struct idpf_tx_buf *tx_buf = NULL; u16 ntc = txq->next_to_clean; + struct libeth_cq_pp cp = { + .dev = txq->dev, + .ss = cleaned, + .napi = budget, + }; u16 num_descs_cleaned = 0; u16 orig_idx = idx; tx_buf = &txq->tx_buf[idx]; + if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) + return false; - while (tx_buf->compl_tag == (int)compl_tag) { - if (tx_buf->skb) { - idpf_tx_splitq_clean_hdr(txq, tx_buf, cleaned, budget); - } else if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(txq->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } - - memset(tx_buf, 0, sizeof(struct idpf_tx_buf)); - tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; + while (idpf_tx_buf_compl_tag(tx_buf) == (int)compl_tag) { + libeth_tx_complete(tx_buf, &cp); num_descs_cleaned++; idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); @@ -2307,6 +2243,12 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, struct idpf_tx_buf *first, u16 idx) { + struct libeth_sq_napi_stats ss = { }; + struct libeth_cq_pp cp = { + .dev = txq->dev, + .ss = &ss, + }; + u64_stats_update_begin(&txq->stats_sync); u64_stats_inc(&txq->q_stats.dma_map_errs); u64_stats_update_end(&txq->stats_sync); @@ -2316,7 +2258,7 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, struct idpf_tx_buf *tx_buf; tx_buf = &txq->tx_buf[idx]; - idpf_tx_buf_rel(txq, tx_buf); + libeth_tx_complete(tx_buf, &cp); if (tx_buf == first) break; if (idx == 0) @@ -2405,7 +2347,8 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, if (dma_mapping_error(tx_q->dev, dma)) return idpf_tx_dma_map_error(tx_q, skb, first, i); - tx_buf->compl_tag = params->compl_tag; + idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; + tx_buf->type = LIBETH_SQE_FRAG; /* record length, and DMA address */ dma_unmap_len_set(tx_buf, len, size); @@ -2479,8 +2422,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, * simply pass over these holes and finish cleaning the * rest of the packet. */ - memset(&tx_q->tx_buf[i], 0, sizeof(struct idpf_tx_buf)); - tx_q->tx_buf[i].compl_tag = params->compl_tag; + tx_q->tx_buf[i].type = LIBETH_SQE_EMPTY; /* Adjust the DMA offset and the remaining size of the * fragment. 
On the first iteration of this loop,
@@ -2525,19 +2467,19 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
 	/* record SW timestamp if HW timestamp is not available */
 	skb_tx_timestamp(skb);
 
+	first->type = LIBETH_SQE_SKB;
+
 	/* write last descriptor with RS and EOP bits */
+	first->rs_idx = i;
 	td_cmd |= params->eop_cmd;
 	idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
 	i = idpf_tx_splitq_bump_ntu(tx_q, i);
 
-	/* set next_to_watch value indicating a packet is present */
-	first->next_to_watch = tx_desc;
-
 	tx_q->txq_grp->num_completions_pending++;
 
 	/* record bytecount for BQL */
 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
-	netdev_tx_sent_queue(nq, first->bytecount);
+	netdev_tx_sent_queue(nq, first->bytes);
 
 	idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
 }
@@ -2737,8 +2679,7 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq)
 	struct idpf_flex_tx_ctx_desc *desc;
 	int i = txq->next_to_use;
 
-	memset(&txq->tx_buf[i], 0, sizeof(struct idpf_tx_buf));
-	txq->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+	txq->tx_buf[i].type = LIBETH_SQE_CTX;
 
 	/* grab the next descriptor */
 	desc = &txq->flex_ctx[i];
@@ -2822,12 +2763,12 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 	first->skb = skb;
 
 	if (tso) {
-		first->gso_segs = tx_params.offload.tso_segs;
-		first->bytecount = skb->len +
-			((first->gso_segs - 1) * tx_params.offload.tso_hdr_len);
+		first->packets = tx_params.offload.tso_segs;
+		first->bytes = skb->len +
+			((first->packets - 1) * tx_params.offload.tso_hdr_len);
 	} else {
-		first->gso_segs = 1;
-		first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+		first->packets = 1;
+		first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
 	}
 
 	if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
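One line above is worth a remark: the bytecount handed to BQL for a TSO
frame counts one extra copy of the headers per additional segment, since
every segment after the first re-emits them on the wire. A hypothetical
standalone restatement of that computation:

	/* e.g. skb->len = 9014, tso_segs = 7, tso_hdr_len = 54:
	 * 9014 + (7 - 1) * 54 = 9338 bytes actually sent
	 */
	static u32 my_tso_wire_bytes(u32 skb_len, u32 tso_segs,
				     u32 tso_hdr_len)
	{
		return skb_len + (tso_segs - 1) * tso_hdr_len;
	}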
From patchwork Tue Aug 6 13:12:36 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:36 +0200
Message-ID: <20240806131240.800259-6-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 5/9] netdevice: add netdev_tx_reset_subqueue() shorthand

Add a shorthand similar to the other net*_subqueue() helpers for resetting
a queue by its index, without having to obtain the &netdev_tx_queue pointer
manually first.

Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 include/linux/netdevice.h | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index c6e0b3ca5914..573226033d34 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3562,6 +3562,17 @@ static inline void netdev_tx_reset_queue(struct netdev_queue *q)
 #endif
 }
 
+/**
+ * netdev_tx_reset_subqueue - reset the BQL stats and state of a netdev queue
+ * @dev: network device
+ * @qid: stack index of the queue to reset
+ */
+static inline void netdev_tx_reset_subqueue(const struct net_device *dev,
+					    u32 qid)
+{
+	netdev_tx_reset_queue(netdev_get_tx_queue(dev, qid));
+}
+
 /**
  * netdev_reset_queue - reset the packets and bytes count of a network device
  * @dev_queue: network device
@@ -3571,7 +3582,7 @@ static inline void netdev_tx_reset_queue(struct netdev_queue *q)
  */
 static inline void netdev_reset_queue(struct net_device *dev_queue)
 {
-	netdev_tx_reset_queue(netdev_get_tx_queue(dev_queue, 0));
+	netdev_tx_reset_subqueue(dev_queue, 0);
 }
 
 /**
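As a usage sketch (patch 6/9 below calls the new helper from the idpf ring
release path), resetting one queue's BQL state no longer needs the
intermediate queue lookup; my_txq_release() is hypothetical:

	static void my_txq_release(struct net_device *dev, u32 qid)
	{
		/* drop any BQL bytes/packets still accounted to this queue
		 * so the stack doesn't see them as in flight after the ring
		 * is freed; previously this took two steps:
		 * netdev_tx_reset_queue(netdev_get_tx_queue(dev, qid));
		 */
		netdev_tx_reset_subqueue(dev, qid);
	}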
From patchwork Tue Aug 6 13:12:37 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:37 +0200
Message-ID: <20240806131240.800259-7-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 6/9] idpf: refactor Tx completion routines

From: Joshua Hay

Add a mechanism to guard against stashing partial packets into the hash
table, to make the driver more robust and the cleaning decisions more
efficient.

Don't stash partial packets. This can happen when an RE (Report Event)
completion is received in flow scheduling mode, or when an out-of-order RS
(Report Status) completion is received. The first buffer with the skb is
stashed, but some or all of its frags are not because the stack is out of
reserve buffers. This leaves the ring in a weird state since the frags are
still on the ring.

Use the field libeth_sqe::nr_frags to track the number of fragments/tx_bufs
representing the packet. The clean routines check to make sure there are
enough reserve buffers on the stack before stashing any part of the packet.
If there are not, next_to_clean is left pointing to the first buffer of the
packet that failed to be stashed. This leaves the whole packet on the ring,
and the next time around, cleaning will start from this packet.

An RS completion is still expected for this packet in either case, so
instead of being cleaned from the hash table, it will be cleaned from the
ring directly. This should all still be fine since DESC_UNUSED and
BUFS_UNUSED will reflect the state of the ring. If we ever fall below the
thresholds, the TxQ will still be stopped, giving the completion queue time
to catch up. This may lead to stopping the queue more frequently, but it
guarantees the Tx ring will always be in a good state.

Also, always use the idpf_tx_splitq_clean function to clean descriptors,
i.e. use it from clean_buf_ring as well. This way we avoid duplicating the
logic and make sure we're using the same reserve-buffers guard rail. This
does require a switch from the s16 next_to_clean overflow descriptor ring
wrap calculation to u16 and the normal ring size check.
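The guard rail described above boils down to one comparison before any part
of a packet is stashed; a condensed, hypothetical restatement (the real
checks live in the idpf_tx_splitq_clean() hunks below):

	/* all-or-nothing: only stash a packet if the reserve (shadow)
	 * stack can take every one of its buffers; otherwise leave
	 * next_to_clean on the packet's first buffer and report the clean
	 * as incomplete so it is retried on a later completion
	 */
	static bool my_can_stash_whole_packet(const struct idpf_tx_queue *txq,
					      const struct idpf_tx_buf *first)
	{
		return IDPF_TX_BUF_RSV_UNUSED(txq) >= first->nr_frags;
	}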
Signed-off-by: Joshua Hay Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 +- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 24 +-- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 163 +++++++++++------- 3 files changed, 117 insertions(+), 76 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index fa87754c7340..2478f71adb95 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -127,7 +127,7 @@ do { \ */ #define IDPF_TX_COMPLQ_PENDING(txq) \ (((txq)->num_completions_pending >= (txq)->complq->num_completions ? \ - 0 : U64_MAX) + \ + 0 : U32_MAX) + \ (txq)->num_completions_pending - (txq)->complq->num_completions) #define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16 @@ -787,7 +787,7 @@ struct idpf_compl_queue { u32 next_to_use; u32 next_to_clean; - u32 num_completions; + aligned_u64 num_completions; __cacheline_group_end_aligned(read_write); __cacheline_group_begin_aligned(cold); @@ -919,7 +919,7 @@ struct idpf_txq_group { struct idpf_compl_queue *complq; - u32 num_completions_pending; + aligned_u64 num_completions_pending; }; static inline int idpf_q_vector_to_mem(const struct idpf_q_vector *q_vector) diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 98f26a4b835f..947d3ff9677c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -239,15 +239,16 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, offsets, max_data, td_tag); - tx_desc++; - i++; - - if (i == tx_q->desc_count) { + if (unlikely(++i == tx_q->desc_count)) { + tx_buf = &tx_q->tx_buf[0]; tx_desc = &tx_q->base_tx[0]; i = 0; + } else { + tx_buf++; + tx_desc++; } - tx_q->tx_buf[i].type = LIBETH_SQE_EMPTY; + tx_buf->type = LIBETH_SQE_EMPTY; dma += max_data; size -= max_data; @@ -261,12 +262,14 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets, size, td_tag); - tx_desc++; - i++; - if (i == tx_q->desc_count) { + if (unlikely(++i == tx_q->desc_count)) { + tx_buf = &tx_q->tx_buf[0]; tx_desc = &tx_q->base_tx[0]; i = 0; + } else { + tx_buf++; + tx_desc++; } size = skb_frag_size(frag); @@ -274,8 +277,6 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q, dma = skb_frag_dma_map(tx_q->dev, frag, 0, size, DMA_TO_DEVICE); - - tx_buf = &tx_q->tx_buf[i]; } skb_tx_timestamp(first->skb); @@ -454,6 +455,9 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget, goto fetch_next_txq_desc; } + if (unlikely(tx_buf->type != LIBETH_SQE_SKB)) + break; + /* prevent any other reads prior to type */ smp_rmb(); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index de35736143de..fd44a65a0537 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -12,8 +12,8 @@ struct idpf_tx_stash { struct libeth_sqe buf; }; -#define idpf_tx_buf_compl_tag(buf) (*(int *)&(buf)->priv) -LIBETH_SQE_CHECK_PRIV(int); +#define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv) +LIBETH_SQE_CHECK_PRIV(u32); static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs, unsigned int count); @@ -77,11 +77,13 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) { struct libeth_sq_napi_stats ss = { }; struct idpf_buf_lifo *buf_stack; + 
struct idpf_tx_stash *stash; struct libeth_cq_pp cp = { .dev = txq->dev, .ss = &ss, }; - u16 i; + struct hlist_node *tmp; + u16 i, tag; /* Buffers already cleared, nothing to do */ if (!txq->tx_buf) @@ -101,6 +103,19 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) if (!buf_stack->bufs) return; + /* If a Tx timeout occurred, there are potentially still bufs in the + * hash table, free them here. + */ + hash_for_each_safe(txq->stash->sched_buf_hash, tag, tmp, stash, + hlist) { + if (!stash) + continue; + + libeth_tx_complete(&stash->buf, &cp); + hash_del(&stash->hlist); + idpf_buf_lifo_push(buf_stack, stash); + } + for (i = 0; i < buf_stack->size; i++) kfree(buf_stack->bufs[i]); @@ -117,6 +132,7 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) static void idpf_tx_desc_rel(struct idpf_tx_queue *txq) { idpf_tx_buf_rel_all(txq); + netdev_tx_reset_subqueue(txq->netdev, txq->idx); if (!txq->desc_ring) return; @@ -1661,16 +1677,14 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq, /* Buffer completion */ hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf, hlist, compl_tag) { - if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != - (int)compl_tag)) + if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != compl_tag)) continue; + hash_del(&stash->hlist); libeth_tx_complete(&stash->buf, &cp); /* Push shadow buf back onto stack */ idpf_buf_lifo_push(&txq->stash->buf_stack, stash); - - hash_del(&stash->hlist); } } @@ -1701,6 +1715,7 @@ static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, stash->buf.bytes = tx_buf->bytes; stash->buf.packets = tx_buf->packets; stash->buf.type = tx_buf->type; + stash->buf.nr_frags = tx_buf->nr_frags; dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma)); dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len)); idpf_tx_buf_compl_tag(&stash->buf) = idpf_tx_buf_compl_tag(tx_buf); @@ -1716,9 +1731,8 @@ static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, #define idpf_tx_splitq_clean_bump_ntc(txq, ntc, desc, buf) \ do { \ - (ntc)++; \ - if (unlikely(!(ntc))) { \ - ntc -= (txq)->desc_count; \ + if (unlikely(++(ntc) == (txq)->desc_count)) { \ + ntc = 0; \ buf = (txq)->tx_buf; \ desc = &(txq)->flex_tx[0]; \ } else { \ @@ -1742,59 +1756,65 @@ do { \ * Separate packet completion events will be reported on the completion queue, * and the buffers will be cleaned separately. The stats are not updated from * this function when using flow-based scheduling. + * + * Furthermore, in flow scheduling mode, check to make sure there are enough + * reserve buffers to stash the packet. If there are not, return early, which + * will leave next_to_clean pointing to the packet that failed to be stashed. + * Return false in this scenario. Otherwise, return true. 
*/ -static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, +static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, int napi_budget, struct idpf_cleaned_stats *cleaned, bool descs_only) { union idpf_tx_flex_desc *next_pending_desc = NULL; union idpf_tx_flex_desc *tx_desc; - s16 ntc = tx_q->next_to_clean; + u32 ntc = tx_q->next_to_clean; struct libeth_cq_pp cp = { .dev = tx_q->dev, .ss = cleaned, .napi = napi_budget, }; struct idpf_tx_buf *tx_buf; + bool clean_complete = true; tx_desc = &tx_q->flex_tx[ntc]; next_pending_desc = &tx_q->flex_tx[end]; tx_buf = &tx_q->tx_buf[ntc]; - ntc -= tx_q->desc_count; while (tx_desc != next_pending_desc) { - union idpf_tx_flex_desc *eop_desc; + u32 eop_idx; /* If this entry in the ring was used as a context descriptor, - * it's corresponding entry in the buffer ring will have an - * invalid completion tag since no buffer was used. We can - * skip this descriptor since there is no buffer to clean. + * it's corresponding entry in the buffer ring is reserved. We + * can skip this descriptor since there is no buffer to clean. */ if (tx_buf->type <= LIBETH_SQE_CTX) goto fetch_next_txq_desc; - eop_desc = &tx_q->flex_tx[tx_buf->rs_idx]; + if (unlikely(tx_buf->type != LIBETH_SQE_SKB)) + break; + + eop_idx = tx_buf->rs_idx; if (descs_only) { - if (idpf_stash_flow_sch_buffers(tx_q, tx_buf)) + if (IDPF_TX_BUF_RSV_UNUSED(tx_q) < tx_buf->nr_frags) { + clean_complete = false; goto tx_splitq_clean_out; + } + + idpf_stash_flow_sch_buffers(tx_q, tx_buf); - while (tx_desc != eop_desc) { + while (ntc != eop_idx) { idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf); - - if (dma_unmap_len(tx_buf, len)) { - if (idpf_stash_flow_sch_buffers(tx_q, - tx_buf)) - goto tx_splitq_clean_out; - } + idpf_stash_flow_sch_buffers(tx_q, tx_buf); } } else { libeth_tx_complete(tx_buf, &cp); /* unmap remaining buffers */ - while (tx_desc != eop_desc) { + while (ntc != eop_idx) { idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf); @@ -1808,8 +1828,9 @@ static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, } tx_splitq_clean_out: - ntc += tx_q->desc_count; tx_q->next_to_clean = ntc; + + return clean_complete; } #define idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, buf) \ @@ -1840,48 +1861,60 @@ static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag, { u16 idx = compl_tag & txq->compl_tag_bufid_m; struct idpf_tx_buf *tx_buf = NULL; - u16 ntc = txq->next_to_clean; struct libeth_cq_pp cp = { .dev = txq->dev, .ss = cleaned, .napi = budget, }; - u16 num_descs_cleaned = 0; - u16 orig_idx = idx; + u16 ntc, orig_idx = idx; tx_buf = &txq->tx_buf[idx]; - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) + + if (unlikely(tx_buf->type <= LIBETH_SQE_CTX || + idpf_tx_buf_compl_tag(tx_buf) != compl_tag)) return false; - while (idpf_tx_buf_compl_tag(tx_buf) == (int)compl_tag) { + if (tx_buf->type == LIBETH_SQE_SKB) libeth_tx_complete(tx_buf, &cp); - num_descs_cleaned++; + idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); + + while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) { + libeth_tx_complete(tx_buf, &cp); idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); } - /* If we didn't clean anything on the ring for this completion, there's - * nothing more to do. - */ - if (unlikely(!num_descs_cleaned)) - return false; - - /* Otherwise, if we did clean a packet on the ring directly, it's safe - * to assume that the descriptors starting from the original - * next_to_clean up until the previously cleaned packet can be reused. 
- * Therefore, we will go back in the ring and stash any buffers still - * in the ring into the hash table to be cleaned later. + /* It's possible the packet we just cleaned was an out of order + * completion, which means we can stash the buffers starting from + * the original next_to_clean and reuse the descriptors. We need + * to compare the descriptor ring next_to_clean packet's "first" buffer + * to the "first" buffer of the packet we just cleaned to determine if + * this is the case. Howevever, next_to_clean can point to either a + * reserved buffer that corresponds to a context descriptor used for the + * next_to_clean packet (TSO packet) or the "first" buffer (single + * packet). The orig_idx from the packet we just cleaned will always + * point to the "first" buffer. If next_to_clean points to a reserved + * buffer, let's bump ntc once and start the comparison from there. */ + ntc = txq->next_to_clean; tx_buf = &txq->tx_buf[ntc]; - while (tx_buf != &txq->tx_buf[orig_idx]) { - idpf_stash_flow_sch_buffers(txq, tx_buf); + + if (tx_buf->type == LIBETH_SQE_CTX) idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf); - } - /* Finally, update next_to_clean to reflect the work that was just done - * on the ring, if any. If the packet was only cleaned from the hash - * table, the ring will not be impacted, therefore we should not touch - * next_to_clean. The updated idx is used here + /* If ntc still points to a different "first" buffer, clean the + * descriptor ring and stash all of the buffers for later cleaning. If + * we cannot stash all of the buffers, next_to_clean will point to the + * "first" buffer of the packet that could not be stashed and cleaning + * will start there next time. + */ + if (unlikely(tx_buf != &txq->tx_buf[orig_idx] && + !idpf_tx_splitq_clean(txq, orig_idx, budget, cleaned, + true))) + return true; + + /* Otherwise, update next_to_clean to reflect the cleaning that was + * done above. */ txq->next_to_clean = idx; @@ -1909,7 +1942,8 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq, if (!idpf_queue_has(FLOW_SCH_EN, txq)) { u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head); - return idpf_tx_splitq_clean(txq, head, budget, cleaned, false); + idpf_tx_splitq_clean(txq, head, budget, cleaned, false); + return; } compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag); @@ -2337,6 +2371,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE); tx_buf = first; + first->nr_frags = 0; params->compl_tag = (tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i; @@ -2347,6 +2382,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, if (dma_mapping_error(tx_q->dev, dma)) return idpf_tx_dma_map_error(tx_q, skb, first, i); + first->nr_frags++; idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; tx_buf->type = LIBETH_SQE_FRAG; @@ -2402,14 +2438,15 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, max_data); - tx_desc++; - i++; - - if (i == tx_q->desc_count) { + if (unlikely(++i == tx_q->desc_count)) { + tx_buf = tx_q->tx_buf; tx_desc = &tx_q->flex_tx[0]; i = 0; tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); + } else { + tx_buf++; + tx_desc++; } /* Since this packet has a buffer that is going to span @@ -2422,7 +2459,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q, * simply pass over these holes and finish cleaning the * rest of the packet. 
 	 */
-	tx_q->tx_buf[i].type = LIBETH_SQE_EMPTY;
+	tx_buf->type = LIBETH_SQE_EMPTY;
 
 	/* Adjust the DMA offset and the remaining size of the
 	 * fragment. On the first iteration of this loop,
@@ -2446,13 +2483,15 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
 			break;
 
 		idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
-		tx_desc++;
-		i++;
 
-		if (i == tx_q->desc_count) {
+		if (unlikely(++i == tx_q->desc_count)) {
+			tx_buf = tx_q->tx_buf;
 			tx_desc = &tx_q->flex_tx[0];
 			i = 0;
 			tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+		} else {
+			tx_buf++;
+			tx_desc++;
 		}
 
 		size = skb_frag_size(frag);
@@ -2460,8 +2499,6 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
 
 		dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
 				       DMA_TO_DEVICE);
-
-		tx_buf = &tx_q->tx_buf[i];
 	}
 
 	/* record SW timestamp if HW timestamp is not available */
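The index-wrap switch the commit message mentions is easiest to see side by
side; a sketch with hypothetical helpers, not the exact driver macros:

	/* old scheme: a biased s16 that wraps when it overflows to zero */
	static s16 old_bump_ntc(s16 ntc, u16 cnt)	/* pre-biased by -cnt */
	{
		if (unlikely(!++ntc))
			ntc -= cnt;
		return ntc;
	}

	/* new scheme: a plain unsigned index with an explicit bound check */
	static u32 new_bump_ntc(u32 ntc, u32 cnt)
	{
		if (unlikely(++ntc == cnt))
			ntc = 0;
		return ntc;
	}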
From patchwork Tue Aug 6 13:12:38 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:38 +0200
Message-ID: <20240806131240.800259-8-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 7/9] idpf: fix netdev Tx queue stop/wake
Cc: Przemek Kitszel, Joshua Hay, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org, Alexander Lobakin, Eric Dumazet, Michal Kubiak,
 Tony Nguyen, nex.sw.ncis.osdt.itp.upstreaming@intel.com, Jakub Kicinski,
 netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"
Miller" Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" From: Michal Kubiak netif_txq_maybe_stop() returns -1, 0, or 1, while idpf_tx_maybe_stop_common() says it returns 0 or -EBUSY. As a result, there sometimes are Tx queue timeout warnings despite that the queue is empty or there is at least enough space to restart it. Make idpf_tx_maybe_stop_common() inline and returning true or false, handling the return of netif_txq_maybe_stop() properly. Use a correct goto in idpf_tx_maybe_stop_splitq() to avoid stopping the queue or incrementing the stops counter twice. Fixes: 6818c4d5b3c2 ("idpf: add splitq start_xmit") Fixes: a5ab9ee0df0b ("idpf: add singleq start_xmit and napi poll") Cc: stable@vger.kernel.org # 6.7+ Signed-off-by: Michal Kubiak Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 9 ++++- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 4 +++ drivers/net/ethernet/intel/idpf/idpf_txrx.c | 35 +++++-------------- 3 files changed, 21 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 2478f71adb95..df3574ac58c2 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -1020,7 +1020,6 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, struct idpf_tx_buf *first, u16 ring_idx); unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, struct sk_buff *skb); -int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size); void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue); netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, struct idpf_tx_queue *tx_q); @@ -1029,4 +1028,12 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq, u16 cleaned_count); int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off); +static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, + u32 needed) +{ + return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx, + IDPF_DESC_UNUSED(tx_q), + needed, needed); +} + #endif /* !_IDPF_TXRX_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 947d3ff9677c..5ba360abbe66 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -375,6 +375,10 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, IDPF_TX_DESCS_FOR_CTX)) { idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false); + u64_stats_update_begin(&tx_q->stats_sync); + u64_stats_inc(&tx_q->q_stats.q_busy); + u64_stats_update_end(&tx_q->stats_sync); + return NETDEV_TX_BUSY; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index fd44a65a0537..26ef064972d4 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -2127,29 +2127,6 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc, desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag); } -/** - * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions - * @tx_q: the queue to be checked - * @size: number of descriptors we want to assure is available - * - * Returns 0 if stop is not needed - */ -int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size) -{ - struct netdev_queue *nq; - - if (likely(IDPF_DESC_UNUSED(tx_q) >= size)) - return 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 2478f71adb95..df3574ac58c2 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1020,7 +1020,6 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
 			   struct idpf_tx_buf *first, u16 ring_idx);
 unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
 					 struct sk_buff *skb);
-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size);
 void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
 netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 				  struct idpf_tx_queue *tx_q);
@@ -1029,4 +1028,12 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
 				      u16 cleaned_count);
 int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
 
+static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
+					     u32 needed)
+{
+	return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
+					  IDPF_DESC_UNUSED(tx_q),
+					  needed, needed);
+}
+
 #endif /* !_IDPF_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index 947d3ff9677c..5ba360abbe66 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -375,6 +375,10 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 				      IDPF_TX_DESCS_FOR_CTX)) {
 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
 
+		u64_stats_update_begin(&tx_q->stats_sync);
+		u64_stats_inc(&tx_q->q_stats.q_busy);
+		u64_stats_update_end(&tx_q->stats_sync);
+
 		return NETDEV_TX_BUSY;
 	}
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index fd44a65a0537..26ef064972d4 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -2127,29 +2127,6 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
 }
 
-/**
- * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions
- * @tx_q: the queue to be checked
- * @size: number of descriptors we want to assure is available
- *
- * Returns 0 if stop is not needed
- */
-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size)
-{
-	struct netdev_queue *nq;
-
-	if (likely(IDPF_DESC_UNUSED(tx_q) >= size))
-		return 0;
-
-	u64_stats_update_begin(&tx_q->stats_sync);
-	u64_stats_inc(&tx_q->q_stats.q_busy);
-	u64_stats_update_end(&tx_q->stats_sync);
-
-	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
-
-	return netif_txq_maybe_stop(nq, IDPF_DESC_UNUSED(tx_q), size, size);
-}
-
 /**
  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
  * @tx_q: the queue to be checked
@@ -2161,7 +2138,7 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
 				     unsigned int descs_needed)
 {
 	if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
-		goto splitq_stop;
+		goto out;
 
 	/* If there are too many outstanding completions expected on the
 	 * completion queue, stop the TX queue to give the device some time to
@@ -2180,10 +2157,12 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
 	return 0;
 
 splitq_stop:
+	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
+
+out:
 	u64_stats_update_begin(&tx_q->stats_sync);
 	u64_stats_inc(&tx_q->q_stats.q_busy);
 	u64_stats_update_end(&tx_q->stats_sync);
-	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
 
 	return -EBUSY;
 }
@@ -2206,7 +2185,11 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
 	tx_q->next_to_use = val;
 
-	idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED);
+	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
+		u64_stats_update_begin(&tx_q->stats_sync);
+		u64_stats_inc(&tx_q->q_stats.q_busy);
+		u64_stats_update_end(&tx_q->stats_sync);
+	}
 
 	/* Force memory writes to complete before letting h/w
 	 * know there are new descriptors to fetch. (Only
From patchwork Tue Aug 6 13:12:39 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:39 +0200
Message-ID: <20240806131240.800259-9-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 8/9] idpf: enable WB_ON_ITR
Cc: Przemek Kitszel, Joshua Hay, linux-kernel@vger.kernel.org,
 Alexander Lobakin, Eric Dumazet, Michal Kubiak, Tony Nguyen,
 nex.sw.ncis.osdt.itp.upstreaming@intel.com, Jakub Kicinski,
 netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"

From: Joshua Hay

Tell hardware to write back completed descriptors even when interrupts
are disabled. Otherwise, descriptors might not be written back until
the hardware can flush a full cacheline of descriptors. This can cause
unnecessary delays when traffic is light (or even trigger a Tx queue
timeout).

An example scenario that reproduces the Tx timeout if the fix is not
applied:

- configure at least two Tx queues to be assigned to the same q_vector,
- generate heavy Tx traffic on the first Tx queue,
- try to send a few packets using the second Tx queue.

In such a case, a Tx timeout will appear on the second Tx queue because
no completion descriptors are written back for that queue while
interrupts are disabled due to NAPI polling.

Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support")
Fixes: a5ab9ee0df0b ("idpf: add singleq start_xmit and napi poll")
Signed-off-by: Joshua Hay
Co-developed-by: Michal Kubiak
Signed-off-by: Michal Kubiak
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   | 27 ++++++++++++++++++-
 drivers/net/ethernet/intel/idpf/idpf_dev.c    |  2 ++
 .../ethernet/intel/idpf/idpf_singleq_txrx.c   |  6 ++++-
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   |  7 ++++-
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |  2 ++
 5 files changed, 41 insertions(+), 3 deletions(-)
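As a sketch of where the feature hooks into the poll loop (simplified from the hunks below; all example_* names are invented stand-ins): whenever the handler returns without re-enabling the interrupt, the vector must be switched to write-back-on-ITR, otherwise completed descriptors can sit in the hardware's cacheline buffer indefinitely:

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct example_q_vector *qv = container_of(napi, typeof(*qv), napi);
	int work_done = 0;

	if (!example_clean_all(qv, budget, &work_done)) {
		/* budget exhausted, IRQ stays masked: arm WB_ON_ITR */
		example_set_wb_on_itr(qv);
		return budget;
	}

	work_done = min_t(int, work_done, budget - 1);

	if (likely(napi_complete_done(napi, work_done)))
		example_enable_irq(qv);		/* clears wb_on_itr again */
	else
		example_set_wb_on_itr(qv);

	return work_done;
}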
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index df3574ac58c2..b4a87f8661a8 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -349,9 +349,11 @@ struct idpf_vec_regs {
  * struct idpf_intr_reg
  * @dyn_ctl: Dynamic control interrupt register
  * @dyn_ctl_intena_m: Mask for dyn_ctl interrupt enable
+ * @dyn_ctl_intena_msk_m: Mask for dyn_ctl interrupt enable mask
  * @dyn_ctl_itridx_s: Register bit offset for ITR index
  * @dyn_ctl_itridx_m: Mask for ITR index
  * @dyn_ctl_intrvl_s: Register bit offset for ITR interval
+ * @dyn_ctl_wb_on_itr_m: Mask for WB on ITR feature
  * @rx_itr: RX ITR register
  * @tx_itr: TX ITR register
  * @icr_ena: Interrupt cause register offset
@@ -360,9 +362,11 @@ struct idpf_vec_regs {
 struct idpf_intr_reg {
 	void __iomem *dyn_ctl;
 	u32 dyn_ctl_intena_m;
+	u32 dyn_ctl_intena_msk_m;
 	u32 dyn_ctl_itridx_s;
 	u32 dyn_ctl_itridx_m;
 	u32 dyn_ctl_intrvl_s;
+	u32 dyn_ctl_wb_on_itr_m;
 	void __iomem *rx_itr;
 	void __iomem *tx_itr;
 	void __iomem *icr_ena;
@@ -383,6 +387,7 @@ struct idpf_intr_reg {
  * @intr_reg: See struct idpf_intr_reg
  * @napi: napi handler
  * @total_events: Number of interrupts processed
+ * @wb_on_itr: WB on ITR enabled or not
  * @tx_dim: Data for TX net_dim algorithm
  * @tx_itr_value: TX interrupt throttling rate
  * @tx_intr_mode: Dynamic ITR or not
@@ -413,6 +418,7 @@ struct idpf_q_vector {
 	__cacheline_group_begin_aligned(read_write);
 	struct napi_struct napi;
 	u16 total_events;
+	bool wb_on_itr;
 
 	struct dim tx_dim;
 	u16 tx_itr_value;
@@ -431,7 +437,7 @@ struct idpf_q_vector {
 	cpumask_var_t affinity_mask;
 	__cacheline_group_end_aligned(cold);
 };
-libeth_cacheline_set_assert(struct idpf_q_vector, 104,
+libeth_cacheline_set_assert(struct idpf_q_vector, 112,
 			    424 + 2 * sizeof(struct dim),
 			    8 + sizeof(cpumask_var_t));
 
@@ -989,6 +995,25 @@ static inline void idpf_tx_splitq_build_desc(union idpf_tx_flex_desc *desc,
 		idpf_tx_splitq_build_flow_desc(desc, params, td_cmd, size);
 }
 
+/**
+ * idpf_vport_intr_set_wb_on_itr - enable descriptor writeback on disabled interrupts
+ * @q_vector: pointer to queue vector struct
+ */
+static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector)
+{
+	struct idpf_intr_reg *reg;
+
+	if (q_vector->wb_on_itr)
+		return;
+
+	q_vector->wb_on_itr = true;
+	reg = &q_vector->intr_reg;
+
+	writel(reg->dyn_ctl_wb_on_itr_m | reg->dyn_ctl_intena_msk_m |
+	       (IDPF_NO_ITR_UPDATE_IDX << reg->dyn_ctl_itridx_s),
+	       reg->dyn_ctl);
+}
+
 int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
 void idpf_vport_init_num_qs(struct idpf_vport *vport,
 			    struct virtchnl2_create_vport *vport_msg);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c
index 3df9935685e9..6c913a703df6 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c
@@ -97,8 +97,10 @@ static int idpf_intr_reg_init(struct idpf_vport *vport)
 		intr->dyn_ctl = idpf_get_reg_addr(adapter,
 						  reg_vals[vec_id].dyn_ctl_reg);
 		intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
+		intr->dyn_ctl_intena_msk_m = PF_GLINT_DYN_CTL_INTENA_MSK_M;
 		intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;
 		intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S;
+		intr->dyn_ctl_wb_on_itr_m = PF_GLINT_DYN_CTL_WB_ON_ITR_M;
 
 		spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
 					       IDPF_PF_ITR_IDX_SPACING);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index 5ba360abbe66..dfd7cf1d9aa0 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -1120,8 +1120,10 @@ int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
 						   &work_done);
 
 	/* If work not completed, return budget and polling will return */
-	if (!clean_complete)
+	if (!clean_complete) {
+		idpf_vport_intr_set_wb_on_itr(q_vector);
 		return budget;
+	}
 
 	work_done = min_t(int, work_done, budget - 1);
 
@@ -1130,6 +1132,8 @@ int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
 	 */
 	if (likely(napi_complete_done(napi, work_done)))
 		idpf_vport_intr_update_itr_ena_irq(q_vector);
+	else
+		idpf_vport_intr_set_wb_on_itr(q_vector);
 
 	return work_done;
 }
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 26ef064972d4..7af4ec83fd0a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3710,6 +3710,7 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector)
 	/* net_dim() updates ITR out-of-band using a work item */
 	idpf_net_dim(q_vector);
 
+	q_vector->wb_on_itr = false;
 	intval = idpf_vport_intr_buildreg_itr(q_vector,
 					      IDPF_NO_ITR_UPDATE_IDX, 0);
 
@@ -4012,8 +4013,10 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
 	clean_complete &= idpf_tx_splitq_clean_all(q_vector, budget, &work_done);
 
 	/* If work not completed, return budget and polling will return */
-	if (!clean_complete)
+	if (!clean_complete) {
+		idpf_vport_intr_set_wb_on_itr(q_vector);
 		return budget;
+	}
 
 	work_done = min_t(int, work_done, budget - 1);
 
@@ -4022,6 +4025,8 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
 	 */
 	if (likely(napi_complete_done(napi, work_done)))
 		idpf_vport_intr_update_itr_ena_irq(q_vector);
+	else
+		idpf_vport_intr_set_wb_on_itr(q_vector);
 
 	/* Switch to poll mode in the tear-down path after sending disable
 	 * queues virtchnl message, as the interrupts will be disabled after
diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
index 629cb5cb7c9f..99b8dbaf4225 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
@@ -97,7 +97,9 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
 		intr->dyn_ctl = idpf_get_reg_addr(adapter,
 						  reg_vals[vec_id].dyn_ctl_reg);
 		intr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;
+		intr->dyn_ctl_intena_msk_m = VF_INT_DYN_CTLN_INTENA_MSK_M;
 		intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;
+		intr->dyn_ctl_wb_on_itr_m = VF_INT_DYN_CTLN_WB_ON_ITR_M;
 
 		spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
 					       IDPF_VF_ITR_IDX_SPACING);
From patchwork Tue Aug 6 13:12:40 2024
From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Date: Tue, 6 Aug 2024 15:12:40 +0200
Message-ID: <20240806131240.800259-10-aleksander.lobakin@intel.com>
In-Reply-To: <20240806131240.800259-1-aleksander.lobakin@intel.com>
References: <20240806131240.800259-1-aleksander.lobakin@intel.com>
Subject: [Intel-wired-lan] [PATCH iwl-next 9/9] idpf: switch to libeth generic statistics
Cc: Przemek Kitszel, Joshua Hay, linux-kernel@vger.kernel.org,
 Alexander Lobakin, Eric Dumazet, Michal Kubiak, Tony Nguyen,
 nex.sw.ncis.osdt.itp.upstreaming@intel.com, Jakub Kicinski,
 netdev@vger.kernel.org, Paolo Abeni, "David S. Miller"

Fully reimplement idpf's per-queue stats using the libeth infra.
Embed &libeth_netdev_priv at the beginning of &idpf_netdev_priv,
call the necessary init/deinit helpers and the corresponding Ethtool
helpers.
Update hotpath counters such as hsplit and tso/gso using the onstack
containers instead of direct accesses to queue->stats.

Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf.h        |  21 +-
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  54 +-
 drivers/net/ethernet/intel/idpf/virtchnl2.h   |  33 +-
 .../net/ethernet/intel/idpf/idpf_ethtool.c    | 498 ++----------------
 drivers/net/ethernet/intel/idpf/idpf_lib.c    |  32 +-
 .../ethernet/intel/idpf/idpf_singleq_txrx.c   |  76 +--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 172 +++---
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   |  37 +-
 8 files changed, 232 insertions(+), 691 deletions(-)
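The "onstack containers" mentioned above take roughly this shape. A sketch only: libeth_sq_napi_stats_add() and struct libeth_sq_napi_stats appear in the diff below, while example_sq_clean()/example_clean_one() are invented stand-ins for the per-descriptor cleaning:

static bool example_sq_clean(struct idpf_tx_queue *tx_q, int budget)
{
	struct libeth_sq_napi_stats ss = { };	/* plain u64s, no seqcount */
	int i;

	/* hot loop: bump the onstack counters with ordinary stores */
	for (i = 0; i < budget; i++)
		if (!example_clean_one(tx_q, &ss))
			break;

	/* fold everything into the queue-lifetime stats exactly once */
	libeth_sq_napi_stats_add(&tx_q->stats, &ss);

	return !!budget;
}

One synchronized merge per poll replaces a u64_stats_update_begin()/end() pair around every counter bump, which is the main point of the rework.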
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 2c31ad87587a..7df35ec7302c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -138,6 +138,7 @@ enum idpf_vport_state {
 /**
  * struct idpf_netdev_priv - Struct to store vport back pointer
+ * @priv: libeth private structure
  * @adapter: Adapter back pointer
  * @vport: Vport back pointer
  * @vport_id: Vport identifier
@@ -147,6 +148,8 @@ enum idpf_vport_state {
  * @stats_lock: Lock to protect stats update
  */
 struct idpf_netdev_priv {
+	struct libeth_netdev_priv priv;
+
 	struct idpf_adapter *adapter;
 	struct idpf_vport *vport;
 	u32 vport_id;
@@ -155,6 +158,7 @@ struct idpf_netdev_priv {
 	struct rtnl_link_stats64 netstats;
 	spinlock_t stats_lock;
 };
+libeth_netdev_priv_assert(struct idpf_netdev_priv, priv);
 
 /**
  * struct idpf_reset_reg - Reset register offsets/masks
@@ -232,19 +236,6 @@ enum idpf_vport_flags {
 	IDPF_VPORT_FLAGS_NBITS,
 };
 
-struct idpf_port_stats {
-	struct u64_stats_sync stats_sync;
-	u64_stats_t rx_hw_csum_err;
-	u64_stats_t rx_hsplit;
-	u64_stats_t rx_hsplit_hbo;
-	u64_stats_t rx_bad_descs;
-	u64_stats_t tx_linearize;
-	u64_stats_t tx_busy;
-	u64_stats_t tx_drops;
-	u64_stats_t tx_dma_map_errs;
-	struct virtchnl2_vport_stats vport_stats;
-};
-
 /**
  * struct idpf_vport - Handle for netdevices and queue resources
  * @num_txq: Number of allocated TX queues
@@ -285,7 +276,7 @@ struct idpf_port_stats {
  * @default_mac_addr: device will give a default MAC to use
  * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
  * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
- * @port_stats: per port csum, header split, and other offload stats
+ * @vport_stats: vport stats reported by HW
  * @link_up: True if link is up
 * @link_speed_mbps: Link speed in mbps
 * @sw_marker_wq: workqueue for marker packets
@@ -328,7 +319,7 @@ struct idpf_vport {
 	u8 default_mac_addr[ETH_ALEN];
 	u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
 	u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
-	struct idpf_port_stats port_stats;
+	struct virtchnl2_vport_stats vport_stats;
 
 	bool link_up;
 	u32 link_speed_mbps;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index b4a87f8661a8..8188f5cb418b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -7,6 +7,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -143,6 +144,8 @@ do {								\
 #define IDPF_TX_FLAGS_IPV6	BIT(2)
 #define IDPF_TX_FLAGS_TUNNEL	BIT(3)
 
+struct libeth_sq_xmit_stats;
+
 union idpf_tx_flex_desc {
 	struct idpf_flex_tx_desc q; /* queue based scheduling */
 	struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */
@@ -441,28 +444,6 @@ libeth_cacheline_set_assert(struct idpf_q_vector, 112,
 			    424 + 2 * sizeof(struct dim),
 			    8 + sizeof(cpumask_var_t));
 
-struct idpf_rx_queue_stats {
-	u64_stats_t packets;
-	u64_stats_t bytes;
-	u64_stats_t rsc_pkts;
-	u64_stats_t hw_csum_err;
-	u64_stats_t hsplit_pkts;
-	u64_stats_t hsplit_buf_ovf;
-	u64_stats_t bad_descs;
-};
-
-struct idpf_tx_queue_stats {
-	u64_stats_t packets;
-	u64_stats_t bytes;
-	u64_stats_t lso_pkts;
-	u64_stats_t linearize;
-	u64_stats_t q_busy;
-	u64_stats_t skb_drops;
-	u64_stats_t dma_map_errs;
-};
-
-#define idpf_cleaned_stats libeth_sq_napi_stats
-
 #define IDPF_ITR_DYNAMIC	1
 #define IDPF_ITR_MAX		0x1FE0
 #define IDPF_ITR_20K		0x0032
@@ -508,10 +489,9 @@ struct idpf_txq_stash {
  * @next_to_use: Next descriptor to use
  * @next_to_clean: Next descriptor to clean
  * @next_to_alloc: RX buffer to allocate at
- * @skb: Pointer to the skb
  * @truesize: data buffer truesize in singleq
- * @stats_sync: See struct u64_stats_sync
- * @q_stats: See union idpf_rx_queue_stats
+ * @skb: Pointer to the skb
+ * @stats: per-queue RQ stats
  * @q_id: Queue id
  * @size: Length of descriptor ring in bytes
  * @dma: Physical address of ring
@@ -551,15 +531,14 @@ struct idpf_rx_queue {
 	__cacheline_group_end_aligned(read_mostly);
 
 	__cacheline_group_begin_aligned(read_write);
-	u16 next_to_use;
-	u16 next_to_clean;
-	u16 next_to_alloc;
+	u32 next_to_use;
+	u32 next_to_clean;
+	u32 next_to_alloc;
 
-	struct sk_buff *skb;
 	u32 truesize;
+	struct sk_buff *skb;
 
-	struct u64_stats_sync stats_sync;
-	struct idpf_rx_queue_stats q_stats;
+	struct libeth_rq_stats stats;
 	__cacheline_group_end_aligned(read_write);
 
 	__cacheline_group_begin_aligned(cold);
@@ -576,7 +555,7 @@ struct idpf_rx_queue {
 	__cacheline_group_end_aligned(cold);
 };
 libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
-			    80 + sizeof(struct u64_stats_sync),
+			    32 + sizeof(struct libeth_rq_stats),
 			    32);
 
 /**
@@ -633,8 +612,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
  * @compl_tag_bufid_m: Completion tag buffer id mask
  * @compl_tag_cur_gen: Used to keep track of current completion tag generation
  * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
- * @stats_sync: See struct u64_stats_sync
- * @q_stats: See union idpf_tx_queue_stats
+ * @stats: per-queue SQ stats
  * @q_id: Queue id
 * @size: Length of descriptor ring in bytes
 * @dma: Physical address of ring
@@ -682,8 +660,7 @@ struct idpf_tx_queue {
 	u16 compl_tag_cur_gen;
 	u16 compl_tag_gen_max;
 
-	struct u64_stats_sync stats_sync;
-	struct idpf_tx_queue_stats q_stats;
+	struct libeth_sq_stats stats;
 	__cacheline_group_end_aligned(read_write);
 
 	__cacheline_group_begin_aligned(cold);
@@ -695,7 +672,7 @@ struct idpf_tx_queue {
 	__cacheline_group_end_aligned(cold);
 };
 libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
-			    88 + sizeof(struct u64_stats_sync),
+			    32 + sizeof(struct libeth_sq_stats),
 			    24);
 
 /**
@@ -1051,7 +1028,8 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev);
 bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
 				      u16 cleaned_count);
-int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off,
+	     struct libeth_sq_xmit_stats *ss);
 
 static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
 					     u32 needed)
diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h
index 63deb120359c..19d62cfc17be 100644
--- a/drivers/net/ethernet/intel/idpf/virtchnl2.h
+++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h
@@ -1021,6 +1021,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_ptype_info);
  * struct virtchnl2_vport_stats - Vport statistics.
  * @vport_id: Vport id.
  * @pad: Padding.
+ * @counters: grouped counters for bulk operations
  * @rx_bytes: Received bytes.
  * @rx_unicast: Received unicast packets.
  * @rx_multicast: Received multicast packets.
@@ -1045,21 +1046,23 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_ptype_info);
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
-	__le64 rx_bytes;
-	__le64 rx_unicast;
-	__le64 rx_multicast;
-	__le64 rx_broadcast;
-	__le64 rx_discards;
-	__le64 rx_errors;
-	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;
-	__le64 tx_unicast;
-	__le64 tx_multicast;
-	__le64 tx_broadcast;
-	__le64 tx_discards;
-	__le64 tx_errors;
-	__le64 rx_invalid_frame_length;
-	__le64 rx_overflow_drop;
+	struct_group(counters,
+		__le64 rx_bytes;
+		__le64 rx_unicast;
+		__le64 rx_multicast;
+		__le64 rx_broadcast;
+		__le64 rx_discards;
+		__le64 rx_errors;
+		__le64 rx_unknown_protocol;
+		__le64 tx_bytes;
+		__le64 tx_unicast;
+		__le64 tx_multicast;
+		__le64 tx_broadcast;
+		__le64 tx_discards;
+		__le64 tx_errors;
+		__le64 rx_invalid_frame_length;
+		__le64 rx_overflow_drop;
+	);
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
index 3806ddd3ce4a..5aba64b0b7ec 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (C) 2023 Intel Corporation */
 
+#include
+
 #include "idpf.h"
 
 /**
@@ -399,172 +401,37 @@ static int idpf_set_ringparam(struct net_device *netdev,
 	return err;
 }
 
-/**
- * struct idpf_stats - definition for an ethtool statistic
- * @stat_string: statistic name to display in ethtool -S output
- * @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64)
- * @stat_offset: offsetof() the stat from a base pointer
- *
- * This structure defines a statistic to be added to the ethtool stats buffer.
- * It defines a statistic as offset from a common base pointer. Stats should
- * be defined in constant arrays using the IDPF_STAT macro, with every element
- * of the array using the same _type for calculating the sizeof_stat and
- * stat_offset.
- *
- * The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or
- * sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from
- * the idpf_add_ethtool_stat() helper function.
- *
- * The @stat_string is interpreted as a format string, allowing formatted
- * values to be inserted while looping over multiple structures for a given
- * statistics array. Thus, every statistic string in an array should have the
- * same type and number of format specifiers, to be formatted by variadic
- * arguments to the idpf_add_stat_string() helper function.
- */
-struct idpf_stats {
-	char stat_string[ETH_GSTRING_LEN];
-	int sizeof_stat;
-	int stat_offset;
-};
-
-/* Helper macro to define an idpf_stat structure with proper size and type.
- * Use this when defining constant statistics arrays. Note that @_type expects
- * only a type name and is used multiple times.
- */
-#define IDPF_STAT(_type, _name, _stat) { \
-	.stat_string = _name, \
-	.sizeof_stat = sizeof_field(_type, _stat), \
-	.stat_offset = offsetof(_type, _stat) \
-}
-
-/* Helper macros for defining some statistics related to queues */
-#define IDPF_RX_QUEUE_STAT(_name, _stat) \
-	IDPF_STAT(struct idpf_rx_queue, _name, _stat)
-#define IDPF_TX_QUEUE_STAT(_name, _stat) \
-	IDPF_STAT(struct idpf_tx_queue, _name, _stat)
-
-/* Stats associated with a Tx queue */
-static const struct idpf_stats idpf_gstrings_tx_queue_stats[] = {
-	IDPF_TX_QUEUE_STAT("pkts", q_stats.packets),
-	IDPF_TX_QUEUE_STAT("bytes", q_stats.bytes),
-	IDPF_TX_QUEUE_STAT("lso_pkts", q_stats.lso_pkts),
-};
-
-/* Stats associated with an Rx queue */
-static const struct idpf_stats idpf_gstrings_rx_queue_stats[] = {
-	IDPF_RX_QUEUE_STAT("pkts", q_stats.packets),
-	IDPF_RX_QUEUE_STAT("bytes", q_stats.bytes),
-	IDPF_RX_QUEUE_STAT("rx_gro_hw_pkts", q_stats.rsc_pkts),
-};
-
-#define IDPF_TX_QUEUE_STATS_LEN		ARRAY_SIZE(idpf_gstrings_tx_queue_stats)
-#define IDPF_RX_QUEUE_STATS_LEN		ARRAY_SIZE(idpf_gstrings_rx_queue_stats)
-
-#define IDPF_PORT_STAT(_name, _stat) \
-	IDPF_STAT(struct idpf_vport, _name, _stat)
-
-static const struct idpf_stats idpf_gstrings_port_stats[] = {
-	IDPF_PORT_STAT("rx-csum_errors", port_stats.rx_hw_csum_err),
-	IDPF_PORT_STAT("rx-hsplit", port_stats.rx_hsplit),
-	IDPF_PORT_STAT("rx-hsplit_hbo", port_stats.rx_hsplit_hbo),
-	IDPF_PORT_STAT("rx-bad_descs", port_stats.rx_bad_descs),
-	IDPF_PORT_STAT("tx-skb_drops", port_stats.tx_drops),
-	IDPF_PORT_STAT("tx-dma_map_errs", port_stats.tx_dma_map_errs),
-	IDPF_PORT_STAT("tx-linearized_pkts", port_stats.tx_linearize),
-	IDPF_PORT_STAT("tx-busy_events", port_stats.tx_busy),
-	IDPF_PORT_STAT("rx-unicast_pkts", port_stats.vport_stats.rx_unicast),
-	IDPF_PORT_STAT("rx-multicast_pkts", port_stats.vport_stats.rx_multicast),
-	IDPF_PORT_STAT("rx-broadcast_pkts", port_stats.vport_stats.rx_broadcast),
-	IDPF_PORT_STAT("rx-unknown_protocol", port_stats.vport_stats.rx_unknown_protocol),
-	IDPF_PORT_STAT("tx-unicast_pkts", port_stats.vport_stats.tx_unicast),
-	IDPF_PORT_STAT("tx-multicast_pkts", port_stats.vport_stats.tx_multicast),
-	IDPF_PORT_STAT("tx-broadcast_pkts", port_stats.vport_stats.tx_broadcast),
+static const char * const idpf_gstrings_port_stats[] = {
+	"rx_bytes",
+	"rx_unicast",
+	"rx_multicast",
+	"rx_broadcast",
+	"rx_discards",
+	"rx_errors",
+	"rx_unknown_protocol",
+	"tx_bytes",
+	"tx_unicast",
+	"tx_multicast",
+	"tx_broadcast",
+	"tx_discards",
+	"tx_errors",
+	"rx_invalid_frame_length",
+	"rx_overflow_drop",
 };
+static_assert(ARRAY_SIZE(idpf_gstrings_port_stats) ==
+	      sizeof_field(struct virtchnl2_vport_stats, counters) /
+	      sizeof(__le64));
 
 #define IDPF_PORT_STATS_LEN ARRAY_SIZE(idpf_gstrings_port_stats)
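The static_assert() above is a handy pattern on its own: it ties the ethtool string table to the struct_group() of counters so the two cannot drift apart. A self-contained sketch with invented names (struct_group() and sizeof_field() come from <linux/stddef.h>, static_assert() from <linux/build_bug.h>):

#include <linux/build_bug.h>
#include <linux/kernel.h>
#include <linux/stddef.h>

static const char * const example_stat_names[] = {
	"rx_bytes",
	"tx_bytes",
};

struct example_counters {
	struct_group(counters,
		__le64 rx_bytes;
		__le64 tx_bytes;
	);
};

/* fails the build if a counter is added without a matching string */
static_assert(ARRAY_SIZE(example_stat_names) ==
	      sizeof_field(struct example_counters, counters) /
	      sizeof(__le64));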
-/**
- * __idpf_add_qstat_strings - copy stat strings into ethtool buffer
- * @p: ethtool supplied buffer
- * @stats: stat definitions array
- * @size: size of the stats array
- * @type: stat type
- * @idx: stat index
- *
- * Format and copy the strings described by stats into the buffer pointed at
- * by p.
- */
-static void __idpf_add_qstat_strings(u8 **p, const struct idpf_stats *stats,
-				     const unsigned int size, const char *type,
-				     unsigned int idx)
-{
-	unsigned int i;
-
-	for (i = 0; i < size; i++)
-		ethtool_sprintf(p, "%s_q-%u_%s",
-				type, idx, stats[i].stat_string);
-}
-
-/**
- * idpf_add_qstat_strings - Copy queue stat strings into ethtool buffer
- * @p: ethtool supplied buffer
- * @stats: stat definitions array
- * @type: stat type
- * @idx: stat idx
- *
- * Format and copy the strings described by the const static stats value into
- * the buffer pointed at by p.
- *
- * The parameter @stats is evaluated twice, so parameters with side effects
- * should be avoided. Additionally, stats must be an array such that
- * ARRAY_SIZE can be called on it.
- */
-#define idpf_add_qstat_strings(p, stats, type, idx) \
-	__idpf_add_qstat_strings(p, stats, ARRAY_SIZE(stats), type, idx)
-
 /**
  * idpf_add_stat_strings - Copy port stat strings into ethtool buffer
  * @p: ethtool buffer
- * @stats: struct to copy from
- * @size: size of stats array to copy from
  */
-static void idpf_add_stat_strings(u8 **p, const struct idpf_stats *stats,
-				  const unsigned int size)
+static void idpf_add_stat_strings(u8 **p)
 {
-	unsigned int i;
-
-	for (i = 0; i < size; i++)
-		ethtool_puts(p, stats[i].stat_string);
-}
-
-/**
- * idpf_get_stat_strings - Get stat strings
- * @netdev: network interface device structure
- * @data: buffer for string data
- *
- * Builds the statistics string table
- */
-static void idpf_get_stat_strings(struct net_device *netdev, u8 *data)
-{
-	struct idpf_netdev_priv *np = netdev_priv(netdev);
-	struct idpf_vport_config *vport_config;
-	unsigned int i;
-
-	idpf_add_stat_strings(&data, idpf_gstrings_port_stats,
-			      IDPF_PORT_STATS_LEN);
-
-	vport_config = np->adapter->vport_config[np->vport_idx];
-	/* It's critical that we always report a constant number of strings and
-	 * that the strings are reported in the same order regardless of how
-	 * many queues are actually in use.
-	 */
-	for (i = 0; i < vport_config->max_q.max_txq; i++)
-		idpf_add_qstat_strings(&data, idpf_gstrings_tx_queue_stats,
-				       "tx", i);
-
-	for (i = 0; i < vport_config->max_q.max_rxq; i++)
-		idpf_add_qstat_strings(&data, idpf_gstrings_rx_queue_stats,
-				       "rx", i);
+	for (u32 i = 0; i < IDPF_PORT_STATS_LEN; i++)
+		ethtool_puts(p, idpf_gstrings_port_stats[i]);
 }
 
 /**
@@ -579,7 +446,8 @@ static void idpf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
 {
 	switch (sset) {
 	case ETH_SS_STATS:
-		idpf_get_stat_strings(netdev, data);
+		idpf_add_stat_strings(&data);
+		libeth_ethtool_get_strings(netdev, sset, data);
 		break;
 	default:
 		break;
@@ -595,146 +463,15 @@ static void idpf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
 */
 static int idpf_get_sset_count(struct net_device *netdev, int sset)
 {
-	struct idpf_netdev_priv *np = netdev_priv(netdev);
-	struct idpf_vport_config *vport_config;
-	u16 max_txq, max_rxq;
+	u32 count;
 
 	if (sset != ETH_SS_STATS)
 		return -EINVAL;
 
-	vport_config = np->adapter->vport_config[np->vport_idx];
-	/* This size reported back here *must* be constant throughout the
-	 * lifecycle of the netdevice, i.e. we must report the maximum length
-	 * even for queues that don't technically exist. This is due to the
-	 * fact that this userspace API uses three separate ioctl calls to get
-	 * stats data but has no way to communicate back to userspace when that
-	 * size has changed, which can typically happen as a result of changing
-	 * number of queues.
-	 * If the number/order of stats change in the middle of this call
-	 * chain it will lead to userspace crashing/accessing bad data
-	 * through buffer under/overflow.
-	 */
-	max_txq = vport_config->max_q.max_txq;
-	max_rxq = vport_config->max_q.max_rxq;
-
-	return IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) +
-	       (IDPF_RX_QUEUE_STATS_LEN * max_rxq);
-}
-
-/**
- * idpf_add_one_ethtool_stat - copy the stat into the supplied buffer
- * @data: location to store the stat value
- * @pstat: old stat pointer to copy from
- * @stat: the stat definition
- *
- * Copies the stat data defined by the pointer and stat structure pair into
- * the memory supplied as data. If the pointer is null, data will be zero'd.
- */
-static void idpf_add_one_ethtool_stat(u64 *data, const void *pstat,
-				      const struct idpf_stats *stat)
-{
-	char *p;
-
-	if (!pstat) {
-		/* Ensure that the ethtool data buffer is zero'd for any stats
-		 * which don't have a valid pointer.
-		 */
-		*data = 0;
-		return;
-	}
+	count = IDPF_PORT_STATS_LEN;
+	count += libeth_ethtool_get_sset_count(netdev, sset);
 
-	p = (char *)pstat + stat->stat_offset;
-	switch (stat->sizeof_stat) {
-	case sizeof(u64):
-		*data = *((u64 *)p);
-		break;
-	case sizeof(u32):
-		*data = *((u32 *)p);
-		break;
-	case sizeof(u16):
-		*data = *((u16 *)p);
-		break;
-	case sizeof(u8):
-		*data = *((u8 *)p);
-		break;
-	default:
-		WARN_ONCE(1, "unexpected stat size for %s",
-			  stat->stat_string);
-		*data = 0;
-	}
-}
-
-/**
- * idpf_add_queue_stats - copy queue statistics into supplied buffer
- * @data: ethtool stats buffer
- * @q: the queue to copy
- * @type: type of the queue
- *
- * Queue statistics must be copied while protected by u64_stats_fetch_begin,
- * so we can't directly use idpf_add_ethtool_stats. Assumes that queue stats
- * are defined in idpf_gstrings_queue_stats. If the queue pointer is null,
- * zero out the queue stat values and update the data pointer. Otherwise
- * safely copy the stats from the queue into the supplied buffer and update
- * the data pointer when finished.
- *
- * This function expects to be called while under rcu_read_lock().
- */
-static void idpf_add_queue_stats(u64 **data, const void *q,
-				 enum virtchnl2_queue_type type)
-{
-	const struct u64_stats_sync *stats_sync;
-	const struct idpf_stats *stats;
-	unsigned int start;
-	unsigned int size;
-	unsigned int i;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX) {
-		size = IDPF_RX_QUEUE_STATS_LEN;
-		stats = idpf_gstrings_rx_queue_stats;
-		stats_sync = &((const struct idpf_rx_queue *)q)->stats_sync;
-	} else {
-		size = IDPF_TX_QUEUE_STATS_LEN;
-		stats = idpf_gstrings_tx_queue_stats;
-		stats_sync = &((const struct idpf_tx_queue *)q)->stats_sync;
-	}
-
-	/* To avoid invalid statistics values, ensure that we keep retrying
-	 * the copy until we get a consistent value according to
-	 * u64_stats_fetch_retry.
-	 */
-	do {
-		start = u64_stats_fetch_begin(stats_sync);
-		for (i = 0; i < size; i++)
-			idpf_add_one_ethtool_stat(&(*data)[i], q, &stats[i]);
-	} while (u64_stats_fetch_retry(stats_sync, start));
-
-	/* Once we successfully copy the stats in, update the data pointer */
-	*data += size;
-}
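For reference, the u64_stats read loop these removed helpers were built around (see <linux/u64_stats_sync.h>; the queue type below is invented): the reader retries until the writer-side sequence count is stable, which is what makes 64-bit counters safe to read on 32-bit kernels:

struct example_queue {
	struct u64_stats_sync sync;
	u64_stats_t packets;
};

static u64 example_read_packets(const struct example_queue *q)
{
	unsigned int start;
	u64 packets;

	do {
		start = u64_stats_fetch_begin(&q->sync);
		packets = u64_stats_read(&q->packets);
	} while (u64_stats_fetch_retry(&q->sync, start));

	return packets;
}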
-
-/**
- * idpf_add_empty_queue_stats - Add stats for a non-existent queue
- * @data: pointer to data buffer
- * @qtype: type of data queue
- *
- * We must report a constant length of stats back to userspace regardless of
- * how many queues are actually in use because stats collection happens over
- * three separate ioctls and there's no way to notify userspace the size
- * changed between those calls. This adds empty to data to the stats since we
- * don't have a real queue to refer to for this stats slot.
- */
-static void idpf_add_empty_queue_stats(u64 **data, u16 qtype)
-{
-	unsigned int i;
-	int stats_len;
-
-	if (qtype == VIRTCHNL2_QUEUE_TYPE_RX)
-		stats_len = IDPF_RX_QUEUE_STATS_LEN;
-	else
-		stats_len = IDPF_TX_QUEUE_STATS_LEN;
-
-	for (i = 0; i < stats_len; i++)
-		(*data)[i] = 0;
-	*data += stats_len;
+	return count;
 }
 
 /**
@@ -744,116 +481,15 @@ static void idpf_add_empty_queue_stats(u64 **data, u16 qtype)
 */
 static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data)
 {
-	unsigned int size = IDPF_PORT_STATS_LEN;
-	unsigned int start;
-	unsigned int i;
-
-	do {
-		start = u64_stats_fetch_begin(&vport->port_stats.stats_sync);
-		for (i = 0; i < size; i++)
-			idpf_add_one_ethtool_stat(&(*data)[i], vport,
-						  &idpf_gstrings_port_stats[i]);
-	} while (u64_stats_fetch_retry(&vport->port_stats.stats_sync, start));
-
-	*data += size;
-}
+	u64 *stats = *data;
 
-/**
- * idpf_collect_queue_stats - accumulate various per queue stats
- * into port level stats
- * @vport: pointer to vport struct
- **/
-static void idpf_collect_queue_stats(struct idpf_vport *vport)
-{
-	struct idpf_port_stats *pstats = &vport->port_stats;
-	int i, j;
+	memcpy(stats, &vport->vport_stats.counters,
+	       sizeof(vport->vport_stats.counters));
 
-	/* zero out port stats since they're actually tracked in per
-	 * queue stats; this is only for reporting
-	 */
-	u64_stats_update_begin(&pstats->stats_sync);
-	u64_stats_set(&pstats->rx_hw_csum_err, 0);
-	u64_stats_set(&pstats->rx_hsplit, 0);
-	u64_stats_set(&pstats->rx_hsplit_hbo, 0);
-	u64_stats_set(&pstats->rx_bad_descs, 0);
-	u64_stats_set(&pstats->tx_linearize, 0);
-	u64_stats_set(&pstats->tx_busy, 0);
-	u64_stats_set(&pstats->tx_drops, 0);
-	u64_stats_set(&pstats->tx_dma_map_errs, 0);
-	u64_stats_update_end(&pstats->stats_sync);
-
-	for (i = 0; i < vport->num_rxq_grp; i++) {
-		struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
-		u16 num_rxq;
-
-		if (idpf_is_queue_model_split(vport->rxq_model))
-			num_rxq = rxq_grp->splitq.num_rxq_sets;
-		else
-			num_rxq = rxq_grp->singleq.num_rxq;
-
-		for (j = 0; j < num_rxq; j++) {
-			u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs;
-			struct idpf_rx_queue_stats *stats;
-			struct idpf_rx_queue *rxq;
-			unsigned int start;
-
-			if (idpf_is_queue_model_split(vport->rxq_model))
-				rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
-			else
-				rxq = rxq_grp->singleq.rxqs[j];
-
-			if (!rxq)
-				continue;
-
-			do {
-				start = u64_stats_fetch_begin(&rxq->stats_sync);
-
-				stats = &rxq->q_stats;
-				hw_csum_err = u64_stats_read(&stats->hw_csum_err);
-				hsplit = u64_stats_read(&stats->hsplit_pkts);
-				hsplit_hbo = u64_stats_read(&stats->hsplit_buf_ovf);
-				bad_descs = u64_stats_read(&stats->bad_descs);
-			} while (u64_stats_fetch_retry(&rxq->stats_sync, start));
-
-			u64_stats_update_begin(&pstats->stats_sync);
-			u64_stats_add(&pstats->rx_hw_csum_err, hw_csum_err);
-			u64_stats_add(&pstats->rx_hsplit, hsplit);
-			u64_stats_add(&pstats->rx_hsplit_hbo, hsplit_hbo);
-			u64_stats_add(&pstats->rx_bad_descs, bad_descs);
-			u64_stats_update_end(&pstats->stats_sync);
-		}
-	}
+	for (u32 i = 0; i < IDPF_PORT_STATS_LEN; i++)
+		le64_to_cpus(&stats[i]);
 
-	for (i = 0; i < vport->num_txq_grp; i++) {
-		struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
-
-		for (j = 0; j < txq_grp->num_txq; j++) {
-			u64 linearize, qbusy, skb_drops, dma_map_errs;
-			struct idpf_tx_queue *txq = txq_grp->txqs[j];
-			struct idpf_tx_queue_stats *stats;
-			unsigned int start;
-
-			if (!txq)
-				continue;
-
-			do {
-				start = u64_stats_fetch_begin(&txq->stats_sync);
-
-				stats = &txq->q_stats;
-				linearize = u64_stats_read(&stats->linearize);
-				qbusy = u64_stats_read(&stats->q_busy);
-				skb_drops = u64_stats_read(&stats->skb_drops);
-				dma_map_errs = u64_stats_read(&stats->dma_map_errs);
-			} while (u64_stats_fetch_retry(&txq->stats_sync, start));
-
-			u64_stats_update_begin(&pstats->stats_sync);
-			u64_stats_add(&pstats->tx_linearize, linearize);
-			u64_stats_add(&pstats->tx_busy, qbusy);
-			u64_stats_add(&pstats->tx_drops, skb_drops);
-			u64_stats_add(&pstats->tx_dma_map_errs, dma_map_errs);
-			u64_stats_update_end(&pstats->stats_sync);
-		}
-	}
+	*data += IDPF_PORT_STATS_LEN;
 }
 
 /**
@@ -869,12 +505,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
 				   u64 *data)
 {
 	struct idpf_netdev_priv *np = netdev_priv(netdev);
-	struct idpf_vport_config *vport_config;
 	struct idpf_vport *vport;
-	unsigned int total = 0;
-	unsigned int i, j;
-	bool is_splitq;
-	u16 qtype;
 
 	idpf_vport_ctrl_lock(netdev);
 	vport = idpf_netdev_to_vport(netdev);
@@ -887,63 +518,8 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
 
 	rcu_read_lock();
 
-	idpf_collect_queue_stats(vport);
 	idpf_add_port_stats(vport, &data);
-
-	for (i = 0; i < vport->num_txq_grp; i++) {
-		struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
-
-		qtype = VIRTCHNL2_QUEUE_TYPE_TX;
-
-		for (j = 0; j < txq_grp->num_txq; j++, total++) {
-			struct idpf_tx_queue *txq = txq_grp->txqs[j];
-
-			if (!txq)
-				idpf_add_empty_queue_stats(&data, qtype);
-			else
-				idpf_add_queue_stats(&data, txq, qtype);
-		}
-	}
-
-	vport_config = vport->adapter->vport_config[vport->idx];
-	/* It is critical we provide a constant number of stats back to
-	 * userspace regardless of how many queues are actually in use because
-	 * there is no way to inform userspace the size has changed between
-	 * ioctl calls. This will fill in any missing stats with zero.
-	 */
-	for (; total < vport_config->max_q.max_txq; total++)
-		idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX);
-	total = 0;
-
-	is_splitq = idpf_is_queue_model_split(vport->rxq_model);
-
-	for (i = 0; i < vport->num_rxq_grp; i++) {
-		struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
-		u16 num_rxq;
-
-		qtype = VIRTCHNL2_QUEUE_TYPE_RX;
-
-		if (is_splitq)
-			num_rxq = rxq_grp->splitq.num_rxq_sets;
-		else
-			num_rxq = rxq_grp->singleq.num_rxq;
-
-		for (j = 0; j < num_rxq; j++, total++) {
-			struct idpf_rx_queue *rxq;
-
-			if (is_splitq)
-				rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
-			else
-				rxq = rxq_grp->singleq.rxqs[j];
-			if (!rxq)
-				idpf_add_empty_queue_stats(&data, qtype);
-			else
-				idpf_add_queue_stats(&data, rxq, qtype);
-		}
-	}
-
-	for (; total < vport_config->max_q.max_rxq; total++)
-		idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_RX);
+	libeth_ethtool_get_stats(netdev, stats, data);
 
 	rcu_read_unlock();
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 0b6c8fd5bc90..fd4cea2763e2 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (C) 2023 Intel Corporation */
 
+#include
+
 #include "idpf.h"
 #include "idpf_virtchnl.h"
 
@@ -739,9 +741,9 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 		return idpf_init_mac_addr(vport, netdev);
 	}
 
-	netdev = alloc_etherdev_mqs(sizeof(struct idpf_netdev_priv),
-				    vport_config->max_q.max_txq,
-				    vport_config->max_q.max_rxq);
+	netdev = libeth_netdev_alloc(sizeof(struct idpf_netdev_priv),
+				     vport_config->max_q.max_rxq,
+				     vport_config->max_q.max_txq);
 	if (!netdev)
 		return -ENOMEM;
 
@@ -756,7 +758,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 
 	err = idpf_init_mac_addr(vport, netdev);
 	if (err) {
-		free_netdev(vport->netdev);
+		libeth_netdev_free(vport->netdev);
 		vport->netdev = NULL;
 
 		return err;
@@ -945,7 +947,7 @@ static void idpf_decfg_netdev(struct idpf_vport *vport)
 	vport->rx_ptype_lkup = NULL;
 
 	unregister_netdev(vport->netdev);
-	free_netdev(vport->netdev);
+	libeth_netdev_free(vport->netdev);
 	vport->netdev = NULL;
 
 	adapter->netdevs[vport->idx] = NULL;
@@ -1266,23 +1268,6 @@ static void idpf_restore_features(struct idpf_vport *vport)
 		idpf_restore_mac_filters(vport);
 }
 
-/**
- * idpf_set_real_num_queues - set number of queues for netdev
- * @vport: virtual port structure
- *
- * Returns 0 on success, negative on failure.
- */
-static int idpf_set_real_num_queues(struct idpf_vport *vport)
-{
-	int err;
-
-	err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq);
-	if (err)
-		return err;
-
-	return netif_set_real_num_tx_queues(vport->netdev, vport->num_txq);
-}
-
 /**
  * idpf_up_complete - Complete interface up sequence
  * @vport: virtual port structure
@@ -1924,7 +1909,8 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
 	if (reset_cause == IDPF_SR_Q_CHANGE)
 		idpf_vport_alloc_vec_indexes(vport);
 
-	err = idpf_set_real_num_queues(vport);
+	err = libeth_set_real_num_queues(vport->netdev, vport->num_rxq,
+					 vport->num_txq);
 	if (err)
 		goto err_open;
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index dfd7cf1d9aa0..c6ce819b3528 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -337,10 +337,6 @@ static void idpf_tx_singleq_build_ctx_desc(struct idpf_tx_queue *txq,
 		qw1 |= FIELD_PREP(IDPF_TXD_CTX_QW1_TSO_LEN_M, offload->tso_len);
 		qw1 |= FIELD_PREP(IDPF_TXD_CTX_QW1_MSS_M, offload->mss);
-
-		u64_stats_update_begin(&txq->stats_sync);
-		u64_stats_inc(&txq->q_stats.lso_pkts);
-		u64_stats_update_end(&txq->stats_sync);
 	}
 
 	desc->qw0.tunneling_params = cpu_to_le32(offload->cd_tunneling);
@@ -361,6 +357,7 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 				  struct idpf_tx_queue *tx_q)
 {
 	struct idpf_tx_offload_params offload = { };
+	struct libeth_sq_xmit_stats ss = { };
 	struct idpf_tx_buf *first;
 	unsigned int count;
 	__be16 protocol;
@@ -374,10 +371,7 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 			      count + IDPF_TX_DESCS_PER_CACHE_LINE +
 			      IDPF_TX_DESCS_FOR_CTX)) {
 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
-
-		u64_stats_update_begin(&tx_q->stats_sync);
-		u64_stats_inc(&tx_q->q_stats.q_busy);
-		u64_stats_update_end(&tx_q->stats_sync);
+		libeth_stats_inc_one(&tx_q->stats, busy);
 
 		return NETDEV_TX_BUSY;
 	}
@@ -388,7 +382,7 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 	else if (protocol == htons(ETH_P_IPV6))
 		offload.tx_flags |= IDPF_TX_FLAGS_IPV6;
 
-	tso = idpf_tso(skb, &offload);
+	tso = idpf_tso(skb, &offload, &ss);
 	if (tso < 0)
 		goto out_drop;
@@ -412,6 +406,10 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 	}
 
 	idpf_tx_singleq_map(tx_q, first, &offload);
+	libeth_stats_add_frags(&ss, count);
+	libeth_sq_xmit_stats_csum(&ss, skb);
+	libeth_sq_xmit_stats_add(&tx_q->stats, &ss);
+
 	return NETDEV_TX_OK;
 
 out_drop:
@@ -508,20 +506,18 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget,
 	tx_q->next_to_clean = ntc;
 
 	*cleaned += ss.packets;
-
-	u64_stats_update_begin(&tx_q->stats_sync);
-	u64_stats_add(&tx_q->q_stats.packets, ss.packets);
-	u64_stats_add(&tx_q->q_stats.bytes, ss.bytes);
-	u64_stats_update_end(&tx_q->stats_sync);
+	libeth_sq_napi_stats_add(&tx_q->stats, &ss);
 
 	np = netdev_priv(tx_q->netdev);
 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
 
 	dont_wake = np->state != __IDPF_VPORT_UP ||
 		    !netif_carrier_ok(tx_q->netdev);
-	__netif_txq_completed_wake(nq, ss.packets, ss.bytes,
-				   IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
-				   dont_wake);
+	if (!__netif_txq_completed_wake(nq, ss.packets, ss.bytes,
+					IDPF_DESC_UNUSED(tx_q),
+					IDPF_TX_WAKE_THRESH,
+					dont_wake))
+		libeth_stats_inc_one(&tx_q->stats, wake);
 
 	return !!budget;
 }
@@ -590,23 +586,25 @@ static bool idpf_rx_singleq_is_non_eop(const union virtchnl2_rx_desc *rx_desc)
  * @skb: skb currently being received and modified
  * @csum_bits: checksum bits from descriptor
  * @decoded: the packet type decoded by hardware
+ * @rs: RQ polling onstack stats
  *
 * skb->protocol must be set before this function is called
  */
 static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
 				 struct sk_buff *skb,
 				 struct idpf_rx_csum_decoded csum_bits,
-				 struct libeth_rx_pt decoded)
+				 struct libeth_rx_pt decoded,
+				 struct libeth_rq_napi_stats *rs)
 {
 	bool ipv4, ipv6;
 
 	/* check if Rx checksum is enabled */
 	if (!libeth_rx_pt_has_checksum(rxq->netdev, decoded))
-		return;
+		goto none;
 
 	/* check if HW has decoded the packet and checksum */
 	if (unlikely(!csum_bits.l3l4p))
-		return;
+		goto none;
 
 	ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4;
 	ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6;
@@ -619,7 +617,7 @@ static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
 	 * headers as indicated by setting IPV6EXADD bit
 	 */
 	if (unlikely(ipv6 && csum_bits.ipv6exadd))
-		return;
+		goto none;
 
 	/* check for L4 errors and handle packets that were not able to be
 	 * checksummed due to arrival speed
@@ -634,7 +632,7 @@ static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
 	 * speed, in this case the stack can compute the csum.
 	 */
 	if (unlikely(csum_bits.pprs))
-		return;
+		goto none;
 
 	/* If there is an outer header present that might contain a checksum
 	 * we need to bump the checksum level by 1 to reflect the fact that
@@ -644,12 +642,16 @@ static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq,
 		skb->csum_level = 1;
 
 	skb->ip_summed = CHECKSUM_UNNECESSARY;
+	rs->csum_unnecessary++;
+
+	return;
+
+none:
+	libeth_stats_inc_one(&rxq->stats, csum_none);
 	return;
 
 checksum_fail:
-	u64_stats_update_begin(&rxq->stats_sync);
-	u64_stats_inc(&rxq->q_stats.hw_csum_err);
-	u64_stats_update_end(&rxq->stats_sync);
+	libeth_stats_inc_one(&rxq->stats, csum_bad);
 }
 
 /**
@@ -786,6 +788,7 @@ static void idpf_rx_singleq_flex_hash(struct idpf_rx_queue *rx_q,
  * @skb: pointer to current skb being populated
  * @rx_desc: descriptor for skb
  * @ptype: packet type
+ * @rs: RQ polling onstack stats
  *
 * This function checks the ring, descriptor, and packet information in
  * order to populate the hash, checksum, VLAN, protocol, and
@@ -795,7 +798,8 @@ static void
 idpf_rx_singleq_process_skb_fields(struct idpf_rx_queue *rx_q,
 				   struct sk_buff *skb,
 				   const union virtchnl2_rx_desc *rx_desc,
-				   u16 ptype)
+				   u16 ptype,
+				   struct libeth_rq_napi_stats *rs)
 {
 	struct libeth_rx_pt decoded = rx_q->rx_ptype_lkup[ptype];
 	struct idpf_rx_csum_decoded csum_bits;
@@ -812,7 +816,7 @@ idpf_rx_singleq_process_skb_fields(struct idpf_rx_queue *rx_q,
 		csum_bits = idpf_rx_singleq_flex_csum(rx_desc);
 	}
 
-	idpf_rx_singleq_csum(rx_q, skb, csum_bits, decoded);
+	idpf_rx_singleq_csum(rx_q, skb, csum_bits, decoded, rs);
 	skb_record_rx_queue(skb, rx_q->idx);
 }
@@ -958,14 +962,14 @@ idpf_rx_singleq_extract_fields(const struct idpf_rx_queue *rx_q,
  */
 static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+	struct libeth_rq_napi_stats rs = { };
 	struct sk_buff *skb = rx_q->skb;
 	u16 ntc = rx_q->next_to_clean;
 	u16 cleaned_count = 0;
 	bool failure = false;
 
 	/* Process Rx packets bounded by budget */
-	while (likely(total_rx_pkts < (unsigned int)budget)) {
+	while (likely(rs.packets < budget)) {
 		struct idpf_rx_extracted fields = { };
 		union virtchnl2_rx_desc *rx_desc;
 		struct idpf_rx_buf *rx_buf;
@@ -1030,18 +1034,19 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 		}
 
 		/* probably a little skewed due to removing CRC */
-		total_rx_bytes += skb->len;
+		rs.bytes += skb->len;
 
 		/* protocol */
 		idpf_rx_singleq_process_skb_fields(rx_q, skb,
-						   rx_desc, fields.rx_ptype);
+						   rx_desc, fields.rx_ptype,
+						   &rs);
 
 		/* send completed skb up the stack */
 		napi_gro_receive(rx_q->pp->p.napi, skb);
 		skb = NULL;
 
 		/* update budget accounting */
-		total_rx_pkts++;
+		rs.packets++;
 	}
 
 	rx_q->skb = skb;
@@ -1052,13 +1057,10 @@ static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
 	if (cleaned_count)
 		failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
 
-	u64_stats_update_begin(&rx_q->stats_sync);
-	u64_stats_add(&rx_q->q_stats.packets, total_rx_pkts);
-	u64_stats_add(&rx_q->q_stats.bytes, total_rx_bytes);
-	u64_stats_update_end(&rx_q->stats_sync);
+	libeth_rq_napi_stats_add(&rx_q->stats, &rs);
 
 	/* guarantee a trip back through this routine if there was a failure */
-	return failure ? budget : (int)total_rx_pkts;
+	return failure ? budget : rs.packets;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 7af4ec83fd0a..5a4455f72efc 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -132,6 +132,8 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
 static void idpf_tx_desc_rel(struct idpf_tx_queue *txq)
 {
 	idpf_tx_buf_rel_all(txq);
+
+	libeth_sq_stats_deinit(txq->netdev, txq->idx);
 	netdev_tx_reset_subqueue(txq->netdev, txq->idx);
 
 	if (!txq->desc_ring)
@@ -265,6 +267,8 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
 	tx_q->next_to_clean = 0;
 	idpf_queue_set(GEN_CHK, tx_q);
 
+	libeth_sq_stats_init(vport->netdev, &tx_q->stats, tx_q->idx);
+
 	return 0;
 
 err_alloc:
@@ -480,6 +484,8 @@ static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, struct device *dev,
 	if (!idpf_is_queue_model_split(model))
 		idpf_rx_buf_rel_all(rxq);
 
+	libeth_rq_stats_deinit(rxq->netdev, rxq->idx);
+
 	rxq->next_to_alloc = 0;
 	rxq->next_to_clean = 0;
 	rxq->next_to_use = 0;
@@ -880,6 +886,8 @@ static int idpf_rx_desc_alloc(const struct idpf_vport *vport,
 	rxq->next_to_use = 0;
 	idpf_queue_set(GEN_CHK, rxq);
 
+	libeth_rq_stats_init(vport->netdev, &rxq->stats, rxq->idx);
+
 	return 0;
 }
@@ -1663,7 +1671,7 @@ static void idpf_tx_handle_sw_marker(struct idpf_tx_queue *tx_q)
  */
 static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq,
 				       u16 compl_tag,
-				       struct idpf_cleaned_stats *cleaned,
+				       struct libeth_sq_napi_stats *cleaned,
 				       int budget)
 {
 	struct idpf_tx_stash *stash;
@@ -1764,7 +1772,7 @@ do { \
  */
 static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
 				 int napi_budget,
-				 struct idpf_cleaned_stats *cleaned,
+				 struct libeth_sq_napi_stats *cleaned,
 				 bool descs_only)
 {
 	union idpf_tx_flex_desc *next_pending_desc = NULL;
@@ -1856,7 +1864,7 @@ do { \
 * this completion tag.
 */
 static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
-				   struct idpf_cleaned_stats *cleaned,
+				   struct libeth_sq_napi_stats *cleaned,
 				   int budget)
 {
 	u16 idx = compl_tag & txq->compl_tag_bufid_m;
@@ -1934,7 +1942,7 @@ static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
  */
 static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
 					 struct idpf_splitq_tx_compl_desc *desc,
-					 struct idpf_cleaned_stats *cleaned,
+					 struct libeth_sq_napi_stats *cleaned,
 					 int budget)
 {
 	u16 compl_tag;
@@ -1978,7 +1986,7 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
 	ntc -= complq->desc_count;
 
 	do {
-		struct idpf_cleaned_stats cleaned_stats = { };
+		struct libeth_sq_napi_stats cleaned_stats = { };
 		struct idpf_tx_queue *tx_q;
 		int rel_tx_qid;
 		u16 hw_head;
@@ -2024,13 +2032,10 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
 			goto fetch_next_desc;
 		}
 
-		u64_stats_update_begin(&tx_q->stats_sync);
-		u64_stats_add(&tx_q->q_stats.packets, cleaned_stats.packets);
-		u64_stats_add(&tx_q->q_stats.bytes, cleaned_stats.bytes);
+		libeth_sq_napi_stats_add(&tx_q->stats, &cleaned_stats);
 		tx_q->cleaned_pkts += cleaned_stats.packets;
 		tx_q->cleaned_bytes += cleaned_stats.bytes;
 		complq->num_completions++;
-		u64_stats_update_end(&tx_q->stats_sync);
 
 fetch_next_desc:
 		tx_desc++;
@@ -2073,9 +2078,12 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
 			    np->state != __IDPF_VPORT_UP ||
 			    !netif_carrier_ok(tx_q->netdev);
 		/* Check if the TXQ needs to and can be restarted */
-		__netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
-					   IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
-					   dont_wake);
+		if (!__netif_txq_completed_wake(nq, tx_q->cleaned_pkts,
+						tx_q->cleaned_bytes,
+						IDPF_DESC_UNUSED(tx_q),
+						IDPF_TX_WAKE_THRESH,
+						dont_wake))
+			libeth_stats_inc_one(&tx_q->stats, wake);
 
 		/* Reset cleaned stats for the next time this queue is
 		 * cleaned
@@ -2158,11 +2166,8 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
 
 splitq_stop:
 	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
-
 out:
-	u64_stats_update_begin(&tx_q->stats_sync);
-	u64_stats_inc(&tx_q->q_stats.q_busy);
-	u64_stats_update_end(&tx_q->stats_sync);
+	libeth_stats_inc_one(&tx_q->stats, stop);
 
 	return -EBUSY;
 }
@@ -2185,11 +2190,8 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
 	tx_q->next_to_use = val;
 
-	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
-		u64_stats_update_begin(&tx_q->stats_sync);
-		u64_stats_inc(&tx_q->q_stats.q_busy);
-		u64_stats_update_end(&tx_q->stats_sync);
-	}
+	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED))
+		libeth_stats_inc_one(&tx_q->stats, stop);
 
 	/* Force memory writes to complete before letting h/w
 	 * know there are new descriptors to fetch. (Only
@@ -2242,9 +2244,7 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
 			return 0;
 
 		count = idpf_size_to_txd_count(skb->len);
-		u64_stats_update_begin(&txq->stats_sync);
-		u64_stats_inc(&txq->q_stats.linearize);
-		u64_stats_update_end(&txq->stats_sync);
+		libeth_stats_inc_one(&txq->stats, linearized);
 	}
 
 	return count;
@@ -2266,9 +2266,7 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
 		.ss = &ss,
 	};
 
-	u64_stats_update_begin(&txq->stats_sync);
-	u64_stats_inc(&txq->q_stats.dma_map_errs);
-	u64_stats_update_end(&txq->stats_sync);
+	libeth_stats_inc_one(&txq->stats, dma_map_errs);
 
 	/* clear dma mappings for failed tx_buf map */
 	for (;;) {
@@ -2508,11 +2506,13 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
 * idpf_tso - computes mss and TSO length to prepare for TSO
  * @skb: pointer to skb
  * @off: pointer to struct that holds offload parameters
+ * @ss: SQ xmit onstack stats
  *
 * Returns error (negative) if TSO was requested but cannot be applied to the
  * given skb, 0 if TSO does not apply to the given skb, or 1 otherwise.
  */
-int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
+int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off,
+	     struct libeth_sq_xmit_stats *ss)
 {
 	const struct skb_shared_info *shinfo;
 	union {
@@ -2559,6 +2559,8 @@ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
 		csum_replace_by_diff(&l4.tcp->check,
 				     (__force __wsum)htonl(paylen));
 		off->tso_hdr_len = __tcp_hdrlen(l4.tcp) + l4_start;
+
+		ss->tso++;
 		break;
 	case SKB_GSO_UDP_L4:
 		csum_replace_by_diff(&l4.udp->check,
@@ -2566,6 +2568,8 @@ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
 		/* compute length of segmentation header */
 		off->tso_hdr_len = sizeof(struct udphdr) + l4_start;
 		l4.udp->len = htons(shinfo->gso_size + sizeof(struct udphdr));
+
+		ss->uso++;
 		break;
 	default:
 		return -EINVAL;
@@ -2577,6 +2581,9 @@ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
 
 	off->tx_flags |= IDPF_TX_FLAGS_TSO;
 
+	ss->hw_gso_packets++;
+	ss->hw_gso_bytes += skb->len;
+
 	return 1;
 }
@@ -2715,10 +2722,7 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq)
  */
 netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb)
 {
-	u64_stats_update_begin(&tx_q->stats_sync);
-	u64_stats_inc(&tx_q->q_stats.skb_drops);
-	u64_stats_update_end(&tx_q->stats_sync);
-
+	libeth_stats_inc_one(&tx_q->stats, drops);
 	idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
 
 	dev_kfree_skb(skb);
@@ -2737,6 +2741,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 					struct idpf_tx_queue *tx_q)
 {
 	struct idpf_tx_splitq_params tx_params = { };
+	struct libeth_sq_xmit_stats ss = { };
 	struct idpf_tx_buf *first;
 	unsigned int count;
 	int tso;
@@ -2745,7 +2750,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 	if (unlikely(!count))
 		return idpf_tx_drop_skb(tx_q, skb);
 
-	tso = idpf_tso(skb, &tx_params.offload);
+	tso = idpf_tso(skb, &tx_params.offload, &ss);
 	if (unlikely(tso < 0))
 		return idpf_tx_drop_skb(tx_q, skb);
@@ -2753,6 +2758,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 	count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso);
 	if (idpf_tx_maybe_stop_splitq(tx_q, count)) {
 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+		libeth_stats_inc_one(&tx_q->stats, busy);
 
 		return NETDEV_TX_BUSY;
 	}
@@ -2772,10 +2778,6 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 			cpu_to_le16(tx_params.offload.mss &
 				    IDPF_TXD_FLEX_CTX_MSS_RT_M);
 		ctx_desc->tso.qw0.hdr_len = tx_params.offload.tso_hdr_len;
-
-		u64_stats_update_begin(&tx_q->stats_sync);
-		u64_stats_inc(&tx_q->q_stats.lso_pkts);
-		u64_stats_update_end(&tx_q->stats_sync);
 	}
 
 	/* record the location of the first descriptor for this packet */
@@ -2817,6 +2819,10 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 
 	idpf_tx_splitq_map(tx_q, &tx_params, first);
 
+	libeth_stats_add_frags(&ss, count);
+	libeth_sq_xmit_stats_csum(&ss, skb);
+	libeth_sq_xmit_stats_add(&tx_q->stats, &ss);
+
 	return NETDEV_TX_OK;
 }
@@ -2885,41 +2891,44 @@ idpf_rx_hash(const struct idpf_rx_queue *rxq, struct sk_buff *skb,
  * @skb: pointer to current skb being populated
  * @csum_bits: checksum fields extracted from the descriptor
  * @decoded: Decoded Rx packet type related fields
+ * @rs: RQ polling onstack stats
  *
 * skb->protocol must be set before this function is called
  */
 static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 			 struct idpf_rx_csum_decoded csum_bits,
-			 struct libeth_rx_pt decoded)
+			 struct libeth_rx_pt decoded,
+			 struct libeth_rq_napi_stats *rs)
 {
 	bool ipv4, ipv6;
 
 	/* check if Rx checksum is enabled */
 	if (!libeth_rx_pt_has_checksum(rxq->netdev, decoded))
-		return;
+		goto none;
 
 	/* check if HW has decoded the packet and checksum */
 	if (unlikely(!csum_bits.l3l4p))
-		return;
+		goto none;
 
 	ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4;
 	ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6;
 
 	if (unlikely(ipv4 && (csum_bits.ipe || csum_bits.eipe)))
-		goto checksum_fail;
+		goto bad;
 
 	if (unlikely(ipv6 && csum_bits.ipv6exadd))
-		return;
+		goto none;
 
 	/* check for L4 errors and handle packets that were not able to be
 	 * checksummed
 	 */
 	if (unlikely(csum_bits.l4e))
-		goto checksum_fail;
+		goto bad;
 
 	if (csum_bits.raw_csum_inv ||
 	    decoded.inner_prot == LIBETH_RX_PT_INNER_SCTP) {
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		rs->csum_unnecessary++;
 		return;
 	}
@@ -2928,10 +2937,12 @@ static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 
 	return;
 
-checksum_fail:
-	u64_stats_update_begin(&rxq->stats_sync);
-	u64_stats_inc(&rxq->q_stats.hw_csum_err);
-	u64_stats_update_end(&rxq->stats_sync);
+none:
+	libeth_stats_inc_one(&rxq->stats, csum_none);
+	return;
+
+bad:
+	libeth_stats_inc_one(&rxq->stats, csum_bad);
 }
 
 /**
@@ -2973,6 +2984,7 @@ idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *
 * @skb : pointer to current skb being populated
  * @rx_desc: Receive descriptor
  * @decoded: Decoded Rx packet type related fields
+ * @rs: RQ polling onstack stats
  *
 * Return 0 on success and error code on failure
  *
@@ -2981,7 +2993,8 @@ idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *
 */
 static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 		       const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
-		       struct libeth_rx_pt decoded)
+		       struct libeth_rx_pt decoded,
+		       struct libeth_rq_napi_stats *rs)
 {
 	u16 rsc_segments, rsc_seg_len;
 	bool ipv4, ipv6;
@@ -3033,9 +3046,8 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 
 	tcp_gro_complete(skb);
 
-	u64_stats_update_begin(&rxq->stats_sync);
-	u64_stats_inc(&rxq->q_stats.rsc_pkts);
-	u64_stats_update_end(&rxq->stats_sync);
+	rs->hw_gro_packets++;
+	rs->hw_gro_bytes += skb->len;
 
 	return 0;
 }
@@ -3045,6 +3057,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 * @rxq: Rx descriptor ring packet is being transacted on
  * @skb: pointer to current skb being populated
  * @rx_desc: Receive descriptor
+ * @rs: RQ polling onstack stats
  *
 * This function checks the ring, descriptor, and packet information in
  * order to populate the hash, checksum, protocol, and
@@ -3052,7 +3065,8 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 */
 static int
 idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
-			   const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+			   const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+			   struct libeth_rq_napi_stats *rs)
 {
 	struct idpf_rx_csum_decoded csum_bits;
 	struct libeth_rx_pt decoded;
@@ -3069,10 +3083,10 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 
 	if (le16_get_bits(rx_desc->hdrlen_flags,
 			  VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
-		return idpf_rx_rsc(rxq, skb, rx_desc, decoded);
+		return idpf_rx_rsc(rxq, skb, rx_desc, decoded, rs);
 
 	csum_bits = idpf_rx_splitq_extract_csum_bits(rx_desc);
-	idpf_rx_csum(rxq, skb, csum_bits, decoded);
+	idpf_rx_csum(rxq, skb, csum_bits, decoded, rs);
 
 	skb_record_rx_queue(skb, rxq->idx);
@@ -3203,13 +3217,13 @@ static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_de
 */
 static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 {
-	int total_rx_bytes = 0, total_rx_pkts = 0;
 	struct idpf_buf_queue *rx_bufq = NULL;
+	struct libeth_rq_napi_stats rs = { };
 	struct sk_buff *skb = rxq->skb;
 	u16 ntc = rxq->next_to_clean;
 
 	/* Process Rx packets bounded by budget */
-	while (likely(total_rx_pkts < budget)) {
+	while (likely(rs.packets < budget)) {
 		struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
 		struct libeth_fqe *hdr, *rx_buf = NULL;
 		struct idpf_sw_queue *refillq = NULL;
@@ -3239,9 +3253,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 				  rx_desc->rxdid_ucast);
 		if (rxdid != VIRTCHNL2_RXDID_2_FLEX_SPLITQ) {
 			IDPF_RX_BUMP_NTC(rxq, ntc);
-			u64_stats_update_begin(&rxq->stats_sync);
-			u64_stats_inc(&rxq->q_stats.bad_descs);
-			u64_stats_update_end(&rxq->stats_sync);
+			libeth_stats_inc_one(&rxq->stats, dma_errs);
 			continue;
 		}
@@ -3283,9 +3295,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 			hdr_len = idpf_rx_hsplit_wa(hdr, rx_buf, pkt_len);
 			pkt_len -= hdr_len;
 
-			u64_stats_update_begin(&rxq->stats_sync);
-			u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
-			u64_stats_update_end(&rxq->stats_sync);
+			libeth_stats_inc_one(&rxq->stats, hsplit_errs);
 		}
 
 		if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
 			skb = idpf_rx_build_skb(hdr, hdr_len);
 			if (!skb)
 				break;
 
-			u64_stats_update_begin(&rxq->stats_sync);
-			u64_stats_inc(&rxq->q_stats.hsplit_pkts);
-			u64_stats_update_end(&rxq->stats_sync);
+			rs.hsplit++;
 		}
 
 		hdr->page = NULL;
 
+		if (!pkt_len)
+			rs.hsplit_linear++;
+
 payload:
 		if (!libeth_rx_sync_for_cpu(rx_buf, pkt_len))
 			goto skip_data;
@@ -3330,10 +3341,11 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 		}
 
 		/* probably a little skewed due to removing CRC */
-		total_rx_bytes += skb->len;
+		rs.bytes += skb->len;
 
 		/* protocol */
-		if (unlikely(idpf_rx_process_skb_fields(rxq, skb, rx_desc))) {
+		if (unlikely(idpf_rx_process_skb_fields(rxq, skb, rx_desc,
+							&rs))) {
 			dev_kfree_skb_any(skb);
 			skb = NULL;
 			continue;
 		}
@@ -3344,19 +3356,15 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
 		skb = NULL;
 
 		/* update budget accounting */
-		total_rx_pkts++;
+		rs.packets++;
 	}
 
 	rxq->next_to_clean = ntc;
 	rxq->skb = skb;
 
-	u64_stats_update_begin(&rxq->stats_sync);
-	u64_stats_add(&rxq->q_stats.packets, total_rx_pkts);
-	u64_stats_add(&rxq->q_stats.bytes, total_rx_bytes);
-	u64_stats_update_end(&rxq->stats_sync);
+	libeth_rq_napi_stats_add(&rxq->stats, &rs);
 
-	/* guarantee a trip back through this routine if there was a failure */
-	return total_rx_pkts;
+	return rs.packets;
 }
 
 /**
@@ -3666,10 +3674,10 @@ static void idpf_net_dim(struct idpf_q_vector *q_vector)
 		unsigned int start;
 
 		do {
-			start = u64_stats_fetch_begin(&txq->stats_sync);
-			packets += u64_stats_read(&txq->q_stats.packets);
-			bytes += u64_stats_read(&txq->q_stats.bytes);
-		} while (u64_stats_fetch_retry(&txq->stats_sync, start));
+			start = u64_stats_fetch_begin(&txq->stats.syncp);
+			packets += u64_stats_read(&txq->stats.packets);
+			bytes += u64_stats_read(&txq->stats.bytes);
+		} while (u64_stats_fetch_retry(&txq->stats.syncp, start));
 	}
 
 	idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->tx_dim,
@@ -3685,10 +3693,10 @@ static void idpf_net_dim(struct idpf_q_vector *q_vector)
 		unsigned int start;
 
 		do {
-			start = u64_stats_fetch_begin(&rxq->stats_sync);
-			packets += u64_stats_read(&rxq->q_stats.packets);
-			bytes += u64_stats_read(&rxq->q_stats.bytes);
-		} while (u64_stats_fetch_retry(&rxq->stats_sync, start));
+			start = u64_stats_fetch_begin(&rxq->stats.syncp);
+			packets += u64_stats_read(&rxq->stats.packets);
+			bytes += u64_stats_read(&rxq->stats.bytes);
+		} while (u64_stats_fetch_retry(&rxq->stats.syncp, start));
 	}
 
 	idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->rx_dim,
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 70986e12da28..3fcc8ac70b74 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -2292,47 +2292,44 @@ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs)
 */
 int idpf_send_get_stats_msg(struct idpf_vport *vport)
 {
+	struct virtchnl2_vport_stats *stats_msg = &vport->vport_stats;
 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
 	struct rtnl_link_stats64 *netstats = &np->netstats;
-	struct virtchnl2_vport_stats stats_msg = {};
 	struct idpf_vc_xn_params xn_params = {};
 	ssize_t reply_sz;
 
-	/* Don't send get_stats message if the link is down */
 	if (np->state <= __IDPF_VPORT_DOWN)
 		return 0;
 
-	stats_msg.vport_id = cpu_to_le32(vport->vport_id);
+	stats_msg->vport_id = cpu_to_le32(vport->vport_id);
 
 	xn_params.vc_op = VIRTCHNL2_OP_GET_STATS;
-	xn_params.send_buf.iov_base = &stats_msg;
-	xn_params.send_buf.iov_len = sizeof(stats_msg);
+	xn_params.send_buf.iov_base = stats_msg;
+	xn_params.send_buf.iov_len = sizeof(*stats_msg);
 	xn_params.recv_buf = xn_params.send_buf;
 	xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
 	reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
 	if (reply_sz < 0)
 		return reply_sz;
-	if (reply_sz < sizeof(stats_msg))
+	if (reply_sz < sizeof(*stats_msg))
 		return -EIO;
 
 	spin_lock_bh(&np->stats_lock);
 
-	netstats->rx_packets = le64_to_cpu(stats_msg.rx_unicast) +
-			       le64_to_cpu(stats_msg.rx_multicast) +
-			       le64_to_cpu(stats_msg.rx_broadcast);
-	netstats->tx_packets = le64_to_cpu(stats_msg.tx_unicast) +
-			       le64_to_cpu(stats_msg.tx_multicast) +
-			       le64_to_cpu(stats_msg.tx_broadcast);
-	netstats->rx_bytes = le64_to_cpu(stats_msg.rx_bytes);
-	netstats->tx_bytes = le64_to_cpu(stats_msg.tx_bytes);
-	netstats->rx_errors = le64_to_cpu(stats_msg.rx_errors);
-	netstats->tx_errors = le64_to_cpu(stats_msg.tx_errors);
-	netstats->rx_dropped = le64_to_cpu(stats_msg.rx_discards);
-	netstats->tx_dropped = le64_to_cpu(stats_msg.tx_discards);
-
-	vport->port_stats.vport_stats = stats_msg;
+	netstats->rx_packets = le64_to_cpu(stats_msg->rx_unicast) +
+			       le64_to_cpu(stats_msg->rx_multicast) +
+			       le64_to_cpu(stats_msg->rx_broadcast);
+	netstats->tx_packets = le64_to_cpu(stats_msg->tx_unicast) +
+			       le64_to_cpu(stats_msg->tx_multicast) +
+			       le64_to_cpu(stats_msg->tx_broadcast);
+	netstats->rx_bytes = le64_to_cpu(stats_msg->rx_bytes);
+	netstats->tx_bytes = le64_to_cpu(stats_msg->tx_bytes);
+	netstats->rx_errors = le64_to_cpu(stats_msg->rx_errors);
+	netstats->tx_errors = le64_to_cpu(stats_msg->tx_errors);
+	netstats->rx_dropped = le64_to_cpu(stats_msg->rx_discards);
+	netstats->tx_dropped = le64_to_cpu(stats_msg->tx_discards);
 
 	spin_unlock_bh(&np->stats_lock);
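
A note on the stats pattern this conversion applies throughout: rather
than opening a u64_stats writer section for every packet, the hot paths
accumulate counters in a plain onstack structure (libeth_rq_napi_stats /
libeth_sq_xmit_stats above) and fold them into the syncp-protected
per-queue counters once per NAPI poll or xmit. A minimal sketch of the
idea, using only the kernel's u64_stats API -- the my_rq_* names below
are illustrative, not the actual libeth types:

    #include <linux/u64_stats_sync.h>

    /* onstack accumulator: plain fields, cheap to bump per packet */
    struct my_rq_napi_stats {
    	u32	packets;
    	u32	bytes;
    };

    /* per-queue counters: readable any time via u64_stats_fetch_*() */
    struct my_rq_stats {
    	struct u64_stats_sync	syncp;
    	u64_stats_t		packets;
    	u64_stats_t		bytes;
    };

    /* one writer-side critical section per poll instead of per frame */
    static void my_rq_napi_stats_add(struct my_rq_stats *qs,
    				     const struct my_rq_napi_stats *rs)
    {
    	u64_stats_update_begin(&qs->syncp);
    	u64_stats_add(&qs->packets, rs->packets);
    	u64_stats_add(&qs->bytes, rs->bytes);
    	u64_stats_update_end(&qs->syncp);
    }

Readers then use the usual fetch/retry loop, exactly as idpf_net_dim()
does after this conversion:

    	do {
    		start = u64_stats_fetch_begin(&qs->syncp);
    		packets = u64_stats_read(&qs->packets);
    		bytes = u64_stats_read(&qs->bytes);
    	} while (u64_stats_fetch_retry(&qs->syncp, start));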