From patchwork Mon Oct 7 12:31:23 2024
From: Kurt Kanzenbach
Date: Mon, 07 Oct 2024 14:31:23 +0200
Subject: [Intel-wired-lan] [PATCH iwl-next v7 1/5] igb: Remove static qualifiers
Message-Id: <20241007-b4-igb_zero_copy-v7-1-23556668adc6@linutronix.de>
To: Tony Nguyen, Przemek Kitszel
From: Sriram Yagnaraman

Remove the static qualifiers from the following functions so that they can
be called from the XSK-specific file added later in this series:
- igb_xdp_tx_queue_mapping()
- igb_xdp_ring_update_tail()
- igb_clean_tx_ring()
- igb_clean_rx_ring()
- igb_xdp_xmit_back()
- igb_process_skb_fields()

While at it, inline igb_xdp_tx_queue_mapping() and
igb_xdp_ring_update_tail(). These functions are small enough and are used
in XDP hot paths.

Signed-off-by: Sriram Yagnaraman
[Kurt: Split patches, inline small XDP functions]
Signed-off-by: Kurt Kanzenbach
Acked-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/igb/igb.h      | 29 ++++++++++++++++++++++++
 drivers/net/ethernet/intel/igb/igb_main.c | 37 +++++--------------------------
 2 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 3c2dc7bdebb5..1bfe703e73d9 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 

 #include 

@@ -731,12 +732,18 @@ int igb_setup_tx_resources(struct igb_ring *);
 int igb_setup_rx_resources(struct igb_ring *);
 void igb_free_tx_resources(struct igb_ring *);
 void igb_free_rx_resources(struct igb_ring *);
+void igb_clean_tx_ring(struct igb_ring *tx_ring);
+void igb_clean_rx_ring(struct igb_ring *rx_ring);
 void igb_configure_tx_ring(struct igb_adapter *, struct igb_ring *);
 void igb_configure_rx_ring(struct igb_adapter *, struct igb_ring *);
 void igb_setup_tctl(struct igb_adapter *);
 void igb_setup_rctl(struct igb_adapter *);
 void igb_setup_srrctl(struct igb_adapter *, struct igb_ring *);
 netdev_tx_t igb_xmit_frame_ring(struct sk_buff *, struct igb_ring *);
+int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp);
+void igb_process_skb_fields(struct igb_ring *rx_ring,
+			    union e1000_adv_rx_desc *rx_desc,
+			    struct sk_buff *skb);
 void igb_alloc_rx_buffers(struct igb_ring *, u16);
 void igb_update_stats(struct igb_adapter *);
 bool igb_has_link(struct igb_adapter *adapter);
@@ -797,6 +804,28 @@ static inline struct netdev_queue *txring_txq(const struct igb_ring *tx_ring)
 	return netdev_get_tx_queue(tx_ring->netdev, tx_ring->queue_index);
 }

+/* This function assumes __netif_tx_lock is held by the caller. */
+static inline void igb_xdp_ring_update_tail(struct igb_ring *ring)
+{
+	lockdep_assert_held(&txring_txq(ring)->_xmit_lock);
+
+	/* Force memory writes to complete before letting h/w know there
+	 * are new descriptors to fetch.
+	 */
+	wmb();
+	writel(ring->next_to_use, ring->tail);
+}
+
+static inline struct igb_ring *igb_xdp_tx_queue_mapping(struct igb_adapter *adapter)
+{
+	unsigned int r_idx = smp_processor_id();
+
+	if (r_idx >= adapter->num_tx_queues)
+		r_idx = r_idx % adapter->num_tx_queues;
+
+	return adapter->tx_ring[r_idx];
+}
+
 int igb_add_filter(struct igb_adapter *adapter, struct igb_nfc_filter *input);
 int igb_erase_filter(struct igb_adapter *adapter,

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 1ef4cb871452..71addc0eac96 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -33,7 +33,6 @@
 #include 
 #include 
 #include 
-#include 
 #ifdef CONFIG_IGB_DCA
 #include 
 #endif
@@ -116,8 +115,6 @@ static void igb_configure_tx(struct igb_adapter *);
 static void igb_configure_rx(struct igb_adapter *);
 static void igb_clean_all_tx_rings(struct igb_adapter *);
 static void igb_clean_all_rx_rings(struct igb_adapter *);
-static void igb_clean_tx_ring(struct igb_ring *);
-static void igb_clean_rx_ring(struct igb_ring *);
 static void igb_set_rx_mode(struct net_device *);
 static void igb_update_phy_info(struct timer_list *);
 static void igb_watchdog(struct timer_list *);
@@ -2915,29 +2912,7 @@ static int igb_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }

-/* This function assumes __netif_tx_lock is held by the caller. */
-static void igb_xdp_ring_update_tail(struct igb_ring *ring)
-{
-	lockdep_assert_held(&txring_txq(ring)->_xmit_lock);
-
-	/* Force memory writes to complete before letting h/w know there
-	 * are new descriptors to fetch.
-	 */
-	wmb();
-	writel(ring->next_to_use, ring->tail);
-}
-
-static struct igb_ring *igb_xdp_tx_queue_mapping(struct igb_adapter *adapter)
-{
-	unsigned int r_idx = smp_processor_id();
-
-	if (r_idx >= adapter->num_tx_queues)
-		r_idx = r_idx % adapter->num_tx_queues;
-
-	return adapter->tx_ring[r_idx];
-}
-
-static int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
+int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 {
 	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
 	int cpu = smp_processor_id();
@@ -4884,7 +4859,7 @@ static void igb_free_all_tx_resources(struct igb_adapter *adapter)
  * igb_clean_tx_ring - Free Tx Buffers
  * @tx_ring: ring to be cleaned
  **/
-static void igb_clean_tx_ring(struct igb_ring *tx_ring)
+void igb_clean_tx_ring(struct igb_ring *tx_ring)
 {
 	u16 i = tx_ring->next_to_clean;
 	struct igb_tx_buffer *tx_buffer = &tx_ring->tx_buffer_info[i];
@@ -5003,7 +4978,7 @@ static void igb_free_all_rx_resources(struct igb_adapter *adapter)
  * igb_clean_rx_ring - Free Rx Buffers per Queue
  * @rx_ring: ring to free buffers from
  **/
-static void igb_clean_rx_ring(struct igb_ring *rx_ring)
+void igb_clean_rx_ring(struct igb_ring *rx_ring)
 {
 	u16 i = rx_ring->next_to_clean;

@@ -8782,9 +8757,9 @@ static bool igb_cleanup_headers(struct igb_ring *rx_ring,
  * order to populate the hash, checksum, VLAN, timestamp, protocol, and
  * other fields within the skb.
  **/
-static void igb_process_skb_fields(struct igb_ring *rx_ring,
-				   union e1000_adv_rx_desc *rx_desc,
-				   struct sk_buff *skb)
+void igb_process_skb_fields(struct igb_ring *rx_ring,
+			    union e1000_adv_rx_desc *rx_desc,
+			    struct sk_buff *skb)
 {
 	struct net_device *dev = rx_ring->netdev;
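For readers skimming the series: the queue-mapping helper this patch moves into igb.h picks a Tx queue from the current CPU and folds the index back with a modulo when there are more CPUs than Tx queues. Below is a minimal user-space sketch of that logic only; the function name, the fixed queue count and the demo loop are invented for illustration and are not part of the driver.

#include <stdio.h>

#define NUM_TX_QUEUES 4	/* assumed value; the driver reads adapter->num_tx_queues */

/* Same idea as the driver's igb_xdp_tx_queue_mapping(): use the CPU index
 * directly when enough Tx queues exist, otherwise wrap it with a modulo.
 */
static unsigned int xdp_tx_queue_for_cpu(unsigned int cpu, unsigned int num_tx_queues)
{
	unsigned int r_idx = cpu;

	if (r_idx >= num_tx_queues)
		r_idx = r_idx % num_tx_queues;

	return r_idx;
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < 8; cpu++)
		printf("cpu %u -> tx queue %u\n",
		       cpu, xdp_tx_queue_for_cpu(cpu, NUM_TX_QUEUES));
	return 0;
}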
From patchwork Mon Oct 7 12:31:24 2024
From: Kurt Kanzenbach
Date: Mon, 07 Oct 2024 14:31:24 +0200
Subject: [Intel-wired-lan] [PATCH iwl-next v7 2/5] igb: Introduce igb_xdp_is_enabled()
Message-Id: <20241007-b4-igb_zero_copy-v7-2-23556668adc6@linutronix.de>
To: Tony Nguyen, Przemek Kitszel
From: Sriram Yagnaraman

Introduce igb_xdp_is_enabled() to check whether an XDP program is assigned
to the device. Use it wherever xdp_prog is read and evaluated.

Signed-off-by: Sriram Yagnaraman
[Kurt: Split patches and use READ_ONCE()]
Signed-off-by: Kurt Kanzenbach
Acked-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/igb/igb.h      | 5 +++++
 drivers/net/ethernet/intel/igb/igb_main.c | 8 +++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 1bfe703e73d9..6e2b61ecff68 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -826,6 +826,11 @@ static inline struct igb_ring *igb_xdp_tx_queue_mapping(struct igb_adapter *adap
 	return adapter->tx_ring[r_idx];
 }

+static inline bool igb_xdp_is_enabled(struct igb_adapter *adapter)
+{
+	return !!READ_ONCE(adapter->xdp_prog);
+}
+
 int igb_add_filter(struct igb_adapter *adapter, struct igb_nfc_filter *input);
 int igb_erase_filter(struct igb_adapter *adapter,

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 71addc0eac96..34a3f172d86c 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2926,7 +2926,8 @@ int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 	/* During program transitions its possible adapter->xdp_prog is assigned
 	 * but ring has not been configured yet. In this case simply abort xmit.
 	 */
-	tx_ring = adapter->xdp_prog ? igb_xdp_tx_queue_mapping(adapter) : NULL;
+	tx_ring = igb_xdp_is_enabled(adapter) ?
+		  igb_xdp_tx_queue_mapping(adapter) : NULL;
 	if (unlikely(!tx_ring))
 		return IGB_XDP_CONSUMED;
@@ -2959,7 +2960,8 @@ static int igb_xdp_xmit(struct net_device *dev, int n,
 	/* During program transitions its possible adapter->xdp_prog is assigned
 	 * but ring has not been configured yet. In this case simply abort xmit.
 	 */
-	tx_ring = adapter->xdp_prog ? igb_xdp_tx_queue_mapping(adapter) : NULL;
+	tx_ring = igb_xdp_is_enabled(adapter) ?
+		  igb_xdp_tx_queue_mapping(adapter) : NULL;
 	if (unlikely(!tx_ring))
 		return -ENXIO;
@@ -6593,7 +6595,7 @@ static int igb_change_mtu(struct net_device *netdev, int new_mtu)
 	struct igb_adapter *adapter = netdev_priv(netdev);
 	int max_frame = new_mtu + IGB_ETH_PKT_HDR_PAD;

-	if (adapter->xdp_prog) {
+	if (igb_xdp_is_enabled(adapter)) {
 		int i;

 		for (i = 0; i < adapter->num_rx_queues; i++) {
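As a rough user-space analogue of the READ_ONCE() pattern used by igb_xdp_is_enabled(): the shared program pointer is read exactly once and the decision is derived from that single snapshot, so check and use cannot see two different values. C11 stdatomic stands in for the kernel macro here, and struct prog plus the variable names are invented purely for this sketch.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct prog { int id; };

/* Shared pointer another thread may swap at any time; a user-space
 * stand-in for adapter->xdp_prog.
 */
static _Atomic(struct prog *) attached_prog;

static bool xdp_is_enabled(void)
{
	/* One coherent snapshot of the pointer, the moral equivalent of
	 * READ_ONCE() in igb_xdp_is_enabled().
	 */
	return atomic_load_explicit(&attached_prog, memory_order_relaxed) != NULL;
}

int main(void)
{
	struct prog p = { .id = 1 };

	printf("enabled: %d\n", xdp_is_enabled());
	atomic_store_explicit(&attached_prog, &p, memory_order_relaxed);
	printf("enabled: %d\n", xdp_is_enabled());
	return 0;
}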
From patchwork Mon Oct 7 12:31:25 2024
From: Kurt Kanzenbach
Date: Mon, 07 Oct 2024 14:31:25 +0200
Subject: [Intel-wired-lan] [PATCH iwl-next v7 3/5] igb: Introduce XSK data structures and helpers
Message-Id: <20241007-b4-igb_zero_copy-v7-3-23556668adc6@linutronix.de>
To: Tony Nguyen, Przemek Kitszel
From: Sriram Yagnaraman

Add the following ring flag:
- IGB_RING_FLAG_TX_DISABLED (set while the xsk pool is being set up)

Add an xdp_buff array for use with the XSK receive batch API, and a
pointer to the xsk_pool in igb_adapter.

Add enable/disable functions for Tx and Rx rings, enable/disable
functions for the XSK pool, and an xsk wakeup function.

None of the above functionality will be active until
NETDEV_XDP_ACT_XSK_ZEROCOPY is advertised in netdev->xdp_features.

Signed-off-by: Sriram Yagnaraman
[Kurt: Add READ/WRITE_ONCE(), synchronize_net(),
 remove IGB_RING_FLAG_AF_XDP_ZC]
Signed-off-by: Kurt Kanzenbach
Reviewed-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/igb/Makefile   |   2 +-
 drivers/net/ethernet/intel/igb/igb.h      |  13 +-
 drivers/net/ethernet/intel/igb/igb_main.c |   9 ++
 drivers/net/ethernet/intel/igb/igb_xsk.c  | 207 ++++++++++++++++++++++++++++++
 4 files changed, 229 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/Makefile b/drivers/net/ethernet/intel/igb/Makefile
index 463c0d26b9d4..6c1b702fd992 100644
--- a/drivers/net/ethernet/intel/igb/Makefile
+++ b/drivers/net/ethernet/intel/igb/Makefile
@@ -8,4 +8,4 @@ obj-$(CONFIG_IGB) += igb.o

 igb-y := igb_main.o igb_ethtool.o e1000_82575.o \
 	 e1000_mac.o e1000_nvm.o e1000_phy.o e1000_mbx.o \
-	 e1000_i210.o igb_ptp.o igb_hwmon.o
+	 e1000_i210.o igb_ptp.o igb_hwmon.o igb_xsk.o

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 6e2b61ecff68..c30d6f9708f8 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -21,6 +21,7 @@
 #include 
 #include 
+#include 

 struct igb_adapter;

@@ -321,6 +322,7 @@ struct igb_ring {
 	union {				/* array of buffer info structs */
 		struct igb_tx_buffer *tx_buffer_info;
 		struct igb_rx_buffer *rx_buffer_info;
+		struct xdp_buff **rx_buffer_info_zc;
 	};
 	void *desc;			/* descriptor ring memory */
 	unsigned long flags;		/* ring specific flags */
@@ -358,6 +360,7 @@ struct igb_ring {
 		};
 	};
 	struct xdp_rxq_info xdp_rxq;
+	struct xsk_buff_pool *xsk_pool;
 } ____cacheline_internodealigned_in_smp;

 struct igb_q_vector {
@@ -385,7 +388,8 @@ enum e1000_ring_flags_t {
 	IGB_RING_FLAG_RX_SCTP_CSUM,
 	IGB_RING_FLAG_RX_LB_VLAN_BSWAP,
 	IGB_RING_FLAG_TX_CTX_IDX,
-	IGB_RING_FLAG_TX_DETECT_HANG
+	IGB_RING_FLAG_TX_DETECT_HANG,
+	IGB_RING_FLAG_TX_DISABLED
 };

 #define ring_uses_large_buffer(ring) \
@@ -841,4 +845,11 @@ int igb_add_mac_steering_filter(struct igb_adapter *adapter,
 int igb_del_mac_steering_filter(struct igb_adapter *adapter,
 				const u8 *addr, u8 queue, u8 flags);

+struct xsk_buff_pool *igb_xsk_pool(struct igb_adapter *adapter,
+				   struct igb_ring *ring);
+int igb_xsk_pool_setup(struct igb_adapter *adapter,
+		       struct xsk_buff_pool *pool,
+		       u16 qid);
+int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags);
+
 #endif /* _IGB_H_ */

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 34a3f172d86c..bdba5c5861be 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2904,9 +2904,14 @@ static int igb_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf)

 static int igb_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
+	struct igb_adapter *adapter = netdev_priv(dev);
+
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return igb_xdp_setup(dev, xdp);
+	case XDP_SETUP_XSK_POOL:
+		return igb_xsk_pool_setup(adapter, xdp->xsk.pool,
+					  xdp->xsk.queue_id);
 	default:
 		return -EINVAL;
 	}
@@ -3015,6 +3020,7 @@ static const struct net_device_ops igb_netdev_ops = {
 	.ndo_setup_tc		= igb_setup_tc,
 	.ndo_bpf		= igb_xdp,
 	.ndo_xdp_xmit		= igb_xdp_xmit,
+	.ndo_xsk_wakeup		= igb_xsk_wakeup,
 };

 /**
@@ -4337,6 +4343,8 @@ void igb_configure_tx_ring(struct igb_adapter *adapter,
 	u64 tdba = ring->dma;
 	int reg_idx = ring->reg_idx;

+	WRITE_ONCE(ring->xsk_pool, igb_xsk_pool(adapter, ring));
+
 	wr32(E1000_TDLEN(reg_idx),
 	     ring->count * sizeof(union e1000_adv_tx_desc));
 	wr32(E1000_TDBAL(reg_idx),
@@ -4732,6 +4740,7 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
 	xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
 	WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
 					   MEM_TYPE_PAGE_SHARED, NULL));
+	WRITE_ONCE(ring->xsk_pool, igb_xsk_pool(adapter, ring));

 	/* disable the queue */
 	wr32(E1000_RXDCTL(reg_idx), 0);

diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c
new file mode 100644
index 000000000000..7b632be3e7e3
--- /dev/null
+++ b/drivers/net/ethernet/intel/igb/igb_xsk.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2018 Intel Corporation. */
+
+#include 
+#include 
+#include 
+
+#include "e1000_hw.h"
+#include "igb.h"
+
+static int igb_realloc_rx_buffer_info(struct igb_ring *ring, bool pool_present)
+{
+	int size = pool_present ?
+		sizeof(*ring->rx_buffer_info_zc) * ring->count :
+		sizeof(*ring->rx_buffer_info) * ring->count;
+	void *buff_info = vmalloc(size);
+
+	if (!buff_info)
+		return -ENOMEM;
+
+	if (pool_present) {
+		vfree(ring->rx_buffer_info);
+		ring->rx_buffer_info = NULL;
+		ring->rx_buffer_info_zc = buff_info;
+	} else {
+		vfree(ring->rx_buffer_info_zc);
+		ring->rx_buffer_info_zc = NULL;
+		ring->rx_buffer_info = buff_info;
+	}
+
+	return 0;
+}
+
+static void igb_txrx_ring_disable(struct igb_adapter *adapter, u16 qid)
+{
+	struct igb_ring *tx_ring = adapter->tx_ring[qid];
+	struct igb_ring *rx_ring = adapter->rx_ring[qid];
+	struct e1000_hw *hw = &adapter->hw;
+
+	set_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags);
+
+	wr32(E1000_TXDCTL(tx_ring->reg_idx), 0);
+	wr32(E1000_RXDCTL(rx_ring->reg_idx), 0);
+
+	synchronize_net();
+
+	/* Rx/Tx share the same napi context. */
+	napi_disable(&rx_ring->q_vector->napi);
+
+	igb_clean_tx_ring(tx_ring);
+	igb_clean_rx_ring(rx_ring);
+
+	memset(&rx_ring->rx_stats, 0, sizeof(rx_ring->rx_stats));
+	memset(&tx_ring->tx_stats, 0, sizeof(tx_ring->tx_stats));
+}
+
+static void igb_txrx_ring_enable(struct igb_adapter *adapter, u16 qid)
+{
+	struct igb_ring *tx_ring = adapter->tx_ring[qid];
+	struct igb_ring *rx_ring = adapter->rx_ring[qid];
+
+	igb_configure_tx_ring(adapter, tx_ring);
+	igb_configure_rx_ring(adapter, rx_ring);
+
+	synchronize_net();
+
+	clear_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags);
+
+	/* call igb_desc_unused which always leaves
+	 * at least 1 descriptor unused to make sure
+	 * next_to_use != next_to_clean
+	 */
+	igb_alloc_rx_buffers(rx_ring, igb_desc_unused(rx_ring));
+
+	/* Rx/Tx share the same napi context. */
+	napi_enable(&rx_ring->q_vector->napi);
+}
+
+struct xsk_buff_pool *igb_xsk_pool(struct igb_adapter *adapter,
+				   struct igb_ring *ring)
+{
+	int qid = ring->queue_index;
+	struct xsk_buff_pool *pool;
+
+	pool = xsk_get_pool_from_qid(adapter->netdev, qid);
+
+	if (!igb_xdp_is_enabled(adapter))
+		return NULL;
+
+	return (pool && pool->dev) ? pool : NULL;
+}
+
+static int igb_xsk_pool_enable(struct igb_adapter *adapter,
+			       struct xsk_buff_pool *pool,
+			       u16 qid)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct igb_ring *rx_ring;
+	bool if_running;
+	int err;
+
+	if (qid >= adapter->num_rx_queues)
+		return -EINVAL;
+
+	if (qid >= netdev->real_num_rx_queues ||
+	    qid >= netdev->real_num_tx_queues)
+		return -EINVAL;
+
+	err = xsk_pool_dma_map(pool, &adapter->pdev->dev, IGB_RX_DMA_ATTR);
+	if (err)
+		return err;
+
+	rx_ring = adapter->rx_ring[qid];
+	if_running = netif_running(adapter->netdev) && igb_xdp_is_enabled(adapter);
+	if (if_running)
+		igb_txrx_ring_disable(adapter, qid);
+
+	if (if_running) {
+		err = igb_realloc_rx_buffer_info(rx_ring, true);
+		if (!err) {
+			igb_txrx_ring_enable(adapter, qid);
+			/* Kick start the NAPI context so that receiving will start */
+			err = igb_xsk_wakeup(adapter->netdev, qid, XDP_WAKEUP_RX);
+		}
+
+		if (err) {
+			xsk_pool_dma_unmap(pool, IGB_RX_DMA_ATTR);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static int igb_xsk_pool_disable(struct igb_adapter *adapter, u16 qid)
+{
+	struct xsk_buff_pool *pool;
+	struct igb_ring *rx_ring;
+	bool if_running;
+	int err;
+
+	pool = xsk_get_pool_from_qid(adapter->netdev, qid);
+	if (!pool)
+		return -EINVAL;
+
+	rx_ring = adapter->rx_ring[qid];
+	if_running = netif_running(adapter->netdev) && igb_xdp_is_enabled(adapter);
+	if (if_running)
+		igb_txrx_ring_disable(adapter, qid);
+
+	xsk_pool_dma_unmap(pool, IGB_RX_DMA_ATTR);
+
+	if (if_running) {
+		err = igb_realloc_rx_buffer_info(rx_ring, false);
+		if (err)
+			return err;
+
+		igb_txrx_ring_enable(adapter, qid);
+	}
+
+	return 0;
+}
+
+int igb_xsk_pool_setup(struct igb_adapter *adapter,
+		       struct xsk_buff_pool *pool,
+		       u16 qid)
+{
+	return pool ? igb_xsk_pool_enable(adapter, pool, qid) :
+		igb_xsk_pool_disable(adapter, qid);
+}
+
+int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+{
+	struct igb_adapter *adapter = netdev_priv(dev);
+	struct e1000_hw *hw = &adapter->hw;
+	struct igb_ring *ring;
+	u32 eics = 0;
+
+	if (test_bit(__IGB_DOWN, &adapter->state))
+		return -ENETDOWN;
+
+	if (!igb_xdp_is_enabled(adapter))
+		return -EINVAL;
+
+	if (qid >= adapter->num_tx_queues)
+		return -EINVAL;
+
+	ring = adapter->tx_ring[qid];
+
+	if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags))
+		return -ENETDOWN;
+
+	if (!READ_ONCE(ring->xsk_pool))
+		return -EINVAL;
+
+	if (!napi_if_scheduled_mark_missed(&ring->q_vector->napi)) {
+		/* Cause software interrupt */
+		if (adapter->flags & IGB_FLAG_HAS_MSIX) {
+			eics |= ring->q_vector->eims_value;
+			wr32(E1000_EICS, eics);
+		} else {
+			wr32(E1000_ICS, E1000_ICS_RXDMT0);
+		}
+	}
+
+	return 0;
+}
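One idea worth highlighting from this patch is that a ring keeps either an array of copy-mode buffer records or an array of xdp_buff pointers for zero-copy, never both, and igb_realloc_rx_buffer_info() swaps between the two representations while the queue pair is quiesced. The toy below models only that swap in user space; struct rx_buffer, struct ring and the use of calloc()/free() are illustrative stand-ins, not driver code.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct rx_buffer { void *page; };	/* stand-in for struct igb_rx_buffer */
struct xdp_buff;			/* opaque, as in the real driver */

struct ring {
	unsigned int count;
	union {
		struct rx_buffer *rx_buffer_info;	/* copy mode */
		struct xdp_buff **rx_buffer_info_zc;	/* zero-copy mode */
	};
};

/* Mirror of the bookkeeping swap in igb_realloc_rx_buffer_info():
 * allocate the array for the new mode, free the old one, install the new.
 */
static int realloc_rx_buffer_info(struct ring *ring, bool pool_present)
{
	size_t size = pool_present ?
		sizeof(*ring->rx_buffer_info_zc) * ring->count :
		sizeof(*ring->rx_buffer_info) * ring->count;
	void *buff_info = calloc(1, size);

	if (!buff_info)
		return -1;

	if (pool_present) {
		free(ring->rx_buffer_info);
		ring->rx_buffer_info_zc = buff_info;
	} else {
		free(ring->rx_buffer_info_zc);
		ring->rx_buffer_info = buff_info;
	}

	return 0;
}

int main(void)
{
	struct ring r = { .count = 256, .rx_buffer_info = NULL };

	if (realloc_rx_buffer_info(&r, true) == 0)
		printf("ring switched to zero-copy bookkeeping\n");
	free(r.rx_buffer_info_zc);
	return 0;
}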
From patchwork Mon Oct 7 12:31:26 2024
From: Kurt Kanzenbach
Date: Mon, 07 Oct 2024 14:31:26 +0200
Subject: [Intel-wired-lan] [PATCH iwl-next v7 4/5] igb: Add AF_XDP zero-copy Rx support
Message-Id: <20241007-b4-igb_zero_copy-v7-4-23556668adc6@linutronix.de>
To: Tony Nguyen, Przemek Kitszel

From: Sriram Yagnaraman

Add support for the AF_XDP zero-copy receive path.

When AF_XDP zero-copy is enabled, the Rx buffers are allocated from the
xsk buff pool using igb_alloc_rx_buffers_zc(). Use
xsk_pool_get_rx_frame_size() to set the SRRCTL Rx buffer size when
zero-copy is enabled.

Signed-off-by: Sriram Yagnaraman
[Kurt: Port to v6.10 and provide napi_id for xdp_rxq_info_reg(), RCT,
 remove NETDEV_XDP_ACT_XSK_ZEROCOPY, update NTC handling, move stats
 update and xdp finalize to common functions, READ_ONCE() xsk_pool,
 likelyfy for XDP_REDIRECT case]
Signed-off-by: Kurt Kanzenbach
---
 drivers/net/ethernet/intel/igb/igb.h      |   8 +
 drivers/net/ethernet/intel/igb/igb_main.c | 132 +++++++++----
 drivers/net/ethernet/intel/igb/igb_xsk.c  | 296 +++++++++++++++++++++++++++++-
 3 files changed, 398 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index c30d6f9708f8..ea3977b313fc 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -88,6 +88,7 @@ struct igb_adapter;
 #define IGB_XDP_CONSUMED BIT(0)
 #define IGB_XDP_TX BIT(1)
 #define IGB_XDP_REDIR BIT(2)
+#define IGB_XDP_EXIT BIT(3)

 struct vf_data_storage {
 	unsigned char vf_mac_addresses[ETH_ALEN];
@@ -740,6 +741,9 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring);
 void igb_clean_rx_ring(struct igb_ring *rx_ring);
 void igb_configure_tx_ring(struct igb_adapter *, struct igb_ring *);
 void igb_configure_rx_ring(struct igb_adapter *, struct igb_ring *);
+void igb_finalize_xdp(struct igb_adapter *adapter, unsigned int status);
+void igb_update_rx_stats(struct igb_q_vector *q_vector, unsigned int packets,
+			 unsigned int bytes);
 void igb_setup_tctl(struct igb_adapter *);
 void igb_setup_rctl(struct igb_adapter *);
 void igb_setup_srrctl(struct igb_adapter *, struct igb_ring *);
@@ -850,6 +854,10 @@ struct xsk_buff_pool *igb_xsk_pool(struct igb_adapter *adapter,
 int igb_xsk_pool_setup(struct igb_adapter *adapter,
 		       struct xsk_buff_pool *pool,
 		       u16 qid);
+bool igb_alloc_rx_buffers_zc(struct igb_ring *rx_ring, u16 count);
+void igb_clean_rx_ring_zc(struct igb_ring *rx_ring);
+int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector,
+			struct xsk_buff_pool *xsk_pool, const int budget);
 int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags);

 #endif /* _IGB_H_ */
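The igb_setup_srrctl() hunk in the igb_main.c diff below picks the Rx buffer size from one of three sources: the attached XSK pool (via xsk_pool_get_rx_frame_size()), the large 3 KB buffer, or the default 2 KB buffer. A minimal user-space sketch of just that selection, with the constants copied from the patch and the function name and the argument style invented for illustration:

#include <stdio.h>

#define IGB_RXBUFFER_2048 2048
#define IGB_RXBUFFER_3072 3072

/* Mirrors the buf_size selection in the igb_setup_srrctl() change below:
 * an attached pool dictates the Rx frame size, otherwise fall back to the
 * large or the default buffer.
 */
static unsigned int rx_buf_size(int has_xsk_pool, unsigned int pool_frame_size,
				int uses_large_buffer)
{
	if (has_xsk_pool)
		return pool_frame_size;
	if (uses_large_buffer)
		return IGB_RXBUFFER_3072;
	return IGB_RXBUFFER_2048;
}

int main(void)
{
	printf("zero-copy: %u bytes\n", rx_buf_size(1, 4096, 0));
	printf("copy mode: %u bytes\n", rx_buf_size(0, 0, 0));
	return 0;
}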
a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index bdba5c5861be..449ee794b3c9 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -472,12 +472,17 @@ static void igb_dump(struct igb_adapter *adapter) for (i = 0; i < rx_ring->count; i++) { const char *next_desc; - struct igb_rx_buffer *buffer_info; - buffer_info = &rx_ring->rx_buffer_info[i]; + dma_addr_t dma = (dma_addr_t)0; + struct igb_rx_buffer *buffer_info = NULL; rx_desc = IGB_RX_DESC(rx_ring, i); u0 = (struct my_u0 *)rx_desc; staterr = le32_to_cpu(rx_desc->wb.upper.status_error); + if (!rx_ring->xsk_pool) { + buffer_info = &rx_ring->rx_buffer_info[i]; + dma = buffer_info->dma; + } + if (i == rx_ring->next_to_use) next_desc = " NTU"; else if (i == rx_ring->next_to_clean) @@ -497,11 +502,11 @@ static void igb_dump(struct igb_adapter *adapter) "R ", i, le64_to_cpu(u0->a), le64_to_cpu(u0->b), - (u64)buffer_info->dma, + (u64)dma, next_desc); if (netif_msg_pktdata(adapter) && - buffer_info->dma && buffer_info->page) { + buffer_info && dma && buffer_info->page) { print_hex_dump(KERN_INFO, "", DUMP_PREFIX_ADDRESS, 16, 1, @@ -1983,7 +1988,10 @@ static void igb_configure(struct igb_adapter *adapter) */ for (i = 0; i < adapter->num_rx_queues; i++) { struct igb_ring *ring = adapter->rx_ring[i]; - igb_alloc_rx_buffers(ring, igb_desc_unused(ring)); + if (ring->xsk_pool) + igb_alloc_rx_buffers_zc(ring, igb_desc_unused(ring)); + else + igb_alloc_rx_buffers(ring, igb_desc_unused(ring)); } } @@ -4405,7 +4413,8 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring) if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) xdp_rxq_info_unreg(&rx_ring->xdp_rxq); res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev, - rx_ring->queue_index, 0); + rx_ring->queue_index, + rx_ring->q_vector->napi.napi_id); if (res < 0) { dev_err(dev, "Failed to register xdp_rxq index %u\n", rx_ring->queue_index); @@ -4701,12 +4710,17 @@ void igb_setup_srrctl(struct igb_adapter *adapter, struct igb_ring *ring) struct e1000_hw *hw = &adapter->hw; int reg_idx = ring->reg_idx; u32 srrctl = 0; + u32 buf_size; - srrctl = IGB_RX_HDR_LEN << E1000_SRRCTL_BSIZEHDRSIZE_SHIFT; - if (ring_uses_large_buffer(ring)) - srrctl |= IGB_RXBUFFER_3072 >> E1000_SRRCTL_BSIZEPKT_SHIFT; + if (ring->xsk_pool) + buf_size = xsk_pool_get_rx_frame_size(ring->xsk_pool); + else if (ring_uses_large_buffer(ring)) + buf_size = IGB_RXBUFFER_3072; else - srrctl |= IGB_RXBUFFER_2048 >> E1000_SRRCTL_BSIZEPKT_SHIFT; + buf_size = IGB_RXBUFFER_2048; + + srrctl = IGB_RX_HDR_LEN << E1000_SRRCTL_BSIZEHDRSIZE_SHIFT; + srrctl |= buf_size >> E1000_SRRCTL_BSIZEPKT_SHIFT; srrctl |= E1000_SRRCTL_DESCTYPE_ADV_ONEBUF; if (hw->mac.type >= e1000_82580) srrctl |= E1000_SRRCTL_TIMESTAMP; @@ -4738,9 +4752,17 @@ void igb_configure_rx_ring(struct igb_adapter *adapter, u32 rxdctl = 0; xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq); - WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, - MEM_TYPE_PAGE_SHARED, NULL)); WRITE_ONCE(ring->xsk_pool, igb_xsk_pool(adapter, ring)); + if (ring->xsk_pool) { + WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, + NULL)); + xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq); + } else { + WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, + MEM_TYPE_PAGE_SHARED, + NULL)); + } /* disable the queue */ wr32(E1000_RXDCTL(reg_idx), 0); @@ -4767,9 +4789,12 @@ void igb_configure_rx_ring(struct igb_adapter *adapter, rxdctl |= IGB_RX_HTHRESH << 8; rxdctl |= IGB_RX_WTHRESH << 16; - /* initialize 
rx_buffer_info */ - memset(ring->rx_buffer_info, 0, - sizeof(struct igb_rx_buffer) * ring->count); + if (ring->xsk_pool) + memset(ring->rx_buffer_info_zc, 0, + sizeof(*ring->rx_buffer_info_zc) * ring->count); + else + memset(ring->rx_buffer_info, 0, + sizeof(*ring->rx_buffer_info) * ring->count); /* initialize Rx descriptor 0 */ rx_desc = IGB_RX_DESC(ring, 0); @@ -4957,8 +4982,13 @@ void igb_free_rx_resources(struct igb_ring *rx_ring) rx_ring->xdp_prog = NULL; xdp_rxq_info_unreg(&rx_ring->xdp_rxq); - vfree(rx_ring->rx_buffer_info); - rx_ring->rx_buffer_info = NULL; + if (rx_ring->xsk_pool) { + vfree(rx_ring->rx_buffer_info_zc); + rx_ring->rx_buffer_info_zc = NULL; + } else { + vfree(rx_ring->rx_buffer_info); + rx_ring->rx_buffer_info = NULL; + } /* if not set, then don't free */ if (!rx_ring->desc) @@ -4996,6 +5026,11 @@ void igb_clean_rx_ring(struct igb_ring *rx_ring) dev_kfree_skb(rx_ring->skb); rx_ring->skb = NULL; + if (rx_ring->xsk_pool) { + igb_clean_rx_ring_zc(rx_ring); + goto skip_for_xsk; + } + /* Free all the Rx ring sk_buffs */ while (i != rx_ring->next_to_alloc) { struct igb_rx_buffer *buffer_info = &rx_ring->rx_buffer_info[i]; @@ -5023,6 +5058,7 @@ void igb_clean_rx_ring(struct igb_ring *rx_ring) i = 0; } +skip_for_xsk: rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; @@ -8177,6 +8213,7 @@ static int igb_poll(struct napi_struct *napi, int budget) struct igb_q_vector *q_vector = container_of(napi, struct igb_q_vector, napi); + struct xsk_buff_pool *xsk_pool; bool clean_complete = true; int work_done = 0; @@ -8188,7 +8225,12 @@ static int igb_poll(struct napi_struct *napi, int budget) clean_complete = igb_clean_tx_irq(q_vector, budget); if (q_vector->rx.ring) { - int cleaned = igb_clean_rx_irq(q_vector, budget); + int cleaned; + + xsk_pool = READ_ONCE(q_vector->rx.ring->xsk_pool); + cleaned = xsk_pool ? 
+ igb_clean_rx_irq_zc(q_vector, xsk_pool, budget) : + igb_clean_rx_irq(q_vector, budget); work_done += cleaned; if (cleaned >= budget) @@ -8852,6 +8894,38 @@ static void igb_put_rx_buffer(struct igb_ring *rx_ring, rx_buffer->page = NULL; } +void igb_finalize_xdp(struct igb_adapter *adapter, unsigned int status) +{ + int cpu = smp_processor_id(); + struct netdev_queue *nq; + + if (status & IGB_XDP_REDIR) + xdp_do_flush(); + + if (status & IGB_XDP_TX) { + struct igb_ring *tx_ring = igb_xdp_tx_queue_mapping(adapter); + + nq = txring_txq(tx_ring); + __netif_tx_lock(nq, cpu); + igb_xdp_ring_update_tail(tx_ring); + __netif_tx_unlock(nq); + } +} + +void igb_update_rx_stats(struct igb_q_vector *q_vector, unsigned int packets, + unsigned int bytes) +{ + struct igb_ring *ring = q_vector->rx.ring; + + u64_stats_update_begin(&ring->rx_syncp); + ring->rx_stats.packets += packets; + ring->rx_stats.bytes += bytes; + u64_stats_update_end(&ring->rx_syncp); + + q_vector->rx.total_packets += packets; + q_vector->rx.total_bytes += bytes; +} + static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget) { unsigned int total_bytes = 0, total_packets = 0; @@ -8859,9 +8933,7 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget) struct igb_ring *rx_ring = q_vector->rx.ring; u16 cleaned_count = igb_desc_unused(rx_ring); struct sk_buff *skb = rx_ring->skb; - int cpu = smp_processor_id(); unsigned int xdp_xmit = 0; - struct netdev_queue *nq; struct xdp_buff xdp; u32 frame_sz = 0; int rx_buf_pgcnt; @@ -8983,24 +9055,10 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget) /* place incomplete frames back on ring for completion */ rx_ring->skb = skb; - if (xdp_xmit & IGB_XDP_REDIR) - xdp_do_flush(); - - if (xdp_xmit & IGB_XDP_TX) { - struct igb_ring *tx_ring = igb_xdp_tx_queue_mapping(adapter); - - nq = txring_txq(tx_ring); - __netif_tx_lock(nq, cpu); - igb_xdp_ring_update_tail(tx_ring); - __netif_tx_unlock(nq); - } + if (xdp_xmit) + igb_finalize_xdp(adapter, xdp_xmit); - u64_stats_update_begin(&rx_ring->rx_syncp); - rx_ring->rx_stats.packets += total_packets; - rx_ring->rx_stats.bytes += total_bytes; - u64_stats_update_end(&rx_ring->rx_syncp); - q_vector->rx.total_packets += total_packets; - q_vector->rx.total_bytes += total_bytes; + igb_update_rx_stats(q_vector, total_packets, total_bytes); if (cleaned_count) igb_alloc_rx_buffers(rx_ring, cleaned_count); diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c index 7b632be3e7e3..9fd094a799fa 100644 --- a/drivers/net/ethernet/intel/igb/igb_xsk.c +++ b/drivers/net/ethernet/intel/igb/igb_xsk.c @@ -70,7 +70,10 @@ static void igb_txrx_ring_enable(struct igb_adapter *adapter, u16 qid) * at least 1 descriptor unused to make sure * next_to_use != next_to_clean */ - igb_alloc_rx_buffers(rx_ring, igb_desc_unused(rx_ring)); + if (rx_ring->xsk_pool) + igb_alloc_rx_buffers_zc(rx_ring, igb_desc_unused(rx_ring)); + else + igb_alloc_rx_buffers(rx_ring, igb_desc_unused(rx_ring)); /* Rx/Tx share the same napi context. 
*/ napi_enable(&rx_ring->q_vector->napi); @@ -169,6 +172,297 @@ int igb_xsk_pool_setup(struct igb_adapter *adapter, igb_xsk_pool_disable(adapter, qid); } +static u16 igb_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp, + union e1000_adv_rx_desc *rx_desc, u16 count) +{ + dma_addr_t dma; + u16 buffs; + int i; + + /* nothing to do */ + if (!count) + return 0; + + buffs = xsk_buff_alloc_batch(pool, xdp, count); + for (i = 0; i < buffs; i++) { + dma = xsk_buff_xdp_get_dma(*xdp); + rx_desc->read.pkt_addr = cpu_to_le64(dma); + rx_desc->wb.upper.length = 0; + + rx_desc++; + xdp++; + } + + return buffs; +} + +bool igb_alloc_rx_buffers_zc(struct igb_ring *rx_ring, u16 count) +{ + u32 nb_buffs_extra = 0, nb_buffs = 0; + union e1000_adv_rx_desc *rx_desc; + u16 ntu = rx_ring->next_to_use; + u16 total_count = count; + struct xdp_buff **xdp; + + rx_desc = IGB_RX_DESC(rx_ring, ntu); + xdp = &rx_ring->rx_buffer_info_zc[ntu]; + + if (ntu + count >= rx_ring->count) { + nb_buffs_extra = igb_fill_rx_descs(rx_ring->xsk_pool, xdp, + rx_desc, + rx_ring->count - ntu); + if (nb_buffs_extra != rx_ring->count - ntu) { + ntu += nb_buffs_extra; + goto exit; + } + rx_desc = IGB_RX_DESC(rx_ring, 0); + xdp = rx_ring->rx_buffer_info_zc; + ntu = 0; + count -= nb_buffs_extra; + } + + nb_buffs = igb_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count); + ntu += nb_buffs; + if (ntu == rx_ring->count) + ntu = 0; + + /* clear the length for the next_to_use descriptor */ + rx_desc = IGB_RX_DESC(rx_ring, ntu); + rx_desc->wb.upper.length = 0; + +exit: + if (rx_ring->next_to_use != ntu) { + rx_ring->next_to_use = ntu; + + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). 
+ */ wmb(); + writel(ntu, rx_ring->tail); + } + + return total_count == (nb_buffs + nb_buffs_extra); +} + +void igb_clean_rx_ring_zc(struct igb_ring *rx_ring) +{ + u16 ntc = rx_ring->next_to_clean; + u16 ntu = rx_ring->next_to_use; + + while (ntc != ntu) { + struct xdp_buff *xdp = rx_ring->rx_buffer_info_zc[ntc]; + + xsk_buff_free(xdp); + ntc++; + if (ntc >= rx_ring->count) + ntc = 0; + } +} + +static struct sk_buff *igb_construct_skb_zc(struct igb_ring *rx_ring, + struct xdp_buff *xdp, + ktime_t timestamp) +{ + unsigned int totalsize = xdp->data_end - xdp->data_meta; + unsigned int metasize = xdp->data - xdp->data_meta; + struct sk_buff *skb; + + net_prefetch(xdp->data_meta); + + /* allocate a skb to store the frags */ + skb = napi_alloc_skb(&rx_ring->q_vector->napi, totalsize); + if (unlikely(!skb)) + return NULL; + + if (timestamp) + skb_hwtstamps(skb)->hwtstamp = timestamp; + + memcpy(__skb_put(skb, totalsize), xdp->data_meta, + ALIGN(totalsize, sizeof(long))); + + if (metasize) { + skb_metadata_set(skb, metasize); + __skb_pull(skb, metasize); + } + + return skb; +} + +static struct sk_buff *igb_run_xdp_zc(struct igb_adapter *adapter, + struct igb_ring *rx_ring, + struct xdp_buff *xdp, + struct xsk_buff_pool *xsk_pool, + struct bpf_prog *xdp_prog) +{ + int err, result = IGB_XDP_PASS; + u32 act; + + prefetchw(xdp->data_hard_start); /* xdp_frame write */ + + act = bpf_prog_run_xdp(xdp_prog, xdp); + + if (likely(act == XDP_REDIRECT)) { + err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog); + if (!err) { + result = IGB_XDP_REDIR; + goto xdp_out; + } + + if (xsk_uses_need_wakeup(xsk_pool) && + err == -ENOBUFS) + result = IGB_XDP_EXIT; + else + result = IGB_XDP_CONSUMED; + goto out_failure; + } + + switch (act) { + case XDP_PASS: + break; + case XDP_TX: + result = igb_xdp_xmit_back(adapter, xdp); + if (result == IGB_XDP_CONSUMED) + goto out_failure; + break; + default: + bpf_warn_invalid_xdp_action(adapter->netdev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: +out_failure: + trace_xdp_exception(rx_ring->netdev, xdp_prog, act); + fallthrough; + case XDP_DROP: + result = IGB_XDP_CONSUMED; + break; + } +xdp_out: + return ERR_PTR(-result); +} + +int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector, + struct xsk_buff_pool *xsk_pool, const int budget) +{ + struct igb_adapter *adapter = q_vector->adapter; + unsigned int total_bytes = 0, total_packets = 0; + struct igb_ring *rx_ring = q_vector->rx.ring; + u32 ntc = rx_ring->next_to_clean; + struct bpf_prog *xdp_prog; + unsigned int xdp_xmit = 0; + bool failure = false; + u16 entries_to_alloc; + struct sk_buff *skb; + + /* xdp_prog cannot be NULL in the ZC path */ + xdp_prog = READ_ONCE(rx_ring->xdp_prog); + + while (likely(total_packets < budget)) { + union e1000_adv_rx_desc *rx_desc; + ktime_t timestamp = 0; + struct xdp_buff *xdp; + unsigned int size; + + rx_desc = IGB_RX_DESC(rx_ring, ntc); + size = le16_to_cpu(rx_desc->wb.upper.length); + if (!size) + break; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc until we know the + * descriptor has been written back + */ + dma_rmb(); + + xdp = rx_ring->rx_buffer_info_zc[ntc]; + xsk_buff_set_size(xdp, size); + xsk_buff_dma_sync_for_cpu(xdp); + + /* pull rx packet timestamp if available and valid */ + if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) { + int ts_hdr_len; + + ts_hdr_len = igb_ptp_rx_pktstamp(rx_ring->q_vector, + xdp->data, + &timestamp); + + xdp->data += ts_hdr_len; + xdp->data_meta += ts_hdr_len; + size -= ts_hdr_len; + } + + 
skb = igb_run_xdp_zc(adapter, rx_ring, xdp, xsk_pool, xdp_prog); + + if (IS_ERR(skb)) { + unsigned int xdp_res = -PTR_ERR(skb); + + if (likely(xdp_res & (IGB_XDP_TX | IGB_XDP_REDIR))) { + xdp_xmit |= xdp_res; + } else if (xdp_res == IGB_XDP_EXIT) { + failure = true; + break; + } else if (xdp_res == IGB_XDP_CONSUMED) { + xsk_buff_free(xdp); + } + + total_packets++; + total_bytes += size; + ntc++; + if (ntc == rx_ring->count) + ntc = 0; + continue; + } + + skb = igb_construct_skb_zc(rx_ring, xdp, timestamp); + + /* exit if we failed to retrieve a buffer */ + if (!skb) { + rx_ring->rx_stats.alloc_failed++; + break; + } + + xsk_buff_free(xdp); + ntc++; + if (ntc == rx_ring->count) + ntc = 0; + + if (eth_skb_pad(skb)) + continue; + + /* probably a little skewed due to removing CRC */ + total_bytes += skb->len; + + /* populate checksum, timestamp, VLAN, and protocol */ + igb_process_skb_fields(rx_ring, rx_desc, skb); + + napi_gro_receive(&q_vector->napi, skb); + + /* update budget accounting */ + total_packets++; + } + + rx_ring->next_to_clean = ntc; + + if (xdp_xmit) + igb_finalize_xdp(adapter, xdp_xmit); + + igb_update_rx_stats(q_vector, total_packets, total_bytes); + + entries_to_alloc = igb_desc_unused(rx_ring); + if (entries_to_alloc >= IGB_RX_BUFFER_WRITE) + failure |= !igb_alloc_rx_buffers_zc(rx_ring, entries_to_alloc); + + if (xsk_uses_need_wakeup(xsk_pool)) { + if (failure || rx_ring->next_to_clean == rx_ring->next_to_use) + xsk_set_rx_need_wakeup(xsk_pool); + else + xsk_clear_rx_need_wakeup(xsk_pool); + + return (int)total_packets; + } + return failure ? budget : (int)total_packets; +} + int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags) { struct igb_adapter *adapter = netdev_priv(dev); From patchwork Mon Oct 7 12:31:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kurt Kanzenbach X-Patchwork-Id: 1993587 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256 header.s=default header.b=MfUB6KNe; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org (client-ip=140.211.166.133; helo=smtp2.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=patchwork.ozlabs.org) Received: from smtp2.osuosl.org (smtp2.osuosl.org [140.211.166.133]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XMdnK5Pd5z1xtV for ; Mon, 7 Oct 2024 23:31:49 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by smtp2.osuosl.org (Postfix) with ESMTP id 3371A4094F; Mon, 7 Oct 2024 12:31:43 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp2.osuosl.org ([127.0.0.1]) by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id cluT2_bjX1TH; Mon, 7 Oct 2024 12:31:41 +0000 (UTC) X-Comment: SPF check N/A for local connections - client-ip=140.211.166.34; helo=ash.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver= DKIM-Filter: OpenDKIM Filter v2.11.0 smtp2.osuosl.org 34BC8408B0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org; s=default; t=1728304301; bh=y3YpoPQWh2H2nFs2BUOORvmb+4OqkmJzTT1KB64jT6k=; 
From: Kurt Kanzenbach Date: Mon, 07 Oct 2024 14:31:27 +0200 MIME-Version: 1.0 Message-Id: <20241007-b4-igb_zero_copy-v7-5-23556668adc6@linutronix.de> References: <20241007-b4-igb_zero_copy-v7-0-23556668adc6@linutronix.de> In-Reply-To: <20241007-b4-igb_zero_copy-v7-0-23556668adc6@linutronix.de> To: Tony Nguyen , Przemek Kitszel Subject: [Intel-wired-lan] [PATCH iwl-next v7 5/5] igb: Add AF_XDP zero-copy Tx support Cc: Maciej Fijalkowski , Jesper Dangaard Brouer , Daniel Borkmann , Sriram Yagnaraman , Richard Cochran , Kurt Kanzenbach , John Fastabend , Alexei Starovoitov , Benjamin Steinke , Eric Dumazet , Sriram Yagnaraman , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, Jakub Kicinski , bpf@vger.kernel.org, Paolo Abeni , "David S. Miller" , Sebastian Andrzej Siewior From: Sriram Yagnaraman Add support for AF_XDP zero-copy transmit path. A new TX buffer type IGB_TYPE_XSK is introduced to indicate that the Tx frame was allocated from the xsk buff pool, so igb_clean_tx_ring() and igb_clean_tx_irq() can clean the buffers correctly based on type. igb_xmit_zc() performs the actual packet transmit when AF_XDP zero-copy is enabled. We share the TX ring between slow path, XDP and AF_XDP zero-copy, so we use the netdev queue lock to ensure mutual exclusion.
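
For readers coming from the userspace side, the sketch below shows how an application would feed this Tx path. It is not part of this series; it assumes libxdp's <xdp/xsk.h> helpers (older setups ship the equivalent <bpf/xsk.h>) and an AF_XDP socket already created and bound in zero-copy mode to an igb queue. The wrapper xsk_tx_batch() and its parameters are purely illustrative names:

/* Not part of this series: minimal userspace sketch of driving the
 * zero-copy Tx path added here. Assumes an AF_XDP socket and Tx ring
 * that were already created and bound to an igb queue.
 */
#include <linux/types.h>
#include <sys/socket.h>
#include <xdp/xsk.h>

static __u32 xsk_tx_batch(struct xsk_socket *xsk, struct xsk_ring_prod *tx,
			  const __u64 *addrs, const __u32 *lens, __u32 n)
{
	__u32 i, idx;

	/* Reserve n Tx descriptors; returns n on success, 0 if the ring is full */
	if (xsk_ring_prod__reserve(tx, n, &idx) != n)
		return 0;

	for (i = 0; i < n; i++) {
		struct xdp_desc *desc = xsk_ring_prod__tx_desc(tx, idx + i);

		desc->addr = addrs[i];	/* UMEM offset of an already-filled frame */
		desc->len = lens[i];
	}
	xsk_ring_prod__submit(tx, n);

	/* With need_wakeup enabled, kick the kernel so the driver's wakeup
	 * handler schedules NAPI and the descriptors reach the hardware ring.
	 */
	if (xsk_ring_prod__needs_wakeup(tx))
		sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);

	return n;
}

The sendto() kick is what lands in igb_xsk_wakeup() and, from there, in igb_xmit_zc() on the shared Tx ring, which is why the netdev queue lock mentioned above is needed.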
Signed-off-by: Sriram Yagnaraman [Kurt: Set olinfo_status in igb_xmit_zc() so that frames are transmitted, Use READ_ONCE() for xsk_pool and check Tx disabled and carrier in igb_xmit_zc()] Signed-off-by: Kurt Kanzenbach Reviewed-by: Maciej Fijalkowski --- drivers/net/ethernet/intel/igb/igb.h | 2 + drivers/net/ethernet/intel/igb/igb_main.c | 61 ++++++++++++++++++++++++++----- drivers/net/ethernet/intel/igb/igb_xsk.c | 59 ++++++++++++++++++++++++++++++ 3 files changed, 112 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h index ea3977b313fc..d65bf0dc634f 100644 --- a/drivers/net/ethernet/intel/igb/igb.h +++ b/drivers/net/ethernet/intel/igb/igb.h @@ -258,6 +258,7 @@ enum igb_tx_flags { enum igb_tx_buf_type { IGB_TYPE_SKB = 0, IGB_TYPE_XDP, + IGB_TYPE_XSK }; /* wrapper around a pointer to a socket buffer, @@ -858,6 +859,7 @@ bool igb_alloc_rx_buffers_zc(struct igb_ring *rx_ring, u16 count); void igb_clean_rx_ring_zc(struct igb_ring *rx_ring); int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector, struct xsk_buff_pool *xsk_pool, const int budget); +bool igb_xmit_zc(struct igb_ring *tx_ring); int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags); #endif /* _IGB_H_ */ diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 449ee794b3c9..7a07071d9d16 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -2978,6 +2978,9 @@ static int igb_xdp_xmit(struct net_device *dev, int n, if (unlikely(!tx_ring)) return -ENXIO; + if (unlikely(test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags))) + return -ENXIO; + nq = txring_txq(tx_ring); __netif_tx_lock(nq, cpu); @@ -3325,7 +3328,8 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) netdev->priv_flags |= IFF_SUPP_NOFCS; netdev->priv_flags |= IFF_UNICAST_FLT; - netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; /* MTU range: 68 - 9216 */ netdev->min_mtu = ETH_MIN_MTU; @@ -4899,15 +4903,20 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring) { u16 i = tx_ring->next_to_clean; struct igb_tx_buffer *tx_buffer = &tx_ring->tx_buffer_info[i]; + u32 xsk_frames = 0; while (i != tx_ring->next_to_use) { union e1000_adv_tx_desc *eop_desc, *tx_desc; /* Free all the Tx ring sk_buffs or xdp frames */ - if (tx_buffer->type == IGB_TYPE_SKB) + if (tx_buffer->type == IGB_TYPE_SKB) { dev_kfree_skb_any(tx_buffer->skb); - else + } else if (tx_buffer->type == IGB_TYPE_XDP) { xdp_return_frame(tx_buffer->xdpf); + } else if (tx_buffer->type == IGB_TYPE_XSK) { + xsk_frames++; + goto skip_for_xsk; + } /* unmap skb header data */ dma_unmap_single(tx_ring->dev, @@ -4938,6 +4947,7 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring) DMA_TO_DEVICE); } +skip_for_xsk: tx_buffer->next_to_watch = NULL; /* move us one more past the eop_desc for start of next pkt */ @@ -4952,6 +4962,9 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring) /* reset BQL for queue */ netdev_tx_reset_queue(txring_txq(tx_ring)); + if (tx_ring->xsk_pool && xsk_frames) + xsk_tx_completed(tx_ring->xsk_pool, xsk_frames); + /* reset next_to_use and next_to_clean */ tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; @@ -6485,6 +6498,9 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb, return NETDEV_TX_BUSY; } + if (unlikely(test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags))) + return 
NETDEV_TX_BUSY; + /* record the location of the first descriptor for this packet */ first = &tx_ring->tx_buffer_info[tx_ring->next_to_use]; first->type = IGB_TYPE_SKB; @@ -8259,13 +8275,18 @@ static int igb_poll(struct napi_struct *napi, int budget) **/ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget) { - struct igb_adapter *adapter = q_vector->adapter; - struct igb_ring *tx_ring = q_vector->tx.ring; - struct igb_tx_buffer *tx_buffer; - union e1000_adv_tx_desc *tx_desc; unsigned int total_bytes = 0, total_packets = 0; + struct igb_adapter *adapter = q_vector->adapter; unsigned int budget = q_vector->tx.work_limit; + struct igb_ring *tx_ring = q_vector->tx.ring; unsigned int i = tx_ring->next_to_clean; + union e1000_adv_tx_desc *tx_desc; + struct igb_tx_buffer *tx_buffer; + struct xsk_buff_pool *xsk_pool; + int cpu = smp_processor_id(); + bool xsk_xmit_done = true; + struct netdev_queue *nq; + u32 xsk_frames = 0; if (test_bit(__IGB_DOWN, &adapter->state)) return true; @@ -8296,10 +8317,14 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget) total_packets += tx_buffer->gso_segs; /* free the skb */ - if (tx_buffer->type == IGB_TYPE_SKB) + if (tx_buffer->type == IGB_TYPE_SKB) { napi_consume_skb(tx_buffer->skb, napi_budget); - else + } else if (tx_buffer->type == IGB_TYPE_XDP) { xdp_return_frame(tx_buffer->xdpf); + } else if (tx_buffer->type == IGB_TYPE_XSK) { + xsk_frames++; + goto skip_for_xsk; + } /* unmap skb header data */ dma_unmap_single(tx_ring->dev, @@ -8331,6 +8356,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget) } } +skip_for_xsk: /* move us one more past the eop_desc for start of next pkt */ tx_buffer++; tx_desc++; @@ -8359,6 +8385,21 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget) q_vector->tx.total_bytes += total_bytes; q_vector->tx.total_packets += total_packets; + xsk_pool = READ_ONCE(tx_ring->xsk_pool); + if (xsk_pool) { + if (xsk_frames) + xsk_tx_completed(xsk_pool, xsk_frames); + if (xsk_uses_need_wakeup(xsk_pool)) + xsk_set_tx_need_wakeup(xsk_pool); + + nq = txring_txq(tx_ring); + __netif_tx_lock(nq, cpu); + /* Avoid transmit queue timeout since we share it with the slow path */ + txq_trans_cond_update(nq); + xsk_xmit_done = igb_xmit_zc(tx_ring); + __netif_tx_unlock(nq); + } + if (test_bit(IGB_RING_FLAG_TX_DETECT_HANG, &tx_ring->flags)) { struct e1000_hw *hw = &adapter->hw; @@ -8421,7 +8462,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget) } } - return !!budget; + return !!budget && xsk_xmit_done; } /** diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c index 9fd094a799fa..6c56af9548e1 100644 --- a/drivers/net/ethernet/intel/igb/igb_xsk.c +++ b/drivers/net/ethernet/intel/igb/igb_xsk.c @@ -463,6 +463,65 @@ int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector, return failure ? 
budget : (int)total_packets; } +bool igb_xmit_zc(struct igb_ring *tx_ring) +{ + unsigned int budget = igb_desc_unused(tx_ring); + struct xsk_buff_pool *pool = tx_ring->xsk_pool; + u32 cmd_type, olinfo_status, nb_pkts, i = 0; + struct xdp_desc *descs = pool->tx_descs; + union e1000_adv_tx_desc *tx_desc = NULL; + struct igb_tx_buffer *tx_buffer_info; + unsigned int total_bytes = 0; + dma_addr_t dma; + + if (!netif_carrier_ok(tx_ring->netdev)) + return true; + + if (test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags)) + return true; + + nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget); + if (!nb_pkts) + return true; + + while (nb_pkts-- > 0) { + dma = xsk_buff_raw_get_dma(pool, descs[i].addr); + xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len); + + tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use]; + tx_buffer_info->bytecount = descs[i].len; + tx_buffer_info->type = IGB_TYPE_XSK; + tx_buffer_info->xdpf = NULL; + tx_buffer_info->gso_segs = 1; + tx_buffer_info->time_stamp = jiffies; + + tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use); + tx_desc->read.buffer_addr = cpu_to_le64(dma); + + /* put descriptor type bits */ + cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT | + E1000_ADVTXD_DCMD_IFCS; + olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT; + + cmd_type |= descs[i].len | IGB_TXD_DCMD; + tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type); + tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status); + + total_bytes += descs[i].len; + + i++; + tx_ring->next_to_use++; + tx_buffer_info->next_to_watch = tx_desc; + if (tx_ring->next_to_use == tx_ring->count) + tx_ring->next_to_use = 0; + } + + netdev_tx_sent_queue(txring_txq(tx_ring), total_bytes); + igb_xdp_ring_update_tail(tx_ring); + + return nb_pkts < budget; +} + int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags) { struct igb_adapter *adapter = netdev_priv(dev);