From patchwork Mon Sep 14 17:32:20 2020
X-Patchwork-Id: 1363840
From: Tony Nguyen
To: davem@davemloft.net
Cc: Li RongQing, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next v2 1/5] i40e: do not compute affinity_mask for IRQ
Date: Mon, 14 Sep 2020 10:32:20 -0700
Message-Id: <20200914173224.692707-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Li RongQing

After commit 759dc4a7e605 ("i40e: initialize our affinity_mask based on
cpu_possible_mask"), the NAPI IRQ affinity_mask is bound to all possible
CPUs rather than to a fixed CPU, so there is no longer any point in
picking a CPU per vector. Drop the unused cpu argument of
i40e_vsi_alloc_q_vector() and the round-robin CPU bookkeeping in
i40e_vsi_alloc_q_vectors().

Signed-off-by: Li RongQing
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_main.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 05c6d3ea11e6..9cfaa99da4e6 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -11186,11 +11186,10 @@ static int i40e_init_msix(struct i40e_pf *pf)
  * i40e_vsi_alloc_q_vector - Allocate memory for a single interrupt vector
  * @vsi: the VSI being configured
  * @v_idx: index of the vector in the vsi struct
- * @cpu: cpu to be used on affinity_mask
  *
  * We allocate one q_vector.  If allocation fails we return -ENOMEM.
  **/
-static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx, int cpu)
+static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
 {
 	struct i40e_q_vector *q_vector;
 
@@ -11223,7 +11222,7 @@ static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx, int cpu)
 static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
 {
 	struct i40e_pf *pf = vsi->back;
-	int err, v_idx, num_q_vectors, current_cpu;
+	int err, v_idx, num_q_vectors;
 
 	/* if not MSIX, give the one vector only to the LAN VSI */
 	if (pf->flags & I40E_FLAG_MSIX_ENABLED)
@@ -11233,15 +11232,10 @@ static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
 	else
 		return -EINVAL;
 
-	current_cpu = cpumask_first(cpu_online_mask);
-
 	for (v_idx = 0; v_idx < num_q_vectors; v_idx++) {
-		err = i40e_vsi_alloc_q_vector(vsi, v_idx, current_cpu);
+		err = i40e_vsi_alloc_q_vector(vsi, v_idx);
 		if (err)
 			goto err_out;
-		current_cpu = cpumask_next(current_cpu, cpu_online_mask);
-		if (unlikely(current_cpu >= nr_cpu_ids))
-			current_cpu = cpumask_first(cpu_online_mask);
 	}
 
 	return 0;
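
The logic deleted above is the classic round-robin walk over the online CPUs. Here is a minimal userspace sketch of that pattern, not driver code: NUM_CPUS, NUM_VECTORS, first_online_cpu() and next_online_cpu() are illustrative stand-ins for nr_cpu_ids, the vector count, cpumask_first(cpu_online_mask) and cpumask_next(). After commit 759dc4a7e605 the affinity mask already covers all possible CPUs, so this per-vector bookkeeping no longer buys anything, which is why the patch can simply remove it.

/*
 * Minimal userspace sketch of the removed pattern (not driver code).
 * NUM_CPUS, first_online_cpu() and next_online_cpu() are stand-ins for
 * nr_cpu_ids, cpumask_first(cpu_online_mask) and cpumask_next().
 */
#include <stdio.h>

#define NUM_CPUS 4			/* stand-in for nr_cpu_ids */
#define NUM_VECTORS 10

static int first_online_cpu(void)
{
	return 0;
}

static int next_online_cpu(int cpu)
{
	return cpu + 1;
}

int main(void)
{
	int current_cpu = first_online_cpu();
	int v_idx;

	/* old scheme: hand out CPUs round-robin, wrapping at the end of the mask */
	for (v_idx = 0; v_idx < NUM_VECTORS; v_idx++) {
		printf("q_vector %d -> cpu %d\n", v_idx, current_cpu);
		current_cpu = next_online_cpu(current_cpu);
		if (current_cpu >= NUM_CPUS)
			current_cpu = first_online_cpu();
	}
	return 0;
}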
From patchwork Mon Sep 14 17:32:21 2020
X-Patchwork-Id: 1363835
From: Tony Nguyen
To: davem@davemloft.net
Cc: Li RongQing, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, kernel test robot,
    Jakub Kicinski, Jesse Brandeburg, Aaron Brown
Subject: [net-next v2 2/5] i40e: optimise prefetch page refcount
Date: Mon, 14 Sep 2020 10:32:21 -0700
Message-Id: <20200914173224.692707-3-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Li RongQing

The refcount of the rx_buffer page used to be incremented right after
this point, so a write prefetch (prefetchw) was appropriate. Since commit
1793668c3b8c ("i40e/i40evf: Update code to better handle incrementing
page count") the refcount is no longer bumped for every packet, so the
write prefetch can be relaxed to a plain prefetch.

The prefetch now mainly serves page_address(), which dereferences
struct page only when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is
defined; otherwise it computes the address from the page's position, so
prefetch the struct page only in those configurations.

Jakub suggested defining prefetch_page_address() in a common header.

Reported-by: kernel test robot
Suggested-by: Jakub Kicinski
Signed-off-by: Li RongQing
Reviewed-by: Jesse Brandeburg
Tested-by: Aaron Brown
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +-
 include/linux/prefetch.h                    | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 91ab824926b9..8500e1c1a16b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1953,7 +1953,7 @@ static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,
 	struct i40e_rx_buffer *rx_buffer;
 
 	rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
-	prefetchw(rx_buffer->page);
+	prefetch_page_address(rx_buffer->page);
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev,
diff --git a/include/linux/prefetch.h b/include/linux/prefetch.h
index 13eafebf3549..b83a3f944f28 100644
--- a/include/linux/prefetch.h
+++ b/include/linux/prefetch.h
@@ -15,6 +15,7 @@
 #include <asm/processor.h>
 #include <asm/cache.h>
 
+struct page;
 /*
 	prefetch(x) attempts to pre-emptively get the memory pointed to
 	by address "x" into the CPU L1 cache.
@@ -62,4 +63,11 @@ static inline void prefetch_range(void *addr, size_t len)
 #endif
 }
 
+static inline void prefetch_page_address(struct page *page)
+{
+#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
+	prefetch(page);
+#endif
+}
+
 #endif
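
To make the prefetch-vs-prefetchw distinction and the conditional compile concrete, here is a small self-contained userspace model, not the kernel header: model_prefetch()/model_prefetchw() stand in for the kernel's prefetch()/prefetchw() using the compiler's __builtin_prefetch(), and model_prefetch_page_address() only mirrors the shape of the new helper. In the default configuration (neither WANT_PAGE_VIRTUAL nor HASHED_PAGE_VIRTUAL defined) the helper compiles to nothing, which is exactly the point of the patch.

/*
 * Self-contained userspace model (not the kernel header).
 * __builtin_prefetch(addr, 0) hints a read, like prefetch();
 * __builtin_prefetch(addr, 1) hints a write, like prefetchw().
 */
#include <stddef.h>

struct page;				/* opaque, as in the real header */

static inline void model_prefetch(const void *addr)
{
	__builtin_prefetch(addr, 0);	/* read intent */
}

static inline void model_prefetchw(const void *addr)
{
	__builtin_prefetch(addr, 1);	/* write intent */
}

/*
 * Mirror of the new helper's shape: only touch struct page when
 * page_address() would actually dereference it.
 */
static inline void model_prefetch_page_address(struct page *page)
{
#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
	model_prefetch(page);
#else
	(void)page;			/* page_address() is pure arithmetic here */
#endif
}

int main(void)
{
	model_prefetch_page_address(NULL);	/* compiles to nothing by default */
	return 0;
}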
From patchwork Mon Sep 14 17:32:22 2020
X-Patchwork-Id: 1363839
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Aaron Brown
Subject: [net-next v2 3/5] i40e, xsk: remove HW descriptor prefetch in AF_XDP path
Date: Mon, 14 Sep 2020 10:32:22 -0700
Message-Id: <20200914173224.692707-4-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

The software prefetching of HW descriptors has a negative impact on
performance in the AF_XDP zero-copy path, so remove it there. Performance
for the rx_drop benchmark increased by 2%.

Signed-off-by: Björn Töpel
Tested-by: Aaron Brown
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c        | 13 +++++++++++++
 drivers/net/ethernet/intel/i40e/i40e_txrx_common.h | 13 -------------
 drivers/net/ethernet/intel/i40e/i40e_xsk.c         | 12 ++++++++++++
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8500e1c1a16b..b43bc20f701d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2295,6 +2295,19 @@ void i40e_finalize_xdp_rx(struct i40e_ring *rx_ring, unsigned int xdp_res)
 	}
 }
 
+/**
+ * i40e_inc_ntc: Advance the next_to_clean index
+ * @rx_ring: Rx ring
+ **/
+static void i40e_inc_ntc(struct i40e_ring *rx_ring)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+	prefetch(I40E_RX_DESC(rx_ring, ntc));
+}
+
 /**
  * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
  * @rx_ring: rx descriptor ring to transact packets on
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
index 667c4dc4b39f..1397dd3c1c57 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
@@ -99,19 +99,6 @@ static inline bool i40e_rx_is_programming_status(u64 qword1)
 	return qword1 & I40E_RXD_QW1_LENGTH_SPH_MASK;
 }
 
-/**
- * i40e_inc_ntc: Advance the next_to_clean index
- * @rx_ring: Rx ring
- **/
-static inline void i40e_inc_ntc(struct i40e_ring *rx_ring)
-{
-	u32 ntc = rx_ring->next_to_clean + 1;
-
-	ntc = (ntc < rx_ring->count) ? ntc : 0;
-	rx_ring->next_to_clean = ntc;
-	prefetch(I40E_RX_DESC(rx_ring, ntc));
-}
-
 void i40e_xsk_clean_rx_ring(struct i40e_ring *rx_ring);
 void i40e_xsk_clean_tx_ring(struct i40e_ring *tx_ring);
 bool i40e_xsk_any_rx_ring_enabled(struct i40e_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 2a1153d8957b..cf48758447c2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -257,6 +257,18 @@ static struct sk_buff *i40e_construct_skb_zc(struct i40e_ring *rx_ring,
 	return skb;
 }
 
+/**
+ * i40e_inc_ntc: Advance the next_to_clean index
+ * @rx_ring: Rx ring
+ **/
+static void i40e_inc_ntc(struct i40e_ring *rx_ring)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+}
+
 /**
  * i40e_clean_rx_irq_zc - Consumes Rx packets from the hardware ring
  * @rx_ring: Rx ring
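
The helper duplicated above is just a ring-index advance that uses a compare instead of a modulo; the AF_XDP copy simply omits the descriptor prefetch that this patch found to hurt performance. A minimal userspace model of the index arithmetic, with illustrative names and not driver code:

/*
 * Minimal userspace model of the index advance (not driver code);
 * struct ring and ring_inc_ntc() are illustrative stand-ins.
 */
#include <assert.h>
#include <stdint.h>

struct ring {
	uint32_t next_to_clean;
	uint32_t count;
};

static void ring_inc_ntc(struct ring *r)
{
	uint32_t ntc = r->next_to_clean + 1;

	ntc = (ntc < r->count) ? ntc : 0;	/* wrap without a '%' */
	r->next_to_clean = ntc;
}

int main(void)
{
	struct ring r = { .next_to_clean = 0, .count = 4 };

	ring_inc_ntc(&r);		/* 1 */
	ring_inc_ntc(&r);		/* 2 */
	ring_inc_ntc(&r);		/* 3 */
	ring_inc_ntc(&r);		/* wraps back to 0 */
	assert(r.next_to_clean == 0);
	return 0;
}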
From patchwork Mon Sep 14 17:32:23 2020
X-Patchwork-Id: 1363838
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Aaron Brown
Subject: [net-next v2 4/5] i40e: use 16B HW descriptors instead of 32B
Date: Mon, 14 Sep 2020 10:32:23 -0700
Message-Id: <20200914173224.692707-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

The i40e NIC supports two flavors of HW descriptors, 16 and 32 byte. The
latter obviously has room for more offloading information. However, the
only fields of the 32B HW descriptor that the driver actually uses are
also available in the 16B descriptor; in other words, reading and writing
32 bytes instead of 16 is a waste of bus bandwidth.

This commit starts using 16 byte descriptors instead of 32 byte
descriptors.

For AF_XDP the rx_drop benchmark was improved by 2%.

Signed-off-by: Björn Töpel
Tested-by: Aaron Brown
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e.h         |  2 +-
 drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 10 ++++------
 drivers/net/ethernet/intel/i40e/i40e_main.c    |  4 ++--
 drivers/net/ethernet/intel/i40e/i40e_trace.h   |  6 +++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    |  6 +++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |  2 +-
 drivers/net/ethernet/intel/i40e/i40e_type.h    |  5 ++++-
 7 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index a7e212d1caa2..ada0e93c38f0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -90,7 +90,7 @@
 #define I40E_OEM_RELEASE_MASK		0x0000ffff
 #define I40E_RX_DESC(R, i)	\
-	(&(((union i40e_32byte_rx_desc *)((R)->desc))[i]))
+	(&(((union i40e_rx_desc *)((R)->desc))[i]))
 #define I40E_TX_DESC(R, i)	\
 	(&(((struct i40e_tx_desc *)((R)->desc))[i]))
 #define I40E_TX_CTXTDESC(R, i)	\
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index d3ad2e3aa838..d7c13ca9be7d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -604,10 +604,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
 			} else {
 				rxd = I40E_RX_DESC(ring, i);
 				dev_info(&pf->pdev->dev,
-					 "   d[%03x] = 0x%016llx 0x%016llx 0x%016llx 0x%016llx\n",
+					 "   d[%03x] = 0x%016llx 0x%016llx\n",
 					 i, rxd->read.pkt_addr,
-					 rxd->read.hdr_addr,
-					 rxd->read.rsvd1, rxd->read.rsvd2);
+					 rxd->read.hdr_addr);
 			}
 		}
 	} else if (cnt == 3) {
@@ -625,10 +624,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
 		} else {
 			rxd = I40E_RX_DESC(ring, desc_n);
 			dev_info(&pf->pdev->dev,
-				 "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx 0x%016llx 0x%016llx\n",
+				 "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
 				 vsi_seid, ring_id, desc_n,
-				 rxd->read.pkt_addr, rxd->read.hdr_addr,
-				 rxd->read.rsvd1, rxd->read.rsvd2);
+				 rxd->read.pkt_addr, rxd->read.hdr_addr);
 		}
 	} else {
 		dev_info(&pf->pdev->dev, "dump desc rx/tx/xdp <vsi_seid> <ring_id> [<desc_n>]\n");
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 9cfaa99da4e6..07207e21874f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -3321,8 +3321,8 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
 	rx_ctx.base = (ring->dma / 128);
 	rx_ctx.qlen = ring->count;
 
-	/* use 32 byte descriptors */
-	rx_ctx.dsize = 1;
+	/* use 16 byte descriptors */
+	rx_ctx.dsize = 0;
 
 	/* descriptor type is always zero
 	 * rx_ctx.dtype = 0;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_trace.h b/drivers/net/ethernet/intel/i40e/i40e_trace.h
index 424f02077e2e..983f8b98b275 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_trace.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_trace.h
@@ -112,7 +112,7 @@ DECLARE_EVENT_CLASS(
 	i40e_rx_template,
 
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb),
@@ -140,7 +140,7 @@ DECLARE_EVENT_CLASS(
 DEFINE_EVENT(
 	i40e_rx_template, i40e_clean_rx_irq,
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb));
@@ -148,7 +148,7 @@ DEFINE_EVENT(
 	i40e_rx_template, i40e_clean_rx_irq_rx,
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb));
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index b43bc20f701d..1606ba5318f7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -533,11 +533,11 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring, u64 qword0_raw,
 {
 	struct i40e_pf *pf = rx_ring->vsi->back;
 	struct pci_dev *pdev = pf->pdev;
-	struct i40e_32b_rx_wb_qw0 *qw0;
+	struct i40e_16b_rx_wb_qw0 *qw0;
 	u32 fcnt_prog, fcnt_avail;
 	u32 error;
 
-	qw0 = (struct i40e_32b_rx_wb_qw0 *)&qword0_raw;
+	qw0 = (struct i40e_16b_rx_wb_qw0 *)&qword0_raw;
 	error = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK) >>
 		I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT;
 
@@ -1418,7 +1418,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
 	u64_stats_init(&rx_ring->syncp);
 
 	/* Round up to nearest 4K */
-	rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+	rx_ring->size = rx_ring->count * sizeof(union i40e_rx_desc);
 	rx_ring->size = ALIGN(rx_ring->size, 4096);
 	rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
 					   &rx_ring->dma, GFP_KERNEL);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 703b644fd71f..66c2b92c0d10 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -110,7 +110,7 @@ enum i40e_dyn_idx_t {
  */
 #define I40E_RX_HDR_SIZE	I40E_RXBUFFER_256
 #define I40E_PACKET_HDR_PAD	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
-#define i40e_rx_desc i40e_32byte_rx_desc
+#define i40e_rx_desc i40e_16byte_rx_desc
 
 #define I40E_RX_DMA_ATTR \
	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 52410d609ba1..97d29df65f9e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -628,7 +628,7 @@ union i40e_16byte_rx_desc {
 		__le64 hdr_addr; /* Header buffer address */
 	} read;
 	struct {
-		struct {
+		struct i40e_16b_rx_wb_qw0 {
 			struct {
 				union {
 					__le16 mirroring_status;
@@ -647,6 +647,9 @@ union i40e_16byte_rx_desc {
 			__le64 status_error_len;
 		} qword1;
 	} wb;  /* writeback */
+	struct {
+		u64 qword[2];
+	} raw;
 };
 
 union i40e_32byte_rx_desc {
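
To illustrate why the switch is essentially free, here is a simplified, self-contained model of the two descriptor flavors in plain C; the union and field names are illustrative and not the real i40e layouts. The point it demonstrates is the one the commit message makes: everything the driver reads on Rx fits in the first 16 bytes, so the 32-byte flavor only adds reserved words that the bus still has to carry.

/*
 * Simplified model of the two descriptor flavors (illustrative field
 * names, not the real i40e layouts): the driver-used Rx fields fit in
 * the first 16 bytes, the 32-byte flavor only appends reserved words.
 */
#include <stdint.h>

union model_16byte_rx_desc {
	struct {
		uint64_t pkt_addr;		/* packet buffer address */
		uint64_t hdr_addr;		/* header buffer address */
	} read;
	struct {
		uint64_t qword0;		/* hash, flow director, ... */
		uint64_t status_error_len;	/* qword1 */
	} wb;
	uint64_t raw[2];
};

union model_32byte_rx_desc {
	struct {
		uint64_t pkt_addr;
		uint64_t hdr_addr;
		uint64_t rsvd1;			/* never read by the driver */
		uint64_t rsvd2;			/* never read by the driver */
	} read;
	uint64_t raw[4];
};

_Static_assert(sizeof(union model_16byte_rx_desc) == 16, "16B flavor");
_Static_assert(sizeof(union model_32byte_rx_desc) == 32, "32B flavor");

int main(void)
{
	/* a ring of N entries now moves 16 * N bytes over the bus, not 32 * N */
	return 0;
}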
From patchwork Mon Sep 14 17:32:24 2020
X-Patchwork-Id: 1363837
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Aaron Brown
Subject: [net-next v2 5/5] i40e, xsk: move buffer allocation out of the Rx processing loop
Date: Mon, 14 Sep 2020 10:32:24 -0700
Message-Id: <20200914173224.692707-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

Instead of checking whether the Rx ring needs to be refilled in every
iteration of the Rx packet processing loop, move the buffer allocation
out of the loop and do it once per NAPI activation.

For AF_XDP the rx_drop benchmark was improved by 6%.

Signed-off-by: Björn Töpel
Tested-by: Aaron Brown
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index cf48758447c2..6acede0acdca 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -281,8 +281,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
+	bool failure;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union i40e_rx_desc *rx_desc;
@@ -290,13 +290,6 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		unsigned int size;
 		u64 qword;
 
-		if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
-			failure = failure ||
-				  !i40e_alloc_rx_buffers_zc(rx_ring,
-							    cleaned_count);
-			cleaned_count = 0;
-		}
-
 		rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
 
@@ -371,6 +364,9 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		napi_gro_receive(&rx_ring->q_vector->napi, skb);
 	}
 
+	if (cleaned_count >= I40E_RX_BUFFER_WRITE)
+		failure = !i40e_alloc_rx_buffers_zc(rx_ring, cleaned_count);
+
 	i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
 	i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
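
As a rough sketch of the control-flow change, here is a minimal userspace model in plain C; REFILL_THRESHOLD and refill() are illustrative stand-ins for I40E_RX_BUFFER_WRITE and i40e_alloc_rx_buffers_zc(), and the failure/budget return convention is only assumed to follow the usual NAPI style. The point is that the refill check moves from once per packet to once per poll, so the hot loop no longer re-tests the threshold for every descriptor.

/*
 * Minimal userspace model of the control-flow change (not driver code).
 * REFILL_THRESHOLD stands in for I40E_RX_BUFFER_WRITE and refill() for
 * i40e_alloc_rx_buffers_zc(); the failure/budget return convention is
 * only assumed to mirror the usual NAPI style.
 */
#include <stdbool.h>
#include <stdio.h>

#define REFILL_THRESHOLD 32	/* stand-in for I40E_RX_BUFFER_WRITE */

static bool refill(unsigned int n)
{
	printf("refilling %u buffers\n", n);
	return true;		/* pretend allocation always succeeds */
}

static int poll(unsigned int budget, unsigned int cleaned_count)
{
	unsigned int rx_packets = 0;
	bool failure = false;

	while (rx_packets < budget) {
		/* the old code re-tested the refill threshold here, every packet */
		cleaned_count++;	/* one descriptor consumed */
		rx_packets++;
	}

	/* new placement: one refill attempt per NAPI poll, after the loop */
	if (cleaned_count >= REFILL_THRESHOLD)
		failure = !refill(cleaned_count);

	return failure ? (int)budget : (int)rx_packets;
}

int main(void)
{
	return poll(64, 0) == 64 ? 0 : 1;
}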