From patchwork Tue Jul 28 19:08:37 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337974
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Li RongQing, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 1/6] i40e: not compute affinity_mask for IRQ
Date: Tue, 28 Jul 2020 12:08:37 -0700
Message-Id: <20200728190842.1284145-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Li RongQing

After commit 759dc4a7e605 ("i40e: initialize our affinity_mask based on
cpu_possible_mask"), the NAPI IRQ affinity_mask is bound to all possible
CPUs rather than a fixed CPU, so the per-vector CPU no longer needs to be
computed when allocating q_vectors.

Signed-off-by: Li RongQing
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_main.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index dadbfb3d2a2b..523c07961b4b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -11185,11 +11185,10 @@ static int i40e_init_msix(struct i40e_pf *pf)
  * i40e_vsi_alloc_q_vector - Allocate memory for a single interrupt vector
  * @vsi: the VSI being configured
  * @v_idx: index of the vector in the vsi struct
- * @cpu: cpu to be used on affinity_mask
  *
  * We allocate one q_vector. If allocation fails we return -ENOMEM.
  **/
-static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx, int cpu)
+static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
 {
 	struct i40e_q_vector *q_vector;
 
@@ -11222,7 +11221,7 @@ static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx, int cpu)
 static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
 {
 	struct i40e_pf *pf = vsi->back;
-	int err, v_idx, num_q_vectors, current_cpu;
+	int err, v_idx, num_q_vectors;
 
 	/* if not MSIX, give the one vector only to the LAN VSI */
 	if (pf->flags & I40E_FLAG_MSIX_ENABLED)
@@ -11232,15 +11231,10 @@ static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
 	else
 		return -EINVAL;
 
-	current_cpu = cpumask_first(cpu_online_mask);
-
 	for (v_idx = 0; v_idx < num_q_vectors; v_idx++) {
-		err = i40e_vsi_alloc_q_vector(vsi, v_idx, current_cpu);
+		err = i40e_vsi_alloc_q_vector(vsi, v_idx);
 		if (err)
 			goto err_out;
-		current_cpu = cpumask_next(current_cpu, cpu_online_mask);
-		if (unlikely(current_cpu >= nr_cpu_ids))
-			current_cpu = cpumask_first(cpu_online_mask);
 	}
 
 	return 0;
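A minimal userspace sketch (not kernel code) of the round-robin CPU walk this patch deletes. NR_CPUS_DEMO and NUM_Q_VECTORS are made-up values for the illustration; the point is that once the affinity mask already spans all possible CPUs, rotating one CPU per vector buys nothing.

#include <stdio.h>

#define NR_CPUS_DEMO     8	/* assumed CPU count, illustration only */
#define NUM_Q_VECTORS   12	/* assumed number of MSI-X vectors */

int main(void)
{
	int cpu = 0;
	int v_idx;

	for (v_idx = 0; v_idx < NUM_Q_VECTORS; v_idx++) {
		/* old scheme: each vector gets a single, rotating CPU hint */
		printf("vector %2d -> cpu %d\n", v_idx, cpu);
		cpu = (cpu + 1) % NR_CPUS_DEMO;	/* wrap, like cpumask_next()/cpumask_first() */
	}
	/* new scheme: every vector's affinity mask simply covers all possible
	 * CPUs, so no per-vector cpu is computed or passed around.
	 */
	return 0;
}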
From patchwork Tue Jul 28 19:08:38 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337969
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Li RongQing, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 2/6] i40e: prefetch struct page of Rx buffer conditionally
Date: Tue, 28 Jul 2020 12:08:38 -0700
Message-Id: <20200728190842.1284145-3-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Li RongQing

page_address() dereferences struct page only when WANT_PAGE_VIRTUAL or
HASHED_PAGE_VIRTUAL is defined; otherwise it computes the address from
the page's offset. So only prefetch the struct page in the configurations
where it will actually be read.

Signed-off-by: Li RongQing
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 3e5c566ceb01..5d408fe26063 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1953,7 +1953,9 @@ static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,
 	struct i40e_rx_buffer *rx_buffer;
 
 	rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
+#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
 	prefetchw(rx_buffer->page);
+#endif
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev,
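A standalone sketch of the same guard pattern, with stand-in types (struct page_stub is not struct page, and the prefetchw() fallback here is just the GCC/Clang builtin). It shows how the prefetch compiles away entirely when the macros that make page_address() touch struct page are not defined.

#ifndef prefetchw
#define prefetchw(p)	__builtin_prefetch((p), 1)	/* write-intent prefetch */
#endif

struct page_stub {		/* stand-in for struct page */
	void *virtual_addr;
};

static void get_rx_buffer_stub(struct page_stub *page)
{
#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
	/* page_address() will read struct page, so warming the line helps */
	prefetchw(page);
#else
	(void)page;	/* address is computed arithmetically; a prefetch is wasted */
#endif
}

int main(void)
{
	struct page_stub p = { 0 };

	get_rx_buffer_stub(&p);
	return 0;
}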
From patchwork Tue Jul 28 19:08:39 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337970
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 3/6] i40e, xsk: remove HW descriptor prefetch in AF_XDP path
Date: Tue, 28 Jul 2020 12:08:39 -0700
Message-Id: <20200728190842.1284145-4-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

Software prefetching of HW Rx descriptors has a negative impact on
performance, so remove it from the AF_XDP path: i40e_inc_ntc() is no
longer shared between the two Rx paths, and the AF_XDP copy drops the
prefetch. Performance for the rx_drop benchmark increased by 2%.

Signed-off-by: Björn Töpel
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c        | 13 +++++++++++++
 drivers/net/ethernet/intel/i40e/i40e_txrx_common.h | 13 -------------
 drivers/net/ethernet/intel/i40e/i40e_xsk.c         | 12 ++++++++++++
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 5d408fe26063..31dc1a011b2a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2301,6 +2301,19 @@ void i40e_finalize_xdp_rx(struct i40e_ring *rx_ring, unsigned int xdp_res)
 	}
 }
 
+/**
+ * i40e_inc_ntc: Advance the next_to_clean index
+ * @rx_ring: Rx ring
+ **/
+static void i40e_inc_ntc(struct i40e_ring *rx_ring)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+	prefetch(I40E_RX_DESC(rx_ring, ntc));
+}
+
 /**
  * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
  * @rx_ring: rx descriptor ring to transact packets on
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
index 667c4dc4b39f..1397dd3c1c57 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
@@ -99,19 +99,6 @@ static inline bool i40e_rx_is_programming_status(u64 qword1)
 	return qword1 & I40E_RXD_QW1_LENGTH_SPH_MASK;
 }
 
-/**
- * i40e_inc_ntc: Advance the next_to_clean index
- * @rx_ring: Rx ring
- **/
-static inline void i40e_inc_ntc(struct i40e_ring *rx_ring)
-{
-	u32 ntc = rx_ring->next_to_clean + 1;
-
-	ntc = (ntc < rx_ring->count) ? ntc : 0;
-	rx_ring->next_to_clean = ntc;
-	prefetch(I40E_RX_DESC(rx_ring, ntc));
-}
-
 void i40e_xsk_clean_rx_ring(struct i40e_ring *rx_ring);
 void i40e_xsk_clean_tx_ring(struct i40e_ring *tx_ring);
 bool i40e_xsk_any_rx_ring_enabled(struct i40e_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 8ce57b507a21..1f2dd591dbf1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -253,6 +253,18 @@ static struct sk_buff *i40e_construct_skb_zc(struct i40e_ring *rx_ring,
 	return skb;
 }
 
+/**
+ * i40e_inc_ntc: Advance the next_to_clean index
+ * @rx_ring: Rx ring
+ **/
+static void i40e_inc_ntc(struct i40e_ring *rx_ring)
+{
+	u32 ntc = rx_ring->next_to_clean + 1;
+
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
+}
+
 /**
  * i40e_clean_rx_irq_zc - Consumes Rx packets from the hardware ring
  * @rx_ring: Rx ring
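For reference, a tiny self-contained sketch of the wrap logic that the AF_XDP copy of i40e_inc_ntc() keeps once the prefetch is gone. struct ring_stub and the ring size are stand-ins, not the driver's types.

#include <assert.h>
#include <stdint.h>

struct ring_stub {
	uint32_t next_to_clean;
	uint32_t count;
};

/* mirror of the prefetch-free increment: advance by one and wrap at count */
static void inc_ntc(struct ring_stub *rx_ring)
{
	uint32_t ntc = rx_ring->next_to_clean + 1;

	rx_ring->next_to_clean = (ntc < rx_ring->count) ? ntc : 0;
}

int main(void)
{
	struct ring_stub ring = { .next_to_clean = 511, .count = 512 };

	inc_ntc(&ring);
	assert(ring.next_to_clean == 0);	/* wraps back to the start of the ring */
	return 0;
}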
From patchwork Tue Jul 28 19:08:40 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337971
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 4/6] i40e: use 16B HW descriptors instead of 32B
Date: Tue, 28 Jul 2020 12:08:40 -0700
Message-Id: <20200728190842.1284145-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

The i40e NIC supports two flavors of HW descriptors: 16 and 32 byte. The
latter obviously has room for more offload information. However, the only
fields of the 32B descriptor that the driver actually uses are also
available in the 16B descriptor; in other words, reading and writing
32 bytes instead of 16 is a waste of bus bandwidth.

This commit switches the driver to 16 byte descriptors. For AF_XDP the
rx_drop benchmark improved by 2%.

Signed-off-by: Björn Töpel
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e.h         |  2 +-
 drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 10 ++++------
 drivers/net/ethernet/intel/i40e/i40e_main.c    |  4 ++--
 drivers/net/ethernet/intel/i40e/i40e_trace.h   |  6 +++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c    |  6 +++---
 drivers/net/ethernet/intel/i40e/i40e_txrx.h    |  2 +-
 drivers/net/ethernet/intel/i40e/i40e_type.h    |  5 ++++-
 7 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index a7e212d1caa2..ada0e93c38f0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -90,7 +90,7 @@
 #define I40E_OEM_RELEASE_MASK		0x0000ffff
 
 #define I40E_RX_DESC(R, i)	\
-	(&(((union i40e_32byte_rx_desc *)((R)->desc))[i]))
+	(&(((union i40e_rx_desc *)((R)->desc))[i]))
 #define I40E_TX_DESC(R, i)	\
 	(&(((struct i40e_tx_desc *)((R)->desc))[i]))
 #define I40E_TX_CTXTDESC(R, i)	\
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index d3ad2e3aa838..d7c13ca9be7d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -604,10 +604,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
 		} else {
 			rxd = I40E_RX_DESC(ring, i);
 			dev_info(&pf->pdev->dev,
-				 " d[%03x] = 0x%016llx 0x%016llx 0x%016llx 0x%016llx\n",
+				 " d[%03x] = 0x%016llx 0x%016llx\n",
 				 i, rxd->read.pkt_addr,
-				 rxd->read.hdr_addr,
-				 rxd->read.rsvd1, rxd->read.rsvd2);
+				 rxd->read.hdr_addr);
 		}
 	}
 	} else if (cnt == 3) {
@@ -625,10 +624,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
 		} else {
 			rxd = I40E_RX_DESC(ring, desc_n);
 			dev_info(&pf->pdev->dev,
-				 "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx 0x%016llx 0x%016llx\n",
+				 "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
 				 vsi_seid, ring_id, desc_n,
-				 rxd->read.pkt_addr, rxd->read.hdr_addr,
-				 rxd->read.rsvd1, rxd->read.rsvd2);
+				 rxd->read.pkt_addr, rxd->read.hdr_addr);
 		}
 	} else {
 		dev_info(&pf->pdev->dev, "dump desc rx/tx/xdp []\n");
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 523c07961b4b..cc4d7c03bdd5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -3320,8 +3320,8 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
 	rx_ctx.base = (ring->dma / 128);
 	rx_ctx.qlen = ring->count;
 
-	/* use 32 byte descriptors */
-	rx_ctx.dsize = 1;
+	/* use 16 byte descriptors */
+	rx_ctx.dsize = 0;
 
 	/* descriptor type is always zero
 	 * rx_ctx.dtype = 0;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_trace.h b/drivers/net/ethernet/intel/i40e/i40e_trace.h
index 424f02077e2e..983f8b98b275 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_trace.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_trace.h
@@ -112,7 +112,7 @@ DECLARE_EVENT_CLASS(
 	i40e_rx_template,
 
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
 		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb),
@@ -140,7 +140,7 @@ DECLARE_EVENT_CLASS(
 DEFINE_EVENT(
 	i40e_rx_template, i40e_clean_rx_irq,
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
 		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb));
@@ -148,7 +148,7 @@ DEFINE_EVENT(
 	i40e_rx_template, i40e_clean_rx_irq_rx,
 	TP_PROTO(struct i40e_ring *ring,
-		 union i40e_32byte_rx_desc *desc,
+		 union i40e_16byte_rx_desc *desc,
 		 struct sk_buff *skb),
 
 	TP_ARGS(ring, desc, skb));
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 31dc1a011b2a..2fb6dddf9106 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -533,11 +533,11 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring, u64 qword0_raw,
 {
 	struct i40e_pf *pf = rx_ring->vsi->back;
 	struct pci_dev *pdev = pf->pdev;
-	struct i40e_32b_rx_wb_qw0 *qw0;
+	struct i40e_16b_rx_wb_qw0 *qw0;
 	u32 fcnt_prog, fcnt_avail;
 	u32 error;
 
-	qw0 = (struct i40e_32b_rx_wb_qw0 *)&qword0_raw;
+	qw0 = (struct i40e_16b_rx_wb_qw0 *)&qword0_raw;
 	error = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK) >>
 		I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT;
 
@@ -1418,7 +1418,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
 	u64_stats_init(&rx_ring->syncp);
 
 	/* Round up to nearest 4K */
-	rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+	rx_ring->size = rx_ring->count * sizeof(union i40e_rx_desc);
 	rx_ring->size = ALIGN(rx_ring->size, 4096);
 	rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
 					   &rx_ring->dma, GFP_KERNEL);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 4036893d6825..0eacd5f21e9d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -110,7 +110,7 @@ enum i40e_dyn_idx_t {
  */
 #define I40E_RX_HDR_SIZE	I40E_RXBUFFER_256
 #define I40E_PACKET_HDR_PAD	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
-#define i40e_rx_desc i40e_32byte_rx_desc
+#define i40e_rx_desc i40e_16byte_rx_desc
 
 #define I40E_RX_DMA_ATTR \
 	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 52410d609ba1..97d29df65f9e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -628,7 +628,7 @@ union i40e_16byte_rx_desc {
 		__le64 hdr_addr; /* Header buffer address */
 	} read;
 	struct {
-		struct {
+		struct i40e_16b_rx_wb_qw0 {
 			struct {
 				union {
 					__le16 mirroring_status;
@@ -647,6 +647,9 @@ union i40e_16byte_rx_desc {
 			__le64 status_error_len;
 		} qword1;
 	} wb;  /* writeback */
+	struct {
+		u64 qword[2];
+	} raw;
 };
 
 union i40e_32byte_rx_desc {
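To make the bandwidth argument concrete, here is a small standalone sketch. The two unions below are simplified stand-ins, not the exact i40e descriptor layouts, and the ring size of 512 is assumed for illustration; the point is simply that halving the descriptor size halves the bytes moved per ring slot.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

union rx_desc_16b {
	struct { uint64_t pkt_addr, hdr_addr; } read;	/* fields the driver uses */
	uint64_t qword[2];
};

union rx_desc_32b {
	struct { uint64_t pkt_addr, hdr_addr, rsvd1, rsvd2; } read;
	uint64_t qword[4];
};

int main(void)
{
	unsigned int ring_count = 512;	/* assumed ring size, illustration only */

	static_assert(sizeof(union rx_desc_16b) == 16, "16B flavor");
	static_assert(sizeof(union rx_desc_32b) == 32, "32B flavor");

	/* with 16B descriptors, the NIC and CPU touch half as many bytes per slot */
	printf("ring bytes: 16B descs = %zu, 32B descs = %zu\n",
	       ring_count * sizeof(union rx_desc_16b),
	       ring_count * sizeof(union rx_desc_32b));
	return 0;
}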
From patchwork Tue Jul 28 19:08:41 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337972
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 5/6] i40e, xsk: increase budget for AF_XDP path
Date: Tue, 28 Jul 2020 12:08:41 -0700
Message-Id: <20200728190842.1284145-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

The NAPI budget, i.e. the number of received packets that may be
processed per NAPI invocation, assumes that each received packet is
destined for the kernel networking stack. That is not the case for the
AF_XDP receive path, where the per-packet cost is significantly lower.

Therefore, this commit disregards the NAPI budget in the zero-copy path
and uses a larger budget of 256. Processing 256 packets targeted for
AF_XDP is still less work than 64 (the NAPI budget) packets going to the
kernel networking stack.

Performance for the rx_drop scenario is up 7%.

Signed-off-by: Björn Töpel
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 1f2dd591dbf1..99f4afdc403d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -265,6 +265,8 @@ static void i40e_inc_ntc(struct i40e_ring *rx_ring)
 	rx_ring->next_to_clean = ntc;
 }
 
+#define I40E_XSK_CLEAN_RX_BUDGET	256U
+
 /**
  * i40e_clean_rx_irq_zc - Consumes Rx packets from the hardware ring
  * @rx_ring: Rx ring
@@ -280,7 +282,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	bool failure = false;
 	struct sk_buff *skb;
 
-	while (likely(total_rx_packets < (unsigned int)budget)) {
+	while (likely(total_rx_packets < I40E_XSK_CLEAN_RX_BUDGET)) {
 		union i40e_rx_desc *rx_desc;
 		struct xdp_buff **bi;
 		unsigned int size;
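A userspace sketch of the budget change, assuming the usual NAPI weight of 64; names and numbers are illustrative, not the driver's. It shows the zero-copy clean loop capping itself at its own, larger constant instead of honoring the caller's budget.

#include <stdio.h>

#define NAPI_BUDGET_DEMO        64	/* typical NAPI weight, for comparison */
#define XSK_CLEAN_RX_BUDGET    256U	/* larger cap used by the zero-copy loop */

static unsigned int clean_rx_zc(unsigned int pkts_on_ring)
{
	unsigned int done = 0;

	/* each iteration stands in for consuming one cheap AF_XDP descriptor */
	while (done < XSK_CLEAN_RX_BUDGET && done < pkts_on_ring)
		done++;
	return done;
}

int main(void)
{
	printf("napi budget %u, zero-copy loop drained %u packets\n",
	       NAPI_BUDGET_DEMO, clean_rx_zc(300));
	return 0;
}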
From patchwork Tue Jul 28 19:08:42 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 1337973
X-Patchwork-Delegate: davem@davemloft.net
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
 anthony.l.nguyen@intel.com, Andrew Bowers
Subject: [net-next 6/6] i40e, xsk: move buffer allocation out of the Rx processing loop
Date: Tue, 28 Jul 2020 12:08:42 -0700
Message-Id: <20200728190842.1284145-7-anthony.l.nguyen@intel.com>
In-Reply-To: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>
References: <20200728190842.1284145-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

Instead of checking the buffer-refill condition in each iteration of the
Rx packet processing loop, move the allocation out of the loop and do it
once per NAPI activation. For AF_XDP the rx_drop benchmark improved
by 6%.

Signed-off-by: Björn Töpel
Tested-by: Andrew Bowers
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 99f4afdc403d..91aee16fbe72 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -279,8 +279,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
+	bool failure;
 
 	while (likely(total_rx_packets < I40E_XSK_CLEAN_RX_BUDGET)) {
 		union i40e_rx_desc *rx_desc;
@@ -288,13 +288,6 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		struct xdp_buff **bi;
 		unsigned int size;
 		u64 qword;
 
-		if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
-			failure = failure ||
-				  !i40e_alloc_rx_buffers_zc(rx_ring,
-							    cleaned_count);
-			cleaned_count = 0;
-		}
-
 		rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
@@ -369,6 +362,9 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		napi_gro_receive(&rx_ring->q_vector->napi, skb);
 	}
 
+	if (cleaned_count >= I40E_RX_BUFFER_WRITE)
+		failure = !i40e_alloc_rx_buffers_zc(rx_ring, cleaned_count);
+
 	i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
 	i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
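A minimal userspace sketch of the refactor under the assumption of an always-successful allocator; function names and the threshold are illustrative, not the driver's API. The per-iteration refill check disappears from the hot loop, and one refill runs after the loop, once per NAPI invocation.

#include <stdbool.h>
#include <stdio.h>

#define RX_BUFFER_WRITE 32	/* refill threshold, as in the driver */

static bool alloc_rx_buffers(unsigned int count)
{
	printf("refilling %u buffers once, after the loop\n", count);
	return true;		/* pretend allocation always succeeds */
}

static void clean_rx(unsigned int budget)
{
	unsigned int cleaned_count = 0;
	bool failure = false;	/* initialized here for the sketch */
	unsigned int i;

	for (i = 0; i < budget; i++)
		cleaned_count++;	/* stand-in for consuming one Rx descriptor */

	/* hoisted out of the loop: a single refill per NAPI activation */
	if (cleaned_count >= RX_BUFFER_WRITE)
		failure = !alloc_rx_buffers(cleaned_count);

	(void)failure;
}

int main(void)
{
	clean_rx(256);
	return 0;
}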