From patchwork Mon Sep 14 17:32:24 2020
From: Tony Nguyen
To: davem@davemloft.net
Cc: Björn Töpel, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
    jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Aaron Brown
Subject: [net-next v2 5/5] i40e, xsk: move buffer allocation out of the Rx processing loop
Date: Mon, 14 Sep 2020 10:32:24 -0700
Message-Id: <20200914173224.692707-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20200914173224.692707-1-anthony.l.nguyen@intel.com>
References: <20200914173224.692707-1-anthony.l.nguyen@intel.com>

From: Björn Töpel

Instead of checking in each iteration of the Rx packet processing loop
whether Rx buffers need to be allocated, move the allocation out of the
loop and do it once per napi activation. For AF_XDP, the rx_drop
benchmark was improved by 6%.
Signed-off-by: Björn Töpel
Tested-by: Aaron Brown
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index cf48758447c2..6acede0acdca 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -281,8 +281,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
+	bool failure;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union i40e_rx_desc *rx_desc;
@@ -290,13 +290,6 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		unsigned int size;
 		u64 qword;
 
-		if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
-			failure = failure ||
-				  !i40e_alloc_rx_buffers_zc(rx_ring,
-							    cleaned_count);
-			cleaned_count = 0;
-		}
-
 		rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
 
@@ -371,6 +364,9 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		napi_gro_receive(&rx_ring->q_vector->napi, skb);
 	}
 
+	if (cleaned_count >= I40E_RX_BUFFER_WRITE)
+		failure = !i40e_alloc_rx_buffers_zc(rx_ring, cleaned_count);
+
 	i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
 	i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
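
[Editor's illustration] The sketch below is a minimal, stand-alone C program that
illustrates the restructuring the patch describes: the batched buffer refill is
performed once per poll invocation, after the packet-processing loop, instead of
being re-checked on every iteration. It is not the driver's real code; the helpers
refill_rx_buffers() and rx_desc_ready(), the descs_pending counter, and the
constants are hypothetical stand-ins for the i40e internals, and `failure` is
initialized here to keep the example self-contained.

#include <stdbool.h>
#include <stdio.h>

#define RX_BUFFER_WRITE 32	/* refill threshold, stand-in for I40E_RX_BUFFER_WRITE */

/* Hypothetical driver state: packets currently waiting on the ring. */
static unsigned int descs_pending = 100;

/* Hypothetical stand-in for "is there a completed descriptor to process?" */
static bool rx_desc_ready(void)
{
	return descs_pending > 0;
}

/* Hypothetical stand-in for the batched allocation (i40e_alloc_rx_buffers_zc). */
static bool refill_rx_buffers(unsigned int count)
{
	printf("refilling %u buffers in one batch\n", count);
	return true;	/* pretend the allocation succeeded */
}

/* Poll routine shaped like the patched i40e_clean_rx_irq_zc(). */
static int rx_poll(int budget)
{
	unsigned int total_rx_packets = 0;
	unsigned int cleaned_count = 0;
	bool failure = false;

	while (total_rx_packets < (unsigned int)budget) {
		if (!rx_desc_ready())
			break;

		/* ...process one descriptor (omitted)... */
		descs_pending--;
		cleaned_count++;
		total_rx_packets++;
	}

	/* Single batched refill per poll, moved out of the per-packet loop. */
	if (cleaned_count >= RX_BUFFER_WRITE)
		failure = !refill_rx_buffers(cleaned_count);

	return failure ? budget : (int)total_rx_packets;
}

int main(void)
{
	printf("cleaned %d packets\n", rx_poll(64));
	return 0;
}

The point of the restructuring is that the threshold check and refill run once per
napi activation rather than once per received packet, which is where the reported
rx_drop improvement comes from.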