From patchwork Fri Jul 13 16:18:35 2018
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 943721
X-Patchwork-Delegate: davem@davemloft.net
From: Gregory CLEMENT
To: "David S. Miller", linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Thomas Petazzoni, linux-arm-kernel@lists.infradead.org, Jason Cooper,
    Andrew Lunn, Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
    Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
    Miquèl Raynal, Maxime Chevallier
Subject: [PATCH net-next v2 1/7] net: ethernet: mvneta: Fix napi structure
 mixup on armada 3700
Date: Fri, 13 Jul 2018 18:18:35 +0200
Message-Id: <20180713161841.11202-2-gregory.clement@bootlin.com>
In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com>
References: <20180713161841.11202-1-gregory.clement@bootlin.com>

From: Andrew Lunn

The mvneta Ethernet driver is used on a few different Marvell SoCs.
Some SoCs have per-CPU interrupts for Ethernet events; others have a
single interrupt, independent of the CPU. The driver handles this by
using a per-CPU napi structure when there are per-CPU interrupts, and
a global napi structure when there is a single interrupt.

When the napi core calls mvneta_poll(), it passes the napi instance.
That instance was not being propagated through the call chain;
instead, the per-CPU napi instance was passed to napi_gro_receive().
This breaks when there is a single, global napi instance.

Signed-off-by: Andrew Lunn
Fixes: 2636ac3cc2b4 ("net: mvneta: Add network support for Armada 3700 SoC")
Signed-off-by: Gregory CLEMENT
---
 drivers/net/ethernet/marvell/mvneta.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 0ad2f3f7da85..ec84db47d82d 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 }
 
 /* Main rx processing when using software buffer management */
-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
+static int mvneta_rx_swbm(struct napi_struct *napi,
+			  struct mvneta_port *pp, int rx_todo,
 			  struct mvneta_rx_queue *rxq)
 {
-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
 	int rx_done;
 	u32 rcvd_pkts = 0;
@@ -1959,7 +1959,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 
 			skb->protocol = eth_type_trans(skb, dev);
 			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(&port->napi, skb);
+			napi_gro_receive(napi, skb);
 
 			rcvd_pkts++;
 			rcvd_bytes += rx_bytes;
@@ -2001,7 +2001,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 
 		mvneta_rx_csum(pp, rx_status, skb);
 
-		napi_gro_receive(&port->napi, skb);
+		napi_gro_receive(napi, skb);
 	}
 
 	if (rcvd_pkts) {
@@ -2020,10 +2020,10 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 }
 
 /* Main rx processing when using hardware buffer management */
-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
+static int mvneta_rx_hwbm(struct napi_struct *napi,
+			  struct mvneta_port *pp, int rx_todo,
 			  struct mvneta_rx_queue *rxq)
 {
-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
 	int rx_done;
 	u32 rcvd_pkts = 0;
@@ -2085,7 +2085,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 
 			skb->protocol = eth_type_trans(skb, dev);
 			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(&port->napi, skb);
+			napi_gro_receive(napi, skb);
 
 			rcvd_pkts++;
 			rcvd_bytes += rx_bytes;
@@ -2129,7 +2129,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 
 		mvneta_rx_csum(pp, rx_status, skb);
 
-		napi_gro_receive(&port->napi, skb);
+		napi_gro_receive(napi, skb);
 	}
 
 	if (rcvd_pkts) {
@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	if (rx_queue) {
 		rx_queue = rx_queue - 1;
 		if (pp->bm_priv)
-			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
+			rx_done = mvneta_rx_hwbm(napi, pp, budget,
+						 &pp->rxqs[rx_queue]);
 		else
-			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
+			rx_done = mvneta_rx_swbm(napi, pp, budget,
+						 &pp->rxqs[rx_queue]);
 	}
 
 	if (rx_done < budget) {
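
[Note: for context, a minimal sketch of why the per-CPU lookup was wrong,
condensed from the driver's probe logic. This is illustrative only, not
verbatim driver code; exact details may differ.]

	/* Two napi setups coexist in this driver: */
	if (pp->neta_armada3700) {
		/* single shared interrupt: one global napi instance */
		netif_napi_add(dev, &pp->napi, mvneta_poll,
			       NAPI_POLL_WEIGHT);
	} else {
		/* per-CPU interrupts: one napi instance per CPU */
		for_each_present_cpu(cpu) {
			struct mvneta_pcpu_port *port =
				per_cpu_ptr(pp->ports, cpu);

			netif_napi_add(dev, &port->napi, mvneta_poll,
				       NAPI_POLL_WEIGHT);
		}
	}

On Armada 3700 only the global pp->napi is ever registered, so
this_cpu_ptr(pp->ports)->napi points at a napi structure that was never
initialized; the rx handlers must therefore use the instance handed to
mvneta_poll(), as the patch above does.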

From patchwork Fri Jul 13 16:18:36 2018
X-Patchwork-Id: 943728
From: Gregory CLEMENT
Subject: [PATCH net-next v2 2/7] net: mvneta: remove data pointer usage
 from device_node structure
Date: Fri, 13 Jul 2018 18:18:36 +0200
Message-Id: <20180713161841.11202-3-gregory.clement@bootlin.com>

One year ago, Rob Herring wanted to remove the data pointer from the
device_node structure [1]. The mvneta driver seemed to be the only one
still using (abusing?) it. However, Rob's proposal to remove the
pointer from the driver introduced a regression; I tested and fixed an
alternative approach at the time, but it was never submitted as a
proper patch. Now here it is.

Instead of using the device_node ->data pointer, we store the BM
private data as the driver data of the BM platform_device.
The core mvneta code can retrieve it by looking up which
platform_device corresponds to the BM device tree node with
of_find_device_by_node() and reading that device's driver data.

[1] https://www.spinics.net/lists/netdev/msg445197.html

Signed-off-by: Gregory CLEMENT
---
 drivers/net/ethernet/marvell/mvneta.c    | 18 ++++++++++++------
 drivers/net/ethernet/marvell/mvneta_bm.c | 15 +++++++++++++++
 drivers/net/ethernet/marvell/mvneta_bm.h |  5 +++++
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ec84db47d82d..f4e3943a745d 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -4462,12 +4462,16 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
-	if (bm_node && bm_node->data) {
-		pp->bm_priv = bm_node->data;
-		err = mvneta_bm_port_init(pdev, pp);
-		if (err < 0) {
-			dev_info(&pdev->dev, "use SW buffer management\n");
-			pp->bm_priv = NULL;
+	if (bm_node) {
+		pp->bm_priv = mvneta_bm_get(bm_node);
+		if (pp->bm_priv) {
+			err = mvneta_bm_port_init(pdev, pp);
+			if (err < 0) {
+				dev_info(&pdev->dev,
+					 "use SW buffer management\n");
+				mvneta_bm_put(pp->bm_priv);
+				pp->bm_priv = NULL;
+			}
 		}
 	}
 	of_node_put(bm_node);
@@ -4528,6 +4532,7 @@ static int mvneta_probe(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 err_free_stats:
 	free_percpu(pp->stats);
@@ -4565,6 +4570,7 @@ static int mvneta_remove(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.c b/drivers/net/ethernet/marvell/mvneta_bm.c
index 466939f8f0cf..de468e1bdba9 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.c
+++ b/drivers/net/ethernet/marvell/mvneta_bm.c
@@ -18,6 +18,7 @@
 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/skbuff.h>
 #include <net/hwbm.h>
@@ -392,6 +393,20 @@ static void mvneta_bm_put_sram(struct mvneta_bm *priv)
 			      MVNETA_BM_BPPI_SIZE);
 }
 
+struct mvneta_bm *mvneta_bm_get(struct device_node *node)
+{
+	struct platform_device *pdev = of_find_device_by_node(node);
+
+	return pdev ? platform_get_drvdata(pdev) : NULL;
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_get);
+
+void mvneta_bm_put(struct mvneta_bm *priv)
+{
+	platform_device_put(priv->pdev);
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_put);
+
 static int mvneta_bm_probe(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index a32de432800c..9358626e51ec 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -134,6 +134,9 @@ void *mvneta_frag_alloc(unsigned int frag_size);
 void mvneta_frag_free(unsigned int frag_size, void *data);
 
 #if IS_ENABLED(CONFIG_MVNETA_BM)
+struct mvneta_bm *mvneta_bm_get(struct device_node *node);
+void mvneta_bm_put(struct mvneta_bm *priv);
+
 void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
 			    struct mvneta_bm_pool *bm_pool, u8 port_map);
 void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
@@ -178,5 +181,7 @@ static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
 static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
 					struct mvneta_bm_pool *bm_pool)
 { return 0; }
+struct mvneta_bm *mvneta_bm_get(struct device_node *node) { return NULL; }
+void mvneta_bm_put(struct mvneta_bm *priv) {}
 #endif /* CONFIG_MVNETA_BM */
 #endif
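
[Note: the lookup used by mvneta_bm_get() is a generic pattern worth
spelling out. A minimal sketch with a hypothetical helper name, assuming
the provider called platform_set_drvdata() in its probe routine:]

	#include <linux/of_platform.h>
	#include <linux/platform_device.h>

	/* Resolve a DT node to the owning platform driver's private
	 * data without touching device_node->data.
	 */
	static void *provider_priv_get(struct device_node *node)
	{
		/* takes a reference on the device; pair it with
		 * platform_device_put() when the data is released
		 */
		struct platform_device *pdev = of_find_device_by_node(node);

		return pdev ? platform_get_drvdata(pdev) : NULL;
	}

of_find_device_by_node() takes a reference on the platform device,
which is why the patch pairs mvneta_bm_get() with an mvneta_bm_put()
that calls platform_device_put().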
Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: Thomas Petazzoni , linux-arm-kernel@lists.infradead.org, Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , Gregory CLEMENT , Yelena Krivosheev , Nadav Haklai , Marcin Wojtas , Dmitri Epshtein , Antoine Tenart , =?utf-8?q?Miqu=C3=A8l_Raynal?= , Maxime Chevallier Subject: [PATCH net-next v2 3/7] net: mvneta: increase number of buffers in RX and TX queue Date: Fri, 13 Jul 2018 18:18:37 +0200 Message-Id: <20180713161841.11202-4-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com> References: <20180713161841.11202-1-gregory.clement@bootlin.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Yelena Krivosheev The initial values were too small leading to poor performance when using the software buffer management. Signed-off-by: Yelena Krivosheev [gregory: extract from a larger patch] Signed-off-by: Gregory CLEMENT --- drivers/net/ethernet/marvell/mvneta.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f4e3943a745d..c22df28b07c8 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -295,10 +295,10 @@ #define MVNETA_RSS_LU_TABLE_SIZE 1 /* Max number of Rx descriptors */ -#define MVNETA_MAX_RXD 128 +#define MVNETA_MAX_RXD 512 /* Max number of Tx descriptors */ -#define MVNETA_MAX_TXD 532 +#define MVNETA_MAX_TXD 1024 /* Max number of allowed TCP segments for software TSO */ #define MVNETA_MAX_TSO_SEGS 100 From patchwork Fri Jul 13 16:18:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory CLEMENT X-Patchwork-Id: 943725 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=bootlin.com Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 41Ryhz3WvKz9s0n for ; Sat, 14 Jul 2018 02:20:51 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387783AbeGMQfs (ORCPT ); Fri, 13 Jul 2018 12:35:48 -0400 Received: from mail.bootlin.com ([62.4.15.54]:38952 "EHLO mail.bootlin.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731992AbeGMQfs (ORCPT ); Fri, 13 Jul 2018 12:35:48 -0400 Received: by mail.bootlin.com (Postfix, from userid 110) id EB2C42093C; Fri, 13 Jul 2018 18:20:27 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on mail.bootlin.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED,SHORTCIRCUIT, URIBL_BLOCKED shortcircuit=ham autolearn=disabled version=3.4.0 Received: from localhost (242.171.71.37.rev.sfr.net [37.71.171.242]) by mail.bootlin.com (Postfix) with ESMTPSA id B775D20884; Fri, 13 Jul 2018 18:20:08 +0200 (CEST) From: Gregory CLEMENT To: "David S. 
Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: Thomas Petazzoni , linux-arm-kernel@lists.infradead.org, Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , Gregory CLEMENT , Yelena Krivosheev , Nadav Haklai , Marcin Wojtas , Dmitri Epshtein , Antoine Tenart , =?utf-8?q?Miqu=C3=A8l_Raynal?= , Maxime Chevallier Subject: [PATCH net-next v2 4/7] net: mvneta: discriminate error cause for missed packet Date: Fri, 13 Jul 2018 18:18:38 +0200 Message-Id: <20180713161841.11202-5-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com> References: <20180713161841.11202-1-gregory.clement@bootlin.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org In order to improve the diagnostic in case of error, make the distinction between refill error and skb allocation error. Also make the information available through the ethtool state. Based on the work of Yelena Krivosheev Signed-off-by: Gregory CLEMENT --- drivers/net/ethernet/marvell/mvneta.c | 28 +++++++++++++++++++++------ 1 file changed, 22 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index c22df28b07c8..196205c79995 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -328,6 +328,8 @@ enum { ETHTOOL_STAT_EEE_WAKEUP, + ETHTOOL_STAT_SKB_ALLOC_ERR, + ETHTOOL_STAT_REFILL_ERR, ETHTOOL_MAX_STATS, }; @@ -375,6 +377,8 @@ static const struct mvneta_statistic mvneta_statistics[] = { { 0x3054, T_REG_32, "fc_sent", }, { 0x300c, T_REG_32, "internal_mac_transmit_err", }, { ETHTOOL_STAT_EEE_WAKEUP, T_SW, "eee_wakeup_errors", }, + { ETHTOOL_STAT_SKB_ALLOC_ERR, T_SW, "skb_alloc_errors", }, + { ETHTOOL_STAT_REFILL_ERR, T_SW, "refill_errors", }, }; struct mvneta_pcpu_stats { @@ -589,9 +593,6 @@ struct mvneta_rx_queue { /* num of rx descriptors in the rx descriptor ring */ int size; - /* counter of times when mvneta_refill() failed */ - int missed; - u32 pkts_coal; u32 time_coal; @@ -609,6 +610,10 @@ struct mvneta_rx_queue { /* Index of the next RX DMA descriptor to process */ int next_desc_to_proc; + + /* error counters */ + u32 skb_alloc_err; + u32 refill_err; }; static enum cpuhp_state online_hpstate; @@ -1946,8 +1951,13 @@ static int mvneta_rx_swbm(struct napi_struct *napi, if (rx_bytes <= rx_copybreak) { /* better copy a small frame and not unmap the DMA region */ skb = netdev_alloc_skb_ip_align(dev, rx_bytes); - if (unlikely(!skb)) + if (unlikely(!skb)) { + netdev_err(dev, + "Can't allocate skb on queue %d\n", + rxq->id); + rxq->skb_alloc_err++; goto err_drop_frame; + } dma_sync_single_range_for_cpu(dev->dev.parent, phys_addr, @@ -1972,7 +1982,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi, err = mvneta_rx_refill(pp, rx_desc, rxq); if (err) { netdev_err(dev, "Linux processing - Can't refill\n"); - rxq->missed++; + rxq->refill_err++; goto err_drop_frame; } @@ -2102,7 +2112,7 @@ static int mvneta_rx_hwbm(struct napi_struct *napi, err = hwbm_pool_refill(&bm_pool->hwbm_pool, GFP_ATOMIC); if (err) { netdev_err(dev, "Linux processing - Can't refill\n"); - rxq->missed++; + rxq->refill_err++; goto err_drop_frame_ret_pool; } @@ -3964,6 +3974,12 @@ static void mvneta_ethtool_update_stats(struct mvneta_port *pp) case ETHTOOL_STAT_EEE_WAKEUP: val = phylink_get_eee_err(pp->phylink); break; + case ETHTOOL_STAT_SKB_ALLOC_ERR: + val = pp->rxqs[0].skb_alloc_err; + break; + case ETHTOOL_STAT_REFILL_ERR: + val = 

From patchwork Fri Jul 13 16:18:39 2018
X-Patchwork-Id: 943724
From: Gregory CLEMENT
Subject: [PATCH net-next v2 5/7] net: mvneta: Allocate page for the
 descriptor
Date: Fri, 13 Jul 2018 18:18:39 +0200
Message-Id: <20180713161841.11202-6-gregory.clement@bootlin.com>

Instead of trying to allocate the exact amount of memory for each
descriptor, use a page for each of them. This simplifies the
allocation management and increases the performance of the driver.

Based on the work of Yelena Krivosheev

Signed-off-by: Gregory CLEMENT
---
 drivers/net/ethernet/marvell/mvneta.c    | 66 ++++++++++--------------
 drivers/net/ethernet/marvell/mvneta_bm.h |  3 --
 2 files changed, 26 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 196205c79995..c203ea061ab9 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1795,47 +1795,30 @@ static void mvneta_txq_done(struct mvneta_port *pp,
 	}
 }
 
-void *mvneta_frag_alloc(unsigned int frag_size)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		return netdev_alloc_frag(frag_size);
-	else
-		return kmalloc(frag_size, GFP_ATOMIC);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_alloc);
-
-void mvneta_frag_free(unsigned int frag_size, void *data)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		skb_free_frag(data);
-	else
-		kfree(data);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_free);
-
 /* Refill processing for SW buffer management */
-static int mvneta_rx_refill(struct mvneta_port *pp,
-			    struct mvneta_rx_desc *rx_desc,
-			    struct mvneta_rx_queue *rxq)
-
+/* Allocate page per descriptor */
+static inline int mvneta_rx_refill(struct mvneta_port *pp,
+				   struct mvneta_rx_desc *rx_desc,
+				   struct mvneta_rx_queue *rxq,
+				   gfp_t gfp_mask)
 {
 	dma_addr_t phys_addr;
-	void *data;
+	struct page *page;
 
-	data = mvneta_frag_alloc(pp->frag_size);
-	if (!data)
+	page = __dev_alloc_page(gfp_mask);
+	if (!page)
 		return -ENOMEM;
 
-	phys_addr = dma_map_single(pp->dev->dev.parent, data,
-				   MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				   DMA_FROM_DEVICE);
+	/* map page for use */
+	phys_addr = dma_map_page(pp->dev->dev.parent, page, 0, PAGE_SIZE,
+				 DMA_FROM_DEVICE);
 	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(page);
 		return -ENOMEM;
 	}
 
 	phys_addr += pp->rx_offset_correction;
-	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
+	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
 	return 0;
 }
 
@@ -1901,7 +1884,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
 				 DMA_FROM_DEVICE);
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(data);
 	}
 }
 
@@ -1928,6 +1911,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		struct sk_buff *skb;
 		unsigned char *data;
+		struct page *page;
 		dma_addr_t phys_addr;
 		u32 rx_status, frag_size;
 		int rx_bytes, err, index;
@@ -1936,7 +1920,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
-		data = rxq->buf_virt_addr[index];
+		page = (struct page *)rxq->buf_virt_addr[index];
+		data = page_address(page);
+		/* Prefetch header */
+		prefetch(data);
 		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
@@ -1979,7 +1966,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq);
+		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->refill_err++;
 			goto err_drop_frame;
 		}
@@ -2773,9 +2760,11 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
-			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs filled\n",
-				   __func__, rxq->id, i, num);
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq,
+				     GFP_KERNEL) != 0) {
+			netdev_err(pp->dev,
+				   "%s:rxq %d, %d of %d buffs filled\n",
+				   __func__, rxq->id, i, num);
 			break;
 		}
 	}
@@ -3189,8 +3178,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mvneta_bm_update_mtu(pp, mtu);
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret) {
@@ -3678,8 +3665,7 @@ static int mvneta_open(struct net_device *dev)
 	int ret;
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	pp->frag_size = PAGE_SIZE;
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret)
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index 9358626e51ec..c8425d35c049 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -130,9 +130,6 @@ struct mvneta_bm_pool {
 };
 
 /* Declarations and definitions */
-void *mvneta_frag_alloc(unsigned int frag_size);
-void mvneta_frag_free(unsigned int frag_size, void *data);
-
 #if IS_ENABLED(CONFIG_MVNETA_BM)
 struct mvneta_bm *mvneta_bm_get(struct device_node *node);
 void mvneta_bm_put(struct mvneta_bm *priv);
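
[Note: the allocation scheme reduces, in essence, to the following
minimal sketch. The helper name is hypothetical; error paths are
shortened and the remaining descriptor bookkeeping is omitted.]

	#include <linux/dma-mapping.h>
	#include <linux/skbuff.h>

	/* One full page per rx descriptor, regardless of MTU: the
	 * allocation path stays uniform and never needs a high-order
	 * kmalloc fallback for large frames.
	 */
	static int refill_one_desc(struct device *dma_dev,
				   struct mvneta_rx_desc *desc,
				   gfp_t gfp_mask)
	{
		struct page *page = __dev_alloc_page(gfp_mask);
		dma_addr_t phys;

		if (!page)
			return -ENOMEM;

		/* map the whole page for the NIC to DMA into */
		phys = dma_map_page(dma_dev, page, 0, PAGE_SIZE,
				    DMA_FROM_DEVICE);
		if (dma_mapping_error(dma_dev, phys)) {
			__free_page(page);
			return -ENOMEM;
		}

		desc->buf_phys_addr = phys; /* hand the buffer to the NIC */
		return 0;
	}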
Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: Thomas Petazzoni , linux-arm-kernel@lists.infradead.org, Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , Gregory CLEMENT , Yelena Krivosheev , Nadav Haklai , Marcin Wojtas , Dmitri Epshtein , Antoine Tenart , =?utf-8?q?Miqu=C3=A8l_Raynal?= , Maxime Chevallier Subject: [PATCH net-next v2 6/7] net: mvneta: Verify hardware checksum only when offload checksum feature is set Date: Fri, 13 Jul 2018 18:18:40 +0200 Message-Id: <20180713161841.11202-7-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com> References: <20180713161841.11202-1-gregory.clement@bootlin.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Yelena Krivosheev If the checksum offload feature is not set, then there is no point to check the status of the hardware. [gregory: extract from a larger patch] Signed-off-by: Yelena Krivosheev Signed-off-by: Gregory CLEMENT --- drivers/net/ethernet/marvell/mvneta.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index c203ea061ab9..fdd4c9ecea04 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1720,7 +1720,8 @@ static void mvneta_rx_error(struct mvneta_port *pp, static void mvneta_rx_csum(struct mvneta_port *pp, u32 status, struct sk_buff *skb) { - if ((status & MVNETA_RXD_L3_IP4) && + if ((pp->dev->features & NETIF_F_RXCSUM) && + (status & MVNETA_RXD_L3_IP4) && (status & MVNETA_RXD_L4_CSUM_OK)) { skb->csum = 0; skb->ip_summed = CHECKSUM_UNNECESSARY; From patchwork Fri Jul 13 16:18:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory CLEMENT X-Patchwork-Id: 943726 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=bootlin.com Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 41Ryj36TC1z9s0n for ; Sat, 14 Jul 2018 02:20:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387900AbeGMQgJ (ORCPT ); Fri, 13 Jul 2018 12:36:09 -0400 Received: from mail.bootlin.com ([62.4.15.54]:38978 "EHLO mail.bootlin.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732723AbeGMQfs (ORCPT ); Fri, 13 Jul 2018 12:35:48 -0400 Received: by mail.bootlin.com (Postfix, from userid 110) id 82EC3208FB; Fri, 13 Jul 2018 18:20:28 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on mail.bootlin.com X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED,SHORTCIRCUIT, URIBL_BLOCKED shortcircuit=ham autolearn=disabled version=3.4.0 Received: from localhost (242.171.71.37.rev.sfr.net [37.71.171.242]) by mail.bootlin.com (Postfix) with ESMTPSA id 962CD208FF; Fri, 13 Jul 2018 18:20:09 +0200 (CEST) From: Gregory CLEMENT To: "David S. 
Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: Thomas Petazzoni , linux-arm-kernel@lists.infradead.org, Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , Gregory CLEMENT , Yelena Krivosheev , Nadav Haklai , Marcin Wojtas , Dmitri Epshtein , Antoine Tenart , =?utf-8?q?Miqu=C3=A8l_Raynal?= , Maxime Chevallier Subject: [PATCH net-next v2 7/7] net: mvneta: Improve the buffer allocation method for SWBM Date: Fri, 13 Jul 2018 18:18:41 +0200 Message-Id: <20180713161841.11202-8-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com> References: <20180713161841.11202-1-gregory.clement@bootlin.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Yelena Krivosheev With system having a small memory (around 256MB), the state "cannot allocate memory to refill with new buffer" is reach pretty quickly. By this patch we changed buffer allocation method to a better handling of this use case by avoiding memory allocation issues. Signed-off-by: Yelena Krivosheev [gregory: extract from a larger patch] Signed-off-by: Gregory CLEMENT --- drivers/net/ethernet/marvell/mvneta.c | 281 +++++++++++++++++--------- 1 file changed, 183 insertions(+), 98 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index fdd4c9ecea04..48aca22ed7e0 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -483,7 +483,10 @@ struct mvneta_port { #define MVNETA_RXD_ERR_RESOURCE (BIT(17) | BIT(18)) #define MVNETA_RXD_ERR_CODE_MASK (BIT(17) | BIT(18)) #define MVNETA_RXD_L3_IP4 BIT(25) -#define MVNETA_RXD_FIRST_LAST_DESC (BIT(26) | BIT(27)) +#define MVNETA_RXD_LAST_DESC BIT(26) +#define MVNETA_RXD_FIRST_DESC BIT(27) +#define MVNETA_RXD_FIRST_LAST_DESC (MVNETA_RXD_FIRST_DESC | \ + MVNETA_RXD_LAST_DESC) #define MVNETA_RXD_L4_CSUM_OK BIT(30) #if defined(__LITTLE_ENDIAN) @@ -611,6 +614,14 @@ struct mvneta_rx_queue { /* Index of the next RX DMA descriptor to process */ int next_desc_to_proc; + /* Index of first RX DMA descriptor to refill */ + int first_to_refill; + u32 refill_num; + + /* pointer to uncomplete skb buffer */ + struct sk_buff *skb; + int left_size; + /* error counters */ u32 skb_alloc_err; u32 refill_err; @@ -626,6 +637,7 @@ static int txq_number = 8; static int rxq_def; static int rx_copybreak __read_mostly = 256; +static int rx_header_size __read_mostly = 128; /* HW BM need that each port be identify by a unique ID */ static int global_port_id; @@ -1689,13 +1701,6 @@ static void mvneta_rx_error(struct mvneta_port *pp, { u32 status = rx_desc->status; - if (!mvneta_rxq_desc_is_first_last(status)) { - netdev_err(pp->dev, - "bad rx status %08x (buffer oversize), size=%d\n", - status, rx_desc->data_size); - return; - } - switch (status & MVNETA_RXD_ERR_CODE_MASK) { case MVNETA_RXD_ERR_CRC: netdev_err(pp->dev, "bad rx status %08x (crc error), size=%d\n", @@ -1882,6 +1887,8 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, for (i = 0; i < rxq->size; i++) { struct mvneta_rx_desc *rx_desc = rxq->descs + i; void *data = rxq->buf_virt_addr[i]; + if (!data || !(rx_desc->buf_phys_addr)) + continue; dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); @@ -1889,117 +1896,183 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, } } +static inline +int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue 
+int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
+{
+	struct mvneta_rx_desc *rx_desc;
+	int curr_desc = rxq->first_to_refill;
+	int i;
+
+	for (i = 0; (i < rxq->refill_num) && (i < 64); i++) {
+		rx_desc = rxq->descs + curr_desc;
+		if (!(rx_desc->buf_phys_addr)) {
+			if (mvneta_rx_refill(pp, rx_desc, rxq, GFP_ATOMIC)) {
+				pr_err("Can't refill queue %d. Done %d from %d\n",
+				       rxq->id, i, rxq->refill_num);
+				rxq->refill_err++;
+				break;
+			}
+		}
+		curr_desc = MVNETA_QUEUE_NEXT_DESC(rxq, curr_desc);
+	}
+	rxq->refill_num -= i;
+	rxq->first_to_refill = curr_desc;
+
+	return i;
+}
+
 /* Main rx processing when using software buffer management */
 static int mvneta_rx_swbm(struct napi_struct *napi,
-			  struct mvneta_port *pp, int rx_todo,
+			  struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
 	struct net_device *dev = pp->dev;
-	int rx_done;
+	int rx_todo, rx_proc;
+	int refill = 0;
 	u32 rcvd_pkts = 0;
 	u32 rcvd_bytes = 0;
 
 	/* Get number of received packets */
-	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
-
-	if (rx_todo > rx_done)
-		rx_todo = rx_done;
-
-	rx_done = 0;
+	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
+	rx_proc = 0;
 
 	/* Fairness NAPI loop */
-	while (rx_done < rx_todo) {
+	while ((rcvd_pkts < budget) && (rx_proc < rx_todo)) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
-		struct sk_buff *skb;
 		unsigned char *data;
 		struct page *page;
 		dma_addr_t phys_addr;
-		u32 rx_status, frag_size;
-		int rx_bytes, err, index;
+		u32 rx_status, index;
+		int rx_bytes, skb_size, copy_size;
+		int frag_num, frag_size, frag_offset;
 
-		rx_done++;
-		rx_status = rx_desc->status;
-		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
 		data = page_address(page);
 		/* Prefetch header */
 		prefetch(data);
-		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
-		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
-		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
-			mvneta_rx_error(pp, rx_desc);
-err_drop_frame:
-			dev->stats.rx_errors++;
-			/* leave the descriptor untouched */
-			continue;
-		}
+		phys_addr = rx_desc->buf_phys_addr;
+		rx_status = rx_desc->status;
+		rx_proc++;
+		rxq->refill_num++;
+
+		if (rx_status & MVNETA_RXD_FIRST_DESC) {
+			/* Check errors only for FIRST descriptor */
+			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
+				mvneta_rx_error(pp, rx_desc);
+				dev->stats.rx_errors++;
+				/* leave the descriptor untouched */
+				continue;
+			}
+			rx_bytes = rx_desc->data_size -
+				   (ETH_FCS_LEN + MVNETA_MH_SIZE);
 
-		if (rx_bytes <= rx_copybreak) {
-			/* better copy a small frame and not unmap the DMA region */
-			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
-			if (unlikely(!skb)) {
+			/* Allocate small skb for each new packet */
+			skb_size = max(rx_copybreak, rx_header_size);
+			rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
+			if (unlikely(!rxq->skb)) {
 				netdev_err(dev,
 					   "Can't allocate skb on queue %d\n",
 					   rxq->id);
+				dev->stats.rx_dropped++;
 				rxq->skb_alloc_err++;
-				goto err_drop_frame;
+				continue;
 			}
+			copy_size = min(skb_size, rx_bytes);
+
+			/* Copy data from buffer to SKB, skip Marvell header */
+			memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
+			       copy_size);
+			skb_put(rxq->skb, copy_size);
+			rxq->left_size = rx_bytes - copy_size;
+
+			mvneta_rx_csum(pp, rx_status, rxq->skb);
+			if (rxq->left_size == 0) {
+				int size = copy_size + MVNETA_MH_SIZE;
+
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      size,
+							      DMA_FROM_DEVICE);
+
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = 0;
+				frag_offset = copy_size + MVNETA_MH_SIZE;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE, DMA_FROM_DEVICE);
+				rxq->left_size -= frag_size;
+			}
+		} else {
+			/* Middle or Last descriptor */
+			if (unlikely(!rxq->skb)) {
+				pr_debug("no skb for rx_status 0x%x\n",
+					 rx_status);
+				continue;
+			}
+			if (!rxq->left_size) {
+				/* last descriptor has only FCS */
+				/* and can be discarded */
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      ETH_FCS_LEN,
+							      DMA_FROM_DEVICE);
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = skb_shinfo(rxq->skb)->nr_frags;
+				frag_offset = 0;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE,
+						 DMA_FROM_DEVICE);
+
+				rxq->left_size -= frag_size;
+			}
+		} /* Middle or Last descriptor */
 
-		dma_sync_single_range_for_cpu(dev->dev.parent,
-					      phys_addr,
-					      MVNETA_MH_SIZE + NET_SKB_PAD,
-					      rx_bytes,
-					      DMA_FROM_DEVICE);
-		skb_put_data(skb, data + MVNETA_MH_SIZE + NET_SKB_PAD,
-			     rx_bytes);
-
-		skb->protocol = eth_type_trans(skb, dev);
-		mvneta_rx_csum(pp, rx_status, skb);
-		napi_gro_receive(napi, skb);
-
-		rcvd_pkts++;
-		rcvd_bytes += rx_bytes;
-
-		/* leave the descriptor and buffer untouched */
+		if (!(rx_status & MVNETA_RXD_LAST_DESC))
+			/* no last descriptor this time */
 			continue;
-		}
 
-		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
-		if (err) {
-			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->refill_err++;
-			goto err_drop_frame;
+		if (rxq->left_size) {
+			pr_err("get last desc, but left_size (%d) != 0\n",
+			       rxq->left_size);
+			dev_kfree_skb_any(rxq->skb);
+			rxq->left_size = 0;
+			rxq->skb = NULL;
+			continue;
 		}
-
-		frag_size = pp->frag_size;
-
-		skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);
-
-		/* After refill old buffer has to be unmapped regardless
-		 * the skb is successfully built or not.
-		 */
-		dma_unmap_single(dev->dev.parent, phys_addr,
-				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				 DMA_FROM_DEVICE);
-
-		if (!skb)
-			goto err_drop_frame;
-
 		rcvd_pkts++;
-		rcvd_bytes += rx_bytes;
+		rcvd_bytes += rxq->skb->len;
 
 		/* Linux processing */
-		skb_reserve(skb, MVNETA_MH_SIZE + NET_SKB_PAD);
-		skb_put(skb, rx_bytes);
-
-		skb->protocol = eth_type_trans(skb, dev);
+		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
 
-		mvneta_rx_csum(pp, rx_status, skb);
+		if (dev->features & NETIF_F_GRO)
+			napi_gro_receive(napi, rxq->skb);
+		else
+			netif_receive_skb(rxq->skb);
 
-		napi_gro_receive(napi, skb);
+		/* clean uncomplete skb pointer in queue */
+		rxq->skb = NULL;
+		rxq->left_size = 0;
 	}
 
 	if (rcvd_pkts) {
@@ -2011,10 +2084,13 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		u64_stats_update_end(&stats->syncp);
 	}
 
+	/* return some buffers to hardware queue, one at a time is too slow */
+	refill = mvneta_rx_refill_queue(pp, rxq);
+
 	/* Update rxq management counters */
-	mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+	mvneta_rxq_desc_num_update(pp, rxq, rx_proc, refill);
 
-	return rx_done;
+	return rcvd_pkts;
 }
 
 /* Main rx processing when using hardware buffer management */
@@ -2823,21 +2899,23 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
 	mvreg_write(pp, MVNETA_RXQ_BASE_ADDR_REG(rxq->id), rxq->descs_phys);
 	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
 
-	/* Set Offset */
-	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD - pp->rx_offset_correction);
-
 	/* Set coalescing pkts and time */
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
 	mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal);
 
 	if (!pp->bm_priv) {
-		/* Fill RXQ with buffers from RX pool */
-		mvneta_rxq_buf_size_set(pp, rxq,
-					MVNETA_RX_BUF_SIZE(pp->pkt_size));
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq, 0);
+		mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
 		mvneta_rxq_bm_disable(pp, rxq);
 		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq,
+				      NET_SKB_PAD - pp->rx_offset_correction);
+
 		mvneta_rxq_bm_enable(pp, rxq);
+		/* Fill RXQ with buffers from RX pool */
 		mvneta_rxq_long_pool_set(pp, rxq);
 		mvneta_rxq_short_pool_set(pp, rxq);
 		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
@@ -2866,6 +2944,9 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 {
 	mvneta_rxq_drop_pkts(pp, rxq);
 
+	if (rxq->skb)
+		dev_kfree_skb_any(rxq->skb);
+
 	if (rxq->descs)
 		dma_free_coherent(pp->dev->dev.parent,
 				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
@@ -2876,6 +2957,10 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->last_desc = 0;
 	rxq->next_desc_to_proc = 0;
 	rxq->descs_phys = 0;
+	rxq->first_to_refill = 0;
+	rxq->refill_num = 0;
+	rxq->skb = NULL;
+	rxq->left_size = 0;
 }
 
 static int mvneta_txq_sw_init(struct mvneta_port *pp,
@@ -4367,14 +4452,6 @@ static int mvneta_probe(struct platform_device *pdev)
 	pp->dn = dn;
 
 	pp->rxq_def = rxq_def;
-
-	/* Set RX packet offset correction for platforms, whose
-	 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
-	 * platforms and 0B for 32-bit ones.
-	 */
-	pp->rx_offset_correction =
-		max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION);
-
 	pp->indir[0] = rxq_def;
 
 	/* Get special SoC configurations */
@@ -4462,6 +4539,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	pp->id = global_port_id++;
+	pp->rx_offset_correction = 0; /* not relevant for SW BM */
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
@@ -4476,6 +4554,13 @@ static int mvneta_probe(struct platform_device *pdev)
 				pp->bm_priv = NULL;
 			}
 		}
+		/* Set RX packet offset correction for platforms, whose
+		 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
+		 * platforms and 0B for 32-bit ones.
+		 */
+		pp->rx_offset_correction = max(0,
+					       NET_SKB_PAD -
+					       MVNETA_RX_PKT_OFFSET_CORRECTION);
 	}
 	of_node_put(bm_node);
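
[Note: the archived copy of this last patch is truncated above. To
summarize the new rx path it introduces, here is a condensed sketch of
the first/middle/last descriptor handling; it is illustrative only,
with names following the diff and the DMA sync/unmap calls and error
paths omitted. All variables are those of the rx loop shown earlier.]

	if (rx_status & MVNETA_RXD_FIRST_DESC) {
		/* start a new frame: a small skb holds the copied header */
		skb_size = max(rx_copybreak, rx_header_size);
		rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
		copy_size = min(skb_size, rx_bytes);
		memcpy(rxq->skb->data, data + MVNETA_MH_SIZE, copy_size);
		skb_put(rxq->skb, copy_size);
		rxq->left_size = rx_bytes - copy_size;
		if (rxq->left_size)	/* rest of the page becomes frag 0 */
			skb_add_rx_frag(rxq->skb, 0, page,
					copy_size + MVNETA_MH_SIZE,
					frag_size, PAGE_SIZE);
	} else if (rxq->left_size) {
		/* middle/last descriptor: append its page as the next frag */
		skb_add_rx_frag(rxq->skb, skb_shinfo(rxq->skb)->nr_frags,
				page, 0, frag_size, PAGE_SIZE);
	}

	if (rx_status & MVNETA_RXD_LAST_DESC) {
		/* frame complete: hand it to the stack */
		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
		napi_gro_receive(napi, rxq->skb);
		rxq->skb = NULL;
	}

Pages consumed as frags are marked for deferred refill
(buf_phys_addr = 0) and returned to the ring in batches by
mvneta_rx_refill_queue(), rather than one at a time.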