From patchwork Fri Jul 5 07:22:58 2019
X-Patchwork-Submitter: Jose Abreu
X-Patchwork-Id: 1127817
X-Patchwork-Delegate: davem@davemloft.net
From: Jose Abreu
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org
Cc: Jose Abreu, Joao Pinto, "David S. Miller", Giuseppe Cavallaro,
 Alexandre Torgue, Jakub Kicinski
Subject: [PATCH net-next v3 1/3] net: stmmac: Implement RX Coalesce Frames setting
Date: Fri, 5 Jul 2019 09:22:58 +0200
Message-Id: <38d3e575ccbfed1ae3b5fcaaac2f35cf67e9fe6c.1562311299.git.joabreu@synopsys.com>

Add support for coalescing the RX path by letting the user specify the
number of frames that do not need the interrupt-on-completion bit set.
This is only available when the RX Watchdog is enabled.

Acked-by: Jakub Kicinski
Signed-off-by: Jose Abreu
Cc: Joao Pinto
Cc: David S. Miller
Cc: Giuseppe Cavallaro
Cc: Alexandre Torgue
Cc: Jakub Kicinski
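As a point of reference, the rx_max_coalesced_frames value handled below is
the standard ethtool "rx-frames" knob. A rough userspace sketch of reading
and then updating it through the classic ethtool ioctl (roughly what
"ethtool -C eth0 rx-frames 25" does) could look like the following; the
interface name "eth0" and the value 25 are placeholders only:

/* Hedged sketch: query and set rx-frames via the classic ethtool ioctl. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
	struct ifreq ifr = { 0 };
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* example interface */
	ifr.ifr_data = (char *)&ec;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)		/* read current settings */
		return 1;

	printf("rx-frames: %u, rx-usecs: %u\n",
	       ec.rx_max_coalesced_frames, ec.rx_coalesce_usecs);

	ec.cmd = ETHTOOL_SCOALESCE;
	ec.rx_max_coalesced_frames = 25;		/* matches STMMAC_RX_FRAMES default */

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)		/* write back */
		return 1;

	close(fd);
	return 0;
}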
---
 drivers/net/ethernet/stmicro/stmmac/common.h         |  1 +
 drivers/net/ethernet/stmicro/stmmac/stmmac.h         |  2 ++
 drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c |  7 +++++--
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c    | 18 ++++++++++++------
 4 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index 2403a65167b2..dfd47fdfa447 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -252,6 +252,7 @@ struct stmmac_safety_stats {
 #define STMMAC_MAX_COAL_TX_TICK	100000
 #define STMMAC_TX_MAX_FRAMES	256
 #define STMMAC_TX_FRAMES	1
+#define STMMAC_RX_FRAMES	25
 
 /* Packets types */
 enum packets_types {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 123898235cb0..513f4e2df5f6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -55,6 +55,7 @@ struct stmmac_tx_queue {
 };
 
 struct stmmac_rx_queue {
+	u32 rx_count_frames;
 	u32 queue_index;
 	struct stmmac_priv *priv_data;
 	struct dma_extended_desc *dma_erx;
@@ -110,6 +111,7 @@ struct stmmac_priv {
 	/* Frequently used values are kept adjacent for cache effect */
 	u32 tx_coal_frames;
 	u32 tx_coal_timer;
+	u32 rx_coal_frames;
 	int tx_coalesce;
 	int hwts_tx_en;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index cfd93eefb50e..6efb66820d4c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -701,8 +701,10 @@ static int stmmac_get_coalesce(struct net_device *dev,
 	ec->tx_coalesce_usecs = priv->tx_coal_timer;
 	ec->tx_max_coalesced_frames = priv->tx_coal_frames;
 
-	if (priv->use_riwt)
+	if (priv->use_riwt) {
+		ec->rx_max_coalesced_frames = priv->rx_coal_frames;
 		ec->rx_coalesce_usecs = stmmac_riwt2usec(priv->rx_riwt, priv);
+	}
 
 	return 0;
 }
@@ -715,7 +717,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
 	unsigned int rx_riwt;
 
 	/* Check not supported parameters */
-	if ((ec->rx_max_coalesced_frames) || (ec->rx_coalesce_usecs_irq) ||
+	if ((ec->rx_coalesce_usecs_irq) ||
 	    (ec->rx_max_coalesced_frames_irq) || (ec->tx_coalesce_usecs_irq) ||
 	    (ec->use_adaptive_rx_coalesce) || (ec->use_adaptive_tx_coalesce) ||
 	    (ec->pkt_rate_low) || (ec->rx_coalesce_usecs_low) ||
@@ -749,6 +751,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
 	/* Only copy relevant parameters, ignore all others. */
 	priv->tx_coal_frames = ec->tx_max_coalesced_frames;
 	priv->tx_coal_timer = ec->tx_coalesce_usecs;
+	priv->rx_coal_frames = ec->rx_max_coalesced_frames;
 	priv->rx_riwt = rx_riwt;
 	stmmac_rx_watchdog(priv, priv->ioaddr, priv->rx_riwt, rx_cnt);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 3425d4dda03d..c8fe85ef9a7e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2268,20 +2268,21 @@ static void stmmac_tx_timer(struct timer_list *t)
 }
 
 /**
- * stmmac_init_tx_coalesce - init tx mitigation options.
+ * stmmac_init_coalesce - init mitigation options.
  * @priv: driver private structure
  * Description:
- * This inits the transmit coalesce parameters: i.e. timer rate,
+ * This inits the coalesce parameters: i.e. timer rate,
  * timer handler and default threshold used for enabling the
  * interrupt on completion bit.
  */
-static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+static void stmmac_init_coalesce(struct stmmac_priv *priv)
 {
 	u32 tx_channel_count = priv->plat->tx_queues_to_use;
 	u32 chan;
 
 	priv->tx_coal_frames = STMMAC_TX_FRAMES;
 	priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+	priv->rx_coal_frames = STMMAC_RX_FRAMES;
 
 	for (chan = 0; chan < tx_channel_count; chan++) {
 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
@@ -2651,7 +2652,7 @@ static int stmmac_open(struct net_device *dev)
 		goto init_error;
 	}
 
-	stmmac_init_tx_coalesce(priv);
+	stmmac_init_coalesce(priv);
 
 	phylink_start(priv->phylink);
 
@@ -3298,6 +3299,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 
 	while (dirty-- > 0) {
 		struct dma_desc *p;
+		bool use_rx_wd;
 
 		if (priv->extend_desc)
 			p = (struct dma_desc *)(rx_q->dma_erx + entry);
@@ -3340,7 +3342,11 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 		}
 
 		dma_wmb();
-		stmmac_set_rx_owner(priv, p, priv->use_riwt);
+		rx_q->rx_count_frames++;
+		rx_q->rx_count_frames %= priv->rx_coal_frames;
+		use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
+
+		stmmac_set_rx_owner(priv, p, use_rx_wd);
 
 		dma_wmb();
 
@@ -4623,7 +4629,7 @@ int stmmac_resume(struct device *dev)
 	stmmac_clear_descriptors(priv);
 
 	stmmac_hw_setup(ndev, false);
-	stmmac_init_tx_coalesce(priv);
+	stmmac_init_coalesce(priv);
 	stmmac_set_rx_mode(ndev);
 
 	stmmac_enable_all_queues(priv);
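To see what the refill counter buys us, here is a small standalone sketch
(not driver code) of the same counting pattern: use_rx_wd stays true,
meaning the immediate interrupt is suppressed in favour of the RX watchdog,
for every descriptor except the one where the counter wraps, so an
interrupt-on-completion is requested once every rx_coal_frames descriptors.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: mirrors the rx_count_frames bookkeeping used in
 * stmmac_rx_refill() above. The constants are the driver defaults.
 */
int main(void)
{
	const unsigned int rx_coal_frames = 25;	/* STMMAC_RX_FRAMES default */
	const bool use_riwt = true;		/* RX watchdog enabled */
	unsigned int rx_count_frames = 0;

	for (unsigned int desc = 0; desc < 100; desc++) {
		bool use_rx_wd;

		rx_count_frames++;
		rx_count_frames %= rx_coal_frames;
		use_rx_wd = use_riwt && rx_count_frames;

		if (!use_rx_wd)	/* counter wrapped: IOC bit stays armed */
			printf("descriptor %u raises an interrupt\n", desc);
	}
	return 0;
}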
From patchwork Fri Jul 5 07:22:59 2019
X-Patchwork-Submitter: Jose Abreu
X-Patchwork-Id: 1127818
X-Patchwork-Delegate: davem@davemloft.net
From: Jose Abreu
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org
Cc: Jose Abreu, Joao Pinto, "David S. Miller", Giuseppe Cavallaro,
 Alexandre Torgue, Maxime Coquelin, Maxime Ripard, Chen-Yu Tsai
Subject: [PATCH net-next v3 2/3] net: stmmac: Fix descriptors address being in > 32 bits address space
Date: Fri, 5 Jul 2019 09:22:59 +0200
Message-Id: <15a55f0542be506412bbe1ac75cc633ebe44d6f0.1562311299.git.joabreu@synopsys.com>

Commit a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing
in XGMAC") introduced support for > 32 bits addressing in XGMAC, but the
conversion of the descriptor addresses to dma_addr_t was left out. As some
devices assign coherent memory in regions above 32 bits, we need to set
both the lower and upper halves of the descriptor address when
initializing the DMA channels.

Luckily, this was working for me because I was assigning CMA to the
< 4GB address space for performance reasons.

Fixes: a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC")
Signed-off-by: Jose Abreu
Cc: Joao Pinto
Cc: David S. Miller
Cc: Giuseppe Cavallaro
Cc: Alexandre Torgue
Cc: Maxime Coquelin
Cc: Maxime Ripard
Cc: Chen-Yu Tsai
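The essence of the fix is writing both halves of a possibly-64-bit
dma_addr_t instead of silently truncating it to u32. A minimal sketch of
that pattern, reusing the kernel's lower_32_bits()/upper_32_bits() helpers
and the XGMAC_DMA_CH_RxDESC_HADDR/LADDR defines added by this patch
(ioaddr and chan are assumed to come from the caller):

#include <linux/io.h>		/* writel() */
#include <linux/kernel.h>	/* lower_32_bits()/upper_32_bits() */

/* Sketch only: program a descriptor base address that may live above 4GB.
 * The XGMAC_DMA_CH_RxDESC_*ADDR defines come from dwxgmac2.h in this patch.
 */
static void example_init_rx_desc_base(void __iomem *ioaddr, u32 chan,
				      dma_addr_t dma_rx_phy)
{
	/* upper_32_bits() is simply 0 when dma_addr_t is 32 bits wide */
	writel(upper_32_bits(dma_rx_phy), ioaddr + XGMAC_DMA_CH_RxDESC_HADDR(chan));
	writel(lower_32_bits(dma_rx_phy), ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
}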
---
 drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c   |  8 ++++----
 drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c |  8 ++++----
 drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c  |  8 ++++----
 drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c    |  8 ++++----
 drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h      |  2 ++
 drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c  | 10 ++++++----
 drivers/net/ethernet/stmicro/stmmac/hwif.h          |  4 ++--
 7 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
index 6d5cba4075eb..2856f3fe5266 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
@@ -289,18 +289,18 @@ static void sun8i_dwmac_dma_init(void __iomem *ioaddr,
 
 static void sun8i_dwmac_dma_init_rx(void __iomem *ioaddr,
 				    struct stmmac_dma_cfg *dma_cfg,
-				    u32 dma_rx_phy, u32 chan)
+				    dma_addr_t dma_rx_phy, u32 chan)
 {
 	/* Write RX descriptors address */
-	writel(dma_rx_phy, ioaddr + EMAC_RX_DESC_LIST);
+	writel(lower_32_bits(dma_rx_phy), ioaddr + EMAC_RX_DESC_LIST);
 }
 
 static void sun8i_dwmac_dma_init_tx(void __iomem *ioaddr,
 				    struct stmmac_dma_cfg *dma_cfg,
-				    u32 dma_tx_phy, u32 chan)
+				    dma_addr_t dma_tx_phy, u32 chan)
 {
 	/* Write TX descriptors address */
-	writel(dma_tx_phy, ioaddr + EMAC_TX_DESC_LIST);
+	writel(lower_32_bits(dma_tx_phy), ioaddr + EMAC_TX_DESC_LIST);
 }
 
 /* sun8i_dwmac_dump_regs() - Dump EMAC address space
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
index 1fdedf77678f..2bac49b49f73 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
@@ -112,18 +112,18 @@ static void dwmac1000_dma_init(void __iomem *ioaddr,
 
 static void dwmac1000_dma_init_rx(void __iomem *ioaddr,
 				  struct stmmac_dma_cfg *dma_cfg,
-				  u32 dma_rx_phy, u32 chan)
+				  dma_addr_t dma_rx_phy, u32 chan)
 {
 	/* RX descriptor base address list must be written into DMA CSR3 */
-	writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+	writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
 }
 
 static void dwmac1000_dma_init_tx(void __iomem *ioaddr,
 				  struct stmmac_dma_cfg *dma_cfg,
-				  u32 dma_tx_phy, u32 chan)
+				  dma_addr_t dma_tx_phy, u32 chan)
 {
 	/* TX descriptor base address list must be written into DMA CSR4 */
-	writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+	writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
 }
 
 static u32 dwmac1000_configure_fc(u32 csr6, int rxfifosz)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
index c980cc7360a4..8f0d9bc7cab5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
@@ -31,18 +31,18 @@ static void dwmac100_dma_init(void __iomem *ioaddr,
 
 static void dwmac100_dma_init_rx(void __iomem *ioaddr,
 				 struct stmmac_dma_cfg *dma_cfg,
-				 u32 dma_rx_phy, u32 chan)
+				 dma_addr_t dma_rx_phy, u32 chan)
 {
 	/* RX descriptor base addr lists must be written into DMA CSR3 */
-	writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+	writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
 }
 
 static void dwmac100_dma_init_tx(void __iomem *ioaddr,
 				 struct stmmac_dma_cfg *dma_cfg,
-				 u32 dma_tx_phy, u32 chan)
+				 dma_addr_t dma_tx_phy, u32 chan)
 {
 	/* TX descriptor base addr lists must be written into DMA CSR4 */
-	writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+	writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
 }
 
 /* Store and Forward capability is not used at all.
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
index 0f208e13da9f..6cbcdaea55f6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -70,7 +70,7 @@ static void dwmac4_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
 
 static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
 				    struct stmmac_dma_cfg *dma_cfg,
-				    u32 dma_rx_phy, u32 chan)
+				    dma_addr_t dma_rx_phy, u32 chan)
 {
 	u32 value;
 	u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
@@ -79,12 +79,12 @@ static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
 	value = value | (rxpbl << DMA_BUS_MODE_RPBL_SHIFT);
 	writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan));
 
-	writel(dma_rx_phy, ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
+	writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
 }
 
 static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,
 				    struct stmmac_dma_cfg *dma_cfg,
-				    u32 dma_tx_phy, u32 chan)
+				    dma_addr_t dma_tx_phy, u32 chan)
 {
 	u32 value;
 	u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
@@ -97,7 +97,7 @@ static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,
 
 	writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
 
-	writel(dma_tx_phy, ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
+	writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
 }
 
 static void dwmac4_dma_init_channel(void __iomem *ioaddr,
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
index 9a9792527530..7f86dffb264d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
@@ -199,7 +199,9 @@
 #define XGMAC_RxPBL			GENMASK(21, 16)
 #define XGMAC_RxPBL_SHIFT		16
 #define XGMAC_RXST			BIT(0)
+#define XGMAC_DMA_CH_TxDESC_HADDR(x)	(0x00003110 + (0x80 * (x)))
 #define XGMAC_DMA_CH_TxDESC_LADDR(x)	(0x00003114 + (0x80 * (x)))
+#define XGMAC_DMA_CH_RxDESC_HADDR(x)	(0x00003118 + (0x80 * (x)))
 #define XGMAC_DMA_CH_RxDESC_LADDR(x)	(0x0000311c + (0x80 * (x)))
 #define XGMAC_DMA_CH_TxDESC_TAIL_LPTR(x)	(0x00003124 + (0x80 * (x)))
 #define XGMAC_DMA_CH_RxDESC_TAIL_LPTR(x)	(0x0000312c + (0x80 * (x)))
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
index 229c58758cbd..a4f236e3593e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
@@ -44,7 +44,7 @@ static void dwxgmac2_dma_init_chan(void __iomem *ioaddr,
 
 static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
 				      struct stmmac_dma_cfg *dma_cfg,
-				      u32 dma_rx_phy, u32 chan)
+				      dma_addr_t phy, u32 chan)
 {
 	u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
 	u32 value;
@@ -54,12 +54,13 @@ static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
 	value |= (rxpbl << XGMAC_RxPBL_SHIFT) & XGMAC_RxPBL;
 	writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan));
 
-	writel(dma_rx_phy, ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
+	writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_HADDR(chan));
+	writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
 }
 
 static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
 				      struct stmmac_dma_cfg *dma_cfg,
-				      u32 dma_tx_phy, u32 chan)
+				      dma_addr_t phy, u32 chan)
 {
 	u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
 	u32 value;
@@ -70,7 +71,8 @@ static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
 	value |= XGMAC_OSP;
 	writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan));
 
-	writel(dma_tx_phy, ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
+	writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_HADDR(chan));
+	writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
 }
 
 static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
index 2acfbc70e3c8..278c0dbec9d9 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
@@ -150,10 +150,10 @@ struct stmmac_dma_ops {
 			 struct stmmac_dma_cfg *dma_cfg, u32 chan);
 	void (*init_rx_chan)(void __iomem *ioaddr,
 			     struct stmmac_dma_cfg *dma_cfg,
-			     u32 dma_rx_phy, u32 chan);
+			     dma_addr_t phy, u32 chan);
 	void (*init_tx_chan)(void __iomem *ioaddr,
 			     struct stmmac_dma_cfg *dma_cfg,
-			     u32 dma_tx_phy, u32 chan);
+			     dma_addr_t phy, u32 chan);
 	/* Configure the AXI Bus Mode Register */
 	void (*axi)(void __iomem *ioaddr, struct stmmac_axi *axi);
 	/* Dump DMA registers */

From patchwork Fri Jul 5 07:23:00 2019
X-Patchwork-Submitter: Jose Abreu
X-Patchwork-Id: 1127820
X-Patchwork-Delegate: davem@davemloft.net
From: Jose Abreu
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org
Cc: Jose Abreu, Joao Pinto, "David S. Miller", Giuseppe Cavallaro,
 Alexandre Torgue, Ilias Apalodimas, Jesper Dangaard Brouer, Arnd Bergmann
Subject: [PATCH net-next v3 3/3] net: stmmac: Introducing support for Page Pool
Date: Fri, 5 Jul 2019 09:23:00 +0200
Message-Id: <384dab52828c4b65596ef4202562a574eed93b91.1562311299.git.joabreu@synopsys.com>

Mapping and unmapping the DMA region is a significant bottleneck in the
stmmac driver, especially in the RX path. This commit introduces support
for the Page Pool API and uses it in all RX queues. With this change we
get more stable throughput and some increase in bandwidth with iperf:
 - MAC1000: 950 Mbps
 - XGMAC: 9.22 Gbps

Changes from v2:
 - Unconditionally call page_pool_free() (Jesper)
Changes from v1:
 - Use page_pool_get_dma_addr() (Jesper)
 - Add a comment (Jesper)
 - Add page_pool_free() call (Jesper)
 - Reintroduce sync_single_for_device (Arnd / Ilias)

Signed-off-by: Jose Abreu
Cc: Joao Pinto
Cc: David S. Miller
Cc: Giuseppe Cavallaro
Cc: Alexandre Torgue
Cc: Ilias Apalodimas
Cc: Jesper Dangaard Brouer
Cc: Arnd Bergmann
---
 drivers/net/ethernet/stmicro/stmmac/Kconfig       |   1 +
 drivers/net/ethernet/stmicro/stmmac/stmmac.h      |  10 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 203 +++++++---------------
 3 files changed, 70 insertions(+), 144 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 943189dcccb1..2325b40dff6e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -3,6 +3,7 @@ config STMMAC_ETH
 	tristate "STMicroelectronics Multi-Gigabit Ethernet driver"
 	depends on HAS_IOMEM && HAS_DMA
 	select MII
+	select PAGE_POOL
 	select PHYLINK
 	select CRC32
 	imply PTP_1588_CLOCK
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 513f4e2df5f6..5cd966c154f3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -20,6 +20,7 @@
 #include <linux/ptp_clock_kernel.h>
 #include <linux/net_tstamp.h>
 #include <linux/reset.h>
+#include <net/page_pool.h>
 
 struct stmmac_resources {
 	void __iomem *addr;
@@ -54,14 +55,19 @@ struct stmmac_tx_queue {
 	u32 mss;
 };
 
+struct stmmac_rx_buffer {
+	struct page *page;
+	dma_addr_t addr;
+};
+
 struct stmmac_rx_queue {
 	u32 rx_count_frames;
 	u32 queue_index;
+	struct page_pool *page_pool;
+	struct stmmac_rx_buffer *buf_pool;
 	struct stmmac_priv *priv_data;
 	struct dma_extended_desc *dma_erx;
 	struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
-	struct sk_buff **rx_skbuff;
-	dma_addr_t *rx_skbuff_dma;
 	unsigned int cur_rx;
 	unsigned int dirty_rx;
 	u32 rx_zeroc_thresh;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index c8fe85ef9a7e..6566772e8ed5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
 				  int i, gfp_t flags, u32 queue)
 {
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
-	struct sk_buff *skb;
+	struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
 
-	skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
-	if (!skb) {
-		netdev_err(priv->dev,
-			   "%s: Rx init fails; skb is NULL\n", __func__);
+	buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+	if (!buf->page)
 		return -ENOMEM;
-	}
-	rx_q->rx_skbuff[i] = skb;
-	rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
-						priv->dma_buf_sz,
-						DMA_FROM_DEVICE);
-	if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
-		netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
-		dev_kfree_skb_any(skb);
-		return -EINVAL;
-	}
-
-	stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
+	buf->addr = page_pool_get_dma_addr(buf->page);
+	stmmac_set_desc_addr(priv, p, buf->addr);
 
 	if (priv->dma_buf_sz == BUF_SIZE_16KiB)
 		stmmac_init_desc3(priv, p);
@@ -1232,13 +1220,11 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
 static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 {
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+	struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
 
-	if (rx_q->rx_skbuff[i]) {
-		dma_unmap_single(priv->device, rx_q->rx_skbuff_dma[i],
-				 priv->dma_buf_sz, DMA_FROM_DEVICE);
-		dev_kfree_skb_any(rx_q->rx_skbuff[i]);
-	}
-	rx_q->rx_skbuff[i] = NULL;
+	if (buf->page)
+		page_pool_put_page(rx_q->page_pool, buf->page, false);
+	buf->page = NULL;
 }
 
 /**
@@ -1321,10 +1307,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
 						     queue);
 			if (ret)
 				goto err_init_rx_buffers;
-
-			netif_dbg(priv, probe, priv->dev, "[%p]\t[%p]\t[%x]\n",
-				  rx_q->rx_skbuff[i], rx_q->rx_skbuff[i]->data,
-				  (unsigned int)rx_q->rx_skbuff_dma[i]);
 		}
 
 		rx_q->cur_rx = 0;
@@ -1498,8 +1480,11 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
 					  sizeof(struct dma_extended_desc),
 					  rx_q->dma_erx, rx_q->dma_rx_phy);
 
-		kfree(rx_q->rx_skbuff_dma);
-		kfree(rx_q->rx_skbuff);
+		kfree(rx_q->buf_pool);
+		if (rx_q->page_pool) {
+			page_pool_request_shutdown(rx_q->page_pool);
+			page_pool_free(rx_q->page_pool);
+		}
 	}
 }
 
@@ -1551,20 +1536,29 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
 	/* RX queues buffers and DMA */
 	for (queue = 0; queue < rx_count; queue++) {
 		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+		struct page_pool_params pp_params = { 0 };
 
 		rx_q->queue_index = queue;
 		rx_q->priv_data = priv;
 
-		rx_q->rx_skbuff_dma = kmalloc_array(DMA_RX_SIZE,
-						    sizeof(dma_addr_t),
-						    GFP_KERNEL);
-		if (!rx_q->rx_skbuff_dma)
+		pp_params.flags = PP_FLAG_DMA_MAP;
+		pp_params.pool_size = DMA_RX_SIZE;
+		pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+		pp_params.nid = dev_to_node(priv->device);
+		pp_params.dev = priv->device;
+		pp_params.dma_dir = DMA_FROM_DEVICE;
+
+		rx_q->page_pool = page_pool_create(&pp_params);
+		if (IS_ERR(rx_q->page_pool)) {
+			ret = PTR_ERR(rx_q->page_pool);
+			rx_q->page_pool = NULL;
 			goto err_dma;
+		}
 
-		rx_q->rx_skbuff = kmalloc_array(DMA_RX_SIZE,
-						sizeof(struct sk_buff *),
-						GFP_KERNEL);
-		if (!rx_q->rx_skbuff)
+		rx_q->buf_pool = kmalloc_array(DMA_RX_SIZE,
+					       sizeof(*rx_q->buf_pool),
+					       GFP_KERNEL);
+		if (!rx_q->buf_pool)
 			goto err_dma;
 
 		if (priv->extend_desc) {
@@ -3295,9 +3289,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 	int dirty = stmmac_rx_dirty(priv, queue);
 	unsigned int entry = rx_q->dirty_rx;
-	int bfsize = priv->dma_buf_sz;
 
 	while (dirty-- > 0) {
+		struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
 		struct dma_desc *p;
 		bool use_rx_wd;
@@ -3306,49 +3299,22 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 		else
 			p = rx_q->dma_rx + entry;
 
-		if (likely(!rx_q->rx_skbuff[entry])) {
-			struct sk_buff *skb;
-
-			skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
-			if (unlikely(!skb)) {
-				/* so for a while no zero-copy! */
-				rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
-				if (unlikely(net_ratelimit()))
-					dev_err(priv->device,
-						"fail to alloc skb entry %d\n",
-						entry);
-				break;
-			}
-
-			rx_q->rx_skbuff[entry] = skb;
-			rx_q->rx_skbuff_dma[entry] =
-			    dma_map_single(priv->device, skb->data, bfsize,
-					   DMA_FROM_DEVICE);
-			if (dma_mapping_error(priv->device,
-					      rx_q->rx_skbuff_dma[entry])) {
-				netdev_err(priv->dev, "Rx DMA map failed\n");
-				dev_kfree_skb(skb);
+		if (!buf->page) {
+			buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+			if (!buf->page)
 				break;
-			}
-
-			stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
-			stmmac_refill_desc3(priv, rx_q, p);
-
-			if (rx_q->rx_zeroc_thresh > 0)
-				rx_q->rx_zeroc_thresh--;
-
-			netif_dbg(priv, rx_status, priv->dev,
-				  "refill entry #%d\n", entry);
 		}
-		dma_wmb();
+
+		buf->addr = page_pool_get_dma_addr(buf->page);
+		stmmac_set_desc_addr(priv, p, buf->addr);
+		stmmac_refill_desc3(priv, rx_q, p);
 
 		rx_q->rx_count_frames++;
 		rx_q->rx_count_frames %= priv->rx_coal_frames;
 		use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
 
-		stmmac_set_rx_owner(priv, p, use_rx_wd);
-
 		dma_wmb();
+		stmmac_set_rx_owner(priv, p, use_rx_wd);
 
 		entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
 	}
@@ -3373,9 +3339,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 	unsigned int next_entry = rx_q->cur_rx;
 	int coe = priv->hw->rx_csum;
 	unsigned int count = 0;
-	bool xmac;
-
-	xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
 
 	if (netif_msg_rx_status(priv)) {
 		void *rx_head;
@@ -3389,11 +3352,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 		stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
 	}
 	while (count < limit) {
+		struct stmmac_rx_buffer *buf;
+		struct dma_desc *np, *p;
 		int entry, status;
-		struct dma_desc *p;
-		struct dma_desc *np;
 
 		entry = next_entry;
+		buf = &rx_q->buf_pool[entry];
 
 		if (priv->extend_desc)
 			p = (struct dma_desc *)(rx_q->dma_erx + entry);
@@ -3423,20 +3387,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 			stmmac_rx_extended_status(priv, &priv->dev->stats,
 						  &priv->xstats, rx_q->dma_erx + entry);
 		if (unlikely(status == discard_frame)) {
+			page_pool_recycle_direct(rx_q->page_pool, buf->page);
 			priv->dev->stats.rx_errors++;
-			if (priv->hwts_rx_en && !priv->extend_desc) {
-				/* DESC2 & DESC3 will be overwritten by device
-				 * with timestamp value, hence reinitialize
-				 * them in stmmac_rx_refill() function so that
-				 * device can reuse it.
-				 */
-				dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
-				rx_q->rx_skbuff[entry] = NULL;
-				dma_unmap_single(priv->device,
-						 rx_q->rx_skbuff_dma[entry],
-						 priv->dma_buf_sz,
-						 DMA_FROM_DEVICE);
-			}
+			buf->page = NULL;
 		} else {
 			struct sk_buff *skb;
 			int frame_len;
@@ -3476,58 +3429,20 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 							frame_len, status);
 			}
 
-			/* The zero-copy is always used for all the sizes
-			 * in case of GMAC4 because it needs
-			 * to refill the used descriptors, always.
-			 */
-			if (unlikely(!xmac &&
-				     ((frame_len < priv->rx_copybreak) ||
-				     stmmac_rx_threshold_count(rx_q)))) {
-				skb = netdev_alloc_skb_ip_align(priv->dev,
-								frame_len);
-				if (unlikely(!skb)) {
-					if (net_ratelimit())
-						dev_warn(priv->device,
-							 "packet dropped\n");
-					priv->dev->stats.rx_dropped++;
-					continue;
-				}
-
-				dma_sync_single_for_cpu(priv->device,
-							rx_q->rx_skbuff_dma
-							[entry], frame_len,
-							DMA_FROM_DEVICE);
-				skb_copy_to_linear_data(skb,
-							rx_q->
-							rx_skbuff[entry]->data,
-							frame_len);
-
-				skb_put(skb, frame_len);
-				dma_sync_single_for_device(priv->device,
-							   rx_q->rx_skbuff_dma
-							   [entry], frame_len,
-							   DMA_FROM_DEVICE);
-			} else {
-				skb = rx_q->rx_skbuff[entry];
-				if (unlikely(!skb)) {
-					if (net_ratelimit())
-						netdev_err(priv->dev,
-							   "%s: Inconsistent Rx chain\n",
-							   priv->dev->name);
-					priv->dev->stats.rx_dropped++;
-					continue;
-				}
-				prefetch(skb->data - NET_IP_ALIGN);
-				rx_q->rx_skbuff[entry] = NULL;
-				rx_q->rx_zeroc_thresh++;
-
-				skb_put(skb, frame_len);
-				dma_unmap_single(priv->device,
-						 rx_q->rx_skbuff_dma[entry],
-						 priv->dma_buf_sz,
-						 DMA_FROM_DEVICE);
+			skb = netdev_alloc_skb_ip_align(priv->dev, frame_len);
+			if (unlikely(!skb)) {
+				priv->dev->stats.rx_dropped++;
+				continue;
 			}
 
+			dma_sync_single_for_cpu(priv->device, buf->addr,
+						frame_len, DMA_FROM_DEVICE);
+			skb_copy_to_linear_data(skb, page_address(buf->page),
						frame_len);
+			skb_put(skb, frame_len);
+			dma_sync_single_for_device(priv->device, buf->addr,
+						   frame_len, DMA_FROM_DEVICE);
+
 			if (netif_msg_pktdata(priv)) {
 				netdev_dbg(priv->dev, "frame received (%dbytes)",
 					   frame_len);
@@ -3547,6 +3462,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 
 			napi_gro_receive(&ch->rx_napi, skb);
 
+			/* Data payload copied into SKB, page ready for recycle */
+			page_pool_recycle_direct(rx_q->page_pool, buf->page);
+			buf->page = NULL;
+
 			priv->dev->stats.rx_packets++;
 			priv->dev->stats.rx_bytes += frame_len;
 		}
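For reference, the page_pool lifecycle the new RX path depends on can be
condensed into the sketch below. Every call already appears in the diff
above; the stmmac_example_* wrappers are purely illustrative, and
DMA_RX_SIZE plus the stmmac_priv/stmmac_rx_buffer fields are assumed from
the driver. Error handling is trimmed.

#include <net/page_pool.h>

/* Condensed sketch of the per-RX-queue page_pool usage introduced above. */
static struct page_pool *stmmac_example_pool_create(struct stmmac_priv *priv)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages once, up front */
		.pool_size	= DMA_RX_SIZE,
		.order		= DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE),
		.nid		= dev_to_node(priv->device),
		.dev		= priv->device,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);		/* ERR_PTR() on failure */
}

static int stmmac_example_refill_one(struct page_pool *pool,
				     struct stmmac_rx_buffer *buf)
{
	buf->page = page_pool_dev_alloc_pages(pool);	/* page is already DMA mapped */
	if (!buf->page)
		return -ENOMEM;

	buf->addr = page_pool_get_dma_addr(buf->page);	/* address to put in the descriptor */
	return 0;
}

static void stmmac_example_consume_one(struct page_pool *pool,
				       struct stmmac_rx_buffer *buf)
{
	/* Payload was copied into an skb, so the page can go straight back. */
	page_pool_recycle_direct(pool, buf->page);
	buf->page = NULL;
}

static void stmmac_example_pool_destroy(struct page_pool *pool)
{
	page_pool_request_shutdown(pool);	/* flush in-flight pages */
	page_pool_free(pool);
}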