From patchwork Thu May 30 14:19:46 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107777
Subject: [PATCH v3 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs
From: laurentiu.tudor@nxp.com
To: netdev@vger.kernel.org, madalin.bucur@nxp.com, roy.pledge@nxp.com,
    camelia.groza@nxp.com, leoyang.li@nxp.com
Cc: Joakim.Tjernlund@infinera.com, davem@davemloft.net,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    Laurentiu Tudor
Date: Thu, 30 May 2019 17:19:46 +0300
Message-Id: <20190530141951.6704-2-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

The LIODN base registers are specific to PAMU-based NXP systems and are
reserved on SMMU-based ones. Don't access them if PAMU is not compiled in.

Signed-off-by: Laurentiu Tudor
---
 drivers/net/ethernet/freescale/fman/fman.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
index e80fedb27cee..cce6636b1763 100644
--- a/drivers/net/ethernet/freescale/fman/fman.c
+++ b/drivers/net/ethernet/freescale/fman/fman.c
@@ -634,6 +634,9 @@ static void set_port_liodn(struct fman *fman, u8 port_id,
 {
 	u32 tmp;
 
+	iowrite32be(liodn_ofst, &fman->bmi_regs->fmbm_spliodn[port_id - 1]);
+	if (!IS_ENABLED(CONFIG_FSL_PAMU))
+		return;
 	/* set LIODN base for this port */
 	tmp = ioread32be(&fman->dma_regs->fmdmplr[port_id / 2]);
 	if (port_id % 2) {
@@ -644,7 +647,6 @@ static void set_port_liodn(struct fman *fman, u8 port_id,
 		tmp |= liodn_base << DMA_LIODN_SHIFT;
 	}
 	iowrite32be(tmp, &fman->dma_regs->fmdmplr[port_id / 2]);
-	iowrite32be(liodn_ofst, &fman->bmi_regs->fmbm_spliodn[port_id - 1]);
 }
 
 static void enable_rams_ecc(struct fman_fpm_regs __iomem *fpm_rg)
@@ -1942,6 +1944,8 @@ static int fman_init(struct fman *fman)
 		fman->liodn_offset[i] =
 			ioread32be(&fman->bmi_regs->fmbm_spliodn[i - 1]);
+		if (!IS_ENABLED(CONFIG_FSL_PAMU))
+			continue;
 		liodn_base = ioread32be(&fman->dma_regs->fmdmplr[i / 2]);
 		if (i % 2) {
 			/* FMDM_PLR LSB holds LIODN base for odd ports */
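For context, a minimal sketch of the compile-time gating idiom the patch relies on. The register and config names are taken from the diff above; the stripped-down function body is illustrative only, not the driver's actual code:

	/* Sketch: IS_ENABLED(CONFIG_FSL_PAMU) folds to a constant 0/1 at
	 * compile time, so on SMMU-based (non-PAMU) kernels the compiler
	 * drops the FMDM_PLR (LIODN base) access entirely and only the
	 * per-port BMI LIODN offset register is ever touched.
	 */
	static void set_port_liodn_sketch(struct fman *fman, u8 port_id,
					  u8 liodn_base, u8 liodn_ofst)
	{
		iowrite32be(liodn_ofst, &fman->bmi_regs->fmbm_spliodn[port_id - 1]);

		if (!IS_ENABLED(CONFIG_FSL_PAMU))
			return;	/* FMDM_PLR is reserved here, don't touch it */

		/* ... PAMU-only LIODN base programming, as in the hunk above ... */
	}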
From patchwork Thu May 30 14:19:47 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107779
Subject: [PATCH v3 2/6] fsl/fman: add API to get the device behind a fman port
From: laurentiu.tudor@nxp.com
Date: Thu, 30 May 2019 17:19:47 +0300
Message-Id: <20190530141951.6704-3-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

Add an API that retrieves the 'struct device' that the specified FMan
port was probed against. The new API will be used in a subsequent IOMMU
enablement related patch.

Signed-off-by: Laurentiu Tudor
Acked-by: Madalin Bucur
---
 drivers/net/ethernet/freescale/fman/fman_port.c | 14 ++++++++++++++
 drivers/net/ethernet/freescale/fman/fman_port.h |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
index ee82ee1384eb..bd76c9730692 100644
--- a/drivers/net/ethernet/freescale/fman/fman_port.c
+++ b/drivers/net/ethernet/freescale/fman/fman_port.c
@@ -1728,6 +1728,20 @@ u32 fman_port_get_qman_channel_id(struct fman_port *port)
 }
 EXPORT_SYMBOL(fman_port_get_qman_channel_id);
 
+/**
+ * fman_port_get_device
+ * port: Pointer to the FMan port device
+ *
+ * Get the 'struct device' associated to the specified FMan port device
+ *
+ * Return: pointer to associated 'struct device'
+ */
+struct device *fman_port_get_device(struct fman_port *port)
+{
+	return port->dev;
+}
+EXPORT_SYMBOL(fman_port_get_device);
+
 int fman_port_get_hash_result_offset(struct fman_port *port, u32 *offset)
 {
 	if (port->buffer_offsets.hash_result_offset == ILLEGAL_BASE)
diff --git a/drivers/net/ethernet/freescale/fman/fman_port.h b/drivers/net/ethernet/freescale/fman/fman_port.h
index 9dbb69f40121..82f12661a46d 100644
--- a/drivers/net/ethernet/freescale/fman/fman_port.h
+++ b/drivers/net/ethernet/freescale/fman/fman_port.h
@@ -157,4 +157,6 @@ int fman_port_get_tstamp(struct fman_port *port, const void *data, u64 *tstamp);
 
 struct fman_port *fman_port_bind(struct device *dev);
 
+struct device *fman_port_get_device(struct fman_port *port);
+
 #endif /* __FMAN_PORT_H */
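A short usage sketch for the new accessor, assuming a caller that already holds a struct mac_device with an FMan port array (as the dpaa_eth driver does); the helper name and error handling are illustrative, this is essentially what a later patch in the series does in dpaa_eth_probe():

	#include <linux/dma-mapping.h>
	#include "fman_port.h"

	/* Resolve the device that actually masters DMA on the RX path and
	 * use it for DMA API configuration instead of the platform parent.
	 */
	static int dpaa_setup_dma_dev_sketch(struct mac_device *mac_dev)
	{
		struct device *dma_dev = fman_port_get_device(mac_dev->port[RX]);

		return dma_coerce_mask_and_coherent(dma_dev, DMA_BIT_MASK(40));
	}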
From patchwork Thu May 30 14:19:48 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107778
Subject: [PATCH v3 3/6] dpaa_eth: defer probing after qbman
From: laurentiu.tudor@nxp.com
Date: Thu, 30 May 2019 17:19:48 +0300
Message-Id: <20190530141951.6704-4-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

Enabling the SMMU altered the order of device probing, causing the dpaa1
ethernet driver to get probed before qbman and leading to a boot crash.
Make the probing order predictable by deferring the ethernet driver probe
until after qbman and its portals, using the recently introduced qbman
APIs.

Signed-off-by: Laurentiu Tudor
Acked-by: Madalin Bucur
---
 .../net/ethernet/freescale/dpaa/dpaa_eth.c | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index d3f2408dc9e8..975f307f0caa 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2774,6 +2774,37 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	int err = 0, i, channel;
 	struct device *dev;
 
+	err = bman_is_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev, "failing probe due to bman probe error\n");
+		return -ENODEV;
+	}
+	err = qman_is_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev, "failing probe due to qman probe error\n");
+		return -ENODEV;
+	}
+	err = bman_portals_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev,
+			"failing probe due to bman portals probe error\n");
+		return -ENODEV;
+	}
+	err = qman_portals_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev,
+			"failing probe due to qman portals probe error\n");
+		return -ENODEV;
+	}
+
 	/* device used for DMA mapping */
 	dev = pdev->dev.parent;
 	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
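The four checks above all follow the same QBMan convention: a positive return means the dependency probed successfully, zero means it has not probed yet (so defer), and a negative value means its probe failed. A hedged sketch of an equivalent consolidated helper, not part of the patch:

	#include <linux/kernel.h>
	#include <linux/device.h>
	#include <soc/fsl/bman.h>
	#include <soc/fsl/qman.h>

	static int dpaa_qbman_ready_sketch(struct device *dev)
	{
		static int (* const checks[])(void) = {
			bman_is_probed, qman_is_probed,
			bman_portals_probed, qman_portals_probed,
		};
		int i, err;

		for (i = 0; i < ARRAY_SIZE(checks); i++) {
			err = checks[i]();
			if (!err)
				return -EPROBE_DEFER;	/* dependency not probed yet */
			if (err < 0) {
				dev_err(dev, "QBMan dependency failed to probe\n");
				return -ENODEV;		/* dependency probe failed */
			}
		}

		return 0;	/* QBMan devices and portals are all up */
	}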
From patchwork Thu May 30 14:19:49 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107776
Subject: [PATCH v3 4/6] dpaa_eth: base dma mappings on the fman rx port
From: laurentiu.tudor@nxp.com
Date: Thu, 30 May 2019 17:19:49 +0300
Message-Id: <20190530141951.6704-5-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

The initiator of the DMA transactions is the RX FMan port, so that's the
device that the DMA mappings should be done against. Previously the
mappings were done through the MAC device, which makes no sense because
it is neither DMA-capable nor connected in any way to the SMMU.

Signed-off-by: Laurentiu Tudor
Acked-by: Madalin Bucur
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 975f307f0caa..f54b0cd0d175 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2805,8 +2805,15 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}
 
+	mac_dev = dpaa_mac_dev_get(pdev);
+	if (IS_ERR(mac_dev)) {
+		dev_err(&pdev->dev, "dpaa_mac_dev_get() failed\n");
+		err = PTR_ERR(mac_dev);
+		goto probe_err;
+	}
+
 	/* device used for DMA mapping */
-	dev = pdev->dev.parent;
+	dev = fman_port_get_device(mac_dev->port[RX]);
 	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
 	if (err) {
 		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
@@ -2831,13 +2838,6 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 
 	priv->msg_enable = netif_msg_init(debug, DPAA_MSG_DEFAULT);
 
-	mac_dev = dpaa_mac_dev_get(pdev);
-	if (IS_ERR(mac_dev)) {
-		dev_err(dev, "dpaa_mac_dev_get() failed\n");
-		err = PTR_ERR(mac_dev);
-		goto free_netdev;
-	}
-
 	/* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
 	 * we choose conservatively and let the user explicitly set a higher
 	 * MTU via ifconfig. Otherwise, the user may end up with different MTUs
@@ -2973,9 +2973,9 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	qman_release_cgrid(priv->cgr_data.cgr.cgrid);
 free_dpaa_bps:
 	dpaa_bps_free(priv);
-free_netdev:
 	dev_set_drvdata(dev, NULL);
 	free_netdev(net_dev);
+probe_err:
 	return err;
 }
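Illustrative only: what the change means for every later DMA API call in the driver. Once 'dev' points at the RX FMan port (the bus master actually attached to the SMMU), mappings land in that device's IOMMU context rather than the MAC's. The function and buffer names below are made up for the sketch:

	#include <linux/dma-mapping.h>

	/* 'rx_port_dev' is assumed to be the pointer obtained via
	 * fman_port_get_device(mac_dev->port[RX]) in the probe path above.
	 */
	static dma_addr_t dpaa_map_rx_buf_sketch(struct device *rx_port_dev,
						 void *buf, size_t size)
	{
		dma_addr_t addr;

		/* The mapping (and hence the SMMU translation entry) is
		 * created for the RX port's stream, which is what the
		 * hardware uses when it writes the frame.
		 */
		addr = dma_map_single(rx_port_dev, buf, size, DMA_FROM_DEVICE);
		if (dma_mapping_error(rx_port_dev, addr))
			return DMA_MAPPING_ERROR;

		return addr;
	}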
From patchwork Thu May 30 14:19:50 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107774
Subject: [PATCH v3 5/6] dpaa_eth: fix iova handling for contiguous frames
From: laurentiu.tudor@nxp.com
Date: Thu, 30 May 2019 17:19:50 +0300
Message-Id: <20190530141951.6704-6-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

The driver relies on the no longer valid assumption that dma addresses
(iovas) are identical to physical addresses and uses phys_to_virt() to
make iova -> vaddr conversions. Fix this by adding a function that does
proper iova -> phys conversions using the iommu api and update the code
to use it. Also, a dma_unmap_single() call had to be moved further down
the code because iova -> vaddr conversions were required before the
unmap. For now only the contiguous frame case is handled; the S/G case
is split out into a following patch.

While at it, clean up a redundant dpaa_bpid2pool() and pass the bp as
parameter.

Signed-off-by: Laurentiu Tudor
Acked-by: Madalin Bucur
---
 .../net/ethernet/freescale/dpaa/dpaa_eth.c | 42 ++++++++++---------
 .../net/ethernet/freescale/dpaa/dpaa_eth.h |  2 +
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index f54b0cd0d175..46194a04617a 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1595,6 +1596,12 @@ static int dpaa_eth_refill_bpools(struct dpaa_priv *priv)
 	return 0;
 }
 
+static phys_addr_t dpaa_iova_to_phys(const struct dpaa_priv *priv,
+				     dma_addr_t addr)
+{
+	return priv->domain ? iommu_iova_to_phys(priv->domain, addr) : addr;
+}
+
 /* Cleanup function for outgoing frame descriptors that were built on Tx path,
  * either contiguous frames or scatter/gather ones.
  * Skb freeing is not handled here.
@@ -1617,7 +1624,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
 	int nr_frags, i;
 	u64 ns;
 
-	skbh = (struct sk_buff **)phys_to_virt(addr);
+	skbh = (struct sk_buff **)phys_to_virt(dpaa_iova_to_phys(priv, addr));
 	skb = *skbh;
 
 	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
@@ -1687,25 +1694,21 @@ static u8 rx_csum_offload(const struct dpaa_priv *priv, const struct qm_fd *fd)
  * accommodate the shared info area of the skb.
  */
 static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv,
-					const struct qm_fd *fd)
+					const struct qm_fd *fd,
+					struct dpaa_bp *dpaa_bp,
+					void *vaddr)
 {
 	ssize_t fd_off = qm_fd_get_offset(fd);
-	dma_addr_t addr = qm_fd_addr(fd);
-	struct dpaa_bp *dpaa_bp;
 	struct sk_buff *skb;
-	void *vaddr;
 
-	vaddr = phys_to_virt(addr);
 	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
 
-	dpaa_bp = dpaa_bpid2pool(fd->bpid);
-	if (!dpaa_bp)
-		goto free_buffer;
-
 	skb = build_skb(vaddr, dpaa_bp->size +
 			SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
-	if (WARN_ONCE(!skb, "Build skb failure on Rx\n"))
-		goto free_buffer;
+	if (WARN_ONCE(!skb, "Build skb failure on Rx\n")) {
+		skb_free_frag(vaddr);
+		return NULL;
+	}
 	WARN_ON(fd_off != priv->rx_headroom);
 	skb_reserve(skb, fd_off);
 	skb_put(skb, qm_fd_get_length(fd));
@@ -1713,10 +1716,6 @@ static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv,
 	skb->ip_summed = rx_csum_offload(priv, fd);
 
 	return skb;
-
-free_buffer:
-	skb_free_frag(vaddr);
-	return NULL;
 }
 
 /* Build an skb with the data of the first S/G entry in the linear portion and
@@ -2309,12 +2308,12 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 	if (!dpaa_bp)
 		return qman_cb_dqrr_consume;
 
-	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
-
 	/* prefetch the first 64 bytes of the frame or the SGT start */
-	vaddr = phys_to_virt(addr);
+	vaddr = phys_to_virt(dpaa_iova_to_phys(priv, addr));
 	prefetch(vaddr + qm_fd_get_offset(fd));
 
+	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
+
 	/* The only FD types that we may receive are contig and S/G */
 	WARN_ON((fd_format != qm_fd_contig) && (fd_format != qm_fd_sg));
 
@@ -2325,7 +2324,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 	(*count_ptr)--;
 
 	if (likely(fd_format == qm_fd_contig))
-		skb = contig_fd_to_skb(priv, fd);
+		skb = contig_fd_to_skb(priv, fd, dpaa_bp, vaddr);
 	else
 		skb = sg_fd_to_skb(priv, fd);
 	if (!skb)
@@ -2836,6 +2835,9 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	priv = netdev_priv(net_dev);
 	priv->net_dev = net_dev;
 
+	/* cache iommu domain */
+	priv->domain = iommu_get_domain_for_dev(dev);
+
 	priv->msg_enable = netif_msg_init(debug, DPAA_MSG_DEFAULT);
 
 	/* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index af320f83c742..1548cb67b448 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -185,6 +185,8 @@ struct dpaa_priv {
 
 	bool tx_tstamp; /* Tx timestamping enabled */
 	bool rx_tstamp; /* Rx timestamping enabled */
+
+	struct iommu_domain *domain;
 };
 
 /* from dpaa_ethtool.c */
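The conversion pattern introduced above, shown in isolation. dpaa_iova_to_phys() is the helper added by this patch; the wrapper name below is made up for illustration:

	#include <linux/iommu.h>

	/* With an SMMU enabled, the address carried in the frame descriptor
	 * is an IOVA, so it must be walked back to a physical address
	 * through the cached iommu_domain before phys_to_virt() can be
	 * used. Without an IOMMU domain the IOVA equals the physical
	 * address and the helper falls through unchanged. Note the lookup
	 * must happen before dma_unmap_single() removes the IOVA from the
	 * domain.
	 */
	static void *dpaa_fd_vaddr_sketch(const struct dpaa_priv *priv,
					  const struct qm_fd *fd)
	{
		dma_addr_t iova = qm_fd_addr(fd);

		return phys_to_virt(dpaa_iova_to_phys(priv, iova));
	}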
From patchwork Thu May 30 14:19:51 2019
X-Patchwork-Submitter: Laurentiu Tudor
X-Patchwork-Id: 1107775
Subject: [PATCH v3 6/6] dpaa_eth: fix iova handling for sg frames
From: laurentiu.tudor@nxp.com
Date: Thu, 30 May 2019 17:19:51 +0300
Message-Id: <20190530141951.6704-7-laurentiu.tudor@nxp.com>

From: Laurentiu Tudor

The driver relies on the no longer valid assumption that dma addresses
(iovas) are identical to physical addresses and uses phys_to_virt() to
make iova -> vaddr conversions. Fix this also for scatter-gather frames
using the iova -> phys conversion function added in the previous patch.

While at it, clean up a redundant dpaa_bpid2pool() and pass the bp as
parameter.

Signed-off-by: Laurentiu Tudor
Acked-by: Madalin Bucur
---
 .../net/ethernet/freescale/dpaa/dpaa_eth.c | 40 +++++++++++--------
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 46194a04617a..7d978a93dffd 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -1641,14 +1641,17 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
 	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
 		nr_frags = skb_shinfo(skb)->nr_frags;
 
-		dma_unmap_single(dev, addr,
-				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
-				 dma_dir);
-
 		/* The sgt buffer has been allocated with netdev_alloc_frag(),
 		 * it's from lowmem.
 		 */
-		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
+		sgt = phys_to_virt(dpaa_iova_to_phys(priv,
+						     addr +
+						     qm_fd_get_offset(fd)));
+
+		dma_unmap_single(dev, addr,
+				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
+				 dma_dir);
 
 		/* sgt[0] is from lowmem, was dma_map_single()-ed */
 		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
@@ -1663,7 +1666,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
 		}
 
 		/* Free the page frag that we allocated on Tx */
-		skb_free_frag(phys_to_virt(addr));
+		skb_free_frag(skbh);
 	} else {
 		dma_unmap_single(dev, addr,
 				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
@@ -1724,14 +1727,14 @@ static struct sk_buff *contig_fd_to_skb(const struct dpaa_priv *priv,
  * The page fragment holding the S/G Table is recycled here.
  */
 static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
-				    const struct qm_fd *fd)
+				    const struct qm_fd *fd,
+				    struct dpaa_bp *dpaa_bp,
+				    void *vaddr)
 {
 	ssize_t fd_off = qm_fd_get_offset(fd);
-	dma_addr_t addr = qm_fd_addr(fd);
 	const struct qm_sg_entry *sgt;
 	struct page *page, *head_page;
-	struct dpaa_bp *dpaa_bp;
-	void *vaddr, *sg_vaddr;
+	void *sg_vaddr;
 	int frag_off, frag_len;
 	struct sk_buff *skb;
 	dma_addr_t sg_addr;
@@ -1740,7 +1743,6 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
 	int *count_ptr;
 	int i;
 
-	vaddr = phys_to_virt(addr);
 	WARN_ON(!IS_ALIGNED((unsigned long)vaddr, SMP_CACHE_BYTES));
 
 	/* Iterate through the SGT entries and add data buffers to the skb */
@@ -1751,14 +1753,17 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
 		WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
 
 		sg_addr = qm_sg_addr(&sgt[i]);
-		sg_vaddr = phys_to_virt(sg_addr);
-		WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
-				    SMP_CACHE_BYTES));
 
 		/* We may use multiple Rx pools */
 		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
-		if (!dpaa_bp)
+		if (!dpaa_bp) {
+			pr_info("%s: fail to get dpaa_bp for sg bpid %d\n",
+				__func__, sgt[i].bpid);
 			goto free_buffers;
+		}
+		sg_vaddr = phys_to_virt(dpaa_iova_to_phys(priv, sg_addr));
+		WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
+				    SMP_CACHE_BYTES));
 
 		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
 		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
@@ -1830,10 +1835,11 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
 	/* free all the SG entries */
 	for (i = 0; i < DPAA_SGT_MAX_ENTRIES ; i++) {
 		sg_addr = qm_sg_addr(&sgt[i]);
-		sg_vaddr = phys_to_virt(sg_addr);
-		skb_free_frag(sg_vaddr);
 		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
 		if (dpaa_bp) {
+			sg_addr = dpaa_iova_to_phys(priv, sg_addr);
+			sg_vaddr = phys_to_virt(sg_addr);
+			skb_free_frag(sg_vaddr);
 			count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
 			(*count_ptr)--;
 		}
@@ -2326,7 +2332,7 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 	if (likely(fd_format == qm_fd_contig))
 		skb = contig_fd_to_skb(priv, fd, dpaa_bp, vaddr);
 	else
-		skb = sg_fd_to_skb(priv, fd);
+		skb = sg_fd_to_skb(priv, fd, dpaa_bp, vaddr);
 	if (!skb)
 		return qman_cb_dqrr_consume;
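As in the contiguous case, the ordering matters: the IOVA-to-physical lookup has to happen while the IOMMU mapping still exists. A trimmed sketch of the S/G cleanup ordering the hunks above implement (variable names as in the driver):

	/* 1. Translate the SGT's IOVA while the mapping is still live ... */
	sgt = phys_to_virt(dpaa_iova_to_phys(priv, addr + qm_fd_get_offset(fd)));

	/* 2. ... only then tear the mapping down; doing the unmap first
	 *    would leave iommu_iova_to_phys() with nothing to resolve.
	 */
	dma_unmap_single(dev, addr, qm_fd_get_offset(fd) + DPAA_SGT_SIZE, dma_dir);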