From patchwork Wed Dec 12 11:13:21 2018
X-Patchwork-Submitter: Gustavo Pimentel
X-Patchwork-Id: 1011831
X-Patchwork-Delegate: lorenzo.pieralisi@arm.com
From: Gustavo Pimentel
To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org
Cc: Gustavo Pimentel, Vinod Koul, Eugeniy Paltsev, Andy Shevchenko, Joao Pinto
Subject: [RFC 1/6] dma: Add Synopsys eDMA IP core driver
Date: Wed, 12 Dec 2018 12:13:21 +0100

Add the Synopsys eDMA IP core driver to the kernel. This core driver initializes and configures the eDMA IP using the virt-dma helpers and the dma-engine subsystem. It also creates an abstraction layer through callbacks, allowing different register mappings (organized into versions) to be added in the future.

This driver can be compiled as a built-in or as an external kernel module. To enable it, select the DW_EDMA option in the kernel configuration; it requires and automatically selects the DMA_ENGINE and DMA_VIRTUAL_CHANNELS options.
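For reference, below is a minimal sketch of how a platform/PCIe glue driver might hand the hardware over to this core driver. Only struct dw_edma_chip and dw_edma_probe()/dw_edma_remove() from include/linux/dma/edma.h, and the struct dw_edma fields from dw-edma-core.h, come from this patch; the function name, its parameters, and the way the register space, linked-list memory and MSI address are obtained are assumptions for illustration, not part of the series.

	#include <linux/device.h>
	#include <linux/slab.h>
	#include <linux/dma/edma.h>

	#include "dw-edma-core.h"	/* struct dw_edma, EDMA_MODE_* (private header) */

	/*
	 * Hypothetical glue-driver probe. In a real driver, regs/ll_virt/ll_phys
	 * would come from BAR mappings and msi_addr/msi_data from the endpoint's
	 * MSI(-X) configuration.
	 */
	static int my_edma_probe(struct device *dev, int irq,
				 void __iomem *regs,
				 void __iomem *ll_virt, resource_size_t ll_phys,
				 size_t ll_size)
	{
		struct dw_edma_chip *chip;
		struct dw_edma *dw;

		chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
		dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
		if (!chip || !dw)
			return -ENOMEM;

		/* Register space and linked-list memory found by the glue driver */
		dw->regs = regs;
		dw->va_ll = ll_virt;
		dw->pa_ll = ll_phys;
		dw->ll_sz = ll_size;

		/* Register map flavour; version 0 only becomes valid with patch 2/6 */
		dw->version = 0;
		dw->mode = EDMA_MODE_UNROLL;

		/* Where the eDMA writes its done/abort IMWr TLPs (assumed values) */
		dw->msi_addr = 0;
		dw->msi_data = 0;

		chip->dev = dev;
		chip->id = 0;
		chip->irq = irq;
		chip->dw = dw;

		return dw_edma_probe(chip);
	}

With patch 1/6 alone, dw_edma_probe() rejects every version (the switch only has a default case); the sketch is only meant to show which fields the glue driver is expected to fill in.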
Signed-off-by: Gustavo Pimentel Cc: Vinod Koul Cc: Eugeniy Paltsev Cc: Andy Shevchenko Cc: Joao Pinto --- drivers/dma/Kconfig | 2 + drivers/dma/Makefile | 1 + drivers/dma/dw-edma/Kconfig | 9 + drivers/dma/dw-edma/Makefile | 4 + drivers/dma/dw-edma/dw-edma-core.c | 925 +++++++++++++++++++++++++++++++++++++ drivers/dma/dw-edma/dw-edma-core.h | 145 ++++++ include/linux/dma/edma.h | 42 ++ 7 files changed, 1128 insertions(+) create mode 100644 drivers/dma/dw-edma/Kconfig create mode 100644 drivers/dma/dw-edma/Makefile create mode 100644 drivers/dma/dw-edma/dw-edma-core.c create mode 100644 drivers/dma/dw-edma/dw-edma-core.h create mode 100644 include/linux/dma/edma.h diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index de511db..40517f8 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -640,6 +640,8 @@ source "drivers/dma/qcom/Kconfig" source "drivers/dma/dw/Kconfig" +source "drivers/dma/dw-edma/Kconfig" + source "drivers/dma/hsu/Kconfig" source "drivers/dma/sh/Kconfig" diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile index 7fcc4d8..3ebfab0 100644 --- a/drivers/dma/Makefile +++ b/drivers/dma/Makefile @@ -29,6 +29,7 @@ obj-$(CONFIG_DMA_SUN4I) += sun4i-dma.o obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o obj-$(CONFIG_DW_AXI_DMAC) += dw-axi-dmac/ obj-$(CONFIG_DW_DMAC_CORE) += dw/ +obj-$(CONFIG_DW_EDMA) += dw-edma/ obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o obj-$(CONFIG_FSL_DMA) += fsldma.o obj-$(CONFIG_FSL_EDMA) += fsl-edma.o fsl-edma-common.o diff --git a/drivers/dma/dw-edma/Kconfig b/drivers/dma/dw-edma/Kconfig new file mode 100644 index 0000000..3016bed --- /dev/null +++ b/drivers/dma/dw-edma/Kconfig @@ -0,0 +1,9 @@ +# SPDX-License-Identifier: GPL-2.0 + +config DW_EDMA + tristate "Synopsys DesignWare eDMA controller driver" + select DMA_ENGINE + select DMA_VIRTUAL_CHANNELS + help + Support the Synopsys DesignWare eDMA controller, normally + implemented on endpoints SoCs. diff --git a/drivers/dma/dw-edma/Makefile b/drivers/dma/dw-edma/Makefile new file mode 100644 index 0000000..3224010 --- /dev/null +++ b/drivers/dma/dw-edma/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_DW_EDMA) += dw-edma.o +dw-edma-objs := dw-edma-core.o diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c new file mode 100644 index 0000000..b4c2982 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -0,0 +1,925 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. 
+// Synopsys DesignWare eDMA core driver + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "dw-edma-core.h" +#include "../dmaengine.h" +#include "../virt-dma.h" + +#define DRV_CORE_NAME "dw-edma-core" + +#define SET(reg, name, val) \ + reg.name = val + +#define SET_BOTH_CH(name, value) \ + do { \ + SET(dw->wr_edma, name, value); \ + SET(dw->rd_edma, name, value); \ + } while (0) + +static inline +struct device *dchan2dev(struct dma_chan *dchan) +{ + return &dchan->dev->device; +} + +static inline +struct device *chan2dev(struct dw_edma_chan *chan) +{ + return &chan->vc.chan.dev->device; +} + +static inline +const struct dw_edma_core_ops *chan2ops(struct dw_edma_chan *chan) +{ + return chan->chip->dw->ops; +} + +static inline +struct dw_edma_desc *vd2dw_edma_desc(struct virt_dma_desc *vd) +{ + return container_of(vd, struct dw_edma_desc, vd); +} + +static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk) +{ + struct dw_edma_chan *chan = chunk->chan; + struct dw_edma_burst *burst; + + burst = kzalloc(sizeof(struct dw_edma_burst), GFP_NOWAIT); + if (unlikely(!burst)) { + dev_err(chan2dev(chan), ": fail to alloc new burst\n"); + return NULL; + } + + INIT_LIST_HEAD(&burst->list); + burst->sar = 0; + burst->dar = 0; + burst->sz = 0; + + if (chunk->burst) { + atomic_inc(&chunk->bursts_alloc); + dev_dbg(chan2dev(chan), ": alloc new burst element (%d)\n", + atomic_read(&chunk->bursts_alloc)); + list_add_tail(&burst->list, &chunk->burst->list); + } else { + atomic_set(&chunk->bursts_alloc, 0); + chunk->burst = burst; + dev_dbg(chan2dev(chan), ": alloc new burst head\n"); + } + + return burst; +} + +static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc) +{ + struct dw_edma_chan *chan = desc->chan; + struct dw_edma *dw = chan->chip->dw; + struct dw_edma_chunk *chunk; + + chunk = kzalloc(sizeof(struct dw_edma_chunk), GFP_NOWAIT); + if (unlikely(!chunk)) { + dev_err(chan2dev(chan), ": fail to alloc new chunk\n"); + return NULL; + } + + INIT_LIST_HEAD(&chunk->list); + chunk->chan = chan; + chunk->cb = !(atomic_read(&desc->chunks_alloc) % 2); + chunk->sz = 0; + chunk->p_addr = (dma_addr_t) (dw->pa_ll + chan->ll_off); + chunk->v_addr = (dma_addr_t) (dw->va_ll + chan->ll_off); + + if (desc->chunk) { + atomic_inc(&desc->chunks_alloc); + dev_dbg(chan2dev(chan), ": alloc new chunk element (%d)\n", + atomic_read(&desc->chunks_alloc)); + list_add_tail(&chunk->list, &desc->chunk->list); + dw_edma_alloc_burst(chunk); + } else { + chunk->burst = NULL; + atomic_set(&desc->chunks_alloc, 0); + desc->chunk = chunk; + dev_dbg(chan2dev(chan), ": alloc new chunk head\n"); + } + + return chunk; +} + +static struct dw_edma_desc *dw_edma_alloc_desc(struct dw_edma_chan *chan) +{ + struct dw_edma_desc *desc; + + dev_dbg(chan2dev(chan), ": alloc new descriptor\n"); + + desc = kzalloc(sizeof(struct dw_edma_desc), GFP_NOWAIT); + if (unlikely(!desc)) { + dev_err(chan2dev(chan), ": fail to alloc new descriptor\n"); + return NULL; + } + + desc->chan = chan; + desc->alloc_sz = 0; + desc->xfer_sz = 0; + desc->chunk = NULL; + dw_edma_alloc_chunk(desc); + + return desc; +} + +static void dw_edma_free_burst(struct dw_edma_chunk *chunk) +{ + struct dw_edma_burst *child, *_next; + + if (!chunk->burst) + return; + + // Remove all the list elements + list_for_each_entry_safe(child, _next, &chunk->burst->list, list) { + list_del(&child->list); + kfree(child); + atomic_dec(&chunk->bursts_alloc); + } + + // Remove the list head + kfree(child); + chunk->burst = 
NULL; +} + +static void dw_edma_free_chunk(struct dw_edma_desc *desc) +{ + struct dw_edma_chan *chan = desc->chan; + struct dw_edma_chunk *child, *_next; + + if (!desc->chunk) + return; + + // Remove all the list elements + list_for_each_entry_safe(child, _next, &desc->chunk->list, list) { + dw_edma_free_burst(child); + if (atomic_read(&child->bursts_alloc)) + dev_dbg(chan2dev(chan), + ": %d bursts still allocated\n", + atomic_read(&child->bursts_alloc)); + list_del(&child->list); + kfree(child); + atomic_dec(&desc->chunks_alloc); + } + + // Remove the list head + kfree(child); + desc->chunk = NULL; +} + +static void dw_edma_free_desc(struct dw_edma_desc *desc) +{ + struct dw_edma_chan *chan = desc->chan; + + dw_edma_free_chunk(desc); + if (atomic_read(&desc->chunks_alloc)) + dev_dbg(chan2dev(chan), + ": %d chunks still allocated\n", + atomic_read(&desc->chunks_alloc)); +} + +static void vchan_free_desc(struct virt_dma_desc *vdesc) +{ + dw_edma_free_desc(vd2dw_edma_desc(vdesc)); +} + +static void start_transfer(struct dw_edma_chan *chan) +{ + struct virt_dma_desc *vd; + struct dw_edma_desc *desc; + struct dw_edma_chunk *child, *_next; + const struct dw_edma_core_ops *ops = chan2ops(chan); + + vd = vchan_next_desc(&chan->vc); + if (!vd) + return; + + desc = vd2dw_edma_desc(vd); + if (!desc) + return; + + list_for_each_entry_safe(child, _next, &desc->chunk->list, list) { + ops->start(child, !desc->xfer_sz); + desc->xfer_sz += child->sz; + dev_dbg(chan2dev(chan), + ": transfer of %u bytes started\n", child->sz); + + dw_edma_free_burst(child); + if (atomic_read(&child->bursts_alloc)) + dev_dbg(chan2dev(chan), + ": %d bursts still allocated\n", + atomic_read(&child->bursts_alloc)); + list_del(&child->list); + kfree(child); + atomic_dec(&desc->chunks_alloc); + + return; + } +} + +static int dw_edma_device_config(struct dma_chan *dchan, + struct dma_slave_config *config) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + const struct dw_edma_core_ops *ops = chan2ops(chan); + enum dma_transfer_direction dir; + unsigned long flags; + int err = 0; + + spin_lock_irqsave(&chan->vc.lock, flags); + + if (!config) { + err = -EINVAL; + goto err_config; + } + + if (chan->configured) { + dev_err(chan2dev(chan), ": channel already configured\n"); + err = -EPERM; + goto err_config; + } + + dir = config->direction; + if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE) { + dev_info(chan2dev(chan), + ": direction DMA_DEV_TO_MEM (EDMA_DIR_WRITE)\n"); + chan->p_addr = config->src_addr; + } else if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_READ) { + dev_info(chan2dev(chan), + ": direction DMA_MEM_TO_DEV (EDMA_DIR_READ)\n"); + chan->p_addr = config->dst_addr; + } else { + dev_err(chan2dev(chan), ": invalid direction\n"); + err = -EINVAL; + goto err_config; + } + + dev_info(chan2dev(chan), + ": src_addr(physical) = 0x%.16x\n", config->src_addr); + dev_info(chan2dev(chan), + ": dst_addr(physical) = 0x%.16x\n", config->dst_addr); + + err = ops->device_config(dchan); + if (!err) { + chan->configured = true; + dev_dbg(chan2dev(chan), ": channel configured\n"); + } + +err_config: + spin_unlock_irqrestore(&chan->vc.lock, flags); + return err; +} + +static int dw_edma_device_pause(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + unsigned long flags; + int err = 0; + + spin_lock_irqsave(&chan->vc.lock, flags); + + if (!chan->configured) { + dev_err(dchan2dev(dchan), ": channel not configured\n"); + err = -EPERM; + goto err_pause; + } + + switch (chan->status) { + 
case EDMA_ST_IDLE: + dev_err(dchan2dev(dchan), ": channel is idle\n"); + err = -EPERM; + goto err_pause; + case EDMA_ST_PAUSE: + dev_err(dchan2dev(dchan), ": channel is already paused\n"); + err = -EPERM; + goto err_pause; + case EDMA_ST_BUSY: + // Only acceptable state + break; + default: + dev_err(dchan2dev(dchan), ": invalid status state\n"); + err = -EINVAL; + goto err_pause; + } + + switch (chan->request) { + case EDMA_REQ_NONE: + chan->request = EDMA_REQ_PAUSE; + dev_dbg(dchan2dev(dchan), ": pause requested\n"); + break; + case EDMA_REQ_STOP: + dev_err(dchan2dev(dchan), + ": there is already an ongoing request to stop, any pause request is overridden\n"); + err = -EPERM; + break; + case EDMA_REQ_PAUSE: + dev_err(dchan2dev(dchan), + ": there is already an ongoing request to pause\n"); + err = -EBUSY; + break; + default: + dev_err(dchan2dev(dchan), ": invalid request state\n"); + err = -EINVAL; + break; + } + +err_pause: + spin_unlock_irqrestore(&chan->vc.lock, flags); + return err; +} + +static int dw_edma_device_resume(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + unsigned long flags; + int err = 0; + + spin_lock_irqsave(&chan->vc.lock, flags); + + if (!chan->configured) { + dev_err(dchan2dev(dchan), ": channel not configured\n"); + err = -EPERM; + goto err_resume; + } + + switch (chan->status) { + case EDMA_ST_IDLE: + dev_err(dchan2dev(dchan), ": channel is idle\n"); + err = -EPERM; + goto err_resume; + case EDMA_ST_PAUSE: + // Only acceptable state + break; + case EDMA_ST_BUSY: + dev_err(dchan2dev(dchan), ": channel is not paused\n"); + err = -EPERM; + goto err_resume; + default: + dev_err(dchan2dev(dchan), ": invalid status state\n"); + err = -EINVAL; + goto err_resume; + } + + switch (chan->request) { + case EDMA_REQ_NONE: + chan->status = EDMA_ST_BUSY; + dev_dbg(dchan2dev(dchan), ": transfer resumed\n"); + start_transfer(chan); + break; + case EDMA_REQ_STOP: + dev_err(dchan2dev(dchan), + ": there is already an ongoing request to stop, any resume request is overridden\n"); + err = -EPERM; + break; + case EDMA_REQ_PAUSE: + dev_err(dchan2dev(dchan), + ": there is an ongoing request to pause, this request will be aborted\n"); + chan->request = EDMA_REQ_NONE; + err = -EPERM; + break; + default: + dev_err(dchan2dev(dchan), ": invalid request state\n"); + err = -EINVAL; + break; + } + +err_resume: + spin_unlock_irqrestore(&chan->vc.lock, flags); + return err; +} + +static int dw_edma_device_terminate_all(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + unsigned long flags; + int err = 0; + LIST_HEAD(head); + + spin_lock_irqsave(&chan->vc.lock, flags); + + if (!chan->configured) { + dev_err(dchan2dev(dchan), ": channel not configured\n"); + err = -EPERM; + goto err_terminate; + } + + switch (chan->status) { + case EDMA_ST_IDLE: + dev_err(dchan2dev(dchan), ": channel is idle\n"); + err = -EPERM; + goto err_terminate; + case EDMA_ST_PAUSE: + dev_dbg(dchan2dev(dchan), + ": channel is paused, stopping immediately\n"); + vchan_get_all_descriptors(&chan->vc, &head); + vchan_dma_desc_free_list(&chan->vc, &head); + chan->status = EDMA_ST_IDLE; + goto err_terminate; + case EDMA_ST_BUSY: + // Only acceptable state + break; + default: + dev_err(dchan2dev(dchan), ": invalid status state\n"); + err = -EINVAL; + goto err_terminate; + } + + switch (chan->request) { + case EDMA_REQ_NONE: + chan->request = EDMA_REQ_STOP; + dev_dbg(dchan2dev(dchan), ": termination requested\n"); + break; + case EDMA_REQ_STOP: + 
dev_dbg(dchan2dev(dchan), + ": there is already an ongoing request to stop\n"); + err = -EBUSY; + break; + case EDMA_REQ_PAUSE: + dev_dbg(dchan2dev(dchan), + ": there is an ongoing request to pause, this request overridden by stop request\n"); + chan->request = EDMA_REQ_STOP; + err = -EPERM; + break; + default: + dev_err(dchan2dev(dchan), ": invalid request state\n"); + err = -EINVAL; + break; + } + +err_terminate: + spin_unlock_irqrestore(&chan->vc.lock, flags); + return err; +} + +static void dw_edma_device_issue_pending(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + unsigned long flags; + + spin_lock_irqsave(&chan->vc.lock, flags); + + if (chan->configured && chan->request == EDMA_REQ_NONE && + chan->status == EDMA_ST_IDLE && vchan_issue_pending(&chan->vc)) { + dev_dbg(dchan2dev(dchan), ": transfer issued\n"); + chan->status = EDMA_ST_BUSY; + start_transfer(chan); + } + + spin_unlock_irqrestore(&chan->vc.lock, flags); +} + +static enum dma_status +dw_edma_device_tx_status(struct dma_chan *dchan, dma_cookie_t cookie, + struct dma_tx_state *txstate) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + const struct dw_edma_core_ops *ops = chan2ops(chan); + unsigned long flags; + enum dma_status ret; + + spin_lock_irqsave(&chan->vc.lock, flags); + + ret = ops->ch_status(chan); + if (ret == DMA_ERROR) { + goto ret_status; + } else if (ret == DMA_IN_PROGRESS) { + chan->status = EDMA_ST_BUSY; + goto ret_status; + } else { + // DMA_COMPLETE + if (chan->status == EDMA_ST_PAUSE) + ret = DMA_PAUSED; + else if (chan->status == EDMA_ST_BUSY) + ret = DMA_IN_PROGRESS; + else + ret = DMA_COMPLETE; + } + +ret_status: + spin_unlock_irqrestore(&chan->vc.lock, flags); + dma_set_residue(txstate, 0); + + return ret; +} + +static struct dma_async_tx_descriptor * +dw_edma_device_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl, + unsigned int sg_len, enum dma_transfer_direction direction, + unsigned long flags, void *context) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + struct dw_edma_desc *desc; + struct dw_edma_chunk *chunk; + struct dw_edma_burst *burst; + struct scatterlist *sg; + dma_addr_t dev_addr = chan->p_addr; + int i; + + if (sg_len < 1) { + dev_err(chan2dev(chan), ": invalid sg length %u\n", sg_len); + return NULL; + } + + if (direction == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE) { + dev_dbg(chan2dev(chan), ": prepare operation (WRITE)\n"); + } else if (direction == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_READ) { + dev_dbg(chan2dev(chan), ": prepare operation (READ)\n"); + } else { + dev_err(chan2dev(chan), ": invalid direction\n"); + return NULL; + } + + if (!chan->configured) { + dev_err(dchan2dev(dchan), ": channel not configured\n"); + return NULL; + } + if (chan->status == EDMA_ST_BUSY) { + dev_err(chan2dev(chan), ": channel is busy or paused\n"); + return NULL; + } + + desc = dw_edma_alloc_desc(chan); + if (unlikely(!desc)) + return NULL; + + chunk = dw_edma_alloc_chunk(desc); + if (unlikely(!chunk)) + goto err_alloc; + + for_each_sg(sgl, sg, sg_len, i) { + if (atomic_read(&chunk->bursts_alloc) == chan->ll_max) { + chunk = dw_edma_alloc_chunk(desc); + if (unlikely(!chunk)) + goto err_alloc; + } + + burst = dw_edma_alloc_burst(chunk); + + if (unlikely(!burst)) + goto err_alloc; + + if (direction == DMA_MEM_TO_DEV) { + burst->sar = sg_dma_address(sg); + burst->dar = dev_addr; + } else { + burst->sar = dev_addr; + burst->dar = sg_dma_address(sg); + } + + burst->sz = sg_dma_len(sg); + chunk->sz += burst->sz; + 
desc->alloc_sz += burst->sz; + dev_addr += burst->sz; + + dev_dbg(chan2dev(chan), + "lli %u/%u, sar=0x%.16llx, dar=0x%.16llx, size=%u bytes\n", + i + 1, sg_len, + burst->sar, burst->dar, + burst->sz); + } + + return vchan_tx_prep(&chan->vc, &desc->vd, flags); + +err_alloc: + dw_edma_free_desc(desc); + return NULL; +} + +static void dw_edma_done_interrupt(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + const struct dw_edma_core_ops *ops = dw->ops; + struct virt_dma_desc *vd; + struct dw_edma_desc *desc; + unsigned long flags; + + ops->clear_done_int(chan); + dev_dbg(chan2dev(chan), ": clear done interrupt\n"); + + spin_lock_irqsave(&chan->vc.lock, flags); + vd = vchan_next_desc(&chan->vc); + switch (chan->request) { + case EDMA_REQ_NONE: + desc = vd2dw_edma_desc(vd); + if (atomic_read(&desc->chunks_alloc)) { + dev_dbg(chan2dev(chan), ": sub-transfer complete\n"); + chan->status = EDMA_ST_BUSY; + dev_dbg(chan2dev(chan), + ": transferred %u bytes\n", desc->xfer_sz); + start_transfer(chan); + } else { + list_del(&vd->node); + vchan_cookie_complete(vd); + chan->status = EDMA_ST_IDLE; + dev_dbg(chan2dev(chan), ": transfer complete\n"); + } + break; + case EDMA_REQ_STOP: + list_del(&vd->node); + vchan_cookie_complete(vd); + chan->request = EDMA_REQ_NONE; + chan->status = EDMA_ST_IDLE; + dev_dbg(chan2dev(chan), ": transfer stop\n"); + break; + case EDMA_REQ_PAUSE: + chan->request = EDMA_REQ_NONE; + chan->status = EDMA_ST_PAUSE; + break; + default: + dev_err(chan2dev(chan), ": invalid status state\n"); + break; + } + spin_unlock_irqrestore(&chan->vc.lock, flags); +} + +static void dw_edma_abort_interrupt(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + const struct dw_edma_core_ops *ops = dw->ops; + struct virt_dma_desc *vd; + unsigned long flags; + + ops->clear_abort_int(chan); + dev_dbg(chan2dev(chan), ": clear abort interrupt\n"); + + spin_lock_irqsave(&chan->vc.lock, flags); + vd = vchan_next_desc(&chan->vc); + list_del(&vd->node); + vchan_cookie_complete(vd); + chan->request = EDMA_REQ_NONE; + chan->status = EDMA_ST_IDLE; + + spin_unlock_irqrestore(&chan->vc.lock, flags); +} + +static irqreturn_t dw_edma_interrupt(int irq, void *data) +{ + struct dw_edma_chip *chip = data; + struct dw_edma *dw = chip->dw; + const struct dw_edma_core_ops *ops = dw->ops; + struct dw_edma_chan *chan; + u32 i; + + // Poll, clear and process every chanel interrupt status + for (i = 0; i < (dw->wr_ch_count + dw->rd_ch_count); i++) { + chan = &dw->chan[i]; + + if (ops->status_done_int(chan)) + dw_edma_done_interrupt(chan); + + if (ops->status_abort_int(chan)) + dw_edma_abort_interrupt(chan); + } + + return IRQ_HANDLED; +} + +static int dw_edma_alloc_chan_resources(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + + if (chan->status != EDMA_ST_IDLE) { + dev_err(chan2dev(chan), ": channel is busy\n"); + return -EBUSY; + } + + dev_dbg(dchan2dev(dchan), ": allocated\n"); + + pm_runtime_get(chan->chip->dev); + + return 0; +} + +static void dw_edma_free_chan_resources(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + unsigned long timeout = jiffies + msecs_to_jiffies(5000); + int ret; + + if (chan->status != EDMA_ST_IDLE) + dev_err(chan2dev(chan), ": channel is busy\n"); + + do { + ret = dw_edma_device_terminate_all(dchan); + if (!ret) + break; + + if (time_after_eq(jiffies, timeout)) { + dev_err(chan2dev(chan), ": timeout\n"); + return; + } + + cpu_relax(); + } while (1); + + dev_dbg(dchan2dev(dchan), ": 
freed\n"); + + pm_runtime_put(chan->chip->dev); +} + +int dw_edma_probe(struct dw_edma_chip *chip) +{ + struct dw_edma *dw = chip->dw; + const struct dw_edma_core_ops *ops; + size_t ll_chunk = dw->ll_sz; + int i, j, err; + + raw_spin_lock_init(&dw->lock); + + switch (dw->version) { + default: + dev_err(chip->dev, ": unsupported version\n"); + return -EPERM; + } + + pm_runtime_get_sync(chip->dev); + + dw->wr_ch_count = ops->ch_count(dw, WRITE); + if (!dw->wr_ch_count) { + dev_err(chip->dev, ": invalid number of write channels(0)\n"); + return -EINVAL; + } + + dw->rd_ch_count = ops->ch_count(dw, READ); + if (!dw->rd_ch_count) { + dev_err(chip->dev, ": invalid number of read channels(0)\n"); + return -EINVAL; + } + + dev_info(chip->dev, "Channels:\twrite=%d, read=%d\n", + dw->wr_ch_count, dw->rd_ch_count); + + dw->chan = devm_kcalloc(chip->dev, dw->wr_ch_count + dw->rd_ch_count, + sizeof(*dw->chan), GFP_KERNEL); + if (!dw->chan) + return -ENOMEM; + + ll_chunk /= roundup_pow_of_two(dw->wr_ch_count + dw->rd_ch_count); + + // Disable eDMA, only to establish the ideal initial conditions + ops->off(dw); + + snprintf(dw->name, sizeof(dw->name), "%s:%d", DRV_CORE_NAME, chip->id); + + err = devm_request_irq(chip->dev, chip->irq, dw_edma_interrupt, + IRQF_SHARED, dw->name, chip); + if (err) + return err; + + INIT_LIST_HEAD(&dw->wr_edma.channels); + for (i = 0; i < dw->wr_ch_count; i++) { + struct dw_edma_chan *chan = &dw->chan[i]; + + chan->chip = chip; + chan->id = i; + chan->dir = EDMA_DIR_WRITE; + chan->configured = false; + chan->request = EDMA_REQ_NONE; + chan->status = EDMA_ST_IDLE; + + chan->ll_off = (ll_chunk * i); + chan->ll_max = (ll_chunk / 24) - 1; + + chan->msi_done_addr = dw->msi_addr; + chan->msi_abort_addr = dw->msi_addr; + chan->msi_data = dw->msi_data; + + chan->vc.desc_free = vchan_free_desc; + vchan_init(&chan->vc, &dw->wr_edma); + } + dma_cap_set(DMA_SLAVE, dw->wr_edma.cap_mask); + dw->wr_edma.directions = BIT(DMA_MEM_TO_DEV); + dw->wr_edma.chancnt = dw->wr_ch_count; + + INIT_LIST_HEAD(&dw->rd_edma.channels); + for (j = 0; j < dw->rd_ch_count; j++, i++) { + struct dw_edma_chan *chan = &dw->chan[i]; + + chan->chip = chip; + chan->id = j; + chan->dir = EDMA_DIR_READ; + chan->request = EDMA_REQ_NONE; + chan->status = EDMA_ST_IDLE; + + chan->ll_off = (ll_chunk * i); + chan->ll_max = (ll_chunk / 24) - 1; + + chan->msi_done_addr = dw->msi_addr; + chan->msi_abort_addr = dw->msi_addr; + chan->msi_data = dw->msi_data; + + chan->vc.desc_free = vchan_free_desc; + vchan_init(&chan->vc, &dw->rd_edma); + } + dma_cap_set(DMA_SLAVE, dw->rd_edma.cap_mask); + dw->rd_edma.directions = BIT(DMA_DEV_TO_MEM); + dw->rd_edma.chancnt = dw->rd_ch_count; + + // Set DMA capabilities + SET_BOTH_CH(src_addr_widths, BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)); + SET_BOTH_CH(dst_addr_widths, BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)); + SET_BOTH_CH(residue_granularity, DMA_RESIDUE_GRANULARITY_DESCRIPTOR); + + SET_BOTH_CH(dev, chip->dev); + + SET_BOTH_CH(device_alloc_chan_resources, dw_edma_alloc_chan_resources); + SET_BOTH_CH(device_free_chan_resources, dw_edma_free_chan_resources); + + SET_BOTH_CH(device_config, dw_edma_device_config); + SET_BOTH_CH(device_pause, dw_edma_device_pause); + SET_BOTH_CH(device_resume, dw_edma_device_resume); + SET_BOTH_CH(device_terminate_all, dw_edma_device_terminate_all); + SET_BOTH_CH(device_issue_pending, dw_edma_device_issue_pending); + SET_BOTH_CH(device_tx_status, dw_edma_device_tx_status); + SET_BOTH_CH(device_prep_slave_sg, dw_edma_device_prep_slave_sg); + + // Power management + 
pm_runtime_enable(chip->dev); + + // Register DMA device + err = dma_async_device_register(&dw->wr_edma); + if (err) + goto err_pm_disable; + + err = dma_async_device_register(&dw->rd_edma); + if (err) + goto err_pm_disable; + + // Turn debugfs on + err = ops->debugfs_on(chip); + if (err) { + dev_err(chip->dev, ": unable to create debugfs structure\n"); + goto err_pm_disable; + } + + dev_info(chip->dev, + "DesignWare eDMA controller driver loaded completely\n"); + + return 0; + +err_pm_disable: + pm_runtime_disable(chip->dev); + + return err; +} +EXPORT_SYMBOL_GPL(dw_edma_probe); + +int dw_edma_remove(struct dw_edma_chip *chip) +{ + struct dw_edma *dw = chip->dw; + const struct dw_edma_core_ops *ops = dw->ops; + struct dw_edma_chan *chan, *_chan; + + // Disable eDMA + if (ops) + ops->off(dw); + + // Free irq + devm_free_irq(chip->dev, chip->irq, chip); + + // Power management + pm_runtime_disable(chip->dev); + + list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels, + vc.chan.device_node) { + list_del(&chan->vc.chan.device_node); + tasklet_kill(&chan->vc.task); + } + + list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels, + vc.chan.device_node) { + list_del(&chan->vc.chan.device_node); + tasklet_kill(&chan->vc.task); + } + + // Deregister eDMA device + dma_async_device_unregister(&dw->wr_edma); + dma_async_device_unregister(&dw->rd_edma); + + // Turn debugfs off + if (ops) + ops->debugfs_off(); + + dev_info(chip->dev, + ": DesignWare eDMA controller driver unloaded complete\n"); + + return 0; +} +EXPORT_SYMBOL_GPL(dw_edma_remove); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Synopsys DesignWare eDMA controller core driver"); +MODULE_AUTHOR("Gustavo Pimentel "); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h new file mode 100644 index 0000000..1d11a65 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. 
+// Synopsys DesignWare eDMA core driver + +#ifndef _DW_EDMA_CORE_H +#define _DW_EDMA_CORE_H + +#include + +#include "../virt-dma.h" + +#define DRV_NAME "dw-edma" + +enum dw_edma_dir { + EDMA_DIR_WRITE = 0, + EDMA_DIR_READ +}; + +enum dw_edma_mode { + EDMA_MODE_LEGACY = 0, + EDMA_MODE_UNROLL +}; + +enum dw_edma_request { + EDMA_REQ_NONE = 0, + EDMA_REQ_STOP, + EDMA_REQ_PAUSE +}; + +enum dw_edma_status { + EDMA_ST_IDLE = 0, + EDMA_ST_PAUSE, + EDMA_ST_BUSY +}; + +struct dw_edma_chan; +struct dw_edma_chunk; + +struct dw_edma_core_ops { + // eDMA management callbacks + void (*off)(struct dw_edma *dw); + u16 (*ch_count)(struct dw_edma *dw, enum dw_edma_dir dir); + enum dma_status (*ch_status)(struct dw_edma_chan *chan); + void (*clear_done_int)(struct dw_edma_chan *chan); + void (*clear_abort_int)(struct dw_edma_chan *chan); + bool (*status_done_int)(struct dw_edma_chan *chan); + bool (*status_abort_int)(struct dw_edma_chan *chan); + void (*start)(struct dw_edma_chunk *chunk, bool first); + int (*device_config)(struct dma_chan *dchan); + // eDMA debug fs callbacks + int (*debugfs_on)(struct dw_edma_chip *chip); + void (*debugfs_off)(void); +}; + +struct dw_edma_burst { + struct list_head list; + u64 sar; + u64 dar; + u32 sz; +}; + +struct dw_edma_chunk { + struct list_head list; + struct dw_edma_chan *chan; + struct dw_edma_burst *burst; + + atomic_t bursts_alloc; + + bool cb; + u32 sz; + + dma_addr_t p_addr; // Linked list + dma_addr_t v_addr; // Linked list +}; + +struct dw_edma_desc { + struct virt_dma_desc vd; + struct dw_edma_chan *chan; + struct dw_edma_chunk *chunk; + + atomic_t chunks_alloc; + + u32 alloc_sz; + u32 xfer_sz; +}; + +struct dw_edma_chan { + struct virt_dma_chan vc; + struct dw_edma_chip *chip; + int id; + enum dw_edma_dir dir; + + u64 ll_off; + u32 ll_max; + + u64 msi_done_addr; + u64 msi_abort_addr; + u32 msi_data; + + enum dw_edma_request request; + enum dw_edma_status status; + bool configured; + + dma_addr_t p_addr; // Data +}; + +struct dw_edma { + char name[20]; + + struct dma_device wr_edma; + u16 wr_ch_count; + struct dma_device rd_edma; + u16 rd_ch_count; + + void __iomem *regs; + + void __iomem *va_ll; + resource_size_t pa_ll; + size_t ll_sz; + + u64 msi_addr; + u32 msi_data; + + u32 version; + enum dw_edma_mode mode; + + struct dw_edma_chan *chan; + const struct dw_edma_core_ops *ops; + + raw_spinlock_t lock; // Only for legacy type +}; + +static inline +struct dw_edma_chan *vc2dw_edma_chan(struct virt_dma_chan *vc) +{ + return container_of(vc, struct dw_edma_chan, vc); +} + +static inline +struct dw_edma_chan *dchan2dw_edma_chan(struct dma_chan *dchan) +{ + return vc2dw_edma_chan(to_virt_chan(dchan)); +} + +#endif /* _DW_EDMA_CORE_H */ diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h new file mode 100644 index 0000000..5e62523 --- /dev/null +++ b/include/linux/dma/edma.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. 
+// Synopsys DesignWare eDMA core driver
+
+#ifndef _DW_EDMA_H
+#define _DW_EDMA_H
+
+#include
+#include
+
+struct dw_edma;
+
+/**
+ * struct dw_edma_chip - representation of DesignWare eDMA controller hardware
+ * @dev: struct device of the eDMA controller
+ * @id: instance ID
+ * @irq: irq line
+ * @dw: struct dw_edma that is filed by dw_edma_probe()
+ */
+struct dw_edma_chip {
+	struct device *dev;
+	int id;
+	int irq;
+	struct dw_edma *dw;
+};
+
+/* Export to the platform drivers */
+#if IS_ENABLED(CONFIG_DW_EDMA)
+int dw_edma_probe(struct dw_edma_chip *chip);
+int dw_edma_remove(struct dw_edma_chip *chip);
+#else
+static inline int dw_edma_probe(struct dw_edma_chip *chip)
+{
+	return -ENODEV;
+}
+static inline int dw_edma_remove(struct dw_edma_chip *chip)
+{
+	return 0;
+}
+#endif /* CONFIG_DW_EDMA */
+
+#endif /* _DW_EDMA_H */

From patchwork Wed Dec 12 11:13:22 2018
X-Patchwork-Submitter: Gustavo Pimentel
X-Patchwork-Id: 1011832
X-Patchwork-Delegate: lorenzo.pieralisi@arm.com
From: Gustavo Pimentel
To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org
Cc: Gustavo Pimentel, Vinod Koul, Eugeniy Paltsev, Andy Shevchenko, Joao Pinto
Subject: [RFC 2/6] dma: Add Synopsys eDMA IP version 0 support
Date: Wed, 12 Dec 2018 12:13:22 +0100
Add support for the eDMA IP version 0 driver, covering both register maps (legacy and unroll).

The legacy register map was the initial implementation: all channel registers are multiplexed behind a viewport register, so only one channel is accessible at a time and the selection can be changed at any moment, which can lead to race conditions. This map is neither effective nor efficient in a multithreaded environment, which led to the unroll register map, where every channel's registers are accessible at any time, each channel block separated from the next by a fixed offset.

This version supports a maximum of 16 independent channels (8 write + 8 read), which can run simultaneously. It implements scatter-gather transfers through a linked list, whose size depends on the allocated memory divided equally among all channels. Each linked-list descriptor can transfer from 1 byte to 4 Gbytes and is aligned to DWORD; both the SAR (Source Address Register) and the DAR (Destination Address Register) are aligned to byte.

Signed-off-by: Gustavo Pimentel
Cc: Vinod Koul
Cc: Eugeniy Paltsev
Cc: Andy Shevchenko
Cc: Joao Pinto
---
 drivers/dma/dw-edma/Makefile          |   3 +-
 drivers/dma/dw-edma/dw-edma-core.c    |  20 ++
 drivers/dma/dw-edma/dw-edma-v0-core.c | 346 ++++++++++++++++++++++++++++++++++
 drivers/dma/dw-edma/dw-edma-v0-core.h |  24 +++
 drivers/dma/dw-edma/dw-edma-v0-regs.h | 143 ++++++++++++++
 5 files changed, 535 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/dw-edma/dw-edma-v0-core.c
 create mode 100644 drivers/dma/dw-edma/dw-edma-v0-core.h
 create mode 100644 drivers/dma/dw-edma/dw-edma-v0-regs.h

diff --git a/drivers/dma/dw-edma/Makefile b/drivers/dma/dw-edma/Makefile
index 3224010..01c7c63 100644
--- a/drivers/dma/dw-edma/Makefile
+++ b/drivers/dma/dw-edma/Makefile
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DW_EDMA) += dw-edma.o
-dw-edma-objs := dw-edma-core.o
+dw-edma-objs := dw-edma-core.o \
+		dw-edma-v0-core.o
diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index b4c2982..c08125a 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -12,6 +12,7 @@
 #include
 #include "dw-edma-core.h"
+#include "dw-edma-v0-core.h"
 #include "../dmaengine.h"
 #include "../virt-dma.h"
@@ -26,6 +27,22 @@
 		SET(dw->rd_edma, name, value);		\
 	} while (0)
 
+static const struct dw_edma_core_ops dw_edma_v0_core_ops = {
+	// eDMA management callbacks
+	.off = dw_edma_v0_core_off,
+	.ch_count = dw_edma_v0_core_ch_count,
+	.ch_status = dw_edma_v0_core_ch_status,
+	.clear_done_int = dw_edma_v0_core_clear_done_int,
+	.clear_abort_int = dw_edma_v0_core_clear_abort_int,
+	.status_done_int = dw_edma_v0_core_status_done_int,
+	.status_abort_int = dw_edma_v0_core_status_abort_int,
+	.start = dw_edma_v0_core_start,
+	.device_config = dw_edma_v0_core_device_config,
+	// eDMA debug fs callbacks
+	.debugfs_on = dw_edma_v0_core_debugfs_on,
+	.debugfs_off = dw_edma_v0_core_debugfs_off,
+};
+
 static inline
 struct device *dchan2dev(struct dma_chan *dchan)
 {
@@ -740,6 +757,9 @@ int dw_edma_probe(struct dw_edma_chip *chip)
 	raw_spin_lock_init(&dw->lock);
 
 	switch (dw->version) {
+	case 0:
+		dw->ops = ops = &dw_edma_v0_core_ops;
+		break;
 	default:
 		dev_err(chip->dev, ": unsupported version\n");
 		return
-EPERM; diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c new file mode 100644 index 0000000..cc362b0 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -0,0 +1,346 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. +// Synopsys DesignWare eDMA v0 core + +#include "dw-edma-core.h" +#include "dw-edma-v0-core.h" +#include "dw-edma-v0-regs.h" +#include "dw-edma-v0-debugfs.h" + +#define QWORD_HI(value) ((value & 0xFFFFFFFF00000000llu) >> 32) +#define QWORD_LO(value) (value & 0x00000000FFFFFFFFllu) + +enum dw_edma_control { + DW_EDMA_CB = BIT(0), + DW_EDMA_TCB = BIT(1), + DW_EDMA_LLP = BIT(2), + DW_EDMA_LIE = BIT(3), + DW_EDMA_RIE = BIT(4), + DW_EDMA_CCS = BIT(8), + DW_EDMA_LLE = BIT(9), +}; + +static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) +{ + return dw->regs; +} +#define SET(dw, name, value) \ + writel(value, &(__dw_regs(dw)->name)) + +#define GET(dw, name) \ + readl(&(__dw_regs(dw)->name)) + +#define SET_RW(dw, dir, name, value) \ + do { \ + if (dir == EDMA_DIR_WRITE) \ + SET(dw, wr_##name, value); \ + else \ + SET(dw, rd_##name, value); \ + } while (0) + +#define GET_RW(dw, dir, name) \ + (dir == EDMA_DIR_WRITE \ + ? GET(dw, wr_##name) \ + : GET(dw, rd_##name)) + +#define SET_BOTH(dw, name, value) \ + do { \ + SET(dw, wr_##name, value); \ + SET(dw, rd_##name, value); \ + } while (0) + +static inline struct dw_edma_v0_ch_regs __iomem * +__dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch) +{ + if (dw->mode == EDMA_MODE_LEGACY) + return &(__dw_regs(dw)->type.legacy.ch); + + if (dir == EDMA_DIR_WRITE) + return &__dw_regs(dw)->type.unroll.ch[ch].wr; + + return &__dw_regs(dw)->type.unroll.ch[ch].rd; +} + +static inline void writel_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, + u32 value, void __iomem *addr) +{ + if (dw->mode == EDMA_MODE_LEGACY) { + u32 viewport_sel; + unsigned long flags; + + raw_spin_lock_irqsave(&dw->lock, flags); + + viewport_sel = (ch & 0x00000007ul); + if (dir == EDMA_DIR_READ) + viewport_sel |= BIT(31); + + writel(viewport_sel, + &(__dw_regs(dw)->type.legacy.viewport_sel)); + writel(value, addr); + + raw_spin_unlock_irqrestore(&dw->lock, flags); + } else { + writel(value, addr); + } +} + +static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, + const void __iomem *addr) +{ + u32 value; + + if (dw->mode == EDMA_MODE_LEGACY) { + u32 viewport_sel; + unsigned long flags; + + raw_spin_lock_irqsave(&dw->lock, flags); + + viewport_sel = (ch & 0x00000007ul); + if (dir == EDMA_DIR_READ) + viewport_sel |= BIT(31); + + writel(viewport_sel, + &(__dw_regs(dw)->type.legacy.viewport_sel)); + value = readl(addr); + + raw_spin_unlock_irqrestore(&dw->lock, flags); + } else { + value = readl(addr); + } + + return value; +} +#define SET_CH(dw, dir, ch, name, value) \ + writel_ch(dw, dir, ch, value, &(__dw_ch_regs(dw, dir, ch)->name)) + +#define GET_CH(dw, dir, ch, name) \ + readl_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name)) + +#define SET_LL(ll, value) \ + writel(value, ll) + +// eDMA management callbacks +void dw_edma_v0_core_off(struct dw_edma *dw) +{ + SET_BOTH(dw, int_mask, 0x00FF00FFul); + SET_BOTH(dw, int_clear, 0x00FF00FFul); + SET_BOTH(dw, engine_en, 0x00000000ul); +} + +u16 dw_edma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir) +{ + u32 num_ch = GET(dw, ctrl); + + if (dir == EDMA_DIR_WRITE) { + num_ch &= 0x0000000Ful; + } else { + num_ch &= 0x000F0000ul; + num_ch >>= 16; + } + + if (num_ch > 
EDMA_V0_MAX_NR_CH) + num_ch = EDMA_V0_MAX_NR_CH; + + return (u16)num_ch; +} + +enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + u32 tmp = GET_CH(dw, chan->dir, chan->id, ch_control1); + + tmp &= 0x00000060ul; + tmp >>= 5; + if (tmp == 1) + return DMA_IN_PROGRESS; + else if (tmp == 3) + return DMA_COMPLETE; + else + return DMA_ERROR; +} + +void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + + SET_RW(dw, chan->dir, int_clear, BIT(chan->id)); +} + +void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + + SET_RW(dw, chan->dir, int_clear, BIT(chan->id + 16)); +} + +bool dw_edma_v0_core_status_done_int(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + u32 tmp; + + tmp = GET_RW(dw, chan->dir, int_status); + tmp &= BIT(chan->id); + + return tmp ? true : false; +} + +bool dw_edma_v0_core_status_abort_int(struct dw_edma_chan *chan) +{ + struct dw_edma *dw = chan->chip->dw; + u32 tmp; + + tmp = GET_RW(dw, chan->dir, int_status); + tmp &= BIT(chan->id + 16); + + return tmp ? true : false; +} + +static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) +{ + struct dw_edma_burst *child; + struct dw_edma_v0_lli *lli; + struct dw_edma_v0_llp *llp; + u32 control = 0, i = 0, j; + u64 sar, dar, addr; + + lli = (struct dw_edma_v0_lli *) chunk->v_addr; + + if (chunk->cb) + control = DW_EDMA_CB; + + j = atomic_read(&chunk->bursts_alloc); + list_for_each_entry(child, &chunk->burst->list, list) { + j--; + if (!j) + control |= (DW_EDMA_LIE | DW_EDMA_RIE); + + // Channel control + SET_LL(&(lli[i].control), control); + // Transfer size + SET_LL(&(lli[i].transfer_size), child->sz); + // SAR - low, high + sar = cpu_to_le64(child->sar); + SET_LL(&(lli[i].sar_low), QWORD_LO(sar)); + SET_LL(&(lli[i].sar_high), QWORD_HI(sar)); + // DAR - low, high + dar = cpu_to_le64(child->dar); + SET_LL(&(lli[i].dar_low), QWORD_LO(dar)); + SET_LL(&(lli[i].dar_high), QWORD_HI(dar)); + + i++; + } + + llp = (struct dw_edma_v0_llp *) &(lli[i]); + control = DW_EDMA_LLP | DW_EDMA_TCB; + if (!chunk->cb) + control |= DW_EDMA_CB; + + // Channel control + SET_LL(&(llp->control), control); + // Linked list - low, high + addr = cpu_to_le64(chunk->p_addr); + SET_LL(&(llp->llp_low), QWORD_LO(addr)); + SET_LL(&(llp->llp_high), QWORD_HI(addr)); +} + +void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) +{ + struct dw_edma_chan *chan = chunk->chan; + struct dw_edma *dw = chan->chip->dw; + u32 mask; + u64 llp; + + dw_edma_v0_core_write_chunk(chunk); + + if (first) { + // Enable engine + SET_RW(dw, chan->dir, engine_en, 0x00000001ul); + // Interrupt unmask - done, abort + mask = GET_RW(dw, chan->dir, int_mask); + mask &= ~(BIT(chan->id + 16) | BIT(chan->id)); + SET_RW(dw, chan->dir, int_mask, mask); + // Linked list error + SET_RW(dw, chan->dir, linked_list_err_en, BIT(chan->id)); + // Channel control + SET_CH(dw, chan->dir, chan->id, ch_control1, + DW_EDMA_CCS | DW_EDMA_LLE); + // Linked list - low, high + llp = cpu_to_le64(chunk->p_addr); + SET_CH(dw, chan->dir, chan->id, llp_low, QWORD_LO(llp)); + SET_CH(dw, chan->dir, chan->id, llp_high, QWORD_HI(llp)); + } + // Doorbell + SET_RW(dw, chan->dir, doorbell, chan->id & 0x00000007ul); +} + +int dw_edma_v0_core_device_config(struct dma_chan *dchan) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + struct dw_edma *dw = chan->chip->dw; + u32 tmp; + + // MSI done addr - low, high + 
SET_RW(dw, chan->dir, done_imwr_low, QWORD_LO(chan->msi_done_addr)); + SET_RW(dw, chan->dir, done_imwr_high, QWORD_HI(chan->msi_done_addr)); + // MSI abort addr - low, high + SET_RW(dw, chan->dir, abort_imwr_low, QWORD_LO(chan->msi_abort_addr)); + SET_RW(dw, chan->dir, abort_imwr_high, QWORD_HI(chan->msi_abort_addr)); + // MSI data - low, high + switch (chan->id) { + case 0: + case 1: + tmp = GET_RW(dw, chan->dir, ch01_imwr_data); + break; + case 2: + case 3: + tmp = GET_RW(dw, chan->dir, ch23_imwr_data); + break; + case 4: + case 5: + tmp = GET_RW(dw, chan->dir, ch45_imwr_data); + break; + case 6: + case 7: + tmp = GET_RW(dw, chan->dir, ch67_imwr_data); + break; + } + + if (chan->id & 0x00000001ul) { + tmp &= 0x00FFu; + tmp |= ((u32)chan->msi_data << 16); + } else { + tmp &= 0xFF00u; + tmp |= chan->msi_data; + } + + switch (chan->id) { + case 0: + case 1: + SET_RW(dw, chan->dir, ch01_imwr_data, tmp); + break; + case 2: + case 3: + SET_RW(dw, chan->dir, ch23_imwr_data, tmp); + break; + case 4: + case 5: + SET_RW(dw, chan->dir, ch45_imwr_data, tmp); + break; + case 6: + case 7: + SET_RW(dw, chan->dir, ch67_imwr_data, tmp); + break; + } + + return 0; +} + +// eDMA debug fs callbacks +int dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip) +{ + return 0; +} + +void dw_edma_v0_core_debugfs_off(void) +{ +} diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.h b/drivers/dma/dw-edma/dw-edma-v0-core.h new file mode 100644 index 0000000..e698e27 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-v0-core.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +//Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. +//Synopsys DesignWare eDMA v0 core + +#ifndef _DW_EDMA_V0_CORE_H +#define _DW_EDMA_V0_CORE_H + +#include + +// eDMA management callbacks +void dw_edma_v0_core_off(struct dw_edma *chan); +u16 dw_edma_v0_core_ch_count(struct dw_edma *chan, enum dw_edma_dir dir); +enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan); +void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan); +void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan); +bool dw_edma_v0_core_status_done_int(struct dw_edma_chan *chan); +bool dw_edma_v0_core_status_abort_int(struct dw_edma_chan *chan); +void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first); +int dw_edma_v0_core_device_config(struct dma_chan *dchan); +// eDMA debug fs callbacks +int dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip); +void dw_edma_v0_core_debugfs_off(void); + +#endif /* _DW_EDMA_V0_CORE_H */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-regs.h b/drivers/dma/dw-edma/dw-edma-v0-regs.h new file mode 100644 index 0000000..564b2e0 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-v0-regs.h @@ -0,0 +1,143 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. 
+// Synopsys DesignWare eDMA v0 core + +#ifndef _DW_EDMA_V0_REGS_H +#define _DW_EDMA_V0_REGS_H + +#include + +#define EDMA_V0_MAX_NR_CH 8 + +struct dw_edma_v0_ch_regs { + u32 ch_control1; // B + 0x000 + u32 ch_control2; // B + 0x004 + u32 transfer_size; // B + 0x008 + u32 sar_low; // B + 0x00c + u32 sar_high; // B + 0x010 + u32 dar_low; // B + 0x014 + u32 dar_high; // B + 0x018 + u32 llp_low; // B + 0x01c + u32 llp_high; // B + 0x020 +}; + +struct dw_edma_v0_ch { + struct dw_edma_v0_ch_regs wr; // B + 0x200 + u32 padding_1[55]; // B + [0x224..0x2fc] + struct dw_edma_v0_ch_regs rd; // B + 0x300 + u32 padding_2[55]; // B + [0x224..0x2fc] +}; + +struct dw_edma_v0_unroll { + u32 padding_1; // B + 0x0f8 + u32 wr_engine_chgroup; // B + 0x100 + u32 rd_engine_chgroup; // B + 0x104 + u32 wr_engine_hshake_cnt_low; // B + 0x108 + u32 wr_engine_hshake_cnt_high; // B + 0x10c + u32 padding_2[2]; // B + [0x110..0x114] + u32 rd_engine_hshake_cnt_low; // B + 0x118 + u32 rd_engine_hshake_cnt_high; // B + 0x11c + u32 padding_3[2]; // B + [0x120..0x124] + u32 wr_ch0_pwr_en; // B + 0x128 + u32 wr_ch1_pwr_en; // B + 0x12c + u32 wr_ch2_pwr_en; // B + 0x130 + u32 wr_ch3_pwr_en; // B + 0x134 + u32 wr_ch4_pwr_en; // B + 0x138 + u32 wr_ch5_pwr_en; // B + 0x13c + u32 wr_ch6_pwr_en; // B + 0x140 + u32 wr_ch7_pwr_en; // B + 0x144 + u32 padding_4[8]; // B + [0x148..0x164] + u32 rd_ch0_pwr_en; // B + 0x168 + u32 rd_ch1_pwr_en; // B + 0x16c + u32 rd_ch2_pwr_en; // B + 0x170 + u32 rd_ch3_pwr_en; // B + 0x174 + u32 rd_ch4_pwr_en; // B + 0x178 + u32 rd_ch5_pwr_en; // B + 0x18c + u32 rd_ch6_pwr_en; // B + 0x180 + u32 rd_ch7_pwr_en; // B + 0x184 + u32 padding_5[30]; // B + [0x188..0x1fc] + struct dw_edma_v0_ch ch[EDMA_V0_MAX_NR_CH]; // B + i*0x400 +}; + +struct dw_edma_v0_legacy { + u32 viewport_sel; // B + 0x0f8 + struct dw_edma_v0_ch_regs ch; // B + [0x100..0x120] +}; + +struct dw_edma_v0_regs { + // eDMA global registers + u32 ctrl_data_arb_prior; // B + 0x000 + u32 padding_1; // B + 0x004 + u32 ctrl; // B + 0x008 + u32 wr_engine_en; // B + 0x00c + u32 wr_doorbell; // B + 0x010 + u32 padding_2; // B + 0x014 + u32 wr_ch_arb_weight_low; // B + 0x018 + u32 wr_ch_arb_weight_high; // B + 0x01c + u32 padding_3[3]; // B + [0x020..0x028] + u32 rd_engine_en; // B + 0x02c + u32 rd_doorbell; // B + 0x030 + u32 padding_4; // B + 0x034 + u32 rd_ch_arb_weight_low; // B + 0x038 + u32 rd_ch_arb_weight_high; // B + 0x03c + u32 padding_5[3]; // B + [0x040..0x048] + // eDMA interrupts registers + u32 wr_int_status; // B + 0x04c + u32 padding_6; // B + 0x050 + u32 wr_int_mask; // B + 0x054 + u32 wr_int_clear; // B + 0x058 + u32 wr_err_status; // B + 0x05c + u32 wr_done_imwr_low; // B + 0x060 + u32 wr_done_imwr_high; // B + 0x064 + u32 wr_abort_imwr_low; // B + 0x068 + u32 wr_abort_imwr_high; // B + 0x06c + u32 wr_ch01_imwr_data; // B + 0x070 + u32 wr_ch23_imwr_data; // B + 0x074 + u32 wr_ch45_imwr_data; // B + 0x078 + u32 wr_ch67_imwr_data; // B + 0x07c + u32 padding_7[4]; // B + [0x080..0x08c] + u32 wr_linked_list_err_en; // B + 0x090 + u32 padding_8[3]; // B + [0x094..0x09c] + u32 rd_int_status; // B + 0x0a0 + u32 padding_9; // B + 0x0a4 + u32 rd_int_mask; // B + 0x0a8 + u32 rd_int_clear; // B + 0x0ac + u32 padding_10; // B + 0x0b0 + u32 rd_err_status_low; // B + 0x0b4 + u32 rd_err_status_high; // B + 0x0b8 + u32 padding_11[2]; // B + [0x0bc..0x0c0] + u32 rd_linked_list_err_en; // B + 0x0c4 + u32 padding_12; // B + 0x0c8 + u32 rd_done_imwr_low; // B + 0x0cc + u32 rd_done_imwr_high; // B + 0x0d0 + u32 rd_abort_imwr_low; // B + 0x0d4 + u32 
rd_abort_imwr_high;		// B + 0x0d8
+	u32 rd_ch01_imwr_data;		// B + 0x0dc
+	u32 rd_ch23_imwr_data;		// B + 0x0e0
+	u32 rd_ch45_imwr_data;		// B + 0x0e4
+	u32 rd_ch67_imwr_data;		// B + 0x0e8
+	u32 padding_13[4];		// B + [0x0ec..0x0f8]
+	// eDMA channel context grouping
+	union Type {
+		struct dw_edma_v0_legacy legacy;	// B + [0x0f8..0x120]
+		struct dw_edma_v0_unroll unroll;	// B + [0x0f8..0x1120]
+	} type;
+};
+
+struct dw_edma_v0_lli {
+	u32 control;
+	u32 transfer_size;
+	u32 sar_low;
+	u32 sar_high;
+	u32 dar_low;
+	u32 dar_high;
+};
+
+struct dw_edma_v0_llp {
+	u32 control;
+	u32 reserved;
+	u32 llp_low;
+	u32 llp_high;
+};
+
+#endif /* _DW_EDMA_V0_REGS_H */

From patchwork Wed Dec 12 11:13:23 2018
X-Patchwork-Submitter: Gustavo Pimentel
X-Patchwork-Id: 1011833
X-Patchwork-Delegate: lorenzo.pieralisi@arm.com
From: Gustavo Pimentel
To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org
Cc: Gustavo Pimentel, Vinod Koul, Eugeniy Paltsev, Andy Shevchenko, Joao Pinto
Subject: [RFC 3/6] dma: Add Synopsys eDMA IP version 0 debugfs support
Date: Wed, 12 Dec 2018 12:13:23 +0100
Message-Id: <033537347ecc7c6a6670417226c5cebc6a74dce5.1544610287.git.gustavo.pimentel@synopsys.com>
Add Synopsys eDMA IP version 0 debugfs support to assist future debugging. It creates a filesystem hierarchy of directories and read-only files that mimics the IP register map, to ease debugging. To enable this feature, select the DEBUG_FS option in the kernel configuration.

Small output example (eDMA IP version 0, unroll, 1 write + 1 read channel):

% mount -t debugfs none /sys/kernel/debug/
% tree /sys/kernel/debug/dw-edma/
dw-edma/
├── mode
├── wr_ch_count
├── rd_ch_count
└── registers
    ├── ctrl_data_arb_prior
    ├── ctrl
    ├── write
    │   ├── engine_en
    │   ├── doorbell
    │   ├── ch_arb_weight_low
    │   ├── ch_arb_weight_high
    │   ├── int_status
    │   ├── int_mask
    │   ├── int_clear
    │   ├── err_status
    │   ├── done_imwr_low
    │   ├── done_imwr_high
    │   ├── abort_imwr_low
    │   ├── abort_imwr_high
    │   ├── ch01_imwr_data
    │   ├── ch23_imwr_data
    │   ├── ch45_imwr_data
    │   ├── ch67_imwr_data
    │   ├── linked_list_err_en
    │   ├── engine_chgroup
    │   ├── engine_hshake_cnt_low
    │   ├── engine_hshake_cnt_high
    │   ├── ch0_pwr_en
    │   ├── ch1_pwr_en
    │   ├── ch2_pwr_en
    │   ├── ch3_pwr_en
    │   ├── ch4_pwr_en
    │   ├── ch5_pwr_en
    │   ├── ch6_pwr_en
    │   ├── ch7_pwr_en
    │   └── channel:0
    │       ├── ch_control1
    │       ├── ch_control2
    │       ├── transfer_size
    │       ├── sar_low
    │       ├── sar_high
    │       ├── dar_high
    │       ├── llp_low
    │       └── llp_high
    └── read
        ├── engine_en
        ├── doorbell
        ├── ch_arb_weight_low
        ├── ch_arb_weight_high
        ├── int_status
        ├── int_mask
        ├── int_clear
        ├── err_status_low
        ├── err_status_high
        ├── done_imwr_low
        ├── done_imwr_high
        ├── abort_imwr_low
        ├── abort_imwr_high
        ├── ch01_imwr_data
        ├── ch23_imwr_data
        ├── ch45_imwr_data
        ├── ch67_imwr_data
        ├── linked_list_err_en
        ├── engine_chgroup
        ├── engine_hshake_cnt_low
        ├── engine_hshake_cnt_high
        ├── ch0_pwr_en
        ├── ch1_pwr_en
        ├── ch2_pwr_en
        ├── ch3_pwr_en
        ├── ch4_pwr_en
        ├── ch5_pwr_en
        ├── ch6_pwr_en
        ├── ch7_pwr_en
        └── channel:0
            ├── ch_control1
            ├── ch_control2
            ├── transfer_size
            ├── sar_low
            ├── sar_high
            ├── dar_high
            ├── llp_low
            └── llp_high

Signed-off-by: Gustavo Pimentel
Cc: Vinod Koul
Cc: Eugeniy Paltsev
Cc: Andy Shevchenko
Cc: Joao Pinto
---
 drivers/dma/dw-edma/Makefile             |   3 +-
 drivers/dma/dw-edma/dw-edma-v0-core.c    |   3 +-
 drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 359 +++++++++++++++++++++++++++++++
 drivers/dma/dw-edma/dw-edma-v0-debugfs.h |  21 ++
 4 files changed, 384 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dw-edma/dw-edma-v0-debugfs.c
 create mode 100644 drivers/dma/dw-edma/dw-edma-v0-debugfs.h

diff --git a/drivers/dma/dw-edma/Makefile b/drivers/dma/dw-edma/Makefile
index 01c7c63..0c53033 100644
--- a/drivers/dma/dw-edma/Makefile
+++ b/drivers/dma/dw-edma/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DW_EDMA) += dw-edma.o
+dw-edma-$(CONFIG_DEBUG_FS) := dw-edma-v0-debugfs.o
 dw-edma-objs := dw-edma-core.o \
-		dw-edma-v0-core.o
+		dw-edma-v0-core.o $(dw-edma-y)
diff --git
a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index cc362b0..945eddd 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -338,9 +338,10 @@ int dw_edma_v0_core_device_config(struct dma_chan *dchan) // eDMA debug fs callbacks int dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip) { - return 0; + return dw_edma_v0_debugfs_on(chip); } void dw_edma_v0_core_debugfs_off(void) { + dw_edma_v0_debugfs_off(); } diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c new file mode 100644 index 0000000..2d16911 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -0,0 +1,359 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. +// Synopsys DesignWare eDMA v0 core + +#include + +#include "dw-edma-v0-debugfs.h" +#include "dw-edma-v0-regs.h" +#include "dw-edma-core.h" + +#define DRV_V0_CORE_NAME "dw-edma-v0-core" + +#define RD_PERM 0444 + +#define REGS_ADDR(name) \ + (&(regs->name)) +#define REGISTER(name) \ + { #name, REGS_ADDR(name) } + +#define WR_REGISTER(name) \ + { #name, REGS_ADDR(wr_##name) } +#define RD_REGISTER(name) \ + { #name, REGS_ADDR(rd_##name) } + +#define WR_REGISTER_LEGACY(name) \ + { #name, REGS_ADDR(type.legacy.wr_##name) } +#define RD_REGISTER_LEGACY(name) \ + { #name, REGS_ADDR(type.legacy.rd_##name) } + +#define WR_REGISTER_UNROLL(name) \ + { #name, REGS_ADDR(type.unroll.wr_##name) } +#define RD_REGISTER_UNROLL(name) \ + { #name, REGS_ADDR(type.unroll.rd_##name) } + +#define WRITE_STR "write" +#define READ_STR "read" +#define CHANNEL_STR "channel" +#define REGISTERS_STR "registers" + +static struct dentry *base_dir; +static struct dw_edma *dw; +static struct dw_edma_v0_regs *regs; + +static struct { + void *start; + void *end; +} lim[2][EDMA_V0_MAX_NR_CH]; + +struct debugfs_entries { + char name[24]; + void __iomem *reg; +}; + +static int dw_edma_debugfs_u32_get(void *data, u64 *val) +{ + if (dw->mode == EDMA_MODE_LEGACY && + data >= (void *)®s->type.legacy.ch) { + void *ptr = (void *)&(regs->type.legacy.ch); + u32 viewport_sel = 0; + unsigned long flags; + u16 ch; + + for (ch = 0; ch < dw->wr_ch_count; ch++) + if (lim[0][ch].start >= data && data < lim[0][ch].end) { + ptr += (data - lim[0][ch].start); + goto legacy_sel_wr; + } + + for (ch = 0; ch < dw->rd_ch_count; ch++) + if (lim[1][ch].start >= data && data < lim[1][ch].end) { + ptr += (data - lim[1][ch].start); + goto legacy_sel_rd; + } + + return 0; +legacy_sel_rd: + viewport_sel = BIT(31); +legacy_sel_wr: + viewport_sel |= (ch & 0x00000007ul); + + raw_spin_lock_irqsave(&dw->lock, flags); + *val = readl((u32 *) ptr); + raw_spin_unlock_irqrestore(&dw->lock, flags); + } else { + *val = readl((u32 *)data); + } + + return 0; +} +DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); + +static int dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], + int nr_entries, struct dentry *dir) +{ + struct dentry *entry; + int i; + + for (i = 0; i < nr_entries; i++) { + entry = debugfs_create_file_unsafe(entries[i].name, RD_PERM, + dir, entries[i].reg, + &fops_x32); + if (!entry) + return -EPERM; + } + + return 0; +} + +static int dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs *regs, + struct dentry *dir) +{ + int nr_entries; + const struct debugfs_entries debugfs_regs[] = { + REGISTER(ch_control1), + REGISTER(ch_control2), + REGISTER(transfer_size), + REGISTER(sar_low), + REGISTER(sar_high), + REGISTER(dar_low), + 
REGISTER(dar_high), + REGISTER(llp_low), + REGISTER(llp_high), + }; + + nr_entries = sizeof(debugfs_regs) / sizeof(struct debugfs_entries); + return dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); +} + +static int dw_edma_debugfs_regs_wr(struct dentry *dir) +{ + struct dentry *regs_dir, *ch_dir; + int nr_entries, i, err; + char name[16]; + const struct debugfs_entries debugfs_regs[] = { + // eDMA global registers + WR_REGISTER(engine_en), + WR_REGISTER(doorbell), + WR_REGISTER(ch_arb_weight_low), + WR_REGISTER(ch_arb_weight_high), + // eDMA interrupts registers + WR_REGISTER(int_status), + WR_REGISTER(int_mask), + WR_REGISTER(int_clear), + WR_REGISTER(err_status), + WR_REGISTER(done_imwr_low), + WR_REGISTER(done_imwr_high), + WR_REGISTER(abort_imwr_low), + WR_REGISTER(abort_imwr_high), + WR_REGISTER(ch01_imwr_data), + WR_REGISTER(ch23_imwr_data), + WR_REGISTER(ch45_imwr_data), + WR_REGISTER(ch67_imwr_data), + WR_REGISTER(linked_list_err_en), + }; + const struct debugfs_entries debugfs_unroll_regs[] = { + // eDMA channel context grouping + WR_REGISTER_UNROLL(engine_chgroup), + WR_REGISTER_UNROLL(engine_hshake_cnt_low), + WR_REGISTER_UNROLL(engine_hshake_cnt_high), + WR_REGISTER_UNROLL(ch0_pwr_en), + WR_REGISTER_UNROLL(ch1_pwr_en), + WR_REGISTER_UNROLL(ch2_pwr_en), + WR_REGISTER_UNROLL(ch3_pwr_en), + WR_REGISTER_UNROLL(ch4_pwr_en), + WR_REGISTER_UNROLL(ch5_pwr_en), + WR_REGISTER_UNROLL(ch6_pwr_en), + WR_REGISTER_UNROLL(ch7_pwr_en), + }; + + regs_dir = debugfs_create_dir(WRITE_STR, dir); + if (!regs_dir) + return -EPERM; + + nr_entries = sizeof(debugfs_regs) / sizeof(struct debugfs_entries); + err = dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + if (err) + return err; + + if (dw->mode == EDMA_MODE_UNROLL) { + nr_entries = sizeof(debugfs_unroll_regs) / + sizeof(struct debugfs_entries); + err = dw_edma_debugfs_create_x32(debugfs_unroll_regs, + nr_entries, regs_dir); + if (err) + return err; + } + + for (i = 0; i < dw->wr_ch_count; i++) { + snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); + + ch_dir = debugfs_create_dir(name, regs_dir); + if (!ch_dir) + return -EPERM; + + err = dw_edma_debugfs_regs_ch(&(regs->type.unroll.ch[i].wr), + ch_dir); + if (err) + return err; + + lim[0][i].start = ®s->type.unroll.ch[i].wr; + lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; + } + + return 0; +} + +static int dw_edma_debugfs_regs_rd(struct dentry *dir) +{ + struct dentry *regs_dir, *ch_dir; + int nr_entries, i, err; + char name[16]; + const struct debugfs_entries debugfs_regs[] = { + // eDMA global registers + RD_REGISTER(engine_en), + RD_REGISTER(doorbell), + RD_REGISTER(ch_arb_weight_low), + RD_REGISTER(ch_arb_weight_high), + // eDMA interrupts registers + RD_REGISTER(int_status), + RD_REGISTER(int_mask), + RD_REGISTER(int_clear), + RD_REGISTER(err_status_low), + RD_REGISTER(err_status_high), + RD_REGISTER(linked_list_err_en), + RD_REGISTER(done_imwr_low), + RD_REGISTER(done_imwr_high), + RD_REGISTER(abort_imwr_low), + RD_REGISTER(abort_imwr_high), + RD_REGISTER(ch01_imwr_data), + RD_REGISTER(ch23_imwr_data), + RD_REGISTER(ch45_imwr_data), + RD_REGISTER(ch67_imwr_data), + }; + const struct debugfs_entries debugfs_unroll_regs[] = { + // eDMA channel context grouping + RD_REGISTER_UNROLL(engine_chgroup), + RD_REGISTER_UNROLL(engine_hshake_cnt_low), + RD_REGISTER_UNROLL(engine_hshake_cnt_high), + RD_REGISTER_UNROLL(ch0_pwr_en), + RD_REGISTER_UNROLL(ch1_pwr_en), + RD_REGISTER_UNROLL(ch2_pwr_en), + RD_REGISTER_UNROLL(ch3_pwr_en), + RD_REGISTER_UNROLL(ch4_pwr_en), + 
RD_REGISTER_UNROLL(ch5_pwr_en), + RD_REGISTER_UNROLL(ch6_pwr_en), + RD_REGISTER_UNROLL(ch7_pwr_en), + }; + + regs_dir = debugfs_create_dir(READ_STR, dir); + if (!regs_dir) + return -EPERM; + + nr_entries = sizeof(debugfs_regs) / sizeof(struct debugfs_entries); + err = dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + if (err) + return err; + + if (dw->mode == EDMA_MODE_UNROLL) { + nr_entries = sizeof(debugfs_unroll_regs) / + sizeof(struct debugfs_entries); + err = dw_edma_debugfs_create_x32(debugfs_unroll_regs, + nr_entries, regs_dir); + if (err) + return err; + } + + for (i = 0; i < dw->rd_ch_count; i++) { + snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); + + ch_dir = debugfs_create_dir(name, regs_dir); + if (!ch_dir) + return -EPERM; + + err = dw_edma_debugfs_regs_ch(&(regs->type.unroll.ch[i].rd), + ch_dir); + if (err) + return err; + + lim[1][i].start = ®s->type.unroll.ch[i].rd; + lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; + } + + return 0; +} + +static int dw_edma_debugfs_regs(void) +{ + struct dentry *regs_dir; + int nr_entries, err; + const struct debugfs_entries debugfs_regs[] = { + REGISTER(ctrl_data_arb_prior), + REGISTER(ctrl), + }; + + regs_dir = debugfs_create_dir(REGISTERS_STR, base_dir); + if (!regs_dir) + return -EPERM; + + nr_entries = sizeof(debugfs_regs) / sizeof(struct debugfs_entries); + err = dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + if (err) + return err; + + err = dw_edma_debugfs_regs_wr(regs_dir); + if (err) + return err; + + err = dw_edma_debugfs_regs_rd(regs_dir); + if (err) + return err; + + return 0; +} + +int dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) +{ + struct dentry *entry; + int err; + + dw = chip->dw; + if (!dw) + return -EPERM; + + regs = (struct dw_edma_v0_regs *) dw->regs; + if (!regs) + return -EPERM; + + base_dir = debugfs_create_dir(DRV_NAME, 0); + if (!base_dir) + return -EPERM; + + entry = debugfs_create_u32("version", RD_PERM, base_dir, + &(dw->version)); + if (!entry) + return -EPERM; + + entry = debugfs_create_u32("mode", RD_PERM, base_dir, + &(dw->mode)); + if (!entry) + return -EPERM; + + entry = debugfs_create_u16("wr_ch_count", RD_PERM, base_dir, + &(dw->wr_ch_count)); + if (!entry) + return -EPERM; + + entry = debugfs_create_u16("rd_ch_count", RD_PERM, base_dir, + &(dw->rd_ch_count)); + if (!entry) + return -EPERM; + + err = dw_edma_debugfs_regs(); + return err; +} + +void dw_edma_v0_debugfs_off(void) +{ + debugfs_remove_recursive(base_dir); +} diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h new file mode 100644 index 0000000..702e454 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. 
+// Synopsys DesignWare eDMA v0 core + +#ifndef _DW_EDMA_V0_DEBUG_FS_H +#define _DW_EDMA_V0_DEBUG_FS_H + +#include <linux/dma/edma.h> + +#ifdef CONFIG_DEBUG_FS +int dw_edma_v0_debugfs_on(struct dw_edma_chip *chip); +void dw_edma_v0_debugfs_off(void); +#else +static inline int dw_edma_v0_debugfs_on(struct dw_edma_chip *chip) +{ + return 0; +} +static inline void dw_edma_v0_debugfs_off(void) { } +#endif /* CONFIG_DEBUG_FS */ + +#endif /* _DW_EDMA_V0_DEBUG_FS_H */ From patchwork Wed Dec 12 11:13:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 1011834 X-Patchwork-Delegate: lorenzo.pieralisi@arm.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=synopsys.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=synopsys.com header.i=@synopsys.com header.b="SrmqJR80"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 43FDhl6ZsZz9s4s for ; Wed, 12 Dec 2018 22:13:59 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727155AbeLLLN5 (ORCPT ); Wed, 12 Dec 2018 06:13:57 -0500 Received: from smtprelay.synopsys.com ([198.182.60.111]:35100 "EHLO smtprelay.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726856AbeLLLN5 (ORCPT ); Wed, 12 Dec 2018 06:13:57 -0500 Received: from mailhost.synopsys.com (mailhost2.synopsys.com [10.13.184.66]) by smtprelay.synopsys.com (Postfix) with ESMTP id A16F910C1804; Wed, 12 Dec 2018 03:13:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1544613236; bh=7TGnoCVaV+g4IamKonl5z3SzoCxj2tCUAEzmyLA2Lt8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=SrmqJR80Qnx+LJfGqf4ZHYg40jyz4vy8xImfBL3uJ1aPyEZ0eIJgi2WF32Atm7/yn nWey9uLh1DRXEnAHbT7JK5z3dC7fe8tdb9asE/KdonMij40oDekmVSfb4b48ETQSZX 7aKMfMwnAS6KBbgd3DteKlW5kQRY7+EQRCHEtyVZlwLyYv+aglcZo1KIqcVLDQWL6m M6Cw943dWbxCsxwMK5utit68YGCfbwW0wqq8RangVZOFW5S7mRYIL9QS/HS8W0nW7s 9hoT8sDzYG1BCPwSwsmy0sy3Kmf2Ak6AUPhNUxiKvp4bDRMfaN/xh+CZ2ReJ85rBcJ mwhuxgDrideyw== Received: from de02.synopsys.com (de02.internal.synopsys.com [10.225.17.21]) by mailhost.synopsys.com (Postfix) with ESMTP id 673513E22; Wed, 12 Dec 2018 03:13:56 -0800 (PST) Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by de02.synopsys.com (Postfix) with ESMTP id 9BCE63BDDD; Wed, 12 Dec 2018 12:13:55 +0100 (CET) From: Gustavo Pimentel To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org Cc: Gustavo Pimentel , Vinod Koul , Eugeniy Paltsev , Lorenzo Pieralisi , Andy Shevchenko , Joao Pinto Subject: [RFC 4/6] dma: Add Synopsys eDMA IP PCIe glue-logic Date: Wed, 12 Dec 2018 12:13:24 +0100 Message-Id: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Synopsys eDMA IP is normally distributed along with Synopsys PCIe EndPoint IP (depends on the use and licensing agreement).
This IP requires some basic configurations, such as: - eDMA registers BAR - eDMA registers offset - eDMA linked list BAR - eDMA linked list offset - eDMA linked list size - eDMA version - eDMA mode As a working example, the PCIe glue-logic attaches to a Synopsys PCIe EndPoint IP prototype kit (Vendor ID = 0x16c3, Device ID = 0xedda), which has a built-in eDMA IP with this default configuration: - eDMA registers BAR = 0 - eDMA registers offset = 0x1000 (4 Kbytes) - eDMA linked list BAR = 2 - eDMA linked list offset = 0x0 (0 Kbytes) - eDMA linked list size = 0x20000 (128 Kbytes) - eDMA version = 0 - eDMA mode = EDMA_MODE_UNROLL This driver can be compiled as a built-in or as an external kernel module. To enable this driver, just select the DW_EDMA_PCIE option in the kernel configuration; it requires and automatically selects the DW_EDMA option too. Signed-off-by: Gustavo Pimentel Cc: Vinod Koul Cc: Eugeniy Paltsev Cc: Lorenzo Pieralisi Cc: Andy Shevchenko Cc: Joao Pinto --- drivers/dma/dw-edma/Kconfig | 9 ++ drivers/dma/dw-edma/Makefile | 1 + drivers/dma/dw-edma/dw-edma-pcie.c | 302 +++++++++++++++++++++++++++++++++++++ 3 files changed, 312 insertions(+) create mode 100644 drivers/dma/dw-edma/dw-edma-pcie.c diff --git a/drivers/dma/dw-edma/Kconfig b/drivers/dma/dw-edma/Kconfig index 3016bed..c0838ce 100644 --- a/drivers/dma/dw-edma/Kconfig +++ b/drivers/dma/dw-edma/Kconfig @@ -7,3 +7,12 @@ config DW_EDMA help Support the Synopsys DesignWare eDMA controller, normally implemented on endpoints SoCs. + +config DW_EDMA_PCIE + tristate "Synopsys DesignWare eDMA PCIe driver" + depends on PCI && PCI_MSI + select DW_EDMA + help + Provides the glue-logic between the Synopsys DesignWare + eDMA controller and an endpoint PCIe device. This also serves + as a reference design for anyone who wants to use this IP. diff --git a/drivers/dma/dw-edma/Makefile b/drivers/dma/dw-edma/Makefile index 0c53033..8d45c0d 100644 --- a/drivers/dma/dw-edma/Makefile +++ b/drivers/dma/dw-edma/Makefile @@ -4,3 +4,4 @@ obj-$(CONFIG_DW_EDMA) += dw-edma.o dw-edma-$(CONFIG_DEBUG_FS) := dw-edma-v0-debugfs.o dw-edma-objs := dw-edma-core.o \ dw-edma-v0-core.o $(dw-edma-y) +obj-$(CONFIG_DW_EDMA_PCIE) += dw-edma-pcie.o diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c new file mode 100644 index 0000000..f29a861 --- /dev/null +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -0,0 +1,302 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Synopsys, Inc. and/or its affiliates.
+// Synopsys DesignWare eDMA PCIe driver + +#include +#include +#include +#include +#include + +#include "dw-edma-core.h" + +#define DRV_PCIE_NAME "dw-edma-pcie" + +enum dw_edma_pcie_bar { + BAR_0, + BAR_1, + BAR_2, + BAR_3, + BAR_4, + BAR_5 +}; + +struct dw_edma_pcie_data { + enum dw_edma_pcie_bar regs_bar; + u64 regs_off; + enum dw_edma_pcie_bar ll_bar; + u64 ll_off; + size_t ll_sz; + u32 version; + enum dw_edma_mode mode; +}; + +static const struct dw_edma_pcie_data snps_edda_data = { + // eDMA registers location + .regs_bar = BAR_0, + .regs_off = 0x1000, // 4 KBytes + // eDMA memory linked list location + .ll_bar = BAR_2, + .ll_off = 0, // 0 KBytes + .ll_sz = 0x20000, // 128 KBytes + // Other + .version = 0, + .mode = EDMA_MODE_UNROLL, +}; + +static int dw_edma_pcie_probe(struct pci_dev *pdev, + const struct pci_device_id *pid) +{ + const struct dw_edma_pcie_data *pdata = (void *)pid->driver_data; + struct device *dev = &pdev->dev; + struct dw_edma_chip *chip; + struct dw_edma *dw; + void __iomem *reg; + int err, irq = -1; + u32 addr_hi, addr_lo; + u16 flags; + u8 cap_off; + + if (!pdata) { + dev_err(dev, "%s missing data struture\n", + pci_name(pdev)); + return -EFAULT; + } + + err = pcim_enable_device(pdev); + if (err) { + dev_err(dev, "%s enabling device failed\n", + pci_name(pdev)); + return err; + } + + err = pcim_iomap_regions(pdev, 1 << pdata->regs_bar, pci_name(pdev)); + if (err) { + dev_err(dev, "%s eDMA register BAR I/O memory remapping failed\n", + pci_name(pdev)); + return err; + } + + err = pcim_iomap_regions(pdev, 1 << pdata->ll_bar, pci_name(pdev)); + if (err) { + dev_err(dev, "%s eDMA linked list BAR I/O remapping failed\n", + pci_name(pdev)); + return err; + } + + pci_set_master(pdev); + + err = pci_try_set_mwi(pdev); + if (err) { + dev_err(dev, "%s DMA memory write invalidate\n", + pci_name(pdev)); + return err; + } + + err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + if (err) { + dev_err(dev, "%s DMA mask set failed\n", + pci_name(pdev)); + return err; + } + + err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + if (err) { + dev_err(dev, "%s consistent DMA mask set failed\n", + pci_name(pdev)); + return err; + } + + chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL); + if (!chip) + return -ENOMEM; + + dw = devm_kzalloc(&pdev->dev, sizeof(*dw), GFP_KERNEL); + if (!dw) + return -ENOMEM; + + irq = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_MSIX); + if (irq < 0) { + dev_err(dev, "%s failed to alloc IRQ vector\n", + pci_name(pdev)); + return -EPERM; + } + + chip->dw = dw; + chip->dev = dev; + chip->id = pdev->devfn; + chip->irq = pdev->irq; + + dw->regs = pcim_iomap_table(pdev)[pdata->regs_bar]; + dw->regs += pdata->regs_off; + + dw->va_ll = pcim_iomap_table(pdev)[pdata->ll_bar]; + dw->va_ll += pdata->ll_off; + dw->pa_ll = pdev->resource[pdata->ll_bar].start; + dw->pa_ll += pdata->ll_off; + dw->ll_sz = pdata->ll_sz; + + dw->msi_addr = 0; + dw->msi_data = 0; + + dw->version = pdata->version; + dw->mode = pdata->mode; + + dev_info(dev, "Version:\t%u\n", dw->version); + + dev_info(dev, "Mode:\t%s\n", + dw->mode == EDMA_MODE_LEGACY ? "Legacy" : "Unroll"); + + dev_info(dev, "Registers:\tBAR=%u, off=0x%.16llx B, addr=0x%.8lx\n", + pdata->regs_bar, pdata->regs_off, + (unsigned long) dw->regs); + + dev_info(dev, + "L. 
List:\tBAR=%u, off=0x%.16llx B, sz=0x%.8x B, vaddr=0x%.8lx, paddr=0x%.8lx", + pdata->ll_bar, pdata->ll_off, pdata->ll_sz, + (unsigned long) dw->va_ll, + (unsigned long) dw->pa_ll); + + if (pdev->msi_cap && pdev->msi_enabled) { + cap_off = pdev->msi_cap + PCI_MSI_FLAGS; + pci_read_config_word(pdev, cap_off, &flags); + if (flags & PCI_MSI_FLAGS_ENABLE) { + cap_off = pdev->msi_cap + PCI_MSI_ADDRESS_LO; + pci_read_config_dword(pdev, cap_off, &addr_lo); + + if (flags & PCI_MSI_FLAGS_64BIT) { + cap_off = pdev->msi_cap + PCI_MSI_ADDRESS_HI; + pci_read_config_dword(pdev, cap_off, &addr_hi); + cap_off = pdev->msi_cap + PCI_MSI_DATA_64; + } else { + addr_hi = 0; + cap_off = pdev->msi_cap + PCI_MSI_DATA_32; + } + + dw->msi_addr = addr_hi; + dw->msi_addr <<= 32; + dw->msi_addr |= addr_lo; + + pci_read_config_dword(pdev, cap_off, &(dw->msi_data)); + dw->msi_data &= 0xffff; + + dev_info(dev, + "MSI:\t\taddr=0x%.16llx, data=0x%.8x, nr=%d\n", + dw->msi_addr, dw->msi_data, pdev->irq); + } + } + + if (pdev->msix_cap && pdev->msix_enabled) { + u32 offset; + u8 bir; + + cap_off = pdev->msix_cap + PCI_MSIX_FLAGS; + pci_read_config_word(pdev, cap_off, &flags); + + if (flags & PCI_MSIX_FLAGS_ENABLE) { + cap_off = pdev->msix_cap + PCI_MSIX_TABLE; + pci_read_config_dword(pdev, cap_off, &offset); + + bir = offset & PCI_MSIX_TABLE_BIR; + offset &= PCI_MSIX_TABLE_OFFSET; + + reg = pcim_iomap_table(pdev)[bir]; + reg += offset; + + addr_lo = readl(reg + PCI_MSIX_ENTRY_LOWER_ADDR); + addr_hi = readl(reg + PCI_MSIX_ENTRY_UPPER_ADDR); + dw->msi_addr = addr_hi; + dw->msi_addr <<= 32; + dw->msi_addr |= addr_lo; + + dw->msi_data = readl(reg + PCI_MSIX_ENTRY_DATA); + + dev_info(dev, + "MSI-X:\taddr=0x%.16llx, data=0x%.8x, nr=%d\n", + dw->msi_addr, dw->msi_data, pdev->irq); + } + } + + if (!pdev->msi_enabled && !pdev->msix_enabled) { + dev_err(dev, "%s enable interrupt failed\n", + pci_name(pdev)); + return -EPERM; + } + + err = dw_edma_probe(chip); + if (err) { + dev_err(dev, "%s eDMA probe failed\n", + pci_name(pdev)); + return err; + } + + pci_set_drvdata(pdev, chip); + + dev_info(dev, "DesignWare eDMA PCIe driver loaded completely\n"); + + return 0; +} + +static void dw_edma_pcie_remove(struct pci_dev *pdev) +{ + struct dw_edma_chip *chip = pci_get_drvdata(pdev); + struct device *dev = &pdev->dev; + int err; + + err = dw_edma_remove(chip); + if (err) { + dev_warn(dev, "%s can't remove device properly: %d\n", + pci_name(pdev), err); + } + + pci_free_irq_vectors(pdev); + + dev_info(dev, "DesignWare eDMA PCIe driver unloaded completely\n"); +} + +#ifdef CONFIG_PM_SLEEP + +static int dw_edma_pcie_suspend_late(struct device *dev) +{ + struct pci_dev *pci = to_pci_dev(dev); + struct dw_edma_chip *chip = pci_get_drvdata(pci); + + return dw_edma_disable(chip); +}; + +static int dw_edma_pcie_resume_early(struct device *dev) +{ + struct pci_dev *pci = to_pci_dev(dev); + struct dw_edma_chip *chip = pci_get_drvdata(pci); + + return dw_edma_enable(chip); +}; + +#endif /* CONFIG_PM_SLEEP */ + +static const struct dev_pm_ops dw_edma_pcie_dev_pm_ops = { + SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_edma_pcie_suspend_late, + dw_edma_pcie_resume_early) +}; + +static const struct pci_device_id dw_edma_pcie_id_table[] = { + { PCI_DEVICE_DATA(SYNOPSYS, 0xedda, &snps_edda_data) }, + { } +}; +MODULE_DEVICE_TABLE(pci, dw_edma_pcie_id_table); + +static struct pci_driver dw_edma_pcie_driver = { + .name = DRV_PCIE_NAME, + .id_table = dw_edma_pcie_id_table, + .probe = dw_edma_pcie_probe, + .remove = dw_edma_pcie_remove, + .driver = { + .pm = 
&dw_edma_pcie_dev_pm_ops, + }, +}; + +module_pci_driver(dw_edma_pcie_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Synopsys DesignWare eDMA PCIe driver"); +MODULE_AUTHOR("Gustavo Pimentel "); From patchwork Wed Dec 12 11:13:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 1011835 X-Patchwork-Delegate: lorenzo.pieralisi@arm.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=synopsys.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=synopsys.com header.i=@synopsys.com header.b="Q8LWEehE"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 43FDhm4KbZz9s1c for ; Wed, 12 Dec 2018 22:14:00 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727157AbeLLLN6 (ORCPT ); Wed, 12 Dec 2018 06:13:58 -0500 Received: from us01smtprelay-2.synopsys.com ([198.182.47.9]:37680 "EHLO smtprelay.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726856AbeLLLN6 (ORCPT ); Wed, 12 Dec 2018 06:13:58 -0500 Received: from mailhost.synopsys.com (mailhost3.synopsys.com [10.12.238.238]) by smtprelay.synopsys.com (Postfix) with ESMTP id 9B87624E2250; Wed, 12 Dec 2018 03:13:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1544613237; bh=oCQMFrisgrUbXZMmA0K4nQiKCO/Gi7QRh8Ud8vnY8nE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=Q8LWEehE6G8Sls7qcNriODLSOh6EdRA3QJ2gCyp1xk0E7L2cCFNItOVxeSmeEaJkN 7Usaay1TIyMOaeQlC5PMC3ln+dJpLN9YmPMnjHMOFsbRtx5YK3uN4k3+aLLVNZBuhf LQyyX39JW14BVqXykzfZVBCTUqzCPYFCdkuCwsVAr+BYMcY9y9QzNHAr2xb3eWjYWs KcW2nTY7alw8jqwUIWKAxJUL4Ogrm1s7sz/ZmADVctfNHQnusTsXe5U6X98cG3q8HZ dHmbgyLFwl1bymlSNvV+yqc9WekMOGZcL27H8WVaSzL+uFI0rfSKk6/D98eUyaII5U bTHukXliH8hAA== Received: from de02.synopsys.com (germany.internal.synopsys.com [10.225.17.21]) by mailhost.synopsys.com (Postfix) with ESMTP id 5C4CD3D78; Wed, 12 Dec 2018 03:13:57 -0800 (PST) Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by de02.synopsys.com (Postfix) with ESMTP id 982D33BDE1; Wed, 12 Dec 2018 12:13:56 +0100 (CET) From: Gustavo Pimentel To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org Cc: Gustavo Pimentel , Vinod Koul , Eugeniy Paltsev , Joao Pinto Subject: [RFC 5/6] MAINTAINERS: Add Synopsys eDMA IP driver maintainer Date: Wed, 12 Dec 2018 12:13:25 +0100 Message-Id: <45584389de19948652a32b2cdf2780d3a0003c01.1544610287.git.gustavo.pimentel@synopsys.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add Synopsys eDMA IP driver maintainer. This driver aims to support Synopsys eDMA IP and is normally distributed along with Synopsys PCIe EndPoint IP (depends of the use and licensing agreement). 
Signed-off-by: Gustavo Pimentel Cc: Vinod Koul Cc: Eugeniy Paltsev Cc: Joao Pinto --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index f485597..bdbfc14 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4256,6 +4256,13 @@ L: linux-mtd@lists.infradead.org S: Supported F: drivers/mtd/nand/raw/denali* +DESIGNWARE EDMA CORE IP DRIVER +M: Gustavo Pimentel +L: dmaengine@vger.kernel.org +S: Maintained +F: drivers/dma/dw-edma/ +F: include/linux/dma/edma.h + DESIGNWARE USB2 DRD IP DRIVER M: Minas Harutyunyan L: linux-usb@vger.kernel.org From patchwork Wed Dec 12 11:13:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gustavo Pimentel X-Patchwork-Id: 1011836 X-Patchwork-Delegate: lorenzo.pieralisi@arm.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=synopsys.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=synopsys.com header.i=@synopsys.com header.b="URSML/eb"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 43FDhn2QXJz9s4s for ; Wed, 12 Dec 2018 22:14:01 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727188AbeLLLN6 (ORCPT ); Wed, 12 Dec 2018 06:13:58 -0500 Received: from smtprelay.synopsys.com ([198.182.60.111]:35104 "EHLO smtprelay.synopsys.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727159AbeLLLN6 (ORCPT ); Wed, 12 Dec 2018 06:13:58 -0500 Received: from mailhost.synopsys.com (mailhost2.synopsys.com [10.13.184.66]) by smtprelay.synopsys.com (Postfix) with ESMTP id 1F2A110C0461; Wed, 12 Dec 2018 03:13:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=synopsys.com; s=mail; t=1544613238; bh=bQxyfyjhbeTLF292y1bLb172w8Quqj/1rP22rG8cDyM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=URSML/ebVHSIquut8oyhUbU3fQ/jwnry3RTsAcD8ctjkNb10jDYG1EZhJr4iwDt+W qjBnl/pVxtKiRl+fQZ7QhcM+3y4QKYbFhUpI5Gqb8KykdBg8rwx/xRr8UACPxhlYJZ fLUOADD39YHdogVcOrgpLhxP25fIFiLT2LTjP4oGZyXN7kCPd6+fOlM2qhnQMIvte/ BZpOGpq5XWU3yL2GQL9If28d52q+814aQNYyyHE8nyZUzi32kcMufaLEl5IW1T03xB RWdstKJLDRWrrgnobF4niZLcFq5nxbyjm4FqNIY/HgG3h1/2ZbDyvXuSF3OHCW0knH SkKC/95R5J7Jw== Received: from de02.synopsys.com (de02.internal.synopsys.com [10.225.17.21]) by mailhost.synopsys.com (Postfix) with ESMTP id DB8703E25; Wed, 12 Dec 2018 03:13:57 -0800 (PST) Received: from de02dwia024.internal.synopsys.com (de02dwia024.internal.synopsys.com [10.225.19.81]) by de02.synopsys.com (Postfix) with ESMTP id 8A6B33BDE6; Wed, 12 Dec 2018 12:13:57 +0100 (CET) From: Gustavo Pimentel To: linux-pci@vger.kernel.org, dmaengine@vger.kernel.org Cc: Gustavo Pimentel , Kishon Vijay Abraham I , Bjorn Helgaas , Lorenzo Pieralisi , Joao Pinto Subject: [RFC 6/6] pci: pci_ids: Add Synopsys device id 0xedda Date: Wed, 12 Dec 2018 12:13:26 +0100 Message-Id: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: In-Reply-To: References: Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Create and add Synopsys device id (0xedda) to pci id list, since this id 
is now being used by two different drivers (pci_endpoint_test.ko and dw-edma-pcie.ko). Signed-off-by: Gustavo Pimentel Cc: Kishon Vijay Abraham I Cc: Bjorn Helgaas Cc: Lorenzo Pieralisi Cc: Joao Pinto Acked-by: Bjorn Helgaas --- drivers/dma/dw-edma/dw-edma-pcie.c | 2 +- drivers/misc/pci_endpoint_test.c | 2 +- include/linux/pci_ids.h | 1 + 3 files changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index f29a861..50e0db4 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -280,7 +280,7 @@ static const struct dev_pm_ops dw_edma_pcie_dev_pm_ops = { }; static const struct pci_device_id dw_edma_pcie_id_table[] = { - { PCI_DEVICE_DATA(SYNOPSYS, 0xedda, &snps_edda_data) }, + { PCI_DEVICE_DATA(SYNOPSYS, EDDA, &snps_edda_data) }, { } }; MODULE_DEVICE_TABLE(pci, dw_edma_pcie_id_table); diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c index 896e2df..d27efe838 100644 --- a/drivers/misc/pci_endpoint_test.c +++ b/drivers/misc/pci_endpoint_test.c @@ -788,7 +788,7 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev) static const struct pci_device_id pci_endpoint_test_tbl[] = { { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) }, { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) }, - { PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) }, + { PCI_DEVICE_DATA(SYNOPSYS, EDDA, NULL) }, { } }; MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl); diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 69f0abe..57f17dd 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -2358,6 +2358,7 @@ #define PCI_DEVICE_ID_CENATEK_IDE 0x0001 #define PCI_VENDOR_ID_SYNOPSYS 0x16c3 +#define PCI_DEVICE_ID_SYNOPSYS_EDDA 0xedda #define PCI_VENDOR_ID_VITESSE 0x1725 #define PCI_DEVICE_ID_VITESSE_VSC7174 0x7174
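
As a usage illustration (not part of the series): below is a minimal sketch of a PCI driver that consumes the new PCI_DEVICE_ID_SYNOPSYS_EDDA define through PCI_DEVICE_DATA() and reads the per-device data pointer back in its probe callback, which is the same pattern dw-edma-pcie.c uses with snps_edda_data. The my_edda_* names are made up for illustration only.

// Minimal sketch: match the Synopsys EDDA prototype endpoint and retrieve
// the per-device data stored via PCI_DEVICE_DATA(). Illustrative names only.
#include <linux/module.h>
#include <linux/pci.h>

struct my_edda_data {
	unsigned int regs_bar;	// BAR expected to hold the eDMA registers
};

static const struct my_edda_data my_edda_cfg = {
	.regs_bar = 0,
};

static int my_edda_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
{
	// driver_data carries the pointer given to PCI_DEVICE_DATA()
	const struct my_edda_data *cfg = (void *)pid->driver_data;

	dev_info(&pdev->dev, "eDMA registers expected in BAR %u\n",
		 cfg->regs_bar);

	return pcim_enable_device(pdev);
}

static const struct pci_device_id my_edda_ids[] = {
	// expands to PCI_VENDOR_ID_SYNOPSYS / PCI_DEVICE_ID_SYNOPSYS_EDDA
	{ PCI_DEVICE_DATA(SYNOPSYS, EDDA, &my_edda_cfg) },
	{ }
};
MODULE_DEVICE_TABLE(pci, my_edda_ids);

static struct pci_driver my_edda_driver = {
	.name = "my-edda-example",
	.id_table = my_edda_ids,
	.probe = my_edda_probe,
};
module_pci_driver(my_edda_driver);

MODULE_LICENSE("GPL v2");

Because PCI_DEVICE_DATA(SYNOPSYS, EDDA, ...) token-pastes PCI_DEVICE_ID_SYNOPSYS_EDDA, such a table entry only compiles once the new define exists in pci_ids.h, which is why the define and its two users are updated together in this patch.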