From patchwork Wed Feb 11 02:38:13 2015
X-Patchwork-Submitter: Mathieu Olivari
X-Patchwork-Id: 438648
From: Mathieu Olivari <mathieu@codeaurora.org>
To: blogic@openwrt.org, kaloz@openwrt.org
Cc: mmcclint@codeaurora.org, openwrt-devel@lists.openwrt.org
Date: Tue, 10 Feb 2015 18:38:13 -0800
Message-Id: <1423622295-17130-2-git-send-email-mathieu@codeaurora.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1423622295-17130-1-git-send-email-mathieu@codeaurora.org>
References: <1423622295-17130-1-git-send-email-mathieu@codeaurora.org>
Subject: [OpenWrt-Devel] [PATCH 2/4] ipq806x: fix SPI read errors

SPI read errors are reported even with the board sitting idle.
Fixes have been posted to the linux-spi mailing list but have not been merged
yet, so we'll just pull the patches and apply them locally:

* 001-spi-qup-Add-DMA-capabilities.patch is pulled from:
  http://marc.info/?l=linux-spi&m=140381685115859&w=2
* 002-v3-spi-qup-Fix-incorrect-block-transfers.patch is pulled from:
  http://marc.info/?l=linux-spi&m=141211211011539&w=2

Signed-off-by: Mathieu Olivari <mathieu@codeaurora.org>
---
 .../patches/001-spi-qup-Add-DMA-capabilities.patch | 522 +++++++++++++++++++++
 ...-v3-spi-qup-Fix-incorrect-block-transfers.patch | 376 +++++++++++++++
 2 files changed, 898 insertions(+)
 create mode 100644 target/linux/ipq806x/patches/001-spi-qup-Add-DMA-capabilities.patch
 create mode 100644 target/linux/ipq806x/patches/002-v3-spi-qup-Fix-incorrect-block-transfers.patch

diff --git a/target/linux/ipq806x/patches/001-spi-qup-Add-DMA-capabilities.patch b/target/linux/ipq806x/patches/001-spi-qup-Add-DMA-capabilities.patch
new file mode 100644
index 0000000..62badd5
--- /dev/null
+++ b/target/linux/ipq806x/patches/001-spi-qup-Add-DMA-capabilities.patch
@@ -0,0 +1,522 @@
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 7bit
+Subject: spi: qup: Add DMA capabilities
+From: Andy Gross <agross@codeaurora.org>
+X-Patchwork-Id: 4432401
+Message-Id: <1403816781-31008-1-git-send-email-agross@codeaurora.org>
+To: Mark Brown
+Cc: linux-spi@vger.kernel.org, Sagar Dharia,
+ Daniel Sneddon,
+ Bjorn Andersson,
+ "Ivan T. Ivanov",
+ linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
+ linux-arm-msm@vger.kernel.org, Andy Gross <agross@codeaurora.org>
+Date: Thu, 26 Jun 2014 16:06:21 -0500
+
+This patch adds DMA capabilities to the spi-qup driver. If DMA channels are
+present, the QUP will use DMA instead of block mode for transfers to/from SPI
+peripherals for transactions larger than the length of a block.
+
+Signed-off-by: Andy Gross <agross@codeaurora.org>
+
+---
+.../devicetree/bindings/spi/qcom,spi-qup.txt       |   10 +
+ drivers/spi/spi-qup.c                             |  361 ++++++++++++++++++--
+ 2 files changed, 350 insertions(+), 21 deletions(-)
+
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt
+@@ -27,6 +27,11 @@ Optional properties:
+ - spi-max-frequency: Specifies maximum SPI clock frequency,
+                      Units - Hz. Definition as per
+                      Documentation/devicetree/bindings/spi/spi-bus.txt
++- dmas:      Two DMA channel specifiers following the convention outlined
++             in bindings/dma/dma.txt
++- dma-names: Names for the DMA channels. If present, there must be one
++             channel named "tx" for transmit and one named "rx" for receive.
+ - num-cs: total number of chipselects
+ - cs-gpios: should specify GPIOs used for chipselects.
+             The gpios will be referred to as reg = <index> in the SPI child
+@@ -51,6 +56,10 @@ Example:
+		clocks = <&gcc GCC_BLSP2_QUP2_SPI_APPS_CLK>, <&gcc GCC_BLSP2_AHB_CLK>;
+		clock-names = "core", "iface";
+
++		dmas = <&blsp2_bam 2>,
++		       <&blsp2_bam 3>;
++		dma-names = "rx", "tx";
++
+		pinctrl-names = "default";
+		pinctrl-0 = <&spi8_default>;
+
+--- a/drivers/spi/spi-qup.c
++++ b/drivers/spi/spi-qup.c
+@@ -22,6 +22,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/spi/spi.h>
++#include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
+
+ #define QUP_CONFIG		0x0000
+ #define QUP_STATE		0x0004
+@@ -116,6 +118,8 @@
+
+ #define SPI_NUM_CHIPSELECTS	4
+
++#define SPI_MAX_XFER		(SZ_64K - 64)
++
+ /* high speed mode is when bus rate is greater then 26MHz */
+ #define SPI_HS_MIN_RATE		26000000
+ #define SPI_MAX_RATE		50000000
+@@ -143,6 +147,17 @@ struct spi_qup {
+	int			tx_bytes;
+	int			rx_bytes;
+	int			qup_v1;
++
++	int			use_dma;
++
++	struct dma_chan		*rx_chan;
++	struct dma_slave_config	rx_conf;
++	struct dma_chan		*tx_chan;
++	struct dma_slave_config	tx_conf;
++	dma_addr_t		rx_dma;
++	dma_addr_t		tx_dma;
++	void			*dummy;
++	atomic_t		dma_outstanding;
+ };
+
+
+@@ -266,6 +281,221 @@ static void spi_qup_fifo_write(struct sp
+	}
+ }
+
++static void qup_dma_callback(void *data)
++{
++	struct spi_qup *controller = data;
++
++	if (atomic_dec_and_test(&controller->dma_outstanding))
++		complete(&controller->done);
++}
++
++static int spi_qup_do_dma(struct spi_qup *controller, struct spi_transfer *xfer)
++{
++	struct dma_async_tx_descriptor *rxd, *txd;
++	dma_cookie_t rx_cookie, tx_cookie;
++	u32 xfer_len, rx_align = 0, tx_align = 0, n_words;
++	struct scatterlist tx_sg[2], rx_sg[2];
++	int ret = 0;
++	u32 bytes_to_xfer = xfer->len;
++	u32 offset = 0;
++	u32 rx_nents = 0, tx_nents = 0;
++	dma_addr_t rx_dma = 0, tx_dma = 0, rx_dummy_dma = 0, tx_dummy_dma = 0;
++
++
++	if (xfer->rx_buf) {
++		rx_dma = dma_map_single(controller->dev, xfer->rx_buf,
++			xfer->len, DMA_FROM_DEVICE);
++
++		if (dma_mapping_error(controller->dev, rx_dma)) {
++			ret = -ENOMEM;
++			return ret;
++		}
++
++		/* check to see if we need dummy buffer for leftover bytes */
++		rx_align = xfer->len % controller->in_blk_sz;
++		if (rx_align) {
++			rx_dummy_dma = dma_map_single(controller->dev,
++				controller->dummy, controller->in_fifo_sz,
++				DMA_FROM_DEVICE);
++
++			if (dma_mapping_error(controller->dev, rx_dummy_dma)) {
++				ret = -ENOMEM;
++				goto err_map_rx_dummy;
++			}
++		}
++	}
++
++	if (xfer->tx_buf) {
++		tx_dma = dma_map_single(controller->dev,
++			(void *)xfer->tx_buf, xfer->len, DMA_TO_DEVICE);
++
++		if (dma_mapping_error(controller->dev, tx_dma)) {
++			ret = -ENOMEM;
++			goto err_map_tx;
++		}
++
++		/* check to see if we need dummy buffer for leftover bytes */
++		tx_align = xfer->len % controller->out_blk_sz;
++		if (tx_align) {
++			memcpy(controller->dummy + SZ_1K,
++				xfer->tx_buf + xfer->len - tx_align,
++				tx_align);
++			memset(controller->dummy + SZ_1K + tx_align, 0,
++				controller->out_blk_sz - tx_align);
++
++			tx_dummy_dma = dma_map_single(controller->dev,
++				controller->dummy + SZ_1K,
++				controller->out_blk_sz, DMA_TO_DEVICE);
++
++			if (dma_mapping_error(controller->dev, tx_dummy_dma)) {
++				ret = -ENOMEM;
++				goto err_map_tx_dummy;
++			}
++		}
++	}
++
++	atomic_set(&controller->dma_outstanding, 0);
++
++	while (bytes_to_xfer > 0) {
++		xfer_len = min_t(u32, bytes_to_xfer, SPI_MAX_XFER);
++		n_words = DIV_ROUND_UP(xfer_len, controller->w_size);
++
++		/* write out current word count to controller */
++		writel_relaxed(n_words, controller->base + QUP_MX_INPUT_CNT);
++		writel_relaxed(n_words, controller->base + QUP_MX_OUTPUT_CNT);
++
++
reinit_completion(&controller->done); ++ ++ if (xfer->tx_buf) { ++ /* recalc align for each transaction */ ++ tx_align = xfer_len % controller->out_blk_sz; ++ ++ if (tx_align) ++ tx_nents = 2; ++ else ++ tx_nents = 1; ++ ++ /* initialize scatterlists */ ++ sg_init_table(tx_sg, tx_nents); ++ sg_dma_len(&tx_sg[0]) = xfer_len - tx_align; ++ sg_dma_address(&tx_sg[0]) = tx_dma + offset; ++ ++ /* account for non block size transfer */ ++ if (tx_align) { ++ sg_dma_len(&tx_sg[1]) = controller->out_blk_sz; ++ sg_dma_address(&tx_sg[1]) = tx_dummy_dma; ++ } ++ ++ txd = dmaengine_prep_slave_sg(controller->tx_chan, ++ tx_sg, tx_nents, DMA_MEM_TO_DEV, 0); ++ if (!txd) { ++ ret = -ENOMEM; ++ goto err_unmap; ++ } ++ ++ atomic_inc(&controller->dma_outstanding); ++ ++ txd->callback = qup_dma_callback; ++ txd->callback_param = controller; ++ ++ tx_cookie = dmaengine_submit(txd); ++ ++ dma_async_issue_pending(controller->tx_chan); ++ } ++ ++ if (xfer->rx_buf) { ++ /* recalc align for each transaction */ ++ rx_align = xfer_len % controller->in_blk_sz; ++ ++ if (rx_align) ++ rx_nents = 2; ++ else ++ rx_nents = 1; ++ ++ /* initialize scatterlists */ ++ sg_init_table(rx_sg, rx_nents); ++ sg_dma_address(&rx_sg[0]) = rx_dma + offset; ++ sg_dma_len(&rx_sg[0]) = xfer_len - rx_align; ++ ++ /* account for non block size transfer */ ++ if (rx_align) { ++ sg_dma_len(&rx_sg[1]) = controller->in_blk_sz; ++ sg_dma_address(&rx_sg[1]) = rx_dummy_dma; ++ } ++ ++ rxd = dmaengine_prep_slave_sg(controller->rx_chan, ++ rx_sg, rx_nents, DMA_DEV_TO_MEM, 0); ++ if (!rxd) { ++ ret = -ENOMEM; ++ goto err_unmap; ++ } ++ ++ atomic_inc(&controller->dma_outstanding); ++ ++ rxd->callback = qup_dma_callback; ++ rxd->callback_param = controller; ++ ++ rx_cookie = dmaengine_submit(rxd); ++ ++ dma_async_issue_pending(controller->rx_chan); ++ } ++ ++ if (spi_qup_set_state(controller, QUP_STATE_RUN)) { ++ dev_warn(controller->dev, "cannot set EXECUTE state\n"); ++ goto err_unmap; ++ } ++ ++ if (!wait_for_completion_timeout(&controller->done, ++ msecs_to_jiffies(1000))) { ++ ret = -ETIMEDOUT; ++ ++ /* clear out all the DMA transactions */ ++ if (xfer->tx_buf) ++ dmaengine_terminate_all(controller->tx_chan); ++ if (xfer->rx_buf) ++ dmaengine_terminate_all(controller->rx_chan); ++ ++ goto err_unmap; ++ } ++ ++ if (rx_align) ++ memcpy(xfer->rx_buf + offset + xfer->len - rx_align, ++ controller->dummy, rx_align); ++ ++ /* adjust remaining bytes to transfer */ ++ bytes_to_xfer -= xfer_len; ++ offset += xfer_len; ++ ++ ++ /* reset mini-core state so we can program next transaction */ ++ if (spi_qup_set_state(controller, QUP_STATE_RESET)) { ++ dev_err(controller->dev, "cannot set RESET state\n"); ++ goto err_unmap; ++ } ++ } ++ ++ ret = 0; ++ ++err_unmap: ++ if (tx_align) ++ dma_unmap_single(controller->dev, tx_dummy_dma, ++ controller->out_fifo_sz, DMA_TO_DEVICE); ++err_map_tx_dummy: ++ if (xfer->tx_buf) ++ dma_unmap_single(controller->dev, tx_dma, xfer->len, ++ DMA_TO_DEVICE); ++err_map_tx: ++ if (rx_align) ++ dma_unmap_single(controller->dev, rx_dummy_dma, ++ controller->in_fifo_sz, DMA_FROM_DEVICE); ++err_map_rx_dummy: ++ if (xfer->rx_buf) ++ dma_unmap_single(controller->dev, rx_dma, xfer->len, ++ DMA_FROM_DEVICE); ++ ++ return ret; ++} ++ + static irqreturn_t spi_qup_qup_irq(int irq, void *dev_id) + { + struct spi_qup *controller = dev_id; +@@ -315,11 +545,13 @@ static irqreturn_t spi_qup_qup_irq(int i + error = -EIO; + } + +- if (opflags & QUP_OP_IN_SERVICE_FLAG) +- spi_qup_fifo_read(controller, xfer); ++ if (!controller->use_dma) { ++ if (opflags 
& QUP_OP_IN_SERVICE_FLAG) ++ spi_qup_fifo_read(controller, xfer); + +- if (opflags & QUP_OP_OUT_SERVICE_FLAG) +- spi_qup_fifo_write(controller, xfer); ++ if (opflags & QUP_OP_OUT_SERVICE_FLAG) ++ spi_qup_fifo_write(controller, xfer); ++ } + + spin_lock_irqsave(&controller->lock, flags); + controller->error = error; +@@ -339,6 +571,8 @@ static int spi_qup_io_config(struct spi_ + struct spi_qup *controller = spi_master_get_devdata(spi->master); + u32 config, iomode, mode; + int ret, n_words, w_size; ++ size_t dma_align = dma_get_cache_alignment(); ++ u32 dma_available = 0; + + if (spi->mode & SPI_LOOP && xfer->len > controller->in_fifo_sz) { + dev_err(controller->dev, "too big size for loopback %d > %d\n", +@@ -367,6 +601,11 @@ static int spi_qup_io_config(struct spi_ + n_words = xfer->len / w_size; + controller->w_size = w_size; + ++ if (controller->rx_chan && ++ IS_ALIGNED((size_t)xfer->tx_buf, dma_align) && ++ IS_ALIGNED((size_t)xfer->rx_buf, dma_align)) ++ dma_available = 1; ++ + if (n_words <= (controller->in_fifo_sz / sizeof(u32))) { + mode = QUP_IO_M_MODE_FIFO; + writel_relaxed(n_words, controller->base + QUP_MX_READ_CNT); +@@ -374,19 +613,31 @@ static int spi_qup_io_config(struct spi_ + /* must be zero for FIFO */ + writel_relaxed(0, controller->base + QUP_MX_INPUT_CNT); + writel_relaxed(0, controller->base + QUP_MX_OUTPUT_CNT); +- } else { ++ controller->use_dma = 0; ++ } else if (!dma_available) { + mode = QUP_IO_M_MODE_BLOCK; + writel_relaxed(n_words, controller->base + QUP_MX_INPUT_CNT); + writel_relaxed(n_words, controller->base + QUP_MX_OUTPUT_CNT); + /* must be zero for BLOCK and BAM */ + writel_relaxed(0, controller->base + QUP_MX_READ_CNT); + writel_relaxed(0, controller->base + QUP_MX_WRITE_CNT); ++ controller->use_dma = 0; ++ } else { ++ mode = QUP_IO_M_MODE_DMOV; ++ writel_relaxed(0, controller->base + QUP_MX_READ_CNT); ++ writel_relaxed(0, controller->base + QUP_MX_WRITE_CNT); ++ controller->use_dma = 1; + } + + iomode = readl_relaxed(controller->base + QUP_IO_M_MODES); + /* Set input and output transfer mode */ + iomode &= ~(QUP_IO_M_INPUT_MODE_MASK | QUP_IO_M_OUTPUT_MODE_MASK); +- iomode &= ~(QUP_IO_M_PACK_EN | QUP_IO_M_UNPACK_EN); ++ ++ if (!controller->use_dma) ++ iomode &= ~(QUP_IO_M_PACK_EN | QUP_IO_M_UNPACK_EN); ++ else ++ iomode |= QUP_IO_M_PACK_EN | QUP_IO_M_UNPACK_EN; ++ + iomode |= (mode << QUP_IO_M_OUTPUT_MODE_MASK_SHIFT); + iomode |= (mode << QUP_IO_M_INPUT_MODE_MASK_SHIFT); + +@@ -419,6 +670,14 @@ static int spi_qup_io_config(struct spi_ + config &= ~(QUP_CONFIG_NO_INPUT | QUP_CONFIG_NO_OUTPUT | QUP_CONFIG_N); + config |= xfer->bits_per_word - 1; + config |= QUP_CONFIG_SPI_MODE; ++ ++ if (controller->use_dma) { ++ if (!xfer->tx_buf) ++ config |= QUP_CONFIG_NO_OUTPUT; ++ if (!xfer->rx_buf) ++ config |= QUP_CONFIG_NO_INPUT; ++ } ++ + writel_relaxed(config, controller->base + QUP_CONFIG); + + /* only write to OPERATIONAL_MASK when register is present */ +@@ -452,25 +711,29 @@ static int spi_qup_transfer_one(struct s + controller->tx_bytes = 0; + spin_unlock_irqrestore(&controller->lock, flags); + +- if (spi_qup_set_state(controller, QUP_STATE_RUN)) { +- dev_warn(controller->dev, "cannot set RUN state\n"); +- goto exit; +- } ++ if (controller->use_dma) { ++ ret = spi_qup_do_dma(controller, xfer); ++ } else { ++ if (spi_qup_set_state(controller, QUP_STATE_RUN)) { ++ dev_warn(controller->dev, "cannot set RUN state\n"); ++ goto exit; ++ } + +- if (spi_qup_set_state(controller, QUP_STATE_PAUSE)) { +- dev_warn(controller->dev, "cannot set PAUSE state\n"); +- goto 
exit; +- } ++ if (spi_qup_set_state(controller, QUP_STATE_PAUSE)) { ++ dev_warn(controller->dev, "cannot set PAUSE state\n"); ++ goto exit; ++ } + +- spi_qup_fifo_write(controller, xfer); ++ spi_qup_fifo_write(controller, xfer); + +- if (spi_qup_set_state(controller, QUP_STATE_RUN)) { +- dev_warn(controller->dev, "cannot set EXECUTE state\n"); +- goto exit; +- } ++ if (spi_qup_set_state(controller, QUP_STATE_RUN)) { ++ dev_warn(controller->dev, "cannot set EXECUTE state\n"); ++ goto exit; ++ } + +- if (!wait_for_completion_timeout(&controller->done, timeout)) +- ret = -ETIMEDOUT; ++ if (!wait_for_completion_timeout(&controller->done, timeout)) ++ ret = -ETIMEDOUT; ++ } + exit: + spi_qup_set_state(controller, QUP_STATE_RESET); + spin_lock_irqsave(&controller->lock, flags); +@@ -553,6 +816,7 @@ static int spi_qup_probe(struct platform + master->transfer_one = spi_qup_transfer_one; + master->dev.of_node = pdev->dev.of_node; + master->auto_runtime_pm = true; ++ master->dma_alignment = dma_get_cache_alignment(); + + platform_set_drvdata(pdev, master); + +@@ -618,6 +882,56 @@ static int spi_qup_probe(struct platform + QUP_ERROR_INPUT_UNDER_RUN | QUP_ERROR_OUTPUT_UNDER_RUN, + base + QUP_ERROR_FLAGS_EN); + ++ /* allocate dma resources, if available */ ++ controller->rx_chan = dma_request_slave_channel(&pdev->dev, "rx"); ++ if (controller->rx_chan) { ++ controller->tx_chan = ++ dma_request_slave_channel(&pdev->dev, "tx"); ++ ++ if (!controller->tx_chan) { ++ dev_err(&pdev->dev, "Failed to allocate dma tx chan"); ++ dma_release_channel(controller->rx_chan); ++ } ++ ++ /* set DMA parameters */ ++ controller->rx_conf.device_fc = 1; ++ controller->rx_conf.src_addr = res->start + QUP_INPUT_FIFO; ++ controller->rx_conf.src_maxburst = controller->in_blk_sz; ++ ++ controller->tx_conf.device_fc = 1; ++ controller->tx_conf.dst_addr = res->start + QUP_OUTPUT_FIFO; ++ controller->tx_conf.dst_maxburst = controller->out_blk_sz; ++ ++ if (dmaengine_slave_config(controller->rx_chan, ++ &controller->rx_conf)) { ++ dev_err(&pdev->dev, "failed to configure RX channel\n"); ++ ++ dma_release_channel(controller->rx_chan); ++ dma_release_channel(controller->tx_chan); ++ controller->tx_chan = NULL; ++ controller->rx_chan = NULL; ++ } else if (dmaengine_slave_config(controller->tx_chan, ++ &controller->tx_conf)) { ++ dev_err(&pdev->dev, "failed to configure TX channel\n"); ++ ++ dma_release_channel(controller->rx_chan); ++ dma_release_channel(controller->tx_chan); ++ controller->tx_chan = NULL; ++ controller->rx_chan = NULL; ++ } ++ ++ controller->dummy = devm_kmalloc(controller->dev, PAGE_SIZE, ++ GFP_KERNEL); ++ ++ if (!controller->dummy) { ++ dma_release_channel(controller->rx_chan); ++ dma_release_channel(controller->tx_chan); ++ controller->tx_chan = NULL; ++ controller->rx_chan = NULL; ++ } ++ } ++ ++ + writel_relaxed(0, base + SPI_CONFIG); + writel_relaxed(SPI_IO_C_NO_TRI_STATE, base + SPI_IO_CONTROL); + +@@ -730,6 +1044,11 @@ static int spi_qup_remove(struct platfor + if (ret) + return ret; + ++ if (controller->rx_chan) ++ dma_release_channel(controller->rx_chan); ++ if (controller->tx_chan) ++ dma_release_channel(controller->tx_chan); ++ + clk_disable_unprepare(controller->cclk); + clk_disable_unprepare(controller->iclk); + diff --git a/target/linux/ipq806x/patches/002-v3-spi-qup-Fix-incorrect-block-transfers.patch b/target/linux/ipq806x/patches/002-v3-spi-qup-Fix-incorrect-block-transfers.patch new file mode 100644 index 0000000..62ee5b4 --- /dev/null +++ 
b/target/linux/ipq806x/patches/002-v3-spi-qup-Fix-incorrect-block-transfers.patch
@@ -0,0 +1,376 @@
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 7bit
+Subject: [v3] spi: qup: Fix incorrect block transfers
+From: Andy Gross <agross@codeaurora.org>
+X-Patchwork-Id: 5007321
+Message-Id: <1412112088-25928-1-git-send-email-agross@codeaurora.org>
+To: Mark Brown
+Cc: linux-spi@vger.kernel.org, linux-kernel@vger.kernel.org,
+ linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
+ "Ivan T. Ivanov",
+ Bjorn Andersson,
+ Kumar Gala, Andy Gross <agross@codeaurora.org>
+Date: Tue, 30 Sep 2014 16:21:28 -0500
+
+This patch fixes a number of errors in the QUP block transfer mode. The errors
+manifested themselves as input underruns, output overruns, and timed-out
+transactions.
+
+Block mode does not require the priming that occurs in FIFO mode. At the
+moment the QUP is placed into the RUN state, it will immediately raise an
+interrupt if the request is a write. Therefore, there is no need to prime
+the pump.
+
+In addition, block transfers require that whole blocks of data are
+read/written at a time. The last block of data that completes a transaction
+may contain less than a full block's worth of data.
+
+Each block of data results in an input/output service interrupt, accompanied
+by an input/output block flag being set. Additional block reads/writes require
+clearing of the service flag. It is OK to check for additional blocks of data
+in the ISR, but every block transferred has to be acked. Imbalanced acks result
+in returning early from completed transactions with interrupts still pending,
+which can affect the next transaction. Transactions are deemed complete when
+the MAX_INPUT or MAX_OUTPUT flag is set.
+
+Changes from v2:
+- Added an additional completion check so that transaction done is not
+  signaled prematurely.
+- Fixed various review comments.
+
+Changes from v1:
+- Split out read/write block function.
+- Removed extraneous checks for transfer length + +Signed-off-by: Andy Gross + +--- +drivers/spi/spi-qup.c | 201 ++++++++++++++++++++++++++++++++++++------------- + 1 file changed, 148 insertions(+), 53 deletions(-) + +--- a/drivers/spi/spi-qup.c ++++ b/drivers/spi/spi-qup.c +@@ -82,6 +82,8 @@ + #define QUP_IO_M_MODE_BAM 3 + + /* QUP_OPERATIONAL fields */ ++#define QUP_OP_IN_BLOCK_READ_REQ BIT(13) ++#define QUP_OP_OUT_BLOCK_WRITE_REQ BIT(12) + #define QUP_OP_MAX_INPUT_DONE_FLAG BIT(11) + #define QUP_OP_MAX_OUTPUT_DONE_FLAG BIT(10) + #define QUP_OP_IN_SERVICE_FLAG BIT(9) +@@ -147,6 +149,7 @@ struct spi_qup { + int tx_bytes; + int rx_bytes; + int qup_v1; ++ int mode; + + int use_dma; + +@@ -213,30 +216,14 @@ static int spi_qup_set_state(struct spi_ + return 0; + } + +- +-static void spi_qup_fifo_read(struct spi_qup *controller, +- struct spi_transfer *xfer) ++static void spi_qup_fill_read_buffer(struct spi_qup *controller, ++ struct spi_transfer *xfer, u32 data) + { + u8 *rx_buf = xfer->rx_buf; +- u32 word, state; +- int idx, shift, w_size; +- +- w_size = controller->w_size; +- +- while (controller->rx_bytes < xfer->len) { +- +- state = readl_relaxed(controller->base + QUP_OPERATIONAL); +- if (0 == (state & QUP_OP_IN_FIFO_NOT_EMPTY)) +- break; +- +- word = readl_relaxed(controller->base + QUP_INPUT_FIFO); +- +- if (!rx_buf) { +- controller->rx_bytes += w_size; +- continue; +- } ++ int idx, shift; + +- for (idx = 0; idx < w_size; idx++, controller->rx_bytes++) { ++ if (rx_buf) ++ for (idx = 0; idx < controller->w_size; idx++) { + /* + * The data format depends on bytes per SPI word: + * 4 bytes: 0x12345678 +@@ -244,41 +231,139 @@ static void spi_qup_fifo_read(struct spi + * 1 byte : 0x00000012 + */ + shift = BITS_PER_BYTE; +- shift *= (w_size - idx - 1); +- rx_buf[controller->rx_bytes] = word >> shift; ++ shift *= (controller->w_size - idx - 1); ++ rx_buf[controller->rx_bytes + idx] = data >> shift; ++ } ++ ++ controller->rx_bytes += controller->w_size; ++} ++ ++static void spi_qup_prepare_write_data(struct spi_qup *controller, ++ struct spi_transfer *xfer, u32 *data) ++{ ++ const u8 *tx_buf = xfer->tx_buf; ++ u32 val; ++ int idx; ++ ++ *data = 0; ++ ++ if (tx_buf) ++ for (idx = 0; idx < controller->w_size; idx++) { ++ val = tx_buf[controller->tx_bytes + idx]; ++ *data |= val << (BITS_PER_BYTE * (3 - idx)); + } ++ ++ controller->tx_bytes += controller->w_size; ++} ++ ++static void spi_qup_fifo_read(struct spi_qup *controller, ++ struct spi_transfer *xfer) ++{ ++ u32 data; ++ ++ /* clear service request */ ++ writel_relaxed(QUP_OP_IN_SERVICE_FLAG, ++ controller->base + QUP_OPERATIONAL); ++ ++ while (controller->rx_bytes < xfer->len) { ++ if (!(readl_relaxed(controller->base + QUP_OPERATIONAL) & ++ QUP_OP_IN_FIFO_NOT_EMPTY)) ++ break; ++ ++ data = readl_relaxed(controller->base + QUP_INPUT_FIFO); ++ ++ spi_qup_fill_read_buffer(controller, xfer, data); + } + } + + static void spi_qup_fifo_write(struct spi_qup *controller, +- struct spi_transfer *xfer) ++ struct spi_transfer *xfer) + { +- const u8 *tx_buf = xfer->tx_buf; +- u32 word, state, data; +- int idx, w_size; ++ u32 data; + +- w_size = controller->w_size; ++ /* clear service request */ ++ writel_relaxed(QUP_OP_OUT_SERVICE_FLAG, ++ controller->base + QUP_OPERATIONAL); + + while (controller->tx_bytes < xfer->len) { + +- state = readl_relaxed(controller->base + QUP_OPERATIONAL); +- if (state & QUP_OP_OUT_FIFO_FULL) ++ if (readl_relaxed(controller->base + QUP_OPERATIONAL) & ++ QUP_OP_OUT_FIFO_FULL) + break; + +- word = 0; +- for (idx = 0; idx 
< w_size; idx++, controller->tx_bytes++) { ++ spi_qup_prepare_write_data(controller, xfer, &data); ++ writel_relaxed(data, controller->base + QUP_OUTPUT_FIFO); + +- if (!tx_buf) { +- controller->tx_bytes += w_size; +- break; +- } ++ } ++} + +- data = tx_buf[controller->tx_bytes]; +- word |= data << (BITS_PER_BYTE * (3 - idx)); +- } ++static void spi_qup_block_read(struct spi_qup *controller, ++ struct spi_transfer *xfer) ++{ ++ u32 data; ++ u32 reads_per_blk = controller->in_blk_sz >> 2; ++ u32 num_words = (xfer->len - controller->rx_bytes) / controller->w_size; ++ int i; ++ ++ do { ++ /* ACK by clearing service flag */ ++ writel_relaxed(QUP_OP_IN_SERVICE_FLAG, ++ controller->base + QUP_OPERATIONAL); ++ ++ /* transfer up to a block size of data in a single pass */ ++ for (i = 0; num_words && i < reads_per_blk; i++, num_words--) { ++ ++ /* read data and fill up rx buffer */ ++ data = readl_relaxed(controller->base + QUP_INPUT_FIFO); ++ spi_qup_fill_read_buffer(controller, xfer, data); ++ } ++ ++ /* check to see if next block is ready */ ++ if (!(readl_relaxed(controller->base + QUP_OPERATIONAL) & ++ QUP_OP_IN_BLOCK_READ_REQ)) ++ break; + +- writel_relaxed(word, controller->base + QUP_OUTPUT_FIFO); +- } ++ } while (num_words); ++ ++ /* ++ * Due to extra stickiness of the QUP_OP_IN_SERVICE_FLAG during block ++ * reads, it has to be cleared again at the very end ++ */ ++ if (readl_relaxed(controller->base + QUP_OPERATIONAL) & ++ QUP_OP_MAX_INPUT_DONE_FLAG) ++ writel_relaxed(QUP_OP_IN_SERVICE_FLAG, ++ controller->base + QUP_OPERATIONAL); ++ ++} ++ ++static void spi_qup_block_write(struct spi_qup *controller, ++ struct spi_transfer *xfer) ++{ ++ u32 data; ++ u32 writes_per_blk = controller->out_blk_sz >> 2; ++ u32 num_words = (xfer->len - controller->tx_bytes) / controller->w_size; ++ int i; ++ ++ do { ++ /* ACK by clearing service flag */ ++ writel_relaxed(QUP_OP_OUT_SERVICE_FLAG, ++ controller->base + QUP_OPERATIONAL); ++ ++ /* transfer up to a block size of data in a single pass */ ++ for (i = 0; num_words && i < writes_per_blk; i++, num_words--) { ++ ++ /* swizzle the bytes for output and write out */ ++ spi_qup_prepare_write_data(controller, xfer, &data); ++ writel_relaxed(data, ++ controller->base + QUP_OUTPUT_FIFO); ++ } ++ ++ /* check to see if next block is ready */ ++ if (!(readl_relaxed(controller->base + QUP_OPERATIONAL) & ++ QUP_OP_OUT_BLOCK_WRITE_REQ)) ++ break; ++ ++ } while (num_words); + } + + static void qup_dma_callback(void *data) +@@ -515,9 +600,9 @@ static irqreturn_t spi_qup_qup_irq(int i + + writel_relaxed(qup_err, controller->base + QUP_ERROR_FLAGS); + writel_relaxed(spi_err, controller->base + SPI_ERROR_FLAGS); +- writel_relaxed(opflags, controller->base + QUP_OPERATIONAL); + + if (!xfer) { ++ writel_relaxed(opflags, controller->base + QUP_OPERATIONAL); + dev_err_ratelimited(controller->dev, "unexpected irq %08x %08x %08x\n", + qup_err, spi_err, opflags); + return IRQ_HANDLED; +@@ -546,11 +631,19 @@ static irqreturn_t spi_qup_qup_irq(int i + } + + if (!controller->use_dma) { +- if (opflags & QUP_OP_IN_SERVICE_FLAG) +- spi_qup_fifo_read(controller, xfer); ++ if (opflags & QUP_OP_IN_SERVICE_FLAG) { ++ if (opflags & QUP_OP_IN_BLOCK_READ_REQ) ++ spi_qup_block_read(controller, xfer); ++ else ++ spi_qup_fifo_read(controller, xfer); ++ } + +- if (opflags & QUP_OP_OUT_SERVICE_FLAG) +- spi_qup_fifo_write(controller, xfer); ++ if (opflags & QUP_OP_OUT_SERVICE_FLAG) { ++ if (opflags & QUP_OP_OUT_BLOCK_WRITE_REQ) ++ spi_qup_block_write(controller, xfer); ++ else ++ 
spi_qup_fifo_write(controller, xfer); ++ } + } + + spin_lock_irqsave(&controller->lock, flags); +@@ -558,7 +651,8 @@ static irqreturn_t spi_qup_qup_irq(int i + controller->xfer = xfer; + spin_unlock_irqrestore(&controller->lock, flags); + +- if (controller->rx_bytes == xfer->len || error) ++ if ((controller->rx_bytes == xfer->len && ++ (opflags & QUP_OP_MAX_INPUT_DONE_FLAG)) || error) + complete(&controller->done); + + return IRQ_HANDLED; +@@ -569,7 +663,7 @@ static irqreturn_t spi_qup_qup_irq(int i + static int spi_qup_io_config(struct spi_device *spi, struct spi_transfer *xfer) + { + struct spi_qup *controller = spi_master_get_devdata(spi->master); +- u32 config, iomode, mode; ++ u32 config, iomode; + int ret, n_words, w_size; + size_t dma_align = dma_get_cache_alignment(); + u32 dma_available = 0; +@@ -607,7 +701,7 @@ static int spi_qup_io_config(struct spi_ + dma_available = 1; + + if (n_words <= (controller->in_fifo_sz / sizeof(u32))) { +- mode = QUP_IO_M_MODE_FIFO; ++ controller->mode = QUP_IO_M_MODE_FIFO; + writel_relaxed(n_words, controller->base + QUP_MX_READ_CNT); + writel_relaxed(n_words, controller->base + QUP_MX_WRITE_CNT); + /* must be zero for FIFO */ +@@ -615,7 +709,7 @@ static int spi_qup_io_config(struct spi_ + writel_relaxed(0, controller->base + QUP_MX_OUTPUT_CNT); + controller->use_dma = 0; + } else if (!dma_available) { +- mode = QUP_IO_M_MODE_BLOCK; ++ controller->mode = QUP_IO_M_MODE_BLOCK; + writel_relaxed(n_words, controller->base + QUP_MX_INPUT_CNT); + writel_relaxed(n_words, controller->base + QUP_MX_OUTPUT_CNT); + /* must be zero for BLOCK and BAM */ +@@ -623,7 +717,7 @@ static int spi_qup_io_config(struct spi_ + writel_relaxed(0, controller->base + QUP_MX_WRITE_CNT); + controller->use_dma = 0; + } else { +- mode = QUP_IO_M_MODE_DMOV; ++ controller->mode = QUP_IO_M_MODE_DMOV; + writel_relaxed(0, controller->base + QUP_MX_READ_CNT); + writel_relaxed(0, controller->base + QUP_MX_WRITE_CNT); + controller->use_dma = 1; +@@ -638,8 +732,8 @@ static int spi_qup_io_config(struct spi_ + else + iomode |= QUP_IO_M_PACK_EN | QUP_IO_M_UNPACK_EN; + +- iomode |= (mode << QUP_IO_M_OUTPUT_MODE_MASK_SHIFT); +- iomode |= (mode << QUP_IO_M_INPUT_MODE_MASK_SHIFT); ++ iomode |= (controller->mode << QUP_IO_M_OUTPUT_MODE_MASK_SHIFT); ++ iomode |= (controller->mode << QUP_IO_M_INPUT_MODE_MASK_SHIFT); + + writel_relaxed(iomode, controller->base + QUP_IO_M_MODES); + +@@ -724,7 +818,8 @@ static int spi_qup_transfer_one(struct s + goto exit; + } + +- spi_qup_fifo_write(controller, xfer); ++ if (controller->mode == QUP_IO_M_MODE_FIFO) ++ spi_qup_fifo_write(controller, xfer); + + if (spi_qup_set_state(controller, QUP_STATE_RUN)) { + dev_warn(controller->dev, "cannot set EXECUTE state\n"); +@@ -741,6 +836,7 @@ exit: + if (!ret) + ret = controller->error; + spin_unlock_irqrestore(&controller->lock, flags); ++ + return ret; + } +
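
For reference, the FIFO word packing that both quoted patches rely on can be
checked in isolation. The stand-alone C sketch below mirrors the byte-swizzling
convention of spi_qup_prepare_write_data() and spi_qup_fill_read_buffer()
above: TX words are packed MSB-first into the top bytes of the 32-bit FIFO
word, while RX words are justified to the word size (4 bytes: 0x12345678,
2 bytes: 0x00001234, 1 byte: 0x00000012). The helper names and the userspace
test harness are illustrative only and not part of the driver:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Pack w_size bytes into one output FIFO word. As in
 * spi_qup_prepare_write_data(), the shift is (3 - idx), so the data
 * always lands in the most significant bytes, regardless of w_size. */
static uint32_t pack_word(const uint8_t *buf, size_t w_size)
{
	uint32_t word = 0;
	size_t idx;

	for (idx = 0; idx < w_size; idx++)
		word |= (uint32_t)buf[idx] << (BITS_PER_BYTE * (3 - idx));
	return word;
}

/* Unpack one input FIFO word into w_size bytes. As in
 * spi_qup_fill_read_buffer(), the shift counts down from (w_size - 1),
 * i.e. the received word is justified to the word size. */
static void unpack_word(uint32_t word, uint8_t *buf, size_t w_size)
{
	size_t idx;

	for (idx = 0; idx < w_size; idx++)
		buf[idx] = (uint8_t)(word >> (BITS_PER_BYTE * (w_size - idx - 1)));
}

int main(void)
{
	const uint8_t tx[4] = { 0x12, 0x34, 0x56, 0x78 };
	uint8_t rx[4] = { 0 };

	assert(pack_word(tx, 4) == 0x12345678);
	assert(pack_word(tx, 2) == 0x12340000);	/* TX: top bytes */

	unpack_word(0x00001234, rx, 2);			/* RX: low bytes */
	assert(rx[0] == 0x12 && rx[1] == 0x34);

	unpack_word(0x00000012, rx, 1);
	assert(rx[0] == 0x12);

	return 0;
}

The asymmetry between the two directions is inherited directly from the quoted
code; it is also why a simple pack/unpack round trip only holds for 4-byte
words.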