From patchwork Thu Dec  7 22:56:10 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Niklas Cassel
X-Patchwork-Id: 845898
X-Patchwork-Delegate: davem@davemloft.net
From: Niklas Cassel <niklas.cassel@axis.com>
To: Joao.Pinto@synopsys.com, Giuseppe Cavallaro, Alexandre Torgue
Cc: Niklas Cassel, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next] net: stmmac: fix broken dma_interrupt handling for multi-queues
Date: Thu, 7 Dec 2017 23:56:10 +0100
Message-Id: <20171207225610.15572-1-niklas.cassel@axis.com>
X-Mailer: git-send-email 2.14.2
X-Mailing-List: netdev@vger.kernel.org

There is nothing that says that the number of TX queues has to equal the
number of RX queues; e.g. the ARTPEC-6 SoC has 2 TX queues but only 1 RX
queue.

This code is obviously wrong:

        for (chan = 0; chan < tx_channel_count; chan++) {
                struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];

priv->rx_queue has size MTL_MAX_RX_QUEUES, so when tx_channel_count is
larger than the number of RX queues actually in use, this passes an
uninitialized napi_struct to __napi_schedule(), and we crash in
net_rx_action() because napi_struct->poll is zero (a NULL function
pointer).
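To make the failure mode concrete, here is a minimal user-space sketch of
the bad indexing. This is not driver code: MTL_MAX_RX_QUEUES,
rx_queues_to_use and tx_queues_to_use mirror names used by the driver, but
the structs and the poll callback are simplified stand-ins.

/* bad_indexing.c: user-space illustration only, not kernel code. */
#include <stdio.h>

#define MTL_MAX_RX_QUEUES 8

struct fake_napi {
        int (*poll)(void);      /* stays NULL for queues never set up */
};

struct fake_rx_queue {
        struct fake_napi napi;
};

static int rx_poll(void)
{
        return 1;               /* pretend one packet was processed */
}

int main(void)
{
        /* ARTPEC-6-like configuration: 2 TX queues, 1 RX queue. */
        unsigned int tx_queues_to_use = 2;
        unsigned int rx_queues_to_use = 1;
        struct fake_rx_queue rx_queue[MTL_MAX_RX_QUEUES] = { 0 };
        unsigned int chan;

        /* Only the RX queues that actually exist get a poll callback. */
        for (chan = 0; chan < rx_queues_to_use; chan++)
                rx_queue[chan].napi.poll = rx_poll;

        /* The buggy loop: iterates over TX channels but indexes rx_queue[]. */
        for (chan = 0; chan < tx_queues_to_use; chan++) {
                if (rx_queue[chan].napi.poll)
                        printf("chan %u: poll() -> %d\n",
                               chan, rx_queue[chan].napi.poll());
                else
                        printf("chan %u: napi.poll is NULL, the kernel would jump to address 0\n",
                               chan);
        }
        return 0;
}

With the ARTPEC-6 configuration (2 TX queues, 1 RX queue) the real driver
hits exactly this case, and the resulting oops looks like this: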
[12846.759880] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[12846.768014] pgd = (ptrval)
[12846.770742] [00000000] *pgd=39ec7831, *pte=00000000, *ppte=00000000
[12846.777023] Internal error: Oops: 80000007 [#1] PREEMPT SMP ARM
[12846.782942] Modules linked in:
[12846.785998] CPU: 0 PID: 161 Comm: dropbear Not tainted 4.15.0-rc2-00285-gf5fb5f2f39a7 #36
[12846.794177] Hardware name: Axis ARTPEC-6 Platform
[12846.798879] task: (ptrval) task.stack: (ptrval)
[12846.803407] PC is at 0x0
[12846.805942] LR is at net_rx_action+0x274/0x43c
[12846.810383] pc : [<00000000>]    lr : [<80bff064>]    psr: 200e0113
[12846.816648] sp : b90d9ae8  ip : b90d9ae8  fp : b90d9b44
[12846.821871] r10: 00000008  r9 : 0013250e  r8 : 00000100
[12846.827094] r7 : 0000012c  r6 : 00000000  r5 : 00000001  r4 : bac84900
[12846.833619] r3 : 00000000  r2 : b90d9b08  r1 : 00000000  r0 : bac84900

Since each DMA channel can be used for RX and TX simultaneously, the
current code should probably be rewritten so that napi_struct is embedded
in a new struct stmmac_channel. That way, stmmac_poll() can call
stmmac_tx_clean() on just the TX queue where we got the IRQ, instead of
looping through all TX queues. This is also how the xgbe driver does it
(another driver for this IP). A rough sketch of that idea is appended
after the patch.

Fixes: c22a3f48ef99 ("net: stmmac: adding multiple napi mechanism")
Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 54 +++++++++++++++++++----
 1 file changed, 46 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index d7250539d0bd..c52a9963c19d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1997,22 +1997,60 @@ static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
 static void stmmac_dma_interrupt(struct stmmac_priv *priv)
 {
         u32 tx_channel_count = priv->plat->tx_queues_to_use;
-        int status;
+        u32 rx_channel_count = priv->plat->rx_queues_to_use;
+        u32 channels_to_check = tx_channel_count > rx_channel_count ?
+                                tx_channel_count : rx_channel_count;
         u32 chan;
+        bool poll_scheduled = false;
+        int status[channels_to_check];
+
+        /* Each DMA channel can be used for rx and tx simultaneously, yet
+         * napi_struct is embedded in struct stmmac_rx_queue rather than in a
+         * stmmac_channel struct.
+         * Because of this, stmmac_poll currently checks (and possibly wakes)
+         * all tx queues rather than just a single tx queue.
+         */
+        for (chan = 0; chan < channels_to_check; chan++)
+                status[chan] = priv->hw->dma->dma_interrupt(priv->ioaddr,
+                                                            &priv->xstats,
+                                                            chan);
 
-        for (chan = 0; chan < tx_channel_count; chan++) {
-                struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
+        for (chan = 0; chan < rx_channel_count; chan++) {
+                if (likely(status[chan] & handle_rx)) {
+                        struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
 
-                status = priv->hw->dma->dma_interrupt(priv->ioaddr,
-                                                      &priv->xstats, chan);
-                if (likely((status & handle_rx)) || (status & handle_tx)) {
                         if (likely(napi_schedule_prep(&rx_q->napi))) {
                                 stmmac_disable_dma_irq(priv, chan);
                                 __napi_schedule(&rx_q->napi);
+                                poll_scheduled = true;
+                        }
+                }
+        }
+
+        /* If we scheduled poll, we already know that tx queues will be checked.
+         * If we didn't schedule poll, see if any DMA channel (used by tx) has a
+         * completed transmission, if so, call stmmac_poll (once).
+         */
+        if (!poll_scheduled) {
+                for (chan = 0; chan < tx_channel_count; chan++) {
+                        if (status[chan] & handle_tx) {
+                                /* It doesn't matter what rx queue we choose
+                                 * here. We use 0 since it always exists.
+                                 */
+                                struct stmmac_rx_queue *rx_q =
+                                        &priv->rx_queue[0];
+
+                                if (likely(napi_schedule_prep(&rx_q->napi))) {
+                                        stmmac_disable_dma_irq(priv, chan);
+                                        __napi_schedule(&rx_q->napi);
+                                }
+                                break;
                         }
                 }
+        }
 
-                if (unlikely(status & tx_hard_error_bump_tc)) {
+        for (chan = 0; chan < tx_channel_count; chan++) {
+                if (unlikely(status[chan] & tx_hard_error_bump_tc)) {
                         /* Try to bump up the dma threshold on this failure */
                         if (unlikely(priv->xstats.threshold != SF_DMA_MODE) &&
                             (tc <= 256)) {
@@ -2029,7 +2067,7 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
                                                            chan);
                                 priv->xstats.threshold = tc;
                         }
-                } else if (unlikely(status == tx_hard_error)) {
+                } else if (unlikely(status[chan] == tx_hard_error)) {
                         stmmac_tx_err(priv, chan);
                 }
         }
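Not part of the patch itself: below is a rough, hypothetical outline of the
stmmac_channel rework suggested in the commit message, loosely modelled on
how the xgbe driver keeps one NAPI context per DMA channel. It is not
compile-tested against any tree; struct stmmac_channel and
stmmac_channel_poll() are placeholder names, and stmmac_tx_clean(),
stmmac_rx() and stmmac_enable_dma_irq() are assumed to keep the signatures
they have in stmmac_main.c at the time of this patch. Treat it as a sketch
of the idea, not as a follow-up patch.

/* Hypothetical follow-up (sketch only): one NAPI context per DMA channel. */
struct stmmac_channel {
        struct napi_struct napi;
        struct stmmac_priv *priv_data;
        u32 index;                      /* DMA channel number */
};

static int stmmac_channel_poll(struct napi_struct *napi, int budget)
{
        struct stmmac_channel *ch =
                container_of(napi, struct stmmac_channel, napi);
        struct stmmac_priv *priv = ch->priv_data;
        int work_done = 0;

        /* Clean only the tx queue belonging to the channel that raised
         * the IRQ, instead of looping over every tx queue.
         */
        if (ch->index < priv->plat->tx_queues_to_use)
                stmmac_tx_clean(priv, ch->index);

        /* Receive only on this channel's rx queue, if it has one. */
        if (ch->index < priv->plat->rx_queues_to_use)
                work_done = stmmac_rx(priv, budget, ch->index);

        if (work_done < budget && napi_complete_done(napi, work_done))
                stmmac_enable_dma_irq(priv, ch->index);

        return work_done;
}

Keying NAPI to the DMA channel rather than to an rx queue removes the
TX/RX queue-count mismatch that this patch works around, which is why the
commit message points at the xgbe driver as a model.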