From patchwork Wed Nov 5 14:32:59 2014
X-Patchwork-Submitter: Karl Beldan
X-Patchwork-Id: 407002
X-Patchwork-Delegate: davem@davemloft.net
From: Karl Beldan
To: David Miller
Cc: Karl Beldan, netdev@vger.kernel.org, Ian Campbell, Eric Dumazet,
 Ezequiel Garcia, Sebastian Hesselbarth
Subject: [PATCH] net: mv643xx_eth: reclaim TX skbs only when released by the HW
Date: Wed, 5 Nov 2014 15:32:59 +0100
Message-Id: <1415197979-1702-1-git-send-email-karl.beldan@gmail.com>
X-Mailer: git-send-email 2.0.1
X-Mailing-List: netdev@vger.kernel.org

From: Karl Beldan

ATM, txq_reclaim will dequeue and free an skb for each tx desc released
by the hw that has TX_LAST_DESC set. However, in case of TSO, each hw
desc embedding the last part of a segment has TX_LAST_DESC set, losing
the one-to-one 'last skb frag'/'TX_LAST_DESC set' correspondence, which
causes data corruption.
Fix this by checking TX_ENABLE_INTERRUPT instead of TX_LAST_DESC, and
warn when trying to dequeue from an empty txq (which can be symptomatic
of releasing skbs prematurely).

Fixes: 3ae8f4e0b98 ('net: mv643xx_eth: Implement software TSO')
Reported-by: Slawomir Gajzner
Reported-by: Julien D'Ascenzio
Signed-off-by: Karl Beldan
Cc: Ian Campbell
Cc: Eric Dumazet
Cc: Ezequiel Garcia
Cc: Sebastian Hesselbarth
---
Ian, I refrained from embedding your Tested-by since this change is a
little different from my first one.
David, please consider queueing this one up for -stable, and possibly
adjusting Ian's Tested-by (I am not sure about the process).
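
[Not part of the patch; illustration only.] For readers unfamiliar with
the driver, here is a stand-alone user-space C sketch of the miscount
described in the changelog. The flag values and the toy descriptor list
are invented (real descriptors carry far more state): with TSO, one
queued skb yields several descriptors flagged TX_LAST_DESC (one per
segment) but, per the changelog, only one carrying TX_ENABLE_INTERRUPT,
so reclaiming one skb per TX_LAST_DESC over-frees.

#include <stdio.h>

/* Invented flag values, for illustration only. */
#define TX_LAST_DESC        (1u << 0)
#define TX_ENABLE_INTERRUPT (1u << 1)

int main(void)
{
	/* One TSO skb split into three segments: each segment's final
	 * descriptor has TX_LAST_DESC; only the very last one also has
	 * TX_ENABLE_INTERRUPT. */
	unsigned int descs[] = {
		0,                                   /* seg 1, payload    */
		TX_LAST_DESC,                        /* seg 1, last desc  */
		TX_LAST_DESC,                        /* seg 2, last desc  */
		TX_LAST_DESC | TX_ENABLE_INTERRUPT,  /* seg 3, last + irq */
	};
	unsigned int freed_old = 0, freed_new = 0;

	for (unsigned int i = 0; i < sizeof(descs) / sizeof(descs[0]); i++) {
		if (descs[i] & TX_LAST_DESC)         /* pre-fix criterion  */
			freed_old++;
		if (descs[i] & TX_ENABLE_INTERRUPT)  /* post-fix criterion */
			freed_new++;
	}
	/* Prints: 1 skb queued, pre-fix frees 3, post-fix frees 1 */
	printf("1 skb queued, pre-fix frees %u, post-fix frees %u\n",
	       freed_old, freed_new);
	return 0;
}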
 drivers/net/ethernet/marvell/mv643xx_eth.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
index b151a94..d44560d 100644
--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
+++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
@@ -1047,7 +1047,6 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
 		int tx_index;
 		struct tx_desc *desc;
 		u32 cmd_sts;
-		struct sk_buff *skb;
 
 		tx_index = txq->tx_used_desc;
 		desc = &txq->tx_desc_area[tx_index];
@@ -1066,19 +1065,22 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
 		reclaimed++;
 		txq->tx_desc_count--;
 
-		skb = NULL;
-		if (cmd_sts & TX_LAST_DESC)
-			skb = __skb_dequeue(&txq->tx_skb);
+		if (!IS_TSO_HEADER(txq, desc->buf_ptr))
+			dma_unmap_single(mp->dev->dev.parent, desc->buf_ptr,
+					 desc->byte_cnt, DMA_TO_DEVICE);
+
+		if (cmd_sts & TX_ENABLE_INTERRUPT) {
+			struct sk_buff *skb = __skb_dequeue(&txq->tx_skb);
+
+			if (!WARN_ON(!skb))
+				dev_kfree_skb(skb);
+		}
 
 		if (cmd_sts & ERROR_SUMMARY) {
 			netdev_info(mp->dev, "tx error\n");
 			mp->dev->stats.tx_errors++;
 		}
 
-		if (!IS_TSO_HEADER(txq, desc->buf_ptr))
-			dma_unmap_single(mp->dev->dev.parent, desc->buf_ptr,
-					 desc->byte_cnt, DMA_TO_DEVICE);
-		dev_kfree_skb(skb);
 	}
 
 	__netif_tx_unlock_bh(nq);
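
[Again, not part of the patch; illustration only.] Two notes on the new
hunk: the dma_unmap_single() is skipped for TSO header descriptors
because, if I read the driver right, those point into the queue's
coherent tso_hdrs area rather than a streaming mapping; and the free
relies on the kernel's WARN_ON() evaluating to its condition, so the skb
is freed only when __skb_dequeue() actually returned one. A toy
stand-alone model of that guard follows, with a stand-in WARN_ON macro
and a trivial queue, both invented for illustration:

#include <stdio.h>

/* Stand-in mimicking the kernel's WARN_ON(): logs when the condition is
 * true and evaluates to it, so it can be used inside an if (). */
#define WARN_ON(cond) \
	((cond) ? (fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__), 1) : 0)

struct sk_buff { int id; };

/* Trivial LIFO stand-in for __skb_dequeue(): NULL on an empty queue. */
static struct sk_buff *dequeue(struct sk_buff **q, int *len)
{
	return *len > 0 ? q[--(*len)] : NULL;
}

int main(void)
{
	struct sk_buff one = { .id = 42 };
	struct sk_buff *q[1] = { &one };
	int len = 1;

	/* The second iteration models reclaim running ahead of the queue:
	 * WARN_ON fires and the NULL skb is not "freed". */
	for (int i = 0; i < 2; i++) {
		struct sk_buff *skb = dequeue(q, &len);

		if (!WARN_ON(!skb))
			printf("freed skb %d\n", skb->id);
	}
	return 0;
}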