From patchwork Thu Feb 26 14:13:30 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Hutchings
X-Patchwork-Id: 443928
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <1424960010.4444.13.camel@xylophone.i.decadent.org.uk>
Subject: [PATCH net 1/4] sh_eth: Ensure proper ordering of descriptor active bit write/read
From: Ben Hutchings
To: netdev@vger.kernel.org
Cc: linux-kernel@lists.codethink.co.uk, Nobuhiro Iwamatsu,
 Mitsuhiro Kimura, Yoshihiro Kaneko, Yoshihiro Shimoda
Date: Thu, 26 Feb 2015 14:13:30 +0000
In-Reply-To: <1424959937.4444.12.camel@xylophone.i.decadent.org.uk>
References: <1424959937.4444.12.camel@xylophone.i.decadent.org.uk>
Organization: Codethink Ltd.
X-Mailer: Evolution 3.4.4-3
X-Mailing-List: netdev@vger.kernel.org

When submitting a DMA descriptor, the active bit must be written last.
When reading a completed DMA descriptor, the active bit must be read
first.  Add memory barriers to ensure that this ordering is maintained.

Signed-off-by: Ben Hutchings
---
This is based on review only - I don't have a test case to show
problematic reordering.

Ben.
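[Illustrative aside, not part of the patch.] The rule the barriers enforce
is the usual descriptor hand-off pattern: on submission, publish the
active/ownership bit only after every other descriptor field has been
written; on completion, check that bit first and only then trust the rest
of the descriptor. Below is a minimal userspace sketch of that pattern,
using C11 fences in place of the kernel's wmb()/rmb(); the names (struct
desc, ACTIVE, submit_desc, reap_desc) are hypothetical, not driver code.

#include <stdatomic.h>
#include <stdint.h>

#define ACTIVE 0x80000000u		/* stands in for TD_TACT / RD_RACT */

struct desc {
	uint32_t addr;
	uint32_t length;
	_Atomic uint32_t status;	/* holds the ACTIVE bit */
};

/* Submit side: fill the descriptor first, set ACTIVE last. */
void submit_desc(struct desc *d, uint32_t addr, uint32_t len)
{
	d->addr = addr;
	d->length = len;
	/* Release fence plays the role of wmb(): none of the writes
	 * above may be reordered past the ACTIVE store below. */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&d->status, ACTIVE, memory_order_relaxed);
}

/* Completion side: check ACTIVE first, only then read the payload. */
int reap_desc(struct desc *d, uint32_t *len)
{
	if (atomic_load_explicit(&d->status, memory_order_relaxed) & ACTIVE)
		return 0;	/* still owned by the other side */
	/* Acquire fence plays the role of rmb(): the reads below may
	 * not be reordered before the ACTIVE check above. */
	atomic_thread_fence(memory_order_acquire);
	*len = d->length;
	return 1;
}

In sh_eth the other party in the hand-off is the DMA engine rather than
another CPU, which is presumably why the patch uses the mandatory
rmb()/wmb() barriers rather than the smp_* variants.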
 drivers/net/ethernet/renesas/sh_eth.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 4da8bd263997..2bc0be45c751 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -1407,6 +1407,8 @@ static int sh_eth_txfree(struct net_device *ndev)
 		txdesc = &mdp->tx_ring[entry];
 		if (txdesc->status & cpu_to_edmac(mdp, TD_TACT))
 			break;
+		/* TACT bit must be checked before all the following reads */
+		rmb();
 		/* Free the original skb. */
 		if (mdp->tx_skbuff[entry]) {
 			dma_unmap_single(&ndev->dev, txdesc->addr,
@@ -1444,6 +1446,8 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 	limit = boguscnt;
 	rxdesc = &mdp->rx_ring[entry];
 	while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) {
+		/* RACT bit must be checked before all the following reads */
+		rmb();
 		desc_status = edmac_to_cpu(mdp, rxdesc->status);
 		pkt_len = rxdesc->frame_length;
 
@@ -1523,6 +1527,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 			skb_checksum_none_assert(skb);
 			rxdesc->addr = dma_addr;
 		}
+		wmb(); /* RACT bit must be set after all the above writes */
 		if (entry >= mdp->num_rx_ring - 1)
 			rxdesc->status |=
 				cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL);
@@ -2192,6 +2197,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	}
 	txdesc->buffer_length = skb->len;
 
+	wmb(); /* TACT bit must be set after all the above writes */
 	if (entry >= mdp->num_tx_ring - 1)
 		txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE);
 	else