From patchwork Wed Feb 12 15:55:38 2014
X-Patchwork-Submitter: Michal Simek
X-Patchwork-Id: 319656
X-Patchwork-Delegate: davem@davemloft.net
From: Michal Simek
To: netdev@vger.kernel.org
Cc: Srikanth Thokala, Peter Crosthwaite, Michal Simek, Anirudha Sarangi,
	John Linn, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 04/14] net: axienet: Handle 0 packet receive gracefully
Date: Wed, 12 Feb 2014 16:55:38 +0100
X-Mailing-List: netdev@vger.kernel.org

From: Peter Crosthwaite

The AXI-DMA rx-delay interrupt can sometimes be triggered when there are
0 outstanding packets received. This is because the receive function
greedily consumes as many packets as possible on each interrupt. So if
two packets arrive in succession with very particular timing, they will
each raise the rx-delay interrupt, but the first interrupt will consume
both packets. The second interrupt is then a 0 packet receive.

This is mostly OK, except that the tail pointer register is updated
unconditionally on receive. Currently the tail pointer is always set to
the current bd-ring descriptor, under the assumption that the hardware
has moved on to the next descriptor. For a length 0 receive this means
the current descriptor, which the hardware has potentially yet to use,
is marked as the tail. This causes the hardware to think it has run out
of descriptors, deadlocking the whole rx path.

Fix this by updating the tail pointer to the most recent successfully
consumed descriptor.
Reported-by: Wendy Liang
Signed-off-by: Peter Crosthwaite
Signed-off-by: Michal Simek
Acked-by: Michal Simek
Tested-by: Jason Wu
---
 drivers/net/ethernet/xilinx/xilinx_axienet_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 3966d83..2e21ab2 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -726,15 +726,15 @@ static void axienet_recv(struct net_device *ndev)
 	u32 csumstatus;
 	u32 size = 0;
 	u32 packets = 0;
-	dma_addr_t tail_p;
+	dma_addr_t tail_p = 0;
 	struct axienet_local *lp = netdev_priv(ndev);
 	struct sk_buff *skb, *new_skb;
 	struct axidma_bd *cur_p;
 
-	tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
 	cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
 
 	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
 		skb = (struct sk_buff *) (cur_p->sw_id_offset);
 		length = cur_p->app4 & 0x0000FFFF;
 
@@ -786,7 +786,8 @@ static void axienet_recv(struct net_device *ndev)
 	ndev->stats.rx_packets += packets;
 	ndev->stats.rx_bytes += size;
 
-	axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p);
+	if (tail_p)
+		axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p);
 }
 
 /**
-- 
1.8.2.3
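
For readers who are not familiar with the driver, the following is a
minimal standalone sketch of the behaviour the patch changes. It is not
part of the patch: the ring size, the struct fields, and the helper
names (RING_SIZE, struct model_bd, write_tail_pointer, recv_fixed) are
illustrative assumptions, not the real driver API. It only models how
deferring the tail-pointer write to the last descriptor actually
consumed keeps a zero-packet rx-delay interrupt from stalling the ring.

/* Standalone model of the rx tail-pointer update (illustrative only). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE	4
#define BD_STS_COMPLETE	0x1	/* stands in for XAXIDMA_BD_STS_COMPLETE_MASK */

struct model_bd {
	uint32_t status;	/* completion flag written by the "hardware" */
};

static struct model_bd ring[RING_SIZE];
static unsigned int bd_ci;	/* index of the next descriptor to consume */

/* Stand-in for axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, tail_p). */
static void write_tail_pointer(unsigned int idx)
{
	printf("tail pointer <- descriptor %u\n", idx);
}

/* Simplified shape of the receive loop after the patch. */
static void recv_fixed(void)
{
	bool have_tail = false;	/* models tail_p starting out as 0 */
	unsigned int tail = 0;

	while (ring[bd_ci].status & BD_STS_COMPLETE) {
		/* Only descriptors we actually consume become the tail. */
		tail = bd_ci;
		have_tail = true;

		ring[bd_ci].status = 0;	/* hand the descriptor back */
		bd_ci = (bd_ci + 1) % RING_SIZE;
	}

	/* A zero-packet interrupt leaves the tail pointer untouched. */
	if (have_tail)
		write_tail_pointer(tail);
	else
		printf("0 packets consumed, tail pointer not written\n");
}

int main(void)
{
	/* Two packets arrive back to back... */
	ring[0].status = BD_STS_COMPLETE;
	ring[1].status = BD_STS_COMPLETE;

	recv_fixed();	/* first delay interrupt consumes both packets */
	recv_fixed();	/* second interrupt finds nothing; ring is not stalled */
	return 0;
}

In the pre-patch code the equivalent of write_tail_pointer() ran
unconditionally with the current, unconsumed descriptor, which is what
made the hardware believe the ring was exhausted on a 0 packet receive.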