From patchwork Fri Sep 8 12:28:45 2017
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 811564
X-Patchwork-Delegate: davem@davemloft.net
From: Zhu Yanjun <yanjun.zhu@oracle.com>
To: davem@davemloft.net, netdev@vger.kernel.org
Subject: [PATCH 1/1] forcedeth: remove tx_stop variable
Date: Fri, 8 Sep 2017 08:28:45 -0400
Message-Id: <1504873725-30180-1-git-send-email-yanjun.zhu@oracle.com>

The variable tx_stop is used to indicate whether the tx queue is
started or stopped. The inline function netif_queue_stopped already
reports the same state, so replace the variable tx_stop with a call
to netif_queue_stopped.

Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
---
 drivers/net/ethernet/nvidia/forcedeth.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 994a83a..e6e0de4 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -834,7 +834,6 @@ struct fe_priv {
 	u32 tx_pkts_in_progress;
 	struct nv_skb_map *tx_change_owner;
 	struct nv_skb_map *tx_end_flip;
-	int tx_stop;
 
 	/* TX software stats */
 	struct u64_stats_sync swstats_tx_syncp;
@@ -1939,7 +1938,6 @@ static void nv_init_tx(struct net_device *dev)
 	np->tx_pkts_in_progress = 0;
 	np->tx_change_owner = NULL;
 	np->tx_end_flip = NULL;
-	np->tx_stop = 0;
 
 	for (i = 0; i < np->tx_ring_size; i++) {
 		if (!nv_optimized(np)) {
@@ -2211,7 +2209,6 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	empty_slots = nv_get_empty_tx_slots(np);
 	if (unlikely(empty_slots <= entries)) {
 		netif_stop_queue(dev);
-		np->tx_stop = 1;
 		spin_unlock_irqrestore(&np->lock, flags);
 		return NETDEV_TX_BUSY;
 	}
@@ -2359,7 +2356,6 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 	empty_slots = nv_get_empty_tx_slots(np);
 	if (unlikely(empty_slots <= entries)) {
 		netif_stop_queue(dev);
-		np->tx_stop = 1;
 		spin_unlock_irqrestore(&np->lock, flags);
 		return NETDEV_TX_BUSY;
 	}
@@ -2583,8 +2579,8 @@ static int nv_tx_done(struct net_device *dev, int limit)
 
 	netdev_completed_queue(np->dev, tx_work, bytes_compl);
 
-	if (unlikely((np->tx_stop == 1) && (np->get_tx.orig != orig_get_tx))) {
-		np->tx_stop = 0;
+	if (unlikely(netif_queue_stopped(dev) &&
+		     (np->get_tx.orig != orig_get_tx))) {
 		netif_wake_queue(dev);
 	}
 	return tx_work;
@@ -2637,8 +2633,8 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 
 	netdev_completed_queue(np->dev, tx_work, bytes_cleaned);
 
-	if (unlikely((np->tx_stop == 1) && (np->get_tx.ex != orig_get_tx))) {
-		np->tx_stop = 0;
+	if (unlikely(netif_queue_stopped(dev) &&
+		     (np->get_tx.ex != orig_get_tx))) {
 		netif_wake_queue(dev);
 	}
 	return tx_work;
@@ -2724,7 +2720,6 @@ static void nv_tx_timeout(struct net_device *dev)
 	/* 2) complete any outstanding tx and do not give HW any limited tx pkts */
 	saved_tx_limit = np->tx_limit;
 	np->tx_limit = 0; /* prevent giving HW any limited pkts */
-	np->tx_stop = 0;  /* prevent waking tx queue */
 	if (!nv_optimized(np))
 		nv_tx_done(dev, np->tx_ring_size);
 	else