From patchwork Thu Dec 7 04:15:15 2017
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 845418
X-Patchwork-Delegate: davem@davemloft.net
From: Zhu Yanjun
To: netdev@vger.kernel.org, keescook@chromium.org
Subject: [PATCH net-next 1/1] forcedeth: remove unnecessary variable
Date: Wed, 6 Dec 2017 23:15:15 -0500
Message-Id: <1512620115-21544-1-git-send-email-yanjun.zhu@oracle.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: netdev@vger.kernel.org

Since both tx_ring and first_tx point to the head of the tx ring, there
is no need to keep two variables for it. Remove first_tx and use
tx_ring directly.
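As a quick illustration of why the extra head pointer is redundant, here
is a minimal stand-alone sketch of the same wrap-around pattern. The
names (struct txq, txq_advance, RING_SIZE) are made up for the example
and are not the driver's API; the point is only that the backing array
itself marks the head of the ring, so the wrap target can be the array
directly:

#include <stdio.h>

#define RING_SIZE 4

/* Hypothetical descriptor; stands in for the driver's ring entries. */
struct desc { int id; };

struct txq {
	struct desc tx_ring[RING_SIZE];	/* backing array; its start is the head */
	struct desc *put_tx;		/* producer position */
	struct desc *last_tx;		/* last valid entry */
};

static void txq_init(struct txq *q)
{
	/* Before the patch, a first_tx pointer was also set to tx_ring
	 * here; it could never differ from the array start, so it
	 * carried no extra information. */
	q->put_tx = q->tx_ring;
	q->last_tx = &q->tx_ring[RING_SIZE - 1];
}

/* Advance the producer, wrapping to the ring base -- the same
 * compare-against-last, reset-to-head pattern used in the patch. */
static void txq_advance(struct txq *q)
{
	if (q->put_tx++ == q->last_tx)
		q->put_tx = q->tx_ring;	/* was "= q->first_tx"; same value */
}

int main(void)
{
	struct txq q;
	int i;

	txq_init(&q);
	for (i = 0; i < 6; i++) {
		printf("put slot %td\n", q.put_tx - q.tx_ring);
		txq_advance(&q);
	}
	return 0;
}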
CC: Srinivas Eeda
CC: Joe Jin
CC: Junxiao Bi
Signed-off-by: Zhu Yanjun
---
 drivers/net/ethernet/nvidia/forcedeth.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 53614ed..cadea67 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -822,7 +822,7 @@ struct fe_priv {
 	/*
 	 * tx specific fields.
 	 */
-	union ring_type get_tx, put_tx, first_tx, last_tx;
+	union ring_type get_tx, put_tx, last_tx;
 	struct nv_skb_map *get_tx_ctx, *put_tx_ctx;
 	struct nv_skb_map *first_tx_ctx, *last_tx_ctx;
 	struct nv_skb_map *tx_skb;
@@ -1932,7 +1932,8 @@ static void nv_init_tx(struct net_device *dev)
 	struct fe_priv *np = netdev_priv(dev);
 	int i;
 
-	np->get_tx = np->put_tx = np->first_tx = np->tx_ring;
+	np->get_tx = np->tx_ring;
+	np->put_tx = np->tx_ring;
 
 	if (!nv_optimized(np))
 		np->last_tx.orig = &np->tx_ring.orig[np->tx_ring_size-1];
@@ -2248,7 +2249,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		offset += bcnt;
 		size -= bcnt;
 		if (unlikely(put_tx++ == np->last_tx.orig))
-			put_tx = np->first_tx.orig;
+			put_tx = np->tx_ring.orig;
 		if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
 			np->put_tx_ctx = np->first_tx_ctx;
 	} while (size);
@@ -2294,13 +2295,13 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			offset += bcnt;
 			frag_size -= bcnt;
 			if (unlikely(put_tx++ == np->last_tx.orig))
-				put_tx = np->first_tx.orig;
+				put_tx = np->tx_ring.orig;
 			if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
 				np->put_tx_ctx = np->first_tx_ctx;
 		} while (frag_size);
 	}
 
-	if (unlikely(put_tx == np->first_tx.orig))
+	if (unlikely(put_tx == np->tx_ring.orig))
 		prev_tx = np->last_tx.orig;
 	else
 		prev_tx = put_tx - 1;
@@ -2406,7 +2407,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 		offset += bcnt;
 		size -= bcnt;
 		if (unlikely(put_tx++ == np->last_tx.ex))
-			put_tx = np->first_tx.ex;
+			put_tx = np->tx_ring.ex;
 		if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
 			np->put_tx_ctx = np->first_tx_ctx;
 	} while (size);
@@ -2452,13 +2453,13 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 			offset += bcnt;
 			frag_size -= bcnt;
 			if (unlikely(put_tx++ == np->last_tx.ex))
-				put_tx = np->first_tx.ex;
+				put_tx = np->tx_ring.ex;
 			if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
 				np->put_tx_ctx = np->first_tx_ctx;
 		} while (frag_size);
 	}
 
-	if (unlikely(put_tx == np->first_tx.ex))
+	if (unlikely(put_tx == np->tx_ring.ex))
 		prev_tx = np->last_tx.ex;
 	else
 		prev_tx = put_tx - 1;
@@ -2597,7 +2598,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
 			}
 		}
 		if (unlikely(np->get_tx.orig++ == np->last_tx.orig))
-			np->get_tx.orig = np->first_tx.orig;
+			np->get_tx.orig = np->tx_ring.orig;
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
 			np->get_tx_ctx = np->first_tx_ctx;
 	}
@@ -2651,7 +2652,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 		}
 
 		if (unlikely(np->get_tx.ex++ == np->last_tx.ex))
-			np->get_tx.ex = np->first_tx.ex;
+			np->get_tx.ex = np->tx_ring.ex;
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
 			np->get_tx_ctx = np->first_tx_ctx;
 	}
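A note on why the substitution is safe, going by the diff above:
nv_init_tx() was the only writer of first_tx, and it always set it to
tx_ring, so every "np->first_tx.orig" / "np->first_tx.ex" reset or
comparison rewritten against "np->tx_ring.orig" / "np->tx_ring.ex"
yields the same pointer. With no readers or writers left, the field can
be dropped from struct fe_priv.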