From patchwork Sat Dec 16 09:31:03 2017
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 849493
X-Patchwork-Delegate: davem@davemloft.net
From: Zhu Yanjun
To: netdev@vger.kernel.org, stephen@networkplumber.org
Subject: [PATCH net-next 1/1] forcedeth: remove duplicate structure member in xmit
Date: Sat, 16 Dec 2017 04:31:03 -0500
Message-Id: <1513416663-9321-1-git-send-email-yanjun.zhu@oracle.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: netdev@vger.kernel.org

Since both first_tx_ctx and tx_skb point to the head of the tx ctx array,
it is not necessary to use two structure members to statically indicate
the same head. So first_tx_ctx is removed.
CC: Srinivas Eeda
CC: Joe Jin
CC: Junxiao Bi
Signed-off-by: Zhu Yanjun
---
 drivers/net/ethernet/nvidia/forcedeth.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index cadea67..49d6d78 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -824,7 +824,7 @@ struct fe_priv {
 	 */
 	union ring_type get_tx, put_tx, last_tx;
 	struct nv_skb_map *get_tx_ctx, *put_tx_ctx;
-	struct nv_skb_map *first_tx_ctx, *last_tx_ctx;
+	struct nv_skb_map *last_tx_ctx;
 	struct nv_skb_map *tx_skb;
 
 	union ring_type tx_ring;
@@ -1939,7 +1939,8 @@ static void nv_init_tx(struct net_device *dev)
 		np->last_tx.orig = &np->tx_ring.orig[np->tx_ring_size-1];
 	else
 		np->last_tx.ex = &np->tx_ring.ex[np->tx_ring_size-1];
-	np->get_tx_ctx = np->put_tx_ctx = np->first_tx_ctx = np->tx_skb;
+	np->get_tx_ctx = np->tx_skb;
+	np->put_tx_ctx = np->tx_skb;
 	np->last_tx_ctx = &np->tx_skb[np->tx_ring_size-1];
 	netdev_reset_queue(np->dev);
 	np->tx_pkts_in_progress = 0;
@@ -2251,7 +2252,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (unlikely(put_tx++ == np->last_tx.orig))
 			put_tx = np->tx_ring.orig;
 		if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
-			np->put_tx_ctx = np->first_tx_ctx;
+			np->put_tx_ctx = np->tx_skb;
 	} while (size);
 
 	/* setup the fragments */
@@ -2277,7 +2278,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 				do {
 					nv_unmap_txskb(np, start_tx_ctx);
 					if (unlikely(tmp_tx_ctx++ == np->last_tx_ctx))
-						tmp_tx_ctx = np->first_tx_ctx;
+						tmp_tx_ctx = np->tx_skb;
 				} while (tmp_tx_ctx != np->put_tx_ctx);
 				dev_kfree_skb_any(skb);
 				np->put_tx_ctx = start_tx_ctx;
@@ -2297,7 +2298,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			if (unlikely(put_tx++ == np->last_tx.orig))
 				put_tx = np->tx_ring.orig;
 			if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
-				np->put_tx_ctx = np->first_tx_ctx;
+				np->put_tx_ctx = np->tx_skb;
 		} while (frag_size);
 	}
 
@@ -2306,7 +2307,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	else
 		prev_tx = put_tx - 1;
 
-	if (unlikely(np->put_tx_ctx == np->first_tx_ctx))
+	if (unlikely(np->put_tx_ctx == np->tx_skb))
 		prev_tx_ctx = np->last_tx_ctx;
 	else
 		prev_tx_ctx = np->put_tx_ctx - 1;
@@ -2409,7 +2410,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 		if (unlikely(put_tx++ == np->last_tx.ex))
 			put_tx = np->tx_ring.ex;
 		if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
-			np->put_tx_ctx = np->first_tx_ctx;
+			np->put_tx_ctx = np->tx_skb;
 	} while (size);
 
 	/* setup the fragments */
@@ -2435,7 +2436,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 				do {
 					nv_unmap_txskb(np, start_tx_ctx);
 					if (unlikely(tmp_tx_ctx++ == np->last_tx_ctx))
-						tmp_tx_ctx = np->first_tx_ctx;
+						tmp_tx_ctx = np->tx_skb;
 				} while (tmp_tx_ctx != np->put_tx_ctx);
 				dev_kfree_skb_any(skb);
 				np->put_tx_ctx = start_tx_ctx;
@@ -2455,7 +2456,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 			if (unlikely(put_tx++ == np->last_tx.ex))
 				put_tx = np->tx_ring.ex;
 			if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
-				np->put_tx_ctx = np->first_tx_ctx;
+				np->put_tx_ctx = np->tx_skb;
 		} while (frag_size);
 	}
 
@@ -2464,7 +2465,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 	else
 		prev_tx = put_tx - 1;
 
-	if (unlikely(np->put_tx_ctx == np->first_tx_ctx))
+	if (unlikely(np->put_tx_ctx == np->tx_skb))
 		prev_tx_ctx = np->last_tx_ctx;
 	else
 		prev_tx_ctx = np->put_tx_ctx - 1;
@@ -2600,7 +2601,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
 		if (unlikely(np->get_tx.orig++ == np->last_tx.orig))
 			np->get_tx.orig = np->tx_ring.orig;
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
-			np->get_tx_ctx = np->first_tx_ctx;
+			np->get_tx_ctx = np->tx_skb;
 	}
 
 	netdev_completed_queue(np->dev, tx_work, bytes_compl);
@@ -2654,7 +2655,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
 		if (unlikely(np->get_tx.ex++ == np->last_tx.ex))
 			np->get_tx.ex = np->tx_ring.ex;
 		if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
-			np->get_tx_ctx = np->first_tx_ctx;
+			np->get_tx_ctx = np->tx_skb;
 	}
 
 	netdev_completed_queue(np->dev, tx_work, bytes_cleaned);
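
A note for reviewers without the driver in front of them: the member being
dropped only ever marked the base of the np->tx_skb[] context array, so every
wrap-around can use the array pointer itself. Below is a minimal standalone
sketch of that ring-cursor pattern; the reduced types and names
(ctx_ring_sketch, advance_put_ctx, and so on) are illustrative only and are
not the driver's code.

/* Minimal sketch of the tx-context ring cursor, not the driver source.
 * The point: the array base (tx_skb) already identifies the first slot,
 * so a separate first_tx_ctx pointer carries no extra information.
 */
struct skb_map_sketch {
	void *skb;			/* stand-in for the real per-slot fields */
};

struct ctx_ring_sketch {
	struct skb_map_sketch *tx_skb;		/* base of the ring, i.e. its head */
	struct skb_map_sketch *last_tx_ctx;	/* &tx_skb[ring_size - 1]          */
	struct skb_map_sketch *put_tx_ctx;	/* producer cursor                 */
};

/* Advance the producer cursor by one slot, wrapping at the end of the
 * array.  Resetting to tx_skb does exactly what resetting to first_tx_ctx
 * used to do, since both always pointed at &tx_skb[0].
 */
void advance_put_ctx(struct ctx_ring_sketch *r)
{
	if (r->put_tx_ctx++ == r->last_tx_ctx)
		r->put_tx_ctx = r->tx_skb;
}

The same reasoning covers the get_tx_ctx consumer cursor and the tmp_tx_ctx
unwind loop in the hunks above.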