From patchwork Tue Jan 23 07:03:37 2018
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 864612
X-Patchwork-Delegate: davem@davemloft.net
From: Zhu Yanjun
To: keescook@chromium.org, netdev@vger.kernel.org
Subject: [PATCH net-next 1/1] forcedeth: remove duplicate structure member in rx
Date: Tue, 23 Jan 2018 02:03:37 -0500
Message-Id: <1516691017-27837-1-git-send-email-yanjun.zhu@oracle.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: netdev@vger.kernel.org

Since both first_rx_ctx and rx_skb point to the head of the rx ctx ring, it is not necessary to keep two structure members that statically indicate the same head. So first_rx_ctx is removed and rx_skb is used in its place.
CC: Srinivas Eeda
CC: Joe Jin
CC: Junxiao Bi
Signed-off-by: Zhu Yanjun
---
 drivers/net/ethernet/nvidia/forcedeth.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index a3f6d51..66c665d 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -795,7 +795,7 @@ struct fe_priv {
 	 */
 	union ring_type get_rx, put_rx, last_rx;
 	struct nv_skb_map *get_rx_ctx, *put_rx_ctx;
-	struct nv_skb_map *first_rx_ctx, *last_rx_ctx;
+	struct nv_skb_map *last_rx_ctx;
 	struct nv_skb_map *rx_skb;
 
 	union ring_type rx_ring;
@@ -1835,7 +1835,7 @@ static int nv_alloc_rx(struct net_device *dev)
 			if (unlikely(np->put_rx.orig++ == np->last_rx.orig))
 				np->put_rx.orig = np->rx_ring.orig;
 			if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
-				np->put_rx_ctx = np->first_rx_ctx;
+				np->put_rx_ctx = np->rx_skb;
 		} else {
 packet_dropped:
 			u64_stats_update_begin(&np->swstats_rx_syncp);
@@ -1877,7 +1877,7 @@ static int nv_alloc_rx_optimized(struct net_device *dev)
 			if (unlikely(np->put_rx.ex++ == np->last_rx.ex))
 				np->put_rx.ex = np->rx_ring.ex;
 			if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
-				np->put_rx_ctx = np->first_rx_ctx;
+				np->put_rx_ctx = np->rx_skb;
 		} else {
 packet_dropped:
 			u64_stats_update_begin(&np->swstats_rx_syncp);
@@ -1910,7 +1910,8 @@ static void nv_init_rx(struct net_device *dev)
 		np->last_rx.orig = &np->rx_ring.orig[np->rx_ring_size-1];
 	else
 		np->last_rx.ex = &np->rx_ring.ex[np->rx_ring_size-1];
-	np->get_rx_ctx = np->put_rx_ctx = np->first_rx_ctx = np->rx_skb;
+	np->get_rx_ctx = np->rx_skb;
+	np->put_rx_ctx = np->rx_skb;
 	np->last_rx_ctx = &np->rx_skb[np->rx_ring_size-1];
 
 	for (i = 0; i < np->rx_ring_size; i++) {
@@ -2914,7 +2915,7 @@ static int nv_rx_process(struct net_device *dev, int limit)
 		if (unlikely(np->get_rx.orig++ == np->last_rx.orig))
 			np->get_rx.orig = np->rx_ring.orig;
 		if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx))
-			np->get_rx_ctx = np->first_rx_ctx;
+			np->get_rx_ctx = np->rx_skb;
 
 		rx_work++;
 	}
@@ -3003,7 +3004,7 @@ static int nv_rx_process_optimized(struct net_device *dev, int limit)
 		if (unlikely(np->get_rx.ex++ == np->last_rx.ex))
 			np->get_rx.ex = np->rx_ring.ex;
 		if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx))
-			np->get_rx_ctx = np->first_rx_ctx;
+			np->get_rx_ctx = np->rx_skb;
 
 		rx_work++;
 	}
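For context, the following is a minimal, self-contained sketch of the ring-walk pattern the patch relies on. It is illustrative only; the identifiers ctx_ring, ctx_put, ctx_last, and ctx_advance are hypothetical and do not appear in forcedeth.c. The point it demonstrates: because the context ring is a plain array, the array base already marks the first slot, so wrapping back to the base (rx_skb in the driver) makes a separate "first" pointer such as first_rx_ctx redundant.

/*
 * Sketch of a ring walk over a plain array of per-slot contexts.
 * Hypothetical names; not taken from forcedeth.c.
 */
#include <stdio.h>

#define RING_SIZE 4

struct ctx {
	int id;		/* stand-in for the per-slot skb/dma state */
};

static struct ctx ctx_ring[RING_SIZE];			/* plays the role of rx_skb */
static struct ctx *ctx_put = ctx_ring;			/* plays the role of put_rx_ctx */
static struct ctx *ctx_last = &ctx_ring[RING_SIZE - 1];	/* plays the role of last_rx_ctx */

/*
 * Advance the put pointer; once the last slot has been used, wrap to
 * the array base rather than to a separately stored "first" pointer.
 */
static void ctx_advance(void)
{
	if (ctx_put++ == ctx_last)
		ctx_put = ctx_ring;
}

int main(void)
{
	for (int i = 0; i < 10; i++) {
		printf("slot %ld\n", (long)(ctx_put - ctx_ring));
		ctx_advance();
	}
	return 0;
}

As the removed hunks show, first_rx_ctx was only ever assigned rx_skb in nv_init_rx() and never changed afterwards, so substituting rx_skb at every wrap point is behavior-preserving.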