From patchwork Tue Dec 26 06:22:50 2017
From: Zhu Yanjun <yanjun.zhu@oracle.com>
To: keescook@chromium.org, netdev@vger.kernel.org
Subject: [PATCH net-next 1/1] forcedeth: optimize the rx with likely
Date: Tue, 26 Dec 2017 01:22:50 -0500
Message-Id: <1514269370-29241-1-git-send-email-yanjun.zhu@oracle.com>

In the rx fastpath, the function netdev_alloc_skb() rarely fails.
Therefore, a likely() hint is added to this allocation-failure check.

CC: Srinivas Eeda
CC: Joe Jin
CC: Junxiao Bi
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
---
 drivers/net/ethernet/nvidia/forcedeth.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 49d6d78..a79b9f8 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -1817,7 +1817,7 @@ static int nv_alloc_rx(struct net_device *dev)
 	while (np->put_rx.orig != less_rx) {
 		struct sk_buff *skb = netdev_alloc_skb(dev,
 						       np->rx_buf_sz + NV_RX_ALLOC_PAD);
-		if (skb) {
+		if (likely(skb)) {
 			np->put_rx_ctx->skb = skb;
 			np->put_rx_ctx->dma = dma_map_single(&np->pci_dev->dev,
 							     skb->data,
@@ -1858,7 +1858,7 @@ static int nv_alloc_rx_optimized(struct net_device *dev)
 	while (np->put_rx.ex != less_rx) {
 		struct sk_buff *skb = netdev_alloc_skb(dev,
 						       np->rx_buf_sz + NV_RX_ALLOC_PAD);
-		if (skb) {
+		if (likely(skb)) {
 			np->put_rx_ctx->skb = skb;
 			np->put_rx_ctx->dma = dma_map_single(&np->pci_dev->dev,
 							     skb->data,
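
For context, likely() is the kernel's branch-prediction macro from
include/linux/compiler.h, built on GCC's __builtin_expect(); it asks the
compiler to lay out the annotated condition as the expected fall-through
path, so the rare allocation-failure branch is moved off the hot path.
Below is a minimal userspace sketch of the same pattern; alloc_buf() and
rx_refill() are hypothetical stand-ins for netdev_alloc_skb() and
nv_alloc_rx(), not code from this driver.

#include <stdlib.h>

/* The definitions the kernel uses in include/linux/compiler.h. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Hypothetical stand-in for netdev_alloc_skb(): fails only under OOM. */
static void *alloc_buf(size_t len)
{
	return malloc(len);
}

/* Hypothetical refill step mirroring the shape of nv_alloc_rx(). */
static int rx_refill(size_t len)
{
	void *buf = alloc_buf(len);

	if (likely(buf)) {	/* hot path: laid out as the fall-through */
		/* ... hand the buffer to the device ... */
		free(buf);
		return 0;
	}
	return 1;		/* rare allocation failure */
}

int main(void)
{
	return rx_refill(1536);
}

Since __builtin_expect() only influences code layout and static branch
prediction, the change is behavior-neutral; it simply keeps the
almost-always-taken success path contiguous inside the refill loop.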