From patchwork Mon Feb 20 12:53:43 2017
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 729972
X-Patchwork-Delegate: davem@davemloft.net
From: Jisheng Zhang
Subject: [PATCH net-next v3 3/4] net: mvneta: avoid reading from tx_desc as much as possible
Date: Mon, 20 Feb 2017 20:53:43 +0800
Message-ID: <20170220125344.3555-4-jszhang@marvell.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170220125344.3555-1-jszhang@marvell.com>
References: <20170220125344.3555-1-jszhang@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

In hot code paths such as mvneta_tx() and mvneta_txq_bufs_free() we
access tx_desc several times. The tx_desc is allocated by
dma_alloc_coherent(), so it is uncached if the device is not
cache-coherent, and reading from uncached memory is fairly slow. Use
local variables to store what we need, avoiding the extra reads from
uncached memory.

We get the following performance data on Marvell BG4CT platforms
(tested with iperf):

before the patch: sending 1GB in mvneta_tx() (TSO disabled) costs 793553760ns
after the patch:  sending 1GB in mvneta_tx() (TSO disabled) costs 719953800ns

That is a 9.2% reduction in time. (An illustrative sketch of the
pattern follows the diff below.)
Signed-off-by: Jisheng Zhang
---
 drivers/net/ethernet/marvell/mvneta.c | 50 ++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a25042801eec..b6cda4131c78 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1770,6 +1770,7 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 		struct mvneta_tx_desc *tx_desc = txq->descs +
 						 txq->txq_get_index;
 		struct sk_buff *skb = txq->tx_skb[txq->txq_get_index];
+		u32 dma_addr = tx_desc->buf_phys_addr;
 
 		if (skb) {
 			bytes_compl += skb->len;
@@ -1778,9 +1779,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 
 		mvneta_txq_inc_get(txq);
 
-		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
-			dma_unmap_single(pp->dev->dev.parent,
-					 tx_desc->buf_phys_addr,
+		if (!IS_TSO_HEADER(txq, dma_addr))
+			dma_unmap_single(pp->dev->dev.parent, dma_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
 		if (!skb)
 			continue;
@@ -2191,17 +2191,18 @@ mvneta_tso_put_data(struct net_device *dev, struct mvneta_tx_queue *txq,
 		    bool last_tcp, bool is_last)
 {
 	struct mvneta_tx_desc *tx_desc;
+	dma_addr_t dma_addr;
 
 	tx_desc = mvneta_txq_next_desc_get(txq);
 	tx_desc->data_size = size;
-	tx_desc->buf_phys_addr = dma_map_single(dev->dev.parent, data,
-						size, DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(dev->dev.parent,
-				       tx_desc->buf_phys_addr))) {
+
+	dma_addr = dma_map_single(dev->dev.parent, data, size, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(dev->dev.parent, dma_addr))) {
 		mvneta_txq_desc_put(txq);
 		return -ENOMEM;
 	}
+	tx_desc->buf_phys_addr = dma_addr;
 
 	tx_desc->command = 0;
 	txq->tx_skb[txq->txq_put_index] = NULL;
 
@@ -2278,9 +2279,10 @@ static int mvneta_tx_tso(struct sk_buff *skb, struct net_device *dev,
 	 */
 	for (i = desc_count - 1; i >= 0; i--) {
 		struct mvneta_tx_desc *tx_desc = txq->descs + i;
-		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
+		u32 dma_addr = tx_desc->buf_phys_addr;
+		if (!IS_TSO_HEADER(txq, dma_addr))
 			dma_unmap_single(pp->dev->dev.parent,
-					 tx_desc->buf_phys_addr,
+					 dma_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
 		mvneta_txq_desc_put(txq);
 	}
@@ -2296,21 +2298,20 @@ static int mvneta_tx_frag_process(struct mvneta_port *pp, struct sk_buff *skb,
 	int i, nr_frags = skb_shinfo(skb)->nr_frags;
 
 	for (i = 0; i < nr_frags; i++) {
+		dma_addr_t dma_addr;
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 		void *addr = page_address(frag->page.p) + frag->page_offset;
 
 		tx_desc = mvneta_txq_next_desc_get(txq);
 		tx_desc->data_size = frag->size;
 
-		tx_desc->buf_phys_addr =
-			dma_map_single(pp->dev->dev.parent, addr,
-				       tx_desc->data_size, DMA_TO_DEVICE);
-
-		if (dma_mapping_error(pp->dev->dev.parent,
-				      tx_desc->buf_phys_addr)) {
+		dma_addr = dma_map_single(pp->dev->dev.parent, addr,
+					  frag->size, DMA_TO_DEVICE);
+		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
 			mvneta_txq_desc_put(txq);
 			goto error;
 		}
+		tx_desc->buf_phys_addr = dma_addr;
 
 		if (i == nr_frags - 1) {
 			/* Last descriptor */
@@ -2351,7 +2352,8 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 	struct mvneta_tx_desc *tx_desc;
 	int len = skb->len;
 	int frags = 0;
-	u32 tx_cmd;
+	u32 tx_cmd, size;
+	dma_addr_t dma_addr;
 
 	if (!netif_running(dev))
 		goto out;
@@ -2368,17 +2370,17 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 
 	tx_cmd = mvneta_skb_tx_csum(pp, skb);
 
-	tx_desc->data_size = skb_headlen(skb);
+	size = skb_headlen(skb);
+	tx_desc->data_size = size;
 
-	tx_desc->buf_phys_addr = dma_map_single(dev->dev.parent, skb->data,
-						tx_desc->data_size,
-						DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(dev->dev.parent,
-				       tx_desc->buf_phys_addr))) {
+	dma_addr = dma_map_single(dev->dev.parent, skb->data,
+				  size, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(dev->dev.parent, dma_addr))) {
 		mvneta_txq_desc_put(txq);
 		frags = 0;
 		goto out;
 	}
+	tx_desc->buf_phys_addr = dma_addr;
 
 	if (frags == 1) {
 		/* First and Last descriptor */
@@ -2395,8 +2397,8 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 		/* Continue with other skb fragments */
 		if (mvneta_tx_frag_process(pp, skb, txq)) {
 			dma_unmap_single(dev->dev.parent,
-					 tx_desc->buf_phys_addr,
-					 tx_desc->data_size,
+					 dma_addr,
+					 size,
 					 DMA_TO_DEVICE);
 			mvneta_txq_desc_put(txq);
 			frags = 0;
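
As a standalone illustration of the pattern the patch applies (this is
not mvneta code; the demo_tx_desc layout and the demo_is_tso_header()
helper below are hypothetical stand-ins for the driver's descriptor and
its IS_TSO_HEADER() check), caching each descriptor field in a local
variable means every field is loaded from dma_alloc_coherent() memory
at most once:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/types.h>

/* Hypothetical descriptor living in dma_alloc_coherent() memory; on a
 * non-cache-coherent device this mapping is uncached, so every load
 * from it goes all the way out to DRAM.
 */
struct demo_tx_desc {
	u32 buf_phys_addr;
	u16 data_size;
};

/* Hypothetical stand-in for the driver's IS_TSO_HEADER() test. */
static bool demo_is_tso_header(u32 addr)
{
	return false;
}

/* Before: up to three separate uncached loads of descriptor fields. */
static void demo_free_slow(struct device *dev, struct demo_tx_desc *desc)
{
	if (!demo_is_tso_header(desc->buf_phys_addr))		/* load 1 */
		dma_unmap_single(dev, desc->buf_phys_addr,	/* load 2 */
				 desc->data_size,		/* load 3 */
				 DMA_TO_DEVICE);
}

/* After: each field is loaded from uncached memory exactly once and
 * then reused from a register or stack slot.
 */
static void demo_free_fast(struct device *dev, struct demo_tx_desc *desc)
{
	u32 dma_addr = desc->buf_phys_addr;	/* single uncached load */
	u16 size = desc->data_size;		/* single uncached load */

	if (!demo_is_tso_header(dma_addr))
		dma_unmap_single(dev, dma_addr, size, DMA_TO_DEVICE);
}

The same reasoning explains the reordered stores in the mapping paths
of the patch: dma_map_single() returns into a local first, and
tx_desc->buf_phys_addr is written only after dma_mapping_error() has
been checked, so neither the error path nor the later unmap has to read
the mapping back out of the uncached descriptor.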