From patchwork Mon Feb 20 12:53:44 2017
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 729973
X-Patchwork-Delegate: davem@davemloft.net
From: Jisheng Zhang
Subject: [PATCH net-next v3 4/4] net: mvneta: Use cacheable memory to store
 the rx buffer DMA address
Date: Mon, 20 Feb 2017 20:53:44 +0800
Message-ID:
 <20170220125344.3555-5-jszhang@marvell.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170220125344.3555-1-jszhang@marvell.com>
References: <20170220125344.3555-1-jszhang@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

In hot code paths such as mvneta_rx_swbm(), the buf_phys_addr field of
the rx_desc is accessed. The rx_desc ring is allocated with
dma_alloc_coherent(), so it is uncached when the device is not cache
coherent, and reading from uncached memory is fairly slow.

This patch uses cacheable memory to store the rx buffer DMA address.
We get the following performance data on Marvell BG4CT platforms
(tested with iperf):

before the patch:
receiving 1GB in mvneta_rx_swbm() costs 1492659600 ns

after the patch:
receiving 1GB in mvneta_rx_swbm() costs 1421565640 ns

We saved 4.76% of the time.
Signed-off-by: Jisheng Zhang
Suggested-by: Arnd Bergmann
---
 drivers/net/ethernet/marvell/mvneta.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index b6cda4131c78..ccd3f2601446 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -580,6 +580,9 @@ struct mvneta_rx_queue {
 	/* Virtual address of the RX buffer */
 	void **buf_virt_addr;

+	/* DMA address of the RX buffer */
+	dma_addr_t *buf_dma_addr;
+
 	/* Virtual address of the RX DMA descriptors array */
 	struct mvneta_rx_desc *descs;

@@ -1617,6 +1620,7 @@ static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
 	rx_desc->buf_phys_addr = phys_addr;
 	i = rx_desc - rxq->descs;
+	rxq->buf_dma_addr[i] = phys_addr;
 	rxq->buf_virt_addr[i] = virt_addr;
 }

@@ -1912,10 +1916,9 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	}

 	for (i = 0; i < rxq->size; i++) {
-		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
 		void *data = rxq->buf_virt_addr[i];

-		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
+		dma_unmap_single(pp->dev->dev.parent, rxq->buf_dma_addr[i],
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
 		mvneta_frag_free(pp->frag_size, data);
 	}
@@ -1953,7 +1956,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
 		data = rxq->buf_virt_addr[index];
-		phys_addr = rx_desc->buf_phys_addr;
+		phys_addr = rxq->buf_dma_addr[index];

 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
 		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
@@ -4019,7 +4022,10 @@ static int mvneta_init(struct device *dev, struct mvneta_port *pp)
 		rxq->buf_virt_addr = devm_kmalloc(pp->dev->dev.parent,
 						  rxq->size * sizeof(void *),
 						  GFP_KERNEL);
-		if (!rxq->buf_virt_addr)
+		rxq->buf_dma_addr = devm_kmalloc(pp->dev->dev.parent,
+						 rxq->size * sizeof(dma_addr_t),
+						 GFP_KERNEL);
+		if (!rxq->buf_virt_addr || !rxq->buf_dma_addr)
 			return -ENOMEM;
 	}