From patchwork Thu Jul 2 14:12:40 2020
X-Patchwork-Id: 1321503
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Sven Auhagen,
    Lorenzo Bianconi, "David S. Miller", Jesper Dangaard Brouer,
    Stefan Chulski, Marcin Wojtas, maxime.chevallier@bootlin.com,
    antoine.tenart@bootlin.com, thomas.petazzoni@bootlin.com,
    Maciej Fijalkowski, Russell King
Subject: [PATCH net-next v2 1/5] mvpp2: refactor BM pool init percpu code
Date: Thu, 2 Jul 2020 16:12:40 +0200
Message-Id: <20200702141244.51295-2-mcroce@linux.microsoft.com>
In-Reply-To: <20200702141244.51295-1-mcroce@linux.microsoft.com>
References: <20200702141244.51295-1-mcroce@linux.microsoft.com>

In mvpp2_swf_bm_pool_init_percpu(), a reference to a struct mvpp2_bm_pool
is obtained by traversing multiple structs, when a local variable already
points to the same object.

Fix it and, while at it, give the variable a meaningful name.
Signed-off-by: Matteo Croce
---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 23 +++++++++----------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 212fc3b54310..027de7291f92 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -907,28 +907,27 @@ static int mvpp2_swf_bm_pool_init_shared(struct mvpp2_port *port)
 /* Initialize pools for swf, percpu buffers variant */
 static int mvpp2_swf_bm_pool_init_percpu(struct mvpp2_port *port)
 {
-    struct mvpp2_bm_pool *p;
+    struct mvpp2_bm_pool *bm_pool;
     int i;
 
     for (i = 0; i < port->nrxqs; i++) {
-        p = mvpp2_bm_pool_use_percpu(port, MVPP2_BM_SHORT, i,
-                                     mvpp2_pools[MVPP2_BM_SHORT].pkt_size);
-        if (!p)
+        bm_pool = mvpp2_bm_pool_use_percpu(port, MVPP2_BM_SHORT, i,
+                                           mvpp2_pools[MVPP2_BM_SHORT].pkt_size);
+        if (!bm_pool)
             return -ENOMEM;
 
-        port->priv->bm_pools[i].port_map |= BIT(port->id);
-        mvpp2_rxq_short_pool_set(port, i, port->priv->bm_pools[i].id);
+        bm_pool->port_map |= BIT(port->id);
+        mvpp2_rxq_short_pool_set(port, i, bm_pool->id);
     }
 
     for (i = 0; i < port->nrxqs; i++) {
-        p = mvpp2_bm_pool_use_percpu(port, MVPP2_BM_LONG, i + port->nrxqs,
-                                     mvpp2_pools[MVPP2_BM_LONG].pkt_size);
-        if (!p)
+        bm_pool = mvpp2_bm_pool_use_percpu(port, MVPP2_BM_LONG, i + port->nrxqs,
+                                           mvpp2_pools[MVPP2_BM_LONG].pkt_size);
+        if (!bm_pool)
             return -ENOMEM;
 
-        port->priv->bm_pools[i + port->nrxqs].port_map |= BIT(port->id);
-        mvpp2_rxq_long_pool_set(port, i,
-                                port->priv->bm_pools[i + port->nrxqs].id);
+        bm_pool->port_map |= BIT(port->id);
+        mvpp2_rxq_long_pool_set(port, i, bm_pool->id);
     }
 
     port->pool_long = NULL;
From patchwork Thu Jul 2 14:12:41 2020
X-Patchwork-Id: 1321505
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Sven Auhagen,
    Lorenzo Bianconi, "David S. Miller", Jesper Dangaard Brouer,
    Stefan Chulski, Marcin Wojtas, maxime.chevallier@bootlin.com,
    antoine.tenart@bootlin.com, thomas.petazzoni@bootlin.com,
    Maciej Fijalkowski, Russell King
Subject: [PATCH net-next v2 2/5] mvpp2: use page_pool allocator
Date: Thu, 2 Jul 2020 16:12:41 +0200
Message-Id: <20200702141244.51295-3-mcroce@linux.microsoft.com>
In-Reply-To: <20200702141244.51295-1-mcroce@linux.microsoft.com>
References: <20200702141244.51295-1-mcroce@linux.microsoft.com>

Use the page_pool API for memory management.
This is a prerequisite for native XDP support.

Tested-by: Sven Auhagen
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/Kconfig        |   1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h  |   8 +
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 156 +++++++++++++++---
 3 files changed, 140 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index cd8ddd1ef6f2..ef4f35ba077d 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -87,6 +87,7 @@ config MVPP2
     depends on ARCH_MVEBU || COMPILE_TEST
     select MVMDIO
     select PHYLINK
+    select PAGE_POOL
     help
       This driver supports the network interface units in the
       Marvell ARMADA 375, 7K and 8K SoCs.
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 543a310ec102..4c16c9e9c1e5 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <net/page_pool.h>
 
 /* Fifo Registers */
 #define MVPP2_RX_DATA_FIFO_SIZE_REG(port)    (0x00 + 4 * (port))
@@ -820,6 +821,9 @@ struct mvpp2 {
 
     /* RSS Indirection tables */
     struct mvpp2_rss_table *rss_tables[MVPP22_N_RSS_TABLES];
+
+    /* page_pool allocator */
+    struct page_pool *page_pool[MVPP2_PORT_MAX_RXQ];
 };
 
 struct mvpp2_pcpu_stats {
@@ -1161,6 +1165,10 @@ struct mvpp2_rx_queue {
 
     /* Port's logic RXQ number to which physical RXQ is mapped */
     int logic_rxq;
+
+    /* XDP memory accounting */
+    struct xdp_rxq_info xdp_rxq_short;
+    struct xdp_rxq_info xdp_rxq_long;
 };
 
 struct mvpp2_bm_pool {
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 027de7291f92..5d0e02c161a6 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -95,6 +95,22 @@ static inline u32 mvpp2_cpu_to_thread(struct mvpp2 *priv, int cpu)
     return cpu % priv->nthreads;
 }
 
+static struct page_pool *
+mvpp2_create_page_pool(struct device *dev, int num, int len)
+{
+    struct page_pool_params pp_params = {
+        /* internal DMA mapping in page_pool */
+        .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+        .pool_size = num,
+        .nid = NUMA_NO_NODE,
+        .dev = dev,
+        .dma_dir = DMA_FROM_DEVICE,
+        .max_len = len,
+    };
+
+    return page_pool_create(&pp_params);
+}
+
 /* These accessors should be used to access:
  *
  * - per-thread registers, where each thread has its own copy of the
@@ -327,17 +343,25 @@ static inline int mvpp2_txq_phys(int port, int txq)
     return (MVPP2_MAX_TCONT + port) * MVPP2_MAX_TXQ + txq;
 }
 
-static void *mvpp2_frag_alloc(const struct mvpp2_bm_pool *pool)
+/* Returns a struct page if page_pool is set, otherwise a buffer */
+static void *mvpp2_frag_alloc(const struct mvpp2_bm_pool *pool,
+                              struct page_pool *page_pool)
 {
+    if (page_pool)
+        return page_pool_dev_alloc_pages(page_pool);
+
     if (likely(pool->frag_size <= PAGE_SIZE))
         return netdev_alloc_frag(pool->frag_size);
-    else
-        return kmalloc(pool->frag_size, GFP_ATOMIC);
+
+    return kmalloc(pool->frag_size, GFP_ATOMIC);
 }
 
-static void mvpp2_frag_free(const struct mvpp2_bm_pool *pool, void *data)
+static void mvpp2_frag_free(const struct mvpp2_bm_pool *pool,
+                            struct page_pool *page_pool, void *data)
 {
-    if (likely(pool->frag_size <= PAGE_SIZE))
+    if (page_pool)
+        page_pool_put_full_page(page_pool,
+                                virt_to_head_page(data), false);
+    else if (likely(pool->frag_size <= PAGE_SIZE))
         skb_free_frag(data);
     else
         kfree(data);
@@ -442,6 +466,7 @@ static void mvpp2_bm_bufs_get_addrs(struct device *dev, struct mvpp2 *priv,
 static void mvpp2_bm_bufs_free(struct device *dev, struct mvpp2 *priv,
                                struct mvpp2_bm_pool *bm_pool, int buf_num)
 {
+    struct page_pool *pp = NULL;
     int i;
 
     if (buf_num > bm_pool->buf_num) {
@@ -450,6 +475,9 @@ static void mvpp2_bm_bufs_free(struct device *dev, struct mvpp2 *priv,
         buf_num = bm_pool->buf_num;
     }
 
+    if (priv->percpu_pools)
+        pp = priv->page_pool[bm_pool->id];
+
     for (i = 0; i < buf_num; i++) {
         dma_addr_t buf_dma_addr;
         phys_addr_t buf_phys_addr;
@@ -458,14 +486,15 @@ static void mvpp2_bm_bufs_free(struct device *dev, struct mvpp2 *priv,
         mvpp2_bm_bufs_get_addrs(dev, priv, bm_pool,
                                 &buf_dma_addr, &buf_phys_addr);
 
-        dma_unmap_single(dev, buf_dma_addr,
-                         bm_pool->buf_size, DMA_FROM_DEVICE);
+        if (!pp)
+            dma_unmap_single(dev, buf_dma_addr,
+                             bm_pool->buf_size, DMA_FROM_DEVICE);
 
         data = (void *)phys_to_virt(buf_phys_addr);
         if (!data)
             break;
 
-        mvpp2_frag_free(bm_pool, data);
+        mvpp2_frag_free(bm_pool, pp, data);
     }
 
     /* Update BM driver with number of buffers removed from pool */
@@ -496,6 +525,9 @@ static int mvpp2_bm_pool_destroy(struct device *dev, struct mvpp2 *priv,
     int buf_num;
     u32 val;
 
+    if (priv->percpu_pools)
+        page_pool_destroy(priv->page_pool[bm_pool->id]);
+
     buf_num = mvpp2_check_hw_buf_num(priv, bm_pool);
     mvpp2_bm_bufs_free(dev, priv, bm_pool, buf_num);
 
@@ -548,8 +580,20 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
 {
     int i, err, poolnum = MVPP2_BM_POOLS_NUM;
 
-    if (priv->percpu_pools)
+    if (priv->percpu_pools) {
         poolnum = mvpp2_get_nrxqs(priv) * 2;
+        for (i = 0; i < poolnum; i++) {
+            /* the pool in use */
+            int pn = i / (poolnum / 2);
+
+            priv->page_pool[i] =
+                mvpp2_create_page_pool(dev,
+                                       mvpp2_pools[pn].buf_num,
+                                       mvpp2_pools[pn].pkt_size);
+            if (IS_ERR(priv->page_pool[i]))
+                return PTR_ERR(priv->page_pool[i]);
+        }
+    }
 
     dev_info(dev, "using %d %s buffers\n", poolnum,
              priv->percpu_pools ? "per-cpu" : "shared");
@@ -632,23 +676,31 @@ static void mvpp2_rxq_short_pool_set(struct mvpp2_port *port,
 
 static void *mvpp2_buf_alloc(struct mvpp2_port *port,
                              struct mvpp2_bm_pool *bm_pool,
+                             struct page_pool *page_pool,
                              dma_addr_t *buf_dma_addr,
                              phys_addr_t *buf_phys_addr,
                              gfp_t gfp_mask)
 {
     dma_addr_t dma_addr;
+    struct page *page;
     void *data;
 
-    data = mvpp2_frag_alloc(bm_pool);
+    data = mvpp2_frag_alloc(bm_pool, page_pool);
     if (!data)
         return NULL;
 
-    dma_addr = dma_map_single(port->dev->dev.parent, data,
-                              MVPP2_RX_BUF_SIZE(bm_pool->pkt_size),
-                              DMA_FROM_DEVICE);
-    if (unlikely(dma_mapping_error(port->dev->dev.parent, dma_addr))) {
-        mvpp2_frag_free(bm_pool, data);
-        return NULL;
+    if (page_pool) {
+        page = (struct page *)data;
+        dma_addr = page_pool_get_dma_addr(page);
+        data = page_to_virt(page);
+    } else {
+        dma_addr = dma_map_single(port->dev->dev.parent, data,
+                                  MVPP2_RX_BUF_SIZE(bm_pool->pkt_size),
+                                  DMA_FROM_DEVICE);
+        if (unlikely(dma_mapping_error(port->dev->dev.parent, dma_addr))) {
+            mvpp2_frag_free(bm_pool, NULL, data);
+            return NULL;
+        }
     }
     *buf_dma_addr = dma_addr;
     *buf_phys_addr = virt_to_phys(data);
@@ -706,6 +758,7 @@ static int mvpp2_bm_bufs_add(struct mvpp2_port *port,
     int i, buf_size, total_size;
     dma_addr_t dma_addr;
     phys_addr_t phys_addr;
+    struct page_pool *pp = NULL;
     void *buf;
 
     if (port->priv->percpu_pools &&
@@ -726,8 +779,10 @@ static int mvpp2_bm_bufs_add(struct mvpp2_port *port,
         return 0;
     }
 
+    if (port->priv->percpu_pools)
+        pp = port->priv->page_pool[bm_pool->id];
     for (i = 0; i < buf_num; i++) {
-        buf = mvpp2_buf_alloc(port, bm_pool, &dma_addr,
+        buf = mvpp2_buf_alloc(port, bm_pool, pp, &dma_addr,
                               &phys_addr, GFP_KERNEL);
         if (!buf)
             break;
@@ -2374,10 +2429,11 @@ static int mvpp2_aggr_txq_init(struct platform_device *pdev,
 /* Create a specified Rx queue */
 static int mvpp2_rxq_init(struct mvpp2_port *port,
                           struct mvpp2_rx_queue *rxq)
-
 {
+    struct mvpp2 *priv = port->priv;
     unsigned int thread;
     u32 rxq_dma;
+    int err;
 
     rxq->size = port->rx_ring_size;
 
@@ -2415,7 +2471,43 @@ static int mvpp2_rxq_init(struct mvpp2_port *port,
     /* Add number of descriptors ready for receiving packets */
     mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);
 
+    if (priv->percpu_pools) {
+        err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->id);
+        if (err < 0)
+            goto err_free_dma;
+
+        err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->id);
+        if (err < 0)
+            goto err_unregister_rxq_short;
+
+        /* Every RXQ has a pool for short and another for long packets */
+        err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq_short,
+                                         MEM_TYPE_PAGE_POOL,
+                                         priv->page_pool[rxq->logic_rxq]);
+        if (err < 0)
+            goto err_unregister_rxq_long;
+
+        err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq_long,
+                                         MEM_TYPE_PAGE_POOL,
+                                         priv->page_pool[rxq->logic_rxq +
+                                                         port->nrxqs]);
+        if (err < 0)
+            goto err_unregister_mem_rxq_short;
+    }
+
     return 0;
+
+err_unregister_mem_rxq_short:
+    xdp_rxq_info_unreg_mem_model(&rxq->xdp_rxq_short);
+err_unregister_rxq_long:
+    xdp_rxq_info_unreg(&rxq->xdp_rxq_long);
+err_unregister_rxq_short:
+    xdp_rxq_info_unreg(&rxq->xdp_rxq_short);
+err_free_dma:
+    dma_free_coherent(port->dev->dev.parent,
+                      rxq->size * MVPP2_DESC_ALIGNED_SIZE,
+                      rxq->descs, rxq->descs_dma);
+    return err;
 }
 
 /* Push packets received by the RXQ to BM pool */
@@ -2449,6 +2541,12 @@ static void mvpp2_rxq_deinit(struct mvpp2_port *port,
 {
     unsigned int thread;
 
+    if (xdp_rxq_info_is_reg(&rxq->xdp_rxq_short))
+        xdp_rxq_info_unreg(&rxq->xdp_rxq_short);
+
+    if (xdp_rxq_info_is_reg(&rxq->xdp_rxq_long))
+        xdp_rxq_info_unreg(&rxq->xdp_rxq_long);
+
     mvpp2_rxq_drop_pkts(port, rxq);
 
     if (rxq->descs)
@@ -2890,14 +2988,15 @@ static void mvpp2_rx_csum(struct mvpp2_port *port, u32 status,
 
 /* Allocate a new skb and add it to BM pool */
 static int mvpp2_rx_refill(struct mvpp2_port *port,
-                           struct mvpp2_bm_pool *bm_pool, int pool)
+                           struct mvpp2_bm_pool *bm_pool,
+                           struct page_pool *page_pool, int pool)
 {
     dma_addr_t dma_addr;
     phys_addr_t phys_addr;
     void *buf;
 
-    buf = mvpp2_buf_alloc(port, bm_pool, &dma_addr, &phys_addr,
-                          GFP_ATOMIC);
+    buf = mvpp2_buf_alloc(port, bm_pool, page_pool,
+                          &dma_addr, &phys_addr, GFP_ATOMIC);
     if (!buf)
         return -ENOMEM;
 
@@ -2956,6 +3055,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
     while (rx_done < rx_todo) {
         struct mvpp2_rx_desc *rx_desc = mvpp2_rxq_next_desc_get(rxq);
         struct mvpp2_bm_pool *bm_pool;
+        struct page_pool *pp = NULL;
         struct sk_buff *skb;
         unsigned int frag_size;
         dma_addr_t dma_addr;
@@ -2989,6 +3089,9 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
                                 DMA_FROM_DEVICE);
         prefetch(data);
 
+        if (port->priv->percpu_pools)
+            pp = port->priv->page_pool[pool];
+
         if (bm_pool->frag_size > PAGE_SIZE)
             frag_size = 0;
         else
@@ -3000,15 +3103,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
             goto err_drop_frame;
         }
 
-        err = mvpp2_rx_refill(port, bm_pool, pool);
+        err = mvpp2_rx_refill(port, bm_pool, pp, pool);
         if (err) {
             netdev_err(port->dev, "failed to refill BM pools\n");
             goto err_drop_frame;
         }
 
-        dma_unmap_single_attrs(dev->dev.parent, dma_addr,
-                               bm_pool->buf_size, DMA_FROM_DEVICE,
-                               DMA_ATTR_SKIP_CPU_SYNC);
+        if (pp)
+            page_pool_release_page(pp, virt_to_page(data));
+        else
+            dma_unmap_single_attrs(dev->dev.parent, dma_addr,
+                                   bm_pool->buf_size, DMA_FROM_DEVICE,
+                                   DMA_ATTR_SKIP_CPU_SYNC);
 
         rcvd_pkts++;
         rcvd_bytes += rx_bytes;
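As background for this patch: the page_pool lifecycle the driver now relies
on boils down to create / alloc / recycle / destroy, with DMA mapping
delegated to the pool. The sketch below is illustrative only -- the demo_*
names and the sizes are not part of mvpp2 -- but it uses the same
page_pool calls as the diff above:

#include <net/page_pool.h>

/* Illustrative: create a pool of 256 RX pages that the API premaps
 * DMA_FROM_DEVICE, then allocate one page and recycle it.
 */
static struct page_pool *demo_pool_setup(struct device *dev)
{
    struct page_pool_params params = {
        .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
        .pool_size = 256,
        .nid       = NUMA_NO_NODE,
        .dev       = dev,
        .dma_dir   = DMA_FROM_DEVICE,
        .max_len   = 2048,    /* bytes to sync for the device */
    };

    return page_pool_create(&params);    /* ERR_PTR() on failure */
}

static void demo_pool_cycle(struct page_pool *pool)
{
    struct page *page = page_pool_dev_alloc_pages(pool);
    dma_addr_t dma;

    if (!page)
        return;
    dma = page_pool_get_dma_addr(page);    /* already mapped by the pool */
    /* ... post "dma" to the RX ring; once done with the page: */
    page_pool_put_full_page(pool, page, false);
}

Because pages stay mapped while they recycle inside the pool, the hot path
avoids a dma_map_single()/dma_unmap_single() per packet, which is what
makes page_pool a natural fit for the XDP patches that follow.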
From patchwork Thu Jul 2 14:12:42 2020
X-Patchwork-Id: 1321507
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Sven Auhagen,
    Lorenzo Bianconi, "David S. Miller", Jesper Dangaard Brouer,
    Stefan Chulski, Marcin Wojtas, maxime.chevallier@bootlin.com,
    antoine.tenart@bootlin.com, thomas.petazzoni@bootlin.com,
    Maciej Fijalkowski, Russell King
Subject: [PATCH net-next v2 3/5] mvpp2: add basic XDP support
Date: Thu, 2 Jul 2020 16:12:42 +0200
Message-Id: <20200702141244.51295-4-mcroce@linux.microsoft.com>
In-Reply-To: <20200702141244.51295-1-mcroce@linux.microsoft.com>
References: <20200702141244.51295-1-mcroce@linux.microsoft.com>

Add native XDP support. For now, only the XDP_DROP, XDP_PASS and
XDP_REDIRECT verdicts are supported.
Co-developed-by: Sven Auhagen
Signed-off-by: Sven Auhagen
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h  |  28 ++-
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 165 +++++++++++++++++-
 2 files changed, 185 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 4c16c9e9c1e5..f351e41c9da6 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -16,6 +16,18 @@
 #include
 #include
 #include
+#include <linux/bpf.h>
+#include <net/xdp.h>
+
+/* The PacketOffset field is measured in units of 32 bytes and is 3 bits wide,
+ * so the maximum offset is 7 * 32 = 224
+ */
+#define MVPP2_SKB_HEADROOM    min(max(XDP_PACKET_HEADROOM, NET_SKB_PAD), 224)
+
+#define MVPP2_XDP_PASS        0
+#define MVPP2_XDP_DROPPED     BIT(0)
+#define MVPP2_XDP_TX          BIT(1)
+#define MVPP2_XDP_REDIR       BIT(2)
 
 /* Fifo Registers */
 #define MVPP2_RX_DATA_FIFO_SIZE_REG(port)    (0x00 + 4 * (port))
@@ -629,10 +641,12 @@
     ALIGN((mtu) + MVPP2_MH_SIZE + MVPP2_VLAN_TAG_LEN + \
           ETH_HLEN + ETH_FCS_LEN, cache_line_size())
 
-#define MVPP2_RX_BUF_SIZE(pkt_size)    ((pkt_size) + NET_SKB_PAD)
+#define MVPP2_RX_BUF_SIZE(pkt_size)    ((pkt_size) + MVPP2_SKB_HEADROOM)
 #define MVPP2_RX_TOTAL_SIZE(buf_size)  ((buf_size) + MVPP2_SKB_SHINFO_SIZE)
 #define MVPP2_RX_MAX_PKT_SIZE(total_size) \
-    ((total_size) - NET_SKB_PAD - MVPP2_SKB_SHINFO_SIZE)
+    ((total_size) - MVPP2_SKB_HEADROOM - MVPP2_SKB_SHINFO_SIZE)
+
+#define MVPP2_MAX_RX_BUF_SIZE    (PAGE_SIZE - MVPP2_SKB_SHINFO_SIZE - MVPP2_SKB_HEADROOM)
 
 #define MVPP2_BIT_TO_BYTE(bit)    ((bit) / 8)
 #define MVPP2_BIT_TO_WORD(bit)    ((bit) / 32)
@@ -690,9 +704,9 @@ enum mvpp2_prs_l3_cast {
 #define MVPP2_BM_COOKIE_POOL_OFFS    8
 #define MVPP2_BM_COOKIE_CPU_OFFS     24
 
-#define MVPP2_BM_SHORT_FRAME_SIZE    512
-#define MVPP2_BM_LONG_FRAME_SIZE     2048
-#define MVPP2_BM_JUMBO_FRAME_SIZE    10240
+#define MVPP2_BM_SHORT_FRAME_SIZE    704    /* frame size 128 */
+#define MVPP2_BM_LONG_FRAME_SIZE     2240   /* frame size 1664 */
+#define MVPP2_BM_JUMBO_FRAME_SIZE    10432  /* frame size 9856 */
 /* BM short pool packet size
  * These value assure that for SWF the total number
  * of bytes allocated for each buffer will be 512
@@ -913,6 +927,8 @@ struct mvpp2_port {
     unsigned int ntxqs;
     struct net_device *dev;
 
+    struct bpf_prog *xdp_prog;
+
     int pkt_size;
 
     /* Per-CPU port control */
@@ -932,6 +948,8 @@ struct mvpp2_port {
     struct mvpp2_pcpu_stats __percpu *stats;
     u64 *ethtool_stats;
 
+    unsigned long state;
+
     /* Per-port work and its lock to gather hardware statistics */
     struct mutex gather_stats_lock;
     struct delayed_work stats_work;
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 5d0e02c161a6..5f740c96a02c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include <linux/bpf_trace.h>
 
 #include "mvpp2.h"
 #include "mvpp2_prs.h"
@@ -105,6 +106,7 @@ mvpp2_create_page_pool(struct device *dev, int num, int len)
         .nid = NUMA_NO_NODE,
         .dev = dev,
         .dma_dir = DMA_FROM_DEVICE,
+        .offset = MVPP2_SKB_HEADROOM,
         .max_len = len,
     };
 
@@ -2462,7 +2464,7 @@ static int mvpp2_rxq_init(struct mvpp2_port *port,
     put_cpu();
 
     /* Set Offset */
-    mvpp2_rxq_offset_set(port, rxq->id, NET_SKB_PAD);
+    mvpp2_rxq_offset_set(port, rxq->id, MVPP2_SKB_HEADROOM);
 
     /* Set coalescing pkts and time */
     mvpp2_rx_pkts_coal_set(port, rxq);
@@ -3037,16 +3039,69 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
     return MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE;
 }
 
+static int
+mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
+              struct bpf_prog *prog, struct xdp_buff *xdp,
+              struct page_pool *pp)
+{
+    unsigned int len, sync, err;
+    struct page *page;
+    u32 ret, act;
+
+    len = xdp->data_end - xdp->data_hard_start - MVPP2_SKB_HEADROOM;
+    act = bpf_prog_run_xdp(prog, xdp);
+
+    /* Due xdp_adjust_tail: DMA sync for_device cover max len CPU touch */
+    sync = xdp->data_end - xdp->data_hard_start - MVPP2_SKB_HEADROOM;
+    sync = max(sync, len);
+
+    switch (act) {
+    case XDP_PASS:
+        ret = MVPP2_XDP_PASS;
+        break;
+    case XDP_REDIRECT:
+        err = xdp_do_redirect(port->dev, xdp, prog);
+        if (unlikely(err)) {
+            ret = MVPP2_XDP_DROPPED;
+            page = virt_to_head_page(xdp->data);
+            page_pool_put_page(pp, page, sync, true);
+        } else {
+            ret = MVPP2_XDP_REDIR;
+        }
+        break;
+    default:
+        bpf_warn_invalid_xdp_action(act);
+        fallthrough;
+    case XDP_ABORTED:
+        trace_xdp_exception(port->dev, prog, act);
+        fallthrough;
+    case XDP_DROP:
+        page = virt_to_head_page(xdp->data);
+        page_pool_put_page(pp, page, sync, true);
+        ret = MVPP2_XDP_DROPPED;
+        break;
+    }
+
+    return ret;
+}
+
 /* Main rx processing */
 static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
                     int rx_todo, struct mvpp2_rx_queue *rxq)
 {
     struct net_device *dev = port->dev;
+    struct bpf_prog *xdp_prog;
+    struct xdp_buff xdp;
     int rx_received;
     int rx_done = 0;
+    u32 xdp_ret = 0;
     u32 rcvd_pkts = 0;
     u32 rcvd_bytes = 0;
 
+    rcu_read_lock();
+
+    xdp_prog = READ_ONCE(port->xdp_prog);
+
     /* Get number of received packets and clamp the to-do */
     rx_received = mvpp2_rxq_received(port, rxq->id);
     if (rx_todo > rx_received)
@@ -3061,7 +3116,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
         dma_addr_t dma_addr;
         phys_addr_t phys_addr;
         u32 rx_status;
-        int pool, rx_bytes, err;
+        int pool, rx_bytes, err, ret;
         void *data;
 
         rx_done++;
@@ -3097,6 +3152,33 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
         else
             frag_size = bm_pool->frag_size;
 
+        if (xdp_prog) {
+            xdp.data_hard_start = data;
+            xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
+            xdp.data_end = xdp.data + rx_bytes;
+            xdp.frame_sz = PAGE_SIZE;
+
+            if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
+                xdp.rxq = &rxq->xdp_rxq_short;
+            else
+                xdp.rxq = &rxq->xdp_rxq_long;
+
+            xdp_set_data_meta_invalid(&xdp);
+
+            ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp);
+
+            if (ret) {
+                xdp_ret |= ret;
+                err = mvpp2_rx_refill(port, bm_pool, pp, pool);
+                if (err) {
+                    netdev_err(port->dev, "failed to refill BM pools\n");
+                    goto err_drop_frame;
+                }
+
+                continue;
+            }
+        }
+
         skb = build_skb(data, frag_size);
         if (!skb) {
             netdev_warn(port->dev, "skb build failed\n");
@@ -3119,7 +3201,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
         rcvd_pkts++;
         rcvd_bytes += rx_bytes;
 
-        skb_reserve(skb, MVPP2_MH_SIZE + NET_SKB_PAD);
+        skb_reserve(skb, MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM);
         skb_put(skb, rx_bytes);
         skb->protocol = eth_type_trans(skb, dev);
         mvpp2_rx_csum(port, rx_status, skb);
@@ -3134,6 +3216,8 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
         mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
     }
 
+    rcu_read_unlock();
+
     if (rcvd_pkts) {
         struct mvpp2_pcpu_stats *stats = this_cpu_ptr(port->stats);
 
@@ -3609,6 +3693,8 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
     }
 
     netif_tx_start_all_queues(port->dev);
+
+    clear_bit(0, &port->state);
 }
 
 /* Set hw internals when stopping port */
@@ -3616,6 +3702,8 @@ static void mvpp2_stop_dev(struct mvpp2_port *port)
 {
     int i;
 
+    set_bit(0, &port->state);
+
     /* Disable interrupts on all threads */
     mvpp2_interrupts_disable(port);
 
@@ -4022,6 +4110,10 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu)
     }
 
     if (MVPP2_RX_PKT_SIZE(mtu) > MVPP2_BM_LONG_PKT_SIZE) {
+        if (port->xdp_prog) {
+            netdev_err(dev, "Jumbo frames are not supported with XDP\n");
+            return -EINVAL;
+        }
         if (priv->percpu_pools) {
             netdev_warn(dev, "mtu %d too high, switching to shared buffers", mtu);
             mvpp2_bm_switch_buffers(priv, false);
@@ -4160,6 +4252,72 @@ static int mvpp2_set_features(struct net_device *dev,
     return 0;
 }
 
+static int mvpp2_xdp_setup(struct mvpp2_port *port, struct netdev_bpf *bpf)
+{
+    struct bpf_prog *prog = bpf->prog, *old_prog;
+    bool running = netif_running(port->dev);
+    bool reset = !prog != !port->xdp_prog;
+
+    if (port->dev->mtu > ETH_DATA_LEN) {
+        NL_SET_ERR_MSG_MOD(bpf->extack, "XDP is not supported with jumbo frames enabled");
+        return -EOPNOTSUPP;
+    }
+
+    if (!port->priv->percpu_pools) {
+        NL_SET_ERR_MSG_MOD(bpf->extack, "Per CPU Pools required for XDP");
+        return -EOPNOTSUPP;
+    }
+
+    /* device is up and bpf is added/removed, must setup the RX queues */
+    if (running && reset) {
+        mvpp2_stop_dev(port);
+        mvpp2_cleanup_rxqs(port);
+        mvpp2_cleanup_txqs(port);
+    }
+
+    old_prog = xchg(&port->xdp_prog, prog);
+    if (old_prog)
+        bpf_prog_put(old_prog);
+
+    /* bpf is just replaced, RXQ and MTU are already setup */
+    if (!reset)
+        return 0;
+
+    /* device was up, restore the link */
+    if (running) {
+        int ret = mvpp2_setup_rxqs(port);
+
+        if (ret) {
+            netdev_err(port->dev, "mvpp2_setup_rxqs failed\n");
+            return ret;
+        }
+        ret = mvpp2_setup_txqs(port);
+        if (ret) {
+            netdev_err(port->dev, "mvpp2_setup_txqs failed\n");
+            return ret;
+        }
+
+        mvpp2_start_dev(port);
+    }
+
+    return 0;
+}
+
+static int mvpp2_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+{
+    struct mvpp2_port *port = netdev_priv(dev);
+
+    switch (xdp->command) {
+    case XDP_SETUP_PROG:
+        return mvpp2_xdp_setup(port, xdp);
+    case XDP_QUERY_PROG:
+        xdp->prog_id = port->xdp_prog ? port->xdp_prog->aux->id : 0;
+        return 0;
+    default:
+        return -EINVAL;
+    }
+}
+
 /* Ethtool methods */
 
 static int mvpp2_ethtool_nway_reset(struct net_device *dev)
@@ -4510,6 +4668,7 @@ static const struct net_device_ops mvpp2_netdev_ops = {
     .ndo_vlan_rx_add_vid	= mvpp2_vlan_rx_add_vid,
     .ndo_vlan_rx_kill_vid	= mvpp2_vlan_rx_kill_vid,
     .ndo_set_features	= mvpp2_set_features,
+    .ndo_bpf		= mvpp2_xdp,
 };
 
 static const struct ethtool_ops mvpp2_eth_tool_ops = {
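To try the new data path, any minimal XDP program will do; its verdicts map
onto the MVPP2_XDP_* return codes added above. The sample below is
illustrative and not part of the series (it drops IPv4 frames and passes
everything else):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_ipv4(struct xdp_md *ctx)
{
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    struct ethhdr *eth = data;

    /* verifier-mandated bounds check before touching the header */
    if ((void *)(eth + 1) > data_end)
        return XDP_ABORTED;

    if (eth->h_proto == bpf_htons(ETH_P_IP))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Attach it in native mode with, e.g. (interface and object names are
placeholders):

    ip -force link set dev eth0 xdpdrv obj drop_ipv4.o sec xdp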
Miller" , Jesper Dangaard Brouer , Stefan Chulski , Marcin Wojtas , maxime.chevallier@bootlin.com, antoine.tenart@bootlin.com, thomas.petazzoni@bootlin.com, Maciej Fijalkowski , Russell King Subject: [PATCH net-next v2 4/5] mvpp2: XDP TX support Date: Thu, 2 Jul 2020 16:12:43 +0200 Message-Id: <20200702141244.51295-5-mcroce@linux.microsoft.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200702141244.51295-1-mcroce@linux.microsoft.com> References: <20200702141244.51295-1-mcroce@linux.microsoft.com> MIME-Version: 1.0 Sender: bpf-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org From: Matteo Croce Add the transmit part of XDP support, which includes: - support for XDP_TX in mvpp2_xdp() - .ndo_xdp_xmit hook for AF_XDP and XDP_REDIRECT with mvpp2 as destination mvpp2_xdp_submit_frame() is a generic function which is called by mvpp2_xdp_xmit_back() when doing XDP_TX, and by mvpp2_xdp_xmit when doing AF_XDP or XDP_REDIRECT target. The buffer allocation has been reworked to be able to map the buffers as DMA_FROM_DEVICE or DMA_BIDIRECTIONAL depending if native XDP is in use or not. Co-developed-by: Sven Auhagen Signed-off-by: Sven Auhagen Signed-off-by: Matteo Croce --- drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 13 +- .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 304 +++++++++++++++--- 2 files changed, 272 insertions(+), 45 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h index f351e41c9da6..c52955b33fab 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h @@ -1082,9 +1082,20 @@ struct mvpp2_rx_desc { }; }; +enum mvpp2_tx_buf_type { + MVPP2_TYPE_SKB, + MVPP2_TYPE_XDP_TX, + MVPP2_TYPE_XDP_NDO, +}; + struct mvpp2_txq_pcpu_buf { + enum mvpp2_tx_buf_type type; + /* Transmitted SKB */ - struct sk_buff *skb; + union { + struct xdp_frame *xdpf; + struct sk_buff *skb; + }; /* Physical address of transmitted buffer */ dma_addr_t dma; diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 5f740c96a02c..d49a814311be 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -97,7 +97,8 @@ static inline u32 mvpp2_cpu_to_thread(struct mvpp2 *priv, int cpu) } static struct page_pool * -mvpp2_create_page_pool(struct device *dev, int num, int len) +mvpp2_create_page_pool(struct device *dev, int num, int len, + enum dma_data_direction dma_dir) { struct page_pool_params pp_params = { /* internal DMA mapping in page_pool */ @@ -105,7 +106,7 @@ mvpp2_create_page_pool(struct device *dev, int num, int len) .pool_size = num, .nid = NUMA_NO_NODE, .dev = dev, - .dma_dir = DMA_FROM_DEVICE, + .dma_dir = dma_dir, .offset = MVPP2_SKB_HEADROOM, .max_len = len, }; @@ -299,12 +300,17 @@ static void mvpp2_txq_inc_get(struct mvpp2_txq_pcpu *txq_pcpu) static void mvpp2_txq_inc_put(struct mvpp2_port *port, struct mvpp2_txq_pcpu *txq_pcpu, - struct sk_buff *skb, - struct mvpp2_tx_desc *tx_desc) + void *data, + struct mvpp2_tx_desc *tx_desc, + enum mvpp2_tx_buf_type buf_type) { struct mvpp2_txq_pcpu_buf *tx_buf = txq_pcpu->buffs + txq_pcpu->txq_put_index; - tx_buf->skb = skb; + tx_buf->type = buf_type; + if (buf_type == MVPP2_TYPE_SKB) + tx_buf->skb = data; + else + tx_buf->xdpf = data; tx_buf->size = mvpp2_txdesc_size_get(port, tx_desc); tx_buf->dma = mvpp2_txdesc_dma_addr_get(port, tx_desc) + mvpp2_txdesc_offset_get(port, tx_desc); @@ -527,9 +533,6 @@ static 
int mvpp2_bm_pool_destroy(struct device *dev, struct mvpp2 *priv, int buf_num; u32 val; - if (priv->percpu_pools) - page_pool_destroy(priv->page_pool[bm_pool->id]); - buf_num = mvpp2_check_hw_buf_num(priv, bm_pool); mvpp2_bm_bufs_free(dev, priv, bm_pool, buf_num); @@ -545,6 +548,9 @@ static int mvpp2_bm_pool_destroy(struct device *dev, struct mvpp2 *priv, val |= MVPP2_BM_STOP_MASK; mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(bm_pool->id), val); + if (priv->percpu_pools) + page_pool_destroy(priv->page_pool[bm_pool->id]); + dma_free_coherent(dev, bm_pool->size_bytes, bm_pool->virt_addr, bm_pool->dma_addr); @@ -580,9 +586,19 @@ static int mvpp2_bm_pools_init(struct device *dev, struct mvpp2 *priv) static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv) { + enum dma_data_direction dma_dir = DMA_FROM_DEVICE; int i, err, poolnum = MVPP2_BM_POOLS_NUM; + struct mvpp2_port *port; if (priv->percpu_pools) { + for (i = 0; i < priv->port_count; i++) { + port = priv->port_list[i]; + if (port->xdp_prog) { + dma_dir = DMA_BIDIRECTIONAL; + break; + } + } + poolnum = mvpp2_get_nrxqs(priv) * 2; for (i = 0; i < poolnum; i++) { /* the pool in use */ @@ -591,7 +607,8 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv) priv->page_pool[i] = mvpp2_create_page_pool(dev, mvpp2_pools[pn].buf_num, - mvpp2_pools[pn].pkt_size); + mvpp2_pools[pn].pkt_size, + dma_dir); if (IS_ERR(priv->page_pool[i])) return PTR_ERR(priv->page_pool[i]); } @@ -2318,11 +2335,15 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port, struct mvpp2_txq_pcpu_buf *tx_buf = txq_pcpu->buffs + txq_pcpu->txq_get_index; - if (!IS_TSO_HEADER(txq_pcpu, tx_buf->dma)) + if (!IS_TSO_HEADER(txq_pcpu, tx_buf->dma) && + tx_buf->type != MVPP2_TYPE_XDP_TX) dma_unmap_single(port->dev->dev.parent, tx_buf->dma, tx_buf->size, DMA_TO_DEVICE); - if (tx_buf->skb) + if (tx_buf->type == MVPP2_TYPE_SKB && tx_buf->skb) dev_kfree_skb_any(tx_buf->skb); + else if (tx_buf->type == MVPP2_TYPE_XDP_TX || + tx_buf->type == MVPP2_TYPE_XDP_NDO) + xdp_return_frame(tx_buf->xdpf); mvpp2_txq_inc_get(txq_pcpu); } @@ -2810,7 +2831,7 @@ static int mvpp2_setup_rxqs(struct mvpp2_port *port) static int mvpp2_setup_txqs(struct mvpp2_port *port) { struct mvpp2_tx_queue *txq; - int queue, err, cpu; + int queue, err; for (queue = 0; queue < port->ntxqs; queue++) { txq = port->txqs[queue]; @@ -2819,8 +2840,8 @@ static int mvpp2_setup_txqs(struct mvpp2_port *port) goto err_cleanup; /* Assign this queue to a CPU */ - cpu = queue % num_present_cpus(); - netif_set_xps_queue(port->dev, cpumask_of(cpu), queue); + if (queue < num_possible_cpus()) + netif_set_xps_queue(port->dev, cpumask_of(queue), queue); } if (port->has_tx_irqs) { @@ -3039,6 +3060,164 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb) return MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE; } +static void mvpp2_xdp_finish_tx(struct mvpp2_port *port, u16 txq_id, int nxmit, int nxmit_byte) +{ + unsigned int thread = mvpp2_cpu_to_thread(port->priv, smp_processor_id()); + struct mvpp2_pcpu_stats *stats = per_cpu_ptr(port->stats, thread); + struct mvpp2_tx_queue *aggr_txq; + struct mvpp2_txq_pcpu *txq_pcpu; + struct mvpp2_tx_queue *txq; + struct netdev_queue *nq; + + txq = port->txqs[txq_id]; + txq_pcpu = per_cpu_ptr(txq->pcpu, thread); + nq = netdev_get_tx_queue(port->dev, txq_id); + aggr_txq = &port->priv->aggr_txqs[thread]; + + txq_pcpu->reserved_num -= nxmit; + txq_pcpu->count += nxmit; + aggr_txq->count += nxmit; + + /* Enable transmit */ + wmb(); + 
mvpp2_aggr_txq_pend_desc_add(port, nxmit); + + if (txq_pcpu->count >= txq_pcpu->stop_threshold) + netif_tx_stop_queue(nq); + + u64_stats_update_begin(&stats->syncp); + stats->tx_bytes += nxmit_byte; + stats->tx_packets += nxmit; + u64_stats_update_end(&stats->syncp); + + /* Finalize TX processing */ + if (!port->has_tx_irqs && txq_pcpu->count >= txq->done_pkts_coal) + mvpp2_txq_done(port, txq, txq_pcpu); +} + +static int +mvpp2_xdp_submit_frame(struct mvpp2_port *port, u16 txq_id, + struct xdp_frame *xdpf, bool dma_map) +{ + unsigned int thread = mvpp2_cpu_to_thread(port->priv, smp_processor_id()); + u32 tx_cmd = MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE | + MVPP2_TXD_F_DESC | MVPP2_TXD_L_DESC; + enum mvpp2_tx_buf_type buf_type; + struct mvpp2_txq_pcpu *txq_pcpu; + struct mvpp2_tx_queue *aggr_txq; + struct mvpp2_tx_desc *tx_desc; + struct mvpp2_tx_queue *txq; + int ret = MVPP2_XDP_TX; + dma_addr_t dma_addr; + + txq = port->txqs[txq_id]; + txq_pcpu = per_cpu_ptr(txq->pcpu, thread); + aggr_txq = &port->priv->aggr_txqs[thread]; + + /* Check number of available descriptors */ + if (mvpp2_aggr_desc_num_check(port, aggr_txq, 1) || + mvpp2_txq_reserved_desc_num_proc(port, txq, txq_pcpu, 1)) { + ret = MVPP2_XDP_DROPPED; + goto out; + } + + /* Get a descriptor for the first part of the packet */ + tx_desc = mvpp2_txq_next_desc_get(aggr_txq); + mvpp2_txdesc_txq_set(port, tx_desc, txq->id); + mvpp2_txdesc_size_set(port, tx_desc, xdpf->len); + + if (dma_map) { + /* XDP_REDIRECT or AF_XDP */ + dma_addr = dma_map_single(port->dev->dev.parent, xdpf->data, + xdpf->len, DMA_TO_DEVICE); + + if (unlikely(dma_mapping_error(port->dev->dev.parent, dma_addr))) { + mvpp2_txq_desc_put(txq); + ret = MVPP2_XDP_DROPPED; + goto out; + } + + buf_type = MVPP2_TYPE_XDP_NDO; + } else { + /* XDP_TX */ + struct page *page = virt_to_page(xdpf->data); + + dma_addr = page_pool_get_dma_addr(page) + + sizeof(*xdpf) + xdpf->headroom; + dma_sync_single_for_device(port->dev->dev.parent, dma_addr, + xdpf->len, DMA_BIDIRECTIONAL); + + buf_type = MVPP2_TYPE_XDP_TX; + } + + mvpp2_txdesc_dma_addr_set(port, tx_desc, dma_addr); + + mvpp2_txdesc_cmd_set(port, tx_desc, tx_cmd); + mvpp2_txq_inc_put(port, txq_pcpu, xdpf, tx_desc, buf_type); + +out: + return ret; +} + +static int +mvpp2_xdp_xmit_back(struct mvpp2_port *port, struct xdp_buff *xdp) +{ + struct xdp_frame *xdpf; + u16 txq_id; + int ret; + + xdpf = xdp_convert_buff_to_frame(xdp); + if (unlikely(!xdpf)) + return MVPP2_XDP_DROPPED; + + /* The first of the TX queues are used for XPS, + * the second half for XDP_TX + */ + txq_id = mvpp2_cpu_to_thread(port->priv, smp_processor_id()) + (port->ntxqs / 2); + + ret = mvpp2_xdp_submit_frame(port, txq_id, xdpf, false); + if (ret == MVPP2_XDP_TX) + mvpp2_xdp_finish_tx(port, txq_id, 1, xdpf->len); + + return ret; +} + +static int +mvpp2_xdp_xmit(struct net_device *dev, int num_frame, + struct xdp_frame **frames, u32 flags) +{ + struct mvpp2_port *port = netdev_priv(dev); + int i, nxmit_byte = 0, nxmit = num_frame; + u16 txq_id; + u32 ret; + + if (unlikely(test_bit(0, &port->state))) + return -ENETDOWN; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + /* The first of the TX queues are used for XPS, + * the second half for XDP_TX + */ + txq_id = mvpp2_cpu_to_thread(port->priv, smp_processor_id()) + (port->ntxqs / 2); + + for (i = 0; i < num_frame; i++) { + ret = mvpp2_xdp_submit_frame(port, txq_id, frames[i], true); + if (ret == MVPP2_XDP_TX) { + nxmit_byte += frames[i]->len; + } else { + 
xdp_return_frame_rx_napi(frames[i]); + nxmit--; + } + } + + if (nxmit > 0) + mvpp2_xdp_finish_tx(port, txq_id, nxmit, nxmit_byte); + + return nxmit; +} + static int mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq, struct bpf_prog *prog, struct xdp_buff *xdp, @@ -3069,6 +3248,13 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq, ret = MVPP2_XDP_REDIR; } break; + case XDP_TX: + ret = mvpp2_xdp_xmit_back(port, xdp); + if (ret != MVPP2_XDP_TX) { + page = virt_to_head_page(xdp->data); + page_pool_put_page(pp, page, sync, true); + } + break; default: bpf_warn_invalid_xdp_action(act); fallthrough; @@ -3090,6 +3276,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, int rx_todo, struct mvpp2_rx_queue *rxq) { struct net_device *dev = port->dev; + enum dma_data_direction dma_dir; struct bpf_prog *xdp_prog; struct xdp_buff xdp; int rx_received; @@ -3139,13 +3326,19 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, if (rx_status & MVPP2_RXD_ERR_SUMMARY) goto err_drop_frame; + if (port->priv->percpu_pools) { + pp = port->priv->page_pool[pool]; + dma_dir = page_pool_get_dma_dir(pp); + } else { + dma_dir = DMA_FROM_DEVICE; + } + dma_sync_single_for_cpu(dev->dev.parent, dma_addr, rx_bytes + MVPP2_MH_SIZE, - DMA_FROM_DEVICE); - prefetch(data); + dma_dir); - if (port->priv->percpu_pools) - pp = port->priv->page_pool[pool]; + /* Prefetch header */ + prefetch(data); if (bm_pool->frag_size > PAGE_SIZE) frag_size = 0; @@ -3218,6 +3411,9 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, rcu_read_unlock(); + if (xdp_ret & MVPP2_XDP_REDIR) + xdp_do_flush_map(); + if (rcvd_pkts) { struct mvpp2_pcpu_stats *stats = this_cpu_ptr(port->stats); @@ -3284,11 +3480,11 @@ static int mvpp2_tx_frag_process(struct mvpp2_port *port, struct sk_buff *skb, /* Last descriptor */ mvpp2_txdesc_cmd_set(port, tx_desc, MVPP2_TXD_L_DESC); - mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc, MVPP2_TYPE_SKB); } else { /* Descriptor in the middle: Not First, Not Last */ mvpp2_txdesc_cmd_set(port, tx_desc, 0); - mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc, MVPP2_TYPE_SKB); } } @@ -3326,7 +3522,7 @@ static inline void mvpp2_tso_put_hdr(struct sk_buff *skb, mvpp2_txdesc_cmd_set(port, tx_desc, mvpp2_skb_tx_csum(port, skb) | MVPP2_TXD_F_DESC | MVPP2_TXD_PADDING_DISABLE); - mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc, MVPP2_TYPE_SKB); } static inline int mvpp2_tso_put_data(struct sk_buff *skb, @@ -3355,14 +3551,14 @@ static inline int mvpp2_tso_put_data(struct sk_buff *skb, if (!left) { mvpp2_txdesc_cmd_set(port, tx_desc, MVPP2_TXD_L_DESC); if (last) { - mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc, MVPP2_TYPE_SKB); return 0; } } else { mvpp2_txdesc_cmd_set(port, tx_desc, 0); } - mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc, MVPP2_TYPE_SKB); return 0; } @@ -3475,12 +3671,12 @@ static netdev_tx_t mvpp2_tx(struct sk_buff *skb, struct net_device *dev) /* First and Last descriptor */ tx_cmd |= MVPP2_TXD_F_DESC | MVPP2_TXD_L_DESC; mvpp2_txdesc_cmd_set(port, tx_desc, tx_cmd); - mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, skb, tx_desc, MVPP2_TYPE_SKB); } else { /* First but not Last */ tx_cmd |= MVPP2_TXD_F_DESC | MVPP2_TXD_PADDING_DISABLE; 
mvpp2_txdesc_cmd_set(port, tx_desc, tx_cmd); - mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc); + mvpp2_txq_inc_put(port, txq_pcpu, NULL, tx_desc, MVPP2_TYPE_SKB); /* Continue with other skb fragments */ if (mvpp2_tx_frag_process(port, skb, aggr_txq, txq)) { @@ -4159,6 +4355,33 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu) return err; } +static int mvpp2_check_pagepool_dma(struct mvpp2_port *port) +{ + enum dma_data_direction dma_dir = DMA_FROM_DEVICE; + struct mvpp2 *priv = port->priv; + int err = -1, i; + + if (!priv->percpu_pools) + return err; + + if (!priv->page_pool) + return -ENOMEM; + + for (i = 0; i < priv->port_count; i++) { + port = priv->port_list[i]; + if (port->xdp_prog) { + dma_dir = DMA_BIDIRECTIONAL; + break; + } + } + + /* All pools are equal in terms of DMA direction */ + if (priv->page_pool[0]->p.dma_dir != dma_dir) + err = mvpp2_bm_switch_buffers(priv, true); + + return err; +} + static void mvpp2_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats) { @@ -4268,13 +4491,15 @@ static int mvpp2_xdp_setup(struct mvpp2_port *port, struct netdev_bpf *bpf) return -EOPNOTSUPP; } - /* device is up and bpf is added/removed, must setup the RX queues */ - if (running && reset) { - mvpp2_stop_dev(port); - mvpp2_cleanup_rxqs(port); - mvpp2_cleanup_txqs(port); + if (port->ntxqs < num_possible_cpus() * 2) { + NL_SET_ERR_MSG_MOD(bpf->extack, "XDP_TX needs two TX queues per CPU"); + return -EOPNOTSUPP; } + /* device is up and bpf is added/removed, must setup the RX queues */ + if (running && reset) + mvpp2_stop(port->dev); + old_prog = xchg(&port->xdp_prog, prog); if (old_prog) bpf_prog_put(old_prog); @@ -4284,21 +4509,11 @@ static int mvpp2_xdp_setup(struct mvpp2_port *port, struct netdev_bpf *bpf) return 0; /* device was up, restore the link */ - if (running) { - int ret = mvpp2_setup_rxqs(port); - - if (ret) { - netdev_err(port->dev, "mvpp2_setup_rxqs failed\n"); - return ret; - } - ret = mvpp2_setup_txqs(port); - if (ret) { - netdev_err(port->dev, "mvpp2_setup_txqs failed\n"); - return ret; - } + if (running) + mvpp2_open(port->dev); - mvpp2_start_dev(port); - } + /* Check Page Pool DMA Direction */ + mvpp2_check_pagepool_dma(port); return 0; } @@ -4669,6 +4884,7 @@ static const struct net_device_ops mvpp2_netdev_ops = { .ndo_vlan_rx_kill_vid = mvpp2_vlan_rx_kill_vid, .ndo_set_features = mvpp2_set_features, .ndo_bpf = mvpp2_xdp, + .ndo_xdp_xmit = mvpp2_xdp_xmit, }; static const struct ethtool_ops mvpp2_eth_tool_ops = { From patchwork Thu Jul 2 14:12:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matteo Croce X-Patchwork-Id: 1321512 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=linux.microsoft.com Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49yKrC3kRYz9sDX for ; Fri, 3 Jul 2020 00:15:39 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729474AbgGBOPi (ORCPT ); Thu, 2 Jul 2020 10:15:38 -0400 Received: from mail-ej1-f66.google.com ([209.85.218.66]:36653 "EHLO mail-ej1-f66.google.com" 
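The TX queue split this patch relies on is worth spelling out. A tiny
illustrative sketch (demo_xdp_txq() is not a driver function):

/* With ntxqs = 2 * nr_cpus, CPU n transmits sk_buffs on queue n
 * (steered via XPS) and XDP frames on queue n + ntxqs / 2, so the
 * skb and XDP paths never share a hardware queue on the same CPU.
 */
static inline unsigned int demo_xdp_txq(unsigned int cpu, unsigned int ntxqs)
{
    return cpu + ntxqs / 2;    /* second half is reserved for XDP_TX */
}

This is also why mvpp2_xdp_setup() now rejects programs when fewer than
two TX queues per CPU are available.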
From patchwork Thu Jul 2 14:12:44 2020
X-Patchwork-Id: 1321512
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Sven Auhagen,
    Lorenzo Bianconi, "David S. Miller", Jesper Dangaard Brouer,
    Stefan Chulski, Marcin Wojtas, maxime.chevallier@bootlin.com,
    antoine.tenart@bootlin.com, thomas.petazzoni@bootlin.com,
    Maciej Fijalkowski, Russell King
Subject: [PATCH net-next v2 5/5] mvpp2: xdp ethtool stats
Date: Thu, 2 Jul 2020 16:12:44 +0200
Message-Id: <20200702141244.51295-6-mcroce@linux.microsoft.com>
In-Reply-To: <20200702141244.51295-1-mcroce@linux.microsoft.com>
References: <20200702141244.51295-1-mcroce@linux.microsoft.com>

From: Sven Auhagen

Add ethtool statistics for XDP.
Signed-off-by: Sven Auhagen --- drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 8 + .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 158 ++++++++++++++++-- 2 files changed, 148 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h index c52955b33fab..ebec47087b27 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h @@ -846,6 +846,14 @@ struct mvpp2_pcpu_stats { u64 rx_bytes; u64 tx_packets; u64 tx_bytes; + /* XDP */ + u64 xdp_redirect; + u64 xdp_pass; + u64 xdp_drop; + u64 xdp_xmit; + u64 xdp_xmit_err; + u64 xdp_tx; + u64 xdp_tx_err; }; /* Per-CPU port control */ diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index d49a814311be..d8e238ed533e 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -1485,6 +1485,16 @@ static void mvpp2_port_loopback_set(struct mvpp2_port *port, writel(val, port->base + MVPP2_GMAC_CTRL_1_REG); } +enum { + ETHTOOL_XDP_REDIRECT, + ETHTOOL_XDP_PASS, + ETHTOOL_XDP_DROP, + ETHTOOL_XDP_TX, + ETHTOOL_XDP_TX_ERR, + ETHTOOL_XDP_XMIT, + ETHTOOL_XDP_XMIT_ERR, +}; + struct mvpp2_ethtool_counter { unsigned int offset; const char string[ETH_GSTRING_LEN]; @@ -1577,10 +1587,21 @@ static const struct mvpp2_ethtool_counter mvpp2_ethtool_rxq_regs[] = { { MVPP2_RX_PKTS_BM_DROP_CTR, "rxq_%d_packets_bm_drops" }, }; +static const struct mvpp2_ethtool_counter mvpp2_ethtool_xdp[] = { + { ETHTOOL_XDP_REDIRECT, "rx_xdp_redirect", }, + { ETHTOOL_XDP_PASS, "rx_xdp_pass", }, + { ETHTOOL_XDP_DROP, "rx_xdp_drop", }, + { ETHTOOL_XDP_TX, "rx_xdp_tx", }, + { ETHTOOL_XDP_TX_ERR, "rx_xdp_tx_errors", }, + { ETHTOOL_XDP_XMIT, "tx_xdp_xmit", }, + { ETHTOOL_XDP_XMIT_ERR, "tx_xdp_xmit_errors", }, +}; + #define MVPP2_N_ETHTOOL_STATS(ntxqs, nrxqs) (ARRAY_SIZE(mvpp2_ethtool_mib_regs) + \ ARRAY_SIZE(mvpp2_ethtool_port_regs) + \ (ARRAY_SIZE(mvpp2_ethtool_txq_regs) * (ntxqs)) + \ - (ARRAY_SIZE(mvpp2_ethtool_rxq_regs) * (nrxqs))) + (ARRAY_SIZE(mvpp2_ethtool_rxq_regs) * (nrxqs)) + \ + ARRAY_SIZE(mvpp2_ethtool_xdp)) static void mvpp2_ethtool_get_strings(struct net_device *netdev, u32 sset, u8 *data) @@ -1619,10 +1640,57 @@ static void mvpp2_ethtool_get_strings(struct net_device *netdev, u32 sset, data += ETH_GSTRING_LEN; } } + + for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_xdp); i++) { + strscpy(data, mvpp2_ethtool_xdp[i].string, + ETH_GSTRING_LEN); + data += ETH_GSTRING_LEN; + } +} + +static void +mvpp2_get_xdp_stats(struct mvpp2_port *port, struct mvpp2_pcpu_stats *xdp_stats) +{ + unsigned int start; + unsigned int cpu; + + /* Gather XDP Statistics */ + for_each_possible_cpu(cpu) { + struct mvpp2_pcpu_stats *cpu_stats; + u64 xdp_redirect; + u64 xdp_pass; + u64 xdp_drop; + u64 xdp_xmit; + u64 xdp_xmit_err; + u64 xdp_tx; + u64 xdp_tx_err; + + cpu_stats = per_cpu_ptr(port->stats, cpu); + do { + start = u64_stats_fetch_begin_irq(&cpu_stats->syncp); + xdp_redirect = cpu_stats->xdp_redirect; + xdp_pass = cpu_stats->xdp_pass; + xdp_drop = cpu_stats->xdp_drop; + xdp_xmit = cpu_stats->xdp_xmit; + xdp_xmit_err = cpu_stats->xdp_xmit_err; + xdp_tx = cpu_stats->xdp_tx; + xdp_tx_err = cpu_stats->xdp_tx_err; + } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start)); + + xdp_stats->xdp_redirect += xdp_redirect; + xdp_stats->xdp_pass += xdp_pass; + xdp_stats->xdp_drop += xdp_drop; + xdp_stats->xdp_xmit += xdp_xmit; + xdp_stats->xdp_xmit_err += xdp_xmit_err; + xdp_stats->xdp_tx 
+		xdp_stats->xdp_tx_err += xdp_tx_err;
+	}
 }
 
 static void mvpp2_read_stats(struct mvpp2_port *port)
 {
+	struct mvpp2_pcpu_stats xdp_stats = {};
+	const struct mvpp2_ethtool_counter *s;
 	u64 *pstats;
 	int i, q;
 
@@ -1650,6 +1718,37 @@ static void mvpp2_read_stats(struct mvpp2_port *port)
 			*pstats++ += mvpp2_read_index(port->priv,
 						      port->first_rxq + q,
 						      mvpp2_ethtool_rxq_regs[i].offset);
+
+	/* Gather XDP Statistics */
+	mvpp2_get_xdp_stats(port, &xdp_stats);
+
+	for (i = 0, s = mvpp2_ethtool_xdp;
+	     s < mvpp2_ethtool_xdp + ARRAY_SIZE(mvpp2_ethtool_xdp);
+	     s++, i++) {
+		switch (s->offset) {
+		case ETHTOOL_XDP_REDIRECT:
+			*pstats++ = xdp_stats.xdp_redirect;
+			break;
+		case ETHTOOL_XDP_PASS:
+			*pstats++ = xdp_stats.xdp_pass;
+			break;
+		case ETHTOOL_XDP_DROP:
+			*pstats++ = xdp_stats.xdp_drop;
+			break;
+		case ETHTOOL_XDP_TX:
+			*pstats++ = xdp_stats.xdp_tx;
+			break;
+		case ETHTOOL_XDP_TX_ERR:
+			*pstats++ = xdp_stats.xdp_tx_err;
+			break;
+		case ETHTOOL_XDP_XMIT:
+			*pstats++ = xdp_stats.xdp_xmit;
+			break;
+		case ETHTOOL_XDP_XMIT_ERR:
+			*pstats++ = xdp_stats.xdp_xmit_err;
+			break;
+		}
+	}
 }
 
 static void mvpp2_gather_hw_statistics(struct work_struct *work)
@@ -3063,7 +3162,6 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
 static void mvpp2_xdp_finish_tx(struct mvpp2_port *port, u16 txq_id, int nxmit, int nxmit_byte)
 {
 	unsigned int thread = mvpp2_cpu_to_thread(port->priv, smp_processor_id());
-	struct mvpp2_pcpu_stats *stats = per_cpu_ptr(port->stats, thread);
 	struct mvpp2_tx_queue *aggr_txq;
 	struct mvpp2_txq_pcpu *txq_pcpu;
 	struct mvpp2_tx_queue *txq;
@@ -3085,11 +3183,6 @@ static void mvpp2_xdp_finish_tx(struct mvpp2_port *port, u16 txq_id, int nxmit,
 	if (txq_pcpu->count >= txq_pcpu->stop_threshold)
 		netif_tx_stop_queue(nq);
 
-	u64_stats_update_begin(&stats->syncp);
-	stats->tx_bytes += nxmit_byte;
-	stats->tx_packets += nxmit;
-	u64_stats_update_end(&stats->syncp);
-
 	/* Finalize TX processing */
 	if (!port->has_tx_irqs && txq_pcpu->count >= txq->done_pkts_coal)
 		mvpp2_txq_done(port, txq, txq_pcpu);
@@ -3162,6 +3255,7 @@ mvpp2_xdp_submit_frame(struct mvpp2_port *port, u16 txq_id,
 static int
 mvpp2_xdp_xmit_back(struct mvpp2_port *port, struct xdp_buff *xdp)
 {
+	struct mvpp2_pcpu_stats *stats = this_cpu_ptr(port->stats);
 	struct xdp_frame *xdpf;
 	u16 txq_id;
 	int ret;
@@ -3176,8 +3270,19 @@ mvpp2_xdp_xmit_back(struct mvpp2_port *port, struct xdp_buff *xdp)
 	txq_id = mvpp2_cpu_to_thread(port->priv, smp_processor_id()) + (port->ntxqs / 2);
 
 	ret = mvpp2_xdp_submit_frame(port, txq_id, xdpf, false);
-	if (ret == MVPP2_XDP_TX)
+	if (ret == MVPP2_XDP_TX) {
+		u64_stats_update_begin(&stats->syncp);
+		stats->tx_bytes += xdpf->len;
+		stats->tx_packets++;
+		stats->xdp_tx++;
+		u64_stats_update_end(&stats->syncp);
+
 		mvpp2_xdp_finish_tx(port, txq_id, 1, xdpf->len);
+	} else {
+		u64_stats_update_begin(&stats->syncp);
+		stats->xdp_tx_err++;
+		u64_stats_update_end(&stats->syncp);
+	}
 
 	return ret;
 }
@@ -3188,6 +3293,7 @@ mvpp2_xdp_xmit(struct net_device *dev, int num_frame,
 {
 	struct mvpp2_port *port = netdev_priv(dev);
 	int i, nxmit_byte = 0, nxmit = num_frame;
+	struct mvpp2_pcpu_stats *stats;
 	u16 txq_id;
 	u32 ret;
 
@@ -3212,16 +3318,24 @@ mvpp2_xdp_xmit(struct net_device *dev, int num_frame,
 		}
 	}
 
-	if (nxmit > 0)
+	if (likely(nxmit > 0))
 		mvpp2_xdp_finish_tx(port, txq_id, nxmit, nxmit_byte);
 
+	stats = this_cpu_ptr(port->stats);
+	u64_stats_update_begin(&stats->syncp);
+	stats->tx_bytes += nxmit_byte;
+	stats->tx_packets += nxmit;
+	stats->xdp_xmit += nxmit;
+	stats->xdp_xmit_err += num_frame - nxmit;
+	u64_stats_update_end(&stats->syncp);
+
 	return nxmit;
 }
 
 static int
 mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
 	      struct bpf_prog *prog, struct xdp_buff *xdp,
-	      struct page_pool *pp)
+	      struct page_pool *pp, struct mvpp2_pcpu_stats *stats)
 {
 	unsigned int len, sync, err;
 	struct page *page;
@@ -3236,6 +3350,7 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
 
 	switch (act) {
 	case XDP_PASS:
+		stats->xdp_pass++;
 		ret = MVPP2_XDP_PASS;
 		break;
 	case XDP_REDIRECT:
@@ -3246,6 +3361,7 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
 			page_pool_put_page(pp, page, sync, true);
 		} else {
 			ret = MVPP2_XDP_REDIR;
+			stats->xdp_redirect++;
 		}
 		break;
 	case XDP_TX:
@@ -3265,6 +3381,7 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
 		page = virt_to_head_page(xdp->data);
 		page_pool_put_page(pp, page, sync, true);
 		ret = MVPP2_XDP_DROPPED;
+		stats->xdp_drop++;
 		break;
 	}
 
@@ -3276,14 +3393,13 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		    int rx_todo, struct mvpp2_rx_queue *rxq)
 {
 	struct net_device *dev = port->dev;
+	struct mvpp2_pcpu_stats ps = {};
 	enum dma_data_direction dma_dir;
 	struct bpf_prog *xdp_prog;
 	struct xdp_buff xdp;
 	int rx_received;
 	int rx_done = 0;
 	u32 xdp_ret = 0;
-	u32 rcvd_pkts = 0;
-	u32 rcvd_bytes = 0;
 
 	rcu_read_lock();
 
@@ -3358,7 +3474,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 
 			xdp_set_data_meta_invalid(&xdp);
 
-			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp);
+			ret = mvpp2_run_xdp(port, rxq, xdp_prog, &xdp, pp, &ps);
 
 			if (ret) {
 				xdp_ret |= ret;
@@ -3368,6 +3484,8 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 					goto err_drop_frame;
 				}
 
+				ps.rx_packets++;
+				ps.rx_bytes += rx_bytes;
 				continue;
 			}
 		}
@@ -3391,8 +3509,8 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 					bm_pool->buf_size, DMA_FROM_DEVICE,
 					DMA_ATTR_SKIP_CPU_SYNC);
 
-		rcvd_pkts++;
-		rcvd_bytes += rx_bytes;
+		ps.rx_packets++;
+		ps.rx_bytes += rx_bytes;
 
 		skb_reserve(skb, MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM);
 		skb_put(skb, rx_bytes);
@@ -3414,12 +3532,16 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 	if (xdp_ret & MVPP2_XDP_REDIR)
 		xdp_do_flush_map();
 
-	if (rcvd_pkts) {
+	if (ps.rx_packets) {
 		struct mvpp2_pcpu_stats *stats = this_cpu_ptr(port->stats);
 
 		u64_stats_update_begin(&stats->syncp);
-		stats->rx_packets += rcvd_pkts;
-		stats->rx_bytes += rcvd_bytes;
+		stats->rx_packets += ps.rx_packets;
+		stats->rx_bytes += ps.rx_bytes;
+		/* xdp */
+		stats->xdp_redirect += ps.xdp_redirect;
+		stats->xdp_pass += ps.xdp_pass;
+		stats->xdp_drop += ps.xdp_drop;
 		u64_stats_update_end(&stats->syncp);
 	}
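
Two notes for reviewers. mvpp2_get_xdp_stats() snapshots each CPU's counters inside a
u64_stats_fetch_begin_irq()/u64_stats_fetch_retry_irq() loop and only folds the local
copies into the sum once the snapshot is consistent, so the 64-bit values cannot be
torn by a concurrent writer on 32-bit hosts. The counters themselves are reported
through the regular ethtool stats interface, after the per-queue register counters; a
minimal smoke test might look like the sketch below, where the interface name and the
values are illustrative and only the counter names are taken from mvpp2_ethtool_xdp[]
above:

  # ethtool -S eth0 | grep xdp
       rx_xdp_redirect: 0
       rx_xdp_pass: 1024
       rx_xdp_drop: 2
       rx_xdp_tx: 0
       rx_xdp_tx_errors: 0
       tx_xdp_xmit: 512
       tx_xdp_xmit_errors: 0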