From patchwork Tue Apr 18 15:56:14 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1770337
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/4] net: mana: Use napi_build_skb in RX path
Date: Tue, 18 Apr 2023 09:56:14 -0600
Message-Id: <20230418155617.153531-2-tim.gardner@canonical.com>
In-Reply-To: <20230418155617.153531-1-tim.gardner@canonical.com>
References: <20230418155617.153531-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2016898

Use napi_build_skb() instead of build_skb() to take advantage of the
NAPI percpu caches to obtain skbuff_head.

Signed-off-by: Haiyang Zhang
Reviewed-by: Jesse Brandeburg
Signed-off-by: David S. Miller
(cherry picked from commit ce518bc3e9ca342309995c9270c3ec4892963695 linux-next)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 55bf40e5ee71..a1b7905ed2f7 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1188,7 +1188,7 @@ static void mana_post_pkt_rxq(struct mana_rxq *rxq)
 static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
 				      struct xdp_buff *xdp)
 {
-	struct sk_buff *skb = build_skb(buf_va, PAGE_SIZE);
+	struct sk_buff *skb = napi_build_skb(buf_va, PAGE_SIZE);
 
 	if (!skb)
 		return NULL;
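[Editor's note] build_skb() and napi_build_skb() have the same contract: wrap an
already-allocated, already-DMA-mapped buffer in an sk_buff without copying the
payload. The napi_ variant additionally takes the struct sk_buff head from the
NAPI per-CPU cache rather than the slab fast path, and is therefore only safe
from NAPI/softirq context, which this RX path satisfies. A minimal sketch of
the pattern; the function and parameter names here are hypothetical, not part
of the driver:

#include <linux/skbuff.h>
#include <linux/bpf.h>		/* XDP_PACKET_HEADROOM */

static struct sk_buff *demo_wrap_rx_buf(void *buf_va, unsigned int buf_size,
					unsigned int pkt_len, bool in_napi)
{
	struct sk_buff *skb;

	/* buf_size covers headroom + payload + skb_shared_info tailroom */
	skb = in_napi ? napi_build_skb(buf_va, buf_size)
		      : build_skb(buf_va, buf_size);
	if (!skb)
		return NULL;

	skb_reserve(skb, XDP_PACKET_HEADROOM);	/* skip the headroom */
	skb_put(skb, pkt_len);			/* mark the payload length */
	return skb;
}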
From patchwork Tue Apr 18 15:56:15 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1770339
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 2/4] net: mana: Refactor RX buffer allocation code to prepare for various MTU
Date: Tue, 18 Apr 2023 09:56:15 -0600
Message-Id: <20230418155617.153531-3-tim.gardner@canonical.com>
In-Reply-To: <20230418155617.153531-1-tim.gardner@canonical.com>
References: <20230418155617.153531-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2016898

Move out common buffer allocation code from mana_process_rx_cqe() and
mana_alloc_rx_wqe() to helper functions. Refactor related variables so
they can be changed in one place, and buffer sizes are in sync.

Signed-off-by: Haiyang Zhang
Reviewed-by: Jesse Brandeburg
Signed-off-by: David S. Miller
(cherry picked from commit a2917b23497e4205db32271e4e06e142a9f8a6aa linux-next)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 154 ++++++++++--------
 include/net/mana/mana.h                       |   6 +-
 2 files changed, 91 insertions(+), 69 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index a1b7905ed2f7..af0c0ee95d87 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1282,14 +1282,64 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	u64_stats_update_end(&rx_stats->syncp);
 
 drop:
-	WARN_ON_ONCE(rxq->xdp_save_page);
-	rxq->xdp_save_page = virt_to_page(buf_va);
+	WARN_ON_ONCE(rxq->xdp_save_va);
+	/* Save for reuse */
+	rxq->xdp_save_va = buf_va;
 
 	++ndev->stats.rx_dropped;
 
 	return;
 }
 
+static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+			     dma_addr_t *da, bool is_napi)
+{
+	struct page *page;
+	void *va;
+
+	/* Reuse XDP dropped page if available */
+	if (rxq->xdp_save_va) {
+		va = rxq->xdp_save_va;
+		rxq->xdp_save_va = NULL;
+	} else {
+		page = dev_alloc_page();
+		if (!page)
+			return NULL;
+
+		va = page_to_virt(page);
+	}
+
+	*da = dma_map_single(dev, va + XDP_PACKET_HEADROOM, rxq->datasize,
+			     DMA_FROM_DEVICE);
+
+	if (dma_mapping_error(dev, *da)) {
+		put_page(virt_to_head_page(va));
+		return NULL;
+	}
+
+	return va;
+}
+
+/* Allocate frag for rx buffer, and save the old buf */
+static void mana_refill_rxoob(struct device *dev, struct mana_rxq *rxq,
+			      struct mana_recv_buf_oob *rxoob, void **old_buf)
+{
+	dma_addr_t da;
+	void *va;
+
+	va = mana_get_rxfrag(rxq, dev, &da, true);
+
+	if (!va)
+		return;
+
+	dma_unmap_single(dev, rxoob->sgl[0].address, rxq->datasize,
+			 DMA_FROM_DEVICE);
+	*old_buf = rxoob->buf_va;
+
+	rxoob->buf_va = va;
+	rxoob->sgl[0].address = da;
+}
+
 static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 				struct gdma_comp *cqe)
 {
@@ -1299,10 +1349,8 @@ static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	struct mana_recv_buf_oob *rxbuf_oob;
 	struct mana_port_context *apc;
 	struct device *dev = gc->dev;
-	void *new_buf, *old_buf;
-	struct page *new_page;
+	void *old_buf = NULL;
 	u32 curr, pktlen;
-	dma_addr_t da;
 
 	apc = netdev_priv(ndev);
@@ -1345,40 +1393,11 @@ static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	rxbuf_oob = &rxq->rx_oobs[curr];
 	WARN_ON_ONCE(rxbuf_oob->wqe_inf.wqe_size_in_bu != 1);
 
-	/* Reuse XDP dropped page if available */
-	if (rxq->xdp_save_page) {
-		new_page = rxq->xdp_save_page;
-		rxq->xdp_save_page = NULL;
-	} else {
-		new_page = alloc_page(GFP_ATOMIC);
-	}
-
-	if (new_page) {
-		da = dma_map_page(dev, new_page, XDP_PACKET_HEADROOM, rxq->datasize,
-				  DMA_FROM_DEVICE);
-
-		if (dma_mapping_error(dev, da)) {
-			__free_page(new_page);
-			new_page = NULL;
-		}
-	}
-
-	new_buf = new_page ? page_to_virt(new_page) : NULL;
-
-	if (new_buf) {
-		dma_unmap_page(dev, rxbuf_oob->buf_dma_addr, rxq->datasize,
-			       DMA_FROM_DEVICE);
-
-		old_buf = rxbuf_oob->buf_va;
-
-		/* refresh the rxbuf_oob with the new page */
-		rxbuf_oob->buf_va = new_buf;
-		rxbuf_oob->buf_dma_addr = da;
-		rxbuf_oob->sgl[0].address = rxbuf_oob->buf_dma_addr;
-	} else {
-		old_buf = NULL; /* drop the packet if no memory */
-	}
+	mana_refill_rxoob(dev, rxq, rxbuf_oob, &old_buf);
 
+	/* Unsuccessful refill will have old_buf == NULL.
+	 * In this case, mana_rx_skb() will drop the packet.
+	 */
 	mana_rx_skb(old_buf, oob, rxq);
 
 drop:
@@ -1659,8 +1678,8 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 
 	mana_deinit_cq(apc, &rxq->rx_cq);
 
-	if (rxq->xdp_save_page)
-		__free_page(rxq->xdp_save_page);
+	if (rxq->xdp_save_va)
+		put_page(virt_to_head_page(rxq->xdp_save_va));
 
 	for (i = 0; i < rxq->num_rx_buf; i++) {
 		rx_oob = &rxq->rx_oobs[i];
@@ -1668,10 +1687,10 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 		if (!rx_oob->buf_va)
 			continue;
 
-		dma_unmap_page(dev, rx_oob->buf_dma_addr, rxq->datasize,
-			       DMA_FROM_DEVICE);
+		dma_unmap_single(dev, rx_oob->sgl[0].address,
+				 rx_oob->sgl[0].size, DMA_FROM_DEVICE);
 
-		free_page((unsigned long)rx_oob->buf_va);
+		put_page(virt_to_head_page(rx_oob->buf_va));
 		rx_oob->buf_va = NULL;
 	}
@@ -1681,6 +1700,26 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 	kfree(rxq);
 }
 
+static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
+			    struct mana_rxq *rxq, struct device *dev)
+{
+	dma_addr_t da;
+	void *va;
+
+	va = mana_get_rxfrag(rxq, dev, &da, false);
+
+	if (!va)
+		return -ENOMEM;
+
+	rx_oob->buf_va = va;
+
+	rx_oob->sgl[0].address = da;
+	rx_oob->sgl[0].size = rxq->datasize;
+	rx_oob->sgl[0].mem_key = mem_key;
+
+	return 0;
+}
+
 #define MANA_WQE_HEADER_SIZE 16
 #define MANA_WQE_SGE_SIZE 16
@@ -1690,9 +1729,8 @@ static int mana_alloc_rx_wqe(struct mana_port_context *apc,
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
 	struct mana_recv_buf_oob *rx_oob;
 	struct device *dev = gc->dev;
-	struct page *page;
-	dma_addr_t da;
 	u32 buf_idx;
+	int ret;
 
 	WARN_ON(rxq->datasize == 0 || rxq->datasize > PAGE_SIZE);
@@ -1703,25 +1741,12 @@ static int mana_alloc_rx_wqe(struct mana_port_context *apc,
 		rx_oob = &rxq->rx_oobs[buf_idx];
 		memset(rx_oob, 0, sizeof(*rx_oob));
 
-		page = alloc_page(GFP_KERNEL);
-		if (!page)
-			return -ENOMEM;
-
-		da = dma_map_page(dev, page, XDP_PACKET_HEADROOM, rxq->datasize,
-				  DMA_FROM_DEVICE);
-
-		if (dma_mapping_error(dev, da)) {
-			__free_page(page);
-			return -ENOMEM;
-		}
-
-		rx_oob->buf_va = page_to_virt(page);
-		rx_oob->buf_dma_addr = da;
-
 		rx_oob->num_sge = 1;
-		rx_oob->sgl[0].address = rx_oob->buf_dma_addr;
-		rx_oob->sgl[0].size = rxq->datasize;
-		rx_oob->sgl[0].mem_key = apc->ac->gdma_dev->gpa_mkey;
+
+		ret = mana_fill_rx_oob(rx_oob, apc->ac->gdma_dev->gpa_mkey, rxq,
+				       dev);
+		if (ret)
+			return ret;
 
 		rx_oob->wqe_req.sgl = rx_oob->sgl;
 		rx_oob->wqe_req.num_sge = rx_oob->num_sge;
@@ -1780,9 +1805,10 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	rxq->ndev = ndev;
 	rxq->num_rx_buf = RX_BUFFERS_PER_QUEUE;
 	rxq->rxq_idx = rxq_idx;
-	rxq->datasize = ALIGN(MAX_FRAME_SIZE, 64);
 	rxq->rxobj = INVALID_MANA_HANDLE;
 
+	rxq->datasize = ALIGN(ETH_FRAME_LEN, 64);
+
 	err = mana_alloc_rx_wqe(apc, rxq, &rq_size, &cq_size);
 	if (err)
 		goto out;
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index bb11a6535d80..037bcabf6b98 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -36,9 +36,6 @@ enum TRI_STATE {
 
 #define COMP_ENTRY_SIZE 64
 
-#define ADAPTER_MTU_SIZE 1500
-#define MAX_FRAME_SIZE (ADAPTER_MTU_SIZE + 14)
-
 #define RX_BUFFERS_PER_QUEUE 512
 
 #define MAX_SEND_BUFFERS_PER_QUEUE 256
@@ -282,7 +279,6 @@ struct mana_recv_buf_oob {
 	struct gdma_wqe_request wqe_req;
 
 	void *buf_va;
-	dma_addr_t buf_dma_addr;
 
 	/* SGL of the buffer going to be sent has part of the work request. */
 	u32 num_sge;
@@ -322,7 +318,7 @@ struct mana_rxq {
 	struct bpf_prog __rcu *bpf_prog;
 	struct xdp_rxq_info xdp_rxq;
-	struct page *xdp_save_page;
+	void *xdp_save_va; /* for reusing */
 	bool xdp_flush;
 	int xdp_rc; /* XDP redirect return code */
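[Editor's note] The key move in this refactor is switching the buffer's
identity from a struct page plus dma_map_page() to a plain virtual address
plus dma_map_single(). Once everything is VA-based, a later patch can swap in
page-fragment allocators without touching the mapping or free paths, because
both dev_alloc_page() and the frag allocators hand back page-refcounted
memory. A minimal sketch of the reuse-or-allocate helper this enables; the
demo_ names and struct are hypothetical:

#include <linux/types.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/bpf.h>		/* XDP_PACKET_HEADROOM */

struct demo_rxq {
	void *cached_va;	/* buffer saved after an XDP drop, if any */
	u32 datasize;		/* bytes the NIC may DMA into */
};

static void *demo_get_rx_buf(struct demo_rxq *rxq, struct device *dev,
			     dma_addr_t *da)
{
	void *va;

	if (rxq->cached_va) {			/* reuse before allocating */
		va = rxq->cached_va;
		rxq->cached_va = NULL;
	} else {
		struct page *page = dev_alloc_page();

		if (!page)
			return NULL;
		va = page_to_virt(page);
	}

	/* Map past the XDP headroom; only the payload area is device-visible */
	*da = dma_map_single(dev, va + XDP_PACKET_HEADROOM, rxq->datasize,
			     DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *da)) {
		/* put_page() on the head page frees full page or frag alike */
		put_page(virt_to_head_page(va));
		return NULL;
	}
	return va;
}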
From patchwork Tue Apr 18 15:56:16 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1770336
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/4] net: mana: Enable RX path to handle various MTU sizes
Date: Tue, 18 Apr 2023 09:56:16 -0600
Message-Id: <20230418155617.153531-4-tim.gardner@canonical.com>
In-Reply-To: <20230418155617.153531-1-tim.gardner@canonical.com>
References: <20230418155617.153531-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2016898

Update RX data path to allocate and use RX queue DMA buffers with
proper size based on potentially various MTU sizes.

Signed-off-by: Haiyang Zhang
Reviewed-by: Jesse Brandeburg
Signed-off-by: David S. Miller
(cherry picked from commit 2fbbd712baf1c60996554326728bbdbef5616e12 linux-next)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 38 ++++++++++++++-----
 include/net/mana/mana.h                       |  7 ++++
 2 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index af0c0ee95d87..afbbe447de1d 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1185,10 +1185,10 @@ static void mana_post_pkt_rxq(struct mana_rxq *rxq)
 	WARN_ON_ONCE(recv_buf_oob->wqe_inf.wqe_size_in_bu != 1);
 }
 
-static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
-				      struct xdp_buff *xdp)
+static struct sk_buff *mana_build_skb(struct mana_rxq *rxq, void *buf_va,
+				      uint pkt_len, struct xdp_buff *xdp)
 {
-	struct sk_buff *skb = napi_build_skb(buf_va, PAGE_SIZE);
+	struct sk_buff *skb = napi_build_skb(buf_va, rxq->alloc_size);
 
 	if (!skb)
 		return NULL;
@@ -1196,11 +1196,12 @@ static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
 	if (xdp->data_hard_start) {
 		skb_reserve(skb, xdp->data - xdp->data_hard_start);
 		skb_put(skb, xdp->data_end - xdp->data);
-	} else {
-		skb_reserve(skb, XDP_PACKET_HEADROOM);
-		skb_put(skb, pkt_len);
+		return skb;
 	}
 
+	skb_reserve(skb, rxq->headroom);
+	skb_put(skb, pkt_len);
+
 	return skb;
 }
@@ -1233,7 +1234,7 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	if (act != XDP_PASS && act != XDP_TX)
 		goto drop_xdp;
 
-	skb = mana_build_skb(buf_va, pkt_len, &xdp);
+	skb = mana_build_skb(rxq, buf_va, pkt_len, &xdp);
 
 	if (!skb)
 		goto drop;
@@ -1301,6 +1302,14 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
 	if (rxq->xdp_save_va) {
 		va = rxq->xdp_save_va;
 		rxq->xdp_save_va = NULL;
+	} else if (rxq->alloc_size > PAGE_SIZE) {
+		if (is_napi)
+			va = napi_alloc_frag(rxq->alloc_size);
+		else
+			va = netdev_alloc_frag(rxq->alloc_size);
+
+		if (!va)
+			return NULL;
 	} else {
 		page = dev_alloc_page();
 		if (!page)
@@ -1309,7 +1318,7 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
 		va = page_to_virt(page);
 	}
 
-	*da = dma_map_single(dev, va + XDP_PACKET_HEADROOM, rxq->datasize,
+	*da = dma_map_single(dev, va + rxq->headroom, rxq->datasize,
 			     DMA_FROM_DEVICE);
 
 	if (dma_mapping_error(dev, *da)) {
@@ -1732,7 +1741,7 @@ static int mana_alloc_rx_wqe(struct mana_port_context *apc,
 	u32 buf_idx;
 	int ret;
 
-	WARN_ON(rxq->datasize == 0 || rxq->datasize > PAGE_SIZE);
+	WARN_ON(rxq->datasize == 0);
@@ -1788,6 +1797,7 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	struct gdma_dev *gd = apc->ac->gdma_dev;
 	struct mana_obj_spec wq_spec;
 	struct mana_obj_spec cq_spec;
+	unsigned int mtu = ndev->mtu;
 	struct gdma_queue_spec spec;
 	struct mana_cq *cq = NULL;
 	struct gdma_context *gc;
@@ -1807,7 +1817,15 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	rxq->rxq_idx = rxq_idx;
 	rxq->rxobj = INVALID_MANA_HANDLE;
 
-	rxq->datasize = ALIGN(ETH_FRAME_LEN, 64);
+	rxq->datasize = ALIGN(mtu + ETH_HLEN, 64);
+
+	if (mtu > MANA_XDP_MTU_MAX) {
+		rxq->alloc_size = mtu + MANA_RXBUF_PAD;
+		rxq->headroom = 0;
+	} else {
+		rxq->alloc_size = mtu + MANA_RXBUF_PAD + XDP_PACKET_HEADROOM;
+		rxq->headroom = XDP_PACKET_HEADROOM;
+	}
 
 	err = mana_alloc_rx_wqe(apc, rxq, &rq_size, &cq_size);
 	if (err)
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 037bcabf6b98..fee99d704281 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -291,6 +291,11 @@ struct mana_recv_buf_oob {
 	struct gdma_posted_wqe_info wqe_inf;
 };
 
+#define MANA_RXBUF_PAD (SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) \
+			+ ETH_HLEN)
+
+#define MANA_XDP_MTU_MAX (PAGE_SIZE - MANA_RXBUF_PAD - XDP_PACKET_HEADROOM)
+
 struct mana_rxq {
 	struct gdma_queue *gdma_rq;
 	/* Cache the gdma receive queue id */
@@ -300,6 +305,8 @@ struct mana_rxq {
 	u32 rxq_idx;
 
 	u32 datasize;
+	u32 alloc_size;
+	u32 headroom;
 
 	mana_handle_t rxobj;
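[Editor's note] The sizing rule this patch introduces is worth spelling out
with numbers. The sketch below mirrors the logic added to mana_create_rxq(),
assuming 4 KiB pages, XDP_PACKET_HEADROOM = 256, and
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) = 320 (typical on x86_64),
which makes MANA_RXBUF_PAD = 320 + 14 = 334 and
MANA_XDP_MTU_MAX = 4096 - 334 - 256 = 3506:

#include <linux/etherdevice.h>	/* ETH_HLEN */
#include <net/mana/mana.h>	/* MANA_RXBUF_PAD, MANA_XDP_MTU_MAX */

static void demo_rxbuf_cfg(unsigned int mtu, u32 *datasize, u32 *alloc_size,
			   u32 *headroom)
{
	/* XDP needs headroom + data + shared_info to fit in one page */
	*headroom = (mtu > MANA_XDP_MTU_MAX) ? 0 : XDP_PACKET_HEADROOM;
	*alloc_size = mtu + MANA_RXBUF_PAD + *headroom;
	*datasize = ALIGN(mtu + ETH_HLEN, 64);

	/*
	 * Worked examples under the assumptions above:
	 *   mtu = 1500: headroom = 256, alloc_size = 2090, datasize = 1536
	 *   mtu = 9000: headroom = 0,   alloc_size = 9334, datasize = 9024
	 * When alloc_size exceeds PAGE_SIZE, mana_get_rxfrag() falls back to
	 * napi_alloc_frag()/netdev_alloc_frag() instead of dev_alloc_page().
	 */
}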
From patchwork Tue Apr 18 15:56:17 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1770340
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 4/4] net: mana: Add support for jumbo frame
Date: Tue, 18 Apr 2023 09:56:17 -0600
Message-Id: <20230418155617.153531-5-tim.gardner@canonical.com>
In-Reply-To: <20230418155617.153531-1-tim.gardner@canonical.com>
References: <20230418155617.153531-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2016898

During probe, get the hardware-allowed max MTU by querying the device
configuration. Users can select an MTU up to the device limit.

When XDP is in use, limit MTU settings so the buffer size stays within
one page; conversely, when the MTU is set too large, XDP is not allowed
to run.

Also, to prevent a failed MTU change from leaving the NIC in a bad
state, pre-allocate all buffers before starting the change, so that
under low-memory conditions the change returns an error without
affecting the NIC.

Signed-off-by: Haiyang Zhang
Reviewed-by: Jesse Brandeburg
Signed-off-by: David S. Miller
(cherry picked from commit 80f6215b450eb8e92d8b1f117abf5ecf867f963e linux-next)
Signed-off-by: Tim Gardner
---
 .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
 drivers/net/ethernet/microsoft/mana/mana_en.c | 217 ++++++++++++++++--
 include/net/mana/gdma.h                       |   4 +
 include/net/mana/mana.h                       |  14 ++
 4 files changed, 233 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
index 3caea631229c..23b1521c0df9 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
@@ -133,12 +133,6 @@ u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
 	return act;
 }
 
-static unsigned int mana_xdp_fraglen(unsigned int len)
-{
-	return SKB_DATA_ALIGN(len) +
-	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-}
-
 struct bpf_prog *mana_xdp_get(struct mana_port_context *apc)
 {
 	ASSERT_RTNL();
@@ -179,17 +173,18 @@ static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
 {
 	struct mana_port_context *apc = netdev_priv(ndev);
 	struct bpf_prog *old_prog;
-	int buf_max;
+	struct gdma_context *gc;
+
+	gc = apc->ac->gdma_dev->gdma_context;
 
 	old_prog = mana_xdp_get(apc);
 
 	if (!old_prog && !prog)
 		return 0;
 
-	buf_max = XDP_PACKET_HEADROOM + mana_xdp_fraglen(ndev->mtu + ETH_HLEN);
-	if (prog && buf_max > PAGE_SIZE) {
-		netdev_err(ndev, "XDP: mtu:%u too large, buf_max:%u\n",
-			   ndev->mtu, buf_max);
+	if (prog && ndev->mtu > MANA_XDP_MTU_MAX) {
+		netdev_err(ndev, "XDP: mtu:%u too large, mtu_max:%lu\n",
+			   ndev->mtu, MANA_XDP_MTU_MAX);
 		NL_SET_ERR_MSG_MOD(extack, "XDP: mtu too large");
 
 		return -EOPNOTSUPP;
@@ -206,6 +201,11 @@ static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
 	if (apc->port_is_up)
 		mana_chn_setxdp(apc, prog);
 
+	if (prog)
+		ndev->max_mtu = MANA_XDP_MTU_MAX;
+	else
+		ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
+
 	return 0;
 }
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index afbbe447de1d..34fa5c758b28 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -427,6 +427,192 @@ static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb,
 	return txq;
 }
 
+/* Release pre-allocated RX buffers */
+static void mana_pre_dealloc_rxbufs(struct mana_port_context *mpc)
+{
+	struct device *dev;
+	int i;
+
+	dev = mpc->ac->gdma_dev->gdma_context->dev;
+
+	if (!mpc->rxbufs_pre)
+		goto out1;
+
+	if (!mpc->das_pre)
+		goto out2;
+
+	while (mpc->rxbpre_total) {
+		i = --mpc->rxbpre_total;
+		dma_unmap_single(dev, mpc->das_pre[i], mpc->rxbpre_datasize,
+				 DMA_FROM_DEVICE);
+		put_page(virt_to_head_page(mpc->rxbufs_pre[i]));
+	}
+
+	kfree(mpc->das_pre);
+	mpc->das_pre = NULL;
+
+out2:
+	kfree(mpc->rxbufs_pre);
+	mpc->rxbufs_pre = NULL;
+
+out1:
+	mpc->rxbpre_datasize = 0;
+	mpc->rxbpre_alloc_size = 0;
+	mpc->rxbpre_headroom = 0;
+}
+
+/* Get a buffer from the pre-allocated RX buffers */
+static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
+{
+	struct net_device *ndev = rxq->ndev;
+	struct mana_port_context *mpc;
+	void *va;
+
+	mpc = netdev_priv(ndev);
+
+	if (!mpc->rxbufs_pre || !mpc->das_pre || !mpc->rxbpre_total) {
+		netdev_err(ndev, "No RX pre-allocated bufs\n");
+		return NULL;
+	}
+
+	/* Check sizes to catch unexpected coding error */
+	if (mpc->rxbpre_datasize != rxq->datasize) {
+		netdev_err(ndev, "rxbpre_datasize mismatch: %u: %u\n",
+			   mpc->rxbpre_datasize, rxq->datasize);
+		return NULL;
+	}
+
+	if (mpc->rxbpre_alloc_size != rxq->alloc_size) {
+		netdev_err(ndev, "rxbpre_alloc_size mismatch: %u: %u\n",
+			   mpc->rxbpre_alloc_size, rxq->alloc_size);
+		return NULL;
+	}
+
+	if (mpc->rxbpre_headroom != rxq->headroom) {
+		netdev_err(ndev, "rxbpre_headroom mismatch: %u: %u\n",
+			   mpc->rxbpre_headroom, rxq->headroom);
+		return NULL;
+	}
+
+	mpc->rxbpre_total--;
+
+	*da = mpc->das_pre[mpc->rxbpre_total];
+	va = mpc->rxbufs_pre[mpc->rxbpre_total];
+	mpc->rxbufs_pre[mpc->rxbpre_total] = NULL;
+
+	/* Deallocate the array after all buffers are gone */
+	if (!mpc->rxbpre_total)
+		mana_pre_dealloc_rxbufs(mpc);
+
+	return va;
+}
+
+/* Get RX buffer's data size, alloc size, XDP headroom based on MTU */
+static void mana_get_rxbuf_cfg(int mtu, u32 *datasize, u32 *alloc_size,
+			       u32 *headroom)
+{
+	if (mtu > MANA_XDP_MTU_MAX)
+		*headroom = 0; /* no support for XDP */
+	else
+		*headroom = XDP_PACKET_HEADROOM;
+
+	*alloc_size = mtu + MANA_RXBUF_PAD + *headroom;
+
+	*datasize = ALIGN(mtu + ETH_HLEN, MANA_RX_DATA_ALIGN);
+}
+
+static int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu)
+{
+	struct device *dev;
+	struct page *page;
+	dma_addr_t da;
+	int num_rxb;
+	void *va;
+	int i;
+
+	mana_get_rxbuf_cfg(new_mtu, &mpc->rxbpre_datasize,
+			   &mpc->rxbpre_alloc_size, &mpc->rxbpre_headroom);
+
+	dev = mpc->ac->gdma_dev->gdma_context->dev;
+
+	num_rxb = mpc->num_queues * RX_BUFFERS_PER_QUEUE;
+
+	WARN(mpc->rxbufs_pre, "mana rxbufs_pre exists\n");
+	mpc->rxbufs_pre = kmalloc_array(num_rxb, sizeof(void *), GFP_KERNEL);
+	if (!mpc->rxbufs_pre)
+		goto error;
+
+	mpc->das_pre = kmalloc_array(num_rxb, sizeof(dma_addr_t), GFP_KERNEL);
+	if (!mpc->das_pre)
+		goto error;
+
+	mpc->rxbpre_total = 0;
+
+	for (i = 0; i < num_rxb; i++) {
+		if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
+			va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
+			if (!va)
+				goto error;
+		} else {
+			page = dev_alloc_page();
+			if (!page)
+				goto error;
+
+			va = page_to_virt(page);
+		}
+
+		da = dma_map_single(dev, va + mpc->rxbpre_headroom,
+				    mpc->rxbpre_datasize, DMA_FROM_DEVICE);
+
+		if (dma_mapping_error(dev, da)) {
+			put_page(virt_to_head_page(va));
+			goto error;
+		}
+
+		mpc->rxbufs_pre[i] = va;
+		mpc->das_pre[i] = da;
+		mpc->rxbpre_total = i + 1;
+	}
+
+	return 0;
+
+error:
+	mana_pre_dealloc_rxbufs(mpc);
+	return -ENOMEM;
+}
+
+static int mana_change_mtu(struct net_device *ndev, int new_mtu)
+{
+	struct mana_port_context *mpc = netdev_priv(ndev);
+	unsigned int old_mtu = ndev->mtu;
+	int err;
+
+	/* Pre-allocate buffers to prevent failure in mana_attach later */
+	err = mana_pre_alloc_rxbufs(mpc, new_mtu);
+	if (err) {
+		netdev_err(ndev, "Insufficient memory for new MTU\n");
+		return err;
+	}
+
+	err = mana_detach(ndev, false);
+	if (err) {
+		netdev_err(ndev, "mana_detach failed: %d\n", err);
+		goto out;
+	}
+
+	ndev->mtu = new_mtu;
+
+	err = mana_attach(ndev);
+	if (err) {
+		netdev_err(ndev, "mana_attach failed: %d\n", err);
+		ndev->mtu = old_mtu;
+	}
+
+out:
+	mana_pre_dealloc_rxbufs(mpc);
+	return err;
+}
+
 static const struct net_device_ops mana_devops = {
 	.ndo_open		= mana_open,
 	.ndo_stop		= mana_close,
@@ -436,6 +622,7 @@ static const struct net_device_ops mana_devops = {
 	.ndo_get_stats64	= mana_get_stats64,
 	.ndo_bpf		= mana_bpf,
 	.ndo_xdp_xmit		= mana_xdp_xmit,
+	.ndo_change_mtu		= mana_change_mtu,
 };
 
 static void mana_cleanup_port_context(struct mana_port_context *apc)
@@ -625,6 +812,9 @@ static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
 	mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_DEV_CONFIG, sizeof(req),
 			     sizeof(resp));
+
+	req.hdr.resp.msg_version = GDMA_MESSAGE_V2;
+
 	req.proto_major_ver = proto_major_ver;
 	req.proto_minor_ver = proto_minor_ver;
 	req.proto_micro_ver = proto_micro_ver;
@@ -647,6 +837,11 @@ static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
 	*max_num_vports = resp.max_num_vports;
 
+	if (resp.hdr.response.msg_version == GDMA_MESSAGE_V2)
+		gc->adapter_mtu = resp.adapter_mtu;
+	else
+		gc->adapter_mtu = ETH_FRAME_LEN;
+
 	return 0;
 }
@@ -1712,10 +1907,14 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
 			    struct mana_rxq *rxq, struct device *dev)
 {
+	struct mana_port_context *mpc = netdev_priv(rxq->ndev);
 	dma_addr_t da;
 	void *va;
 
-	va = mana_get_rxfrag(rxq, dev, &da, false);
+	if (mpc->rxbufs_pre)
+		va = mana_get_rxbuf_pre(rxq, &da);
+	else
+		va = mana_get_rxfrag(rxq, dev, &da, false);
 
 	if (!va)
 		return -ENOMEM;
@@ -1797,7 +1996,6 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	struct gdma_dev *gd = apc->ac->gdma_dev;
 	struct mana_obj_spec wq_spec;
 	struct mana_obj_spec cq_spec;
-	unsigned int mtu = ndev->mtu;
 	struct gdma_queue_spec spec;
 	struct mana_cq *cq = NULL;
 	struct gdma_context *gc;
@@ -1817,15 +2015,8 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	rxq->rxq_idx = rxq_idx;
 	rxq->rxobj = INVALID_MANA_HANDLE;
 
-	rxq->datasize = ALIGN(mtu + ETH_HLEN, 64);
-
-	if (mtu > MANA_XDP_MTU_MAX) {
-		rxq->alloc_size = mtu + MANA_RXBUF_PAD;
-		rxq->headroom = 0;
-	} else {
-		rxq->alloc_size = mtu + MANA_RXBUF_PAD + XDP_PACKET_HEADROOM;
-		rxq->headroom = XDP_PACKET_HEADROOM;
-	}
+	mana_get_rxbuf_cfg(ndev->mtu, &rxq->datasize, &rxq->alloc_size,
+			   &rxq->headroom);
 
 	err = mana_alloc_rx_wqe(apc, rxq, &rq_size, &cq_size);
 	if (err)
@@ -2238,8 +2429,8 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	ndev->netdev_ops = &mana_devops;
 	ndev->ethtool_ops = &mana_ethtool_ops;
 	ndev->mtu = ETH_DATA_LEN;
-	ndev->max_mtu = ndev->mtu;
-	ndev->min_mtu = ndev->mtu;
+	ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
+	ndev->min_mtu = ETH_MIN_MTU;
 	ndev->needed_headroom = MANA_HEADROOM;
 	ndev->dev_port = port_idx;
 	SET_NETDEV_DEV(ndev, gc->dev);
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 56189e4252da..96c120160f15 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -145,6 +145,7 @@ struct gdma_general_req {
 }; /* HW DATA */
 
 #define GDMA_MESSAGE_V1 1
+#define GDMA_MESSAGE_V2 2
 
 struct gdma_general_resp {
 	struct gdma_resp_hdr hdr;
@@ -354,6 +355,9 @@ struct gdma_context {
 	struct gdma_resource msix_resource;
 	struct gdma_irq_context *irq_contexts;
 
+	/* L2 MTU */
+	u16 adapter_mtu;
+
 	/* This maps a CQ index to the queue structure. */
 	unsigned int max_num_cqs;
 	struct gdma_queue **cq_table;
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index fee99d704281..cd386aa7c7cc 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -37,6 +37,7 @@ enum TRI_STATE {
 #define COMP_ENTRY_SIZE 64
 
 #define RX_BUFFERS_PER_QUEUE 512
+#define MANA_RX_DATA_ALIGN 64
 
 #define MAX_SEND_BUFFERS_PER_QUEUE 256
@@ -390,6 +391,14 @@ struct mana_port_context {
 	/* This points to an array of num_queues of RQ pointers. */
 	struct mana_rxq **rxqs;
 
+	/* pre-allocated rx buffer array */
+	void **rxbufs_pre;
+	dma_addr_t *das_pre;
+	int rxbpre_total;
+	u32 rxbpre_datasize;
+	u32 rxbpre_alloc_size;
+	u32 rxbpre_headroom;
+
 	struct bpf_prog *bpf_prog;
 
 	/* Create num_queues EQs, SQs, SQ-CQs, RQs and RQ-CQs, respectively. */
@@ -489,6 +498,11 @@ struct mana_query_device_cfg_resp {
 	u16 max_num_vports;
 	u16 reserved;
 	u32 max_num_eqs;
+
+	/* response v2: */
+	u16 adapter_mtu;
+	u16 reserved2;
+	u32 reserved3;
 }; /* HW DATA */
 
 /* Query vPort Configuration */
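[Editor's note] The MTU-change flow in this patch is effectively a two-phase
commit: every buffer the new configuration will need is reserved up front, so
the only allocation that can fail for lack of memory happens while the old
configuration is still live and untouched. A condensed sketch of that
ordering; the demo_* helpers are hypothetical stand-ins for
mana_pre_alloc_rxbufs(), mana_detach(), mana_attach() and
mana_pre_dealloc_rxbufs():

#include <linux/netdevice.h>

/* Hypothetical helpers, not part of the driver */
int demo_reserve_bufs(struct net_device *ndev, int new_mtu);
int demo_detach(struct net_device *ndev);
int demo_attach(struct net_device *ndev);
void demo_release_bufs(struct net_device *ndev);

static int demo_change_mtu(struct net_device *ndev, int new_mtu)
{
	unsigned int old_mtu = ndev->mtu;
	int err;

	err = demo_reserve_bufs(ndev, new_mtu);	/* 1. may fail with -ENOMEM */
	if (err)
		return err;			/*    NIC untouched on failure */

	err = demo_detach(ndev);		/* 2. quiesce the hardware */
	if (err)
		goto out;

	ndev->mtu = new_mtu;			/* 3. commit the new size */

	err = demo_attach(ndev);		/* 4. re-create queues; the RX
						 *    ring fills from the
						 *    reserved buffers */
	if (err)
		ndev->mtu = old_mtu;		/*    roll back on failure */

out:
	demo_release_bufs(ndev);		/* 5. drop any unused reserve */
	return err;
}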