From patchwork Fri Oct 6 16:16:10 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1844560
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/3] net: mana: Fix TX CQE error handling
Date: Fri, 6 Oct 2023 10:16:10 -0600
Message-Id: <20231006161612.751753-2-tim.gardner@canonical.com>
In-Reply-To: <20231006161612.751753-1-tim.gardner@canonical.com>
References: <20231006161612.751753-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2038675

For an unknown TX CQE error type (probably from a newer hardware),
still free the SKB, update the queue tail, etc., otherwise the
accounting will be wrong.

Also, TX errors can be triggered by injecting corrupted packets, so
replace the WARN_ONCE to ratelimited error logging.

Cc: stable@vger.kernel.org
Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)")
Signed-off-by: Haiyang Zhang
Reviewed-by: Simon Horman
Reviewed-by: Shradha Gupta
Signed-off-by: Paolo Abeni
(cherry picked from commit b2b000069a4c307b09548dc2243f31f3ca0eac9c)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index a6e5f020b0d6..2aa3bf1984eb 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1315,19 +1315,23 @@ static void mana_poll_tx_cq(struct mana_cq *cq)
 		case CQE_TX_VPORT_IDX_OUT_OF_RANGE:
 		case CQE_TX_VPORT_DISABLED:
 		case CQE_TX_VLAN_TAGGING_VIOLATION:
-			WARN_ONCE(1, "TX: CQE error %d: ignored.\n",
-				  cqe_oob->cqe_hdr.cqe_type);
+			if (net_ratelimit())
+				netdev_err(ndev, "TX: CQE error %d\n",
+					   cqe_oob->cqe_hdr.cqe_type);
+
 			apc->eth_stats.tx_cqe_err++;
 			break;

 		default:
-			/* If the CQE type is unexpected, log an error, assert,
-			 * and go through the error path.
+			/* If the CQE type is unknown, log an error,
+			 * and still free the SKB, update tail, etc.
 			 */
-			WARN_ONCE(1, "TX: Unexpected CQE type %d: HW BUG?\n",
-				  cqe_oob->cqe_hdr.cqe_type);
+			if (net_ratelimit())
+				netdev_err(ndev, "TX: unknown CQE type %d\n",
+					   cqe_oob->cqe_hdr.cqe_type);
+
 			apc->eth_stats.tx_cqe_unknown_type++;
-			return;
+			break;
 		}

 		if (WARN_ON_ONCE(txq->gdma_txq_id != completions[i].wq_num))
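
A note for reviewers on why the default case must now fall through (break)
instead of returning: the cleanup below the switch frees the SKB and moves the
queue tail, so bailing out early leaves the accounting wrong. The following is
a minimal stand-alone C sketch of that effect, not MANA driver code; the
struct, field, and function names (txq_model, poll_tx_cq_model, etc.) are
invented for illustration only.

	/* Hypothetical model of a TX completion loop: "return" on an unknown
	 * CQE type skips the per-completion cleanup, "break" keeps it running.
	 */
	#include <stdio.h>

	enum cqe_type { CQE_TX_OKAY, CQE_TX_UNKNOWN };

	struct txq_model {
		int tail;	/* next entry to complete */
		int pending;	/* entries the stack thinks are in flight */
	};

	static void poll_tx_cq_model(struct txq_model *txq,
				     const enum cqe_type *cqes, int n,
				     int return_on_unknown)
	{
		int i;

		for (i = 0; i < n; i++) {
			switch (cqes[i]) {
			case CQE_TX_OKAY:
				break;
			default:
				fprintf(stderr, "unknown CQE type %d\n", cqes[i]);
				if (return_on_unknown)
					return;	/* old behaviour: cleanup skipped */
				break;		/* fixed behaviour: still clean up */
			}
			/* cleanup that must run for every completion */
			txq->tail++;
			txq->pending--;
		}
	}

	int main(void)
	{
		const enum cqe_type cqes[] = { CQE_TX_OKAY, CQE_TX_UNKNOWN, CQE_TX_OKAY };
		struct txq_model before = { .tail = 0, .pending = 3 };
		struct txq_model after = { .tail = 0, .pending = 3 };

		poll_tx_cq_model(&before, cqes, 3, 1);
		poll_tx_cq_model(&after, cqes, 3, 0);

		printf("return: tail=%d pending=%d\n", before.tail, before.pending);
		printf("break:  tail=%d pending=%d\n", after.tail, after.pending);
		return 0;
	}

With the early return, only one of the three completions is accounted; with
break, all three are, which is the behaviour the patch restores.
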
From patchwork Fri Oct 6 16:16:11 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1844562

From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 2/3] net: mana: Fix the tso_bytes calculation
Date: Fri, 6 Oct 2023 10:16:11 -0600
Message-Id: <20231006161612.751753-3-tim.gardner@canonical.com>
In-Reply-To: <20231006161612.751753-1-tim.gardner@canonical.com>
References: <20231006161612.751753-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2038675

sizeof(struct hop_jumbo_hdr) is not part of tso_bytes, so remove
the subtraction from header size.

Cc: stable@vger.kernel.org
Fixes: bd7fc6e1957c ("net: mana: Add new MANA VF performance counters for easier troubleshooting")
Signed-off-by: Haiyang Zhang
Reviewed-by: Simon Horman
Reviewed-by: Shradha Gupta
Signed-off-by: Paolo Abeni
(cherry picked from commit 7a54de92657455210d0ca71d4176b553952c871a)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 2aa3bf1984eb..b35bfb58c06f 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -262,8 +262,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 				ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
 			} else {
 				ihs = skb_tcp_all_headers(skb);
-				if (ipv6_has_hopopt_jumbo(skb))
-					ihs -= sizeof(struct hop_jumbo_hdr);
 			}

 			u64_stats_update_begin(&tx_stats->syncp);
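
For reviewers, a small stand-alone calculation of the accounting this patch
corrects: tso_bytes counts payload bytes, skb->len minus the real header size
ihs, so also subtracting the hop-by-hop jumbo header from ihs inflates the
counter. This is not driver code; the packet sizes are illustrative and
HOP_JUMBO_HDR_LEN is an assumed stand-in for sizeof(struct hop_jumbo_hdr).

	#include <stdio.h>

	#define HOP_JUMBO_HDR_LEN 8	/* assumption for this sketch */

	int main(void)
	{
		unsigned int skb_len = 65000;	/* illustrative big GSO skb */
		unsigned int ihs = 94;		/* illustrative MAC+IPv6+TCP headers */

		/* old code: ihs -= sizeof(struct hop_jumbo_hdr) before accounting */
		unsigned int buggy = skb_len - (ihs - HOP_JUMBO_HDR_LEN);
		/* fixed code: account against the untouched header size */
		unsigned int fixed = skb_len - ihs;

		printf("buggy tso_bytes += %u\n", buggy);	/* overcounts payload */
		printf("fixed tso_bytes += %u\n", fixed);
		return 0;
	}
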
From patchwork Fri Oct 6 16:16:12 2023
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1844561

From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/3] net: mana: Fix oversized sge0 for GSO packets
Date: Fri, 6 Oct 2023 10:16:12 -0600
Message-Id: <20231006161612.751753-4-tim.gardner@canonical.com>
In-Reply-To: <20231006161612.751753-1-tim.gardner@canonical.com>
References: <20231006161612.751753-1-tim.gardner@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2038675

Handle the case when GSO SKB linear length is too large.

MANA NIC requires GSO packets to put only the header part to SGE0,
otherwise the TX queue may stop at the HW level.

So, use 2 SGEs for the skb linear part which contains more than the
packet header.

Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)")
Signed-off-by: Haiyang Zhang
Reviewed-by: Simon Horman
Reviewed-by: Shradha Gupta
Signed-off-by: Paolo Abeni
(cherry picked from commit a43e8e9ffa0d1de058964edf1a0622cbb7e27cfe)
Signed-off-by: Tim Gardner
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 191 +++++++++++++-----
 include/net/mana/mana.h                       |   5 +-
 2 files changed, 138 insertions(+), 58 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index b35bfb58c06f..58eac610bad2 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -89,63 +89,137 @@ static unsigned int mana_checksum_info(struct sk_buff *skb)
 	return 0;
 }

+static void mana_add_sge(struct mana_tx_package *tp, struct mana_skb_head *ash,
+			 int sg_i, dma_addr_t da, int sge_len, u32 gpa_mkey)
+{
+	ash->dma_handle[sg_i] = da;
+	ash->size[sg_i] = sge_len;
+
+	tp->wqe_req.sgl[sg_i].address = da;
+	tp->wqe_req.sgl[sg_i].mem_key = gpa_mkey;
+	tp->wqe_req.sgl[sg_i].size = sge_len;
+}
+
 static int mana_map_skb(struct sk_buff *skb, struct mana_port_context *apc,
-			struct mana_tx_package *tp)
+			struct mana_tx_package *tp, int gso_hs)
 {
 	struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
+	int hsg = 1; /* num of SGEs of linear part */
 	struct gdma_dev *gd = apc->ac->gdma_dev;
+	int skb_hlen = skb_headlen(skb);
+	int sge0_len, sge1_len = 0;
 	struct gdma_context *gc;
 	struct device *dev;
 	skb_frag_t *frag;
 	dma_addr_t da;
+	int sg_i;
 	int i;

 	gc = gd->gdma_context;
 	dev = gc->dev;
-	da = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);

+	if (gso_hs && gso_hs < skb_hlen) {
+		sge0_len = gso_hs;
+		sge1_len = skb_hlen - gso_hs;
+	} else {
+		sge0_len = skb_hlen;
+	}
+
+	da = dma_map_single(dev, skb->data, sge0_len, DMA_TO_DEVICE);
 	if (dma_mapping_error(dev, da))
 		return -ENOMEM;

-	ash->dma_handle[0] = da;
-	ash->size[0] = skb_headlen(skb);
+	mana_add_sge(tp, ash, 0, da, sge0_len, gd->gpa_mkey);

-	tp->wqe_req.sgl[0].address = ash->dma_handle[0];
-	tp->wqe_req.sgl[0].mem_key = gd->gpa_mkey;
-	tp->wqe_req.sgl[0].size = ash->size[0];
+	if (sge1_len) {
+		sg_i = 1;
+		da = dma_map_single(dev, skb->data + sge0_len, sge1_len,
+				    DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, da))
+			goto frag_err;
+
+		mana_add_sge(tp, ash, sg_i, da, sge1_len, gd->gpa_mkey);
+		hsg = 2;
+	}

 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		sg_i = hsg + i;
+
 		frag = &skb_shinfo(skb)->frags[i];
 		da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
 				      DMA_TO_DEVICE);
-
 		if (dma_mapping_error(dev, da))
 			goto frag_err;

-		ash->dma_handle[i + 1] = da;
-		ash->size[i + 1] = skb_frag_size(frag);
-
-		tp->wqe_req.sgl[i + 1].address = ash->dma_handle[i + 1];
-		tp->wqe_req.sgl[i + 1].mem_key = gd->gpa_mkey;
-		tp->wqe_req.sgl[i + 1].size = ash->size[i + 1];
+		mana_add_sge(tp, ash, sg_i, da, skb_frag_size(frag),
+			     gd->gpa_mkey);
 	}

 	return 0;

 frag_err:
-	for (i = i - 1; i >= 0; i--)
-		dma_unmap_page(dev, ash->dma_handle[i + 1], ash->size[i + 1],
+	for (i = sg_i - 1; i >= hsg; i--)
+		dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
 			       DMA_TO_DEVICE);

-	dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
+	for (i = hsg - 1; i >= 0; i--)
+		dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
+				 DMA_TO_DEVICE);

 	return -ENOMEM;
 }

+/* Handle the case when GSO SKB linear length is too large.
+ * MANA NIC requires GSO packets to put only the packet header to SGE0.
+ * So, we need 2 SGEs for the skb linear part which contains more than the
+ * header.
+ * Return a positive value for the number of SGEs, or a negative value
+ * for an error.
+ */
+static int mana_fix_skb_head(struct net_device *ndev, struct sk_buff *skb,
+			     int gso_hs)
+{
+	int num_sge = 1 + skb_shinfo(skb)->nr_frags;
+	int skb_hlen = skb_headlen(skb);
+
+	if (gso_hs < skb_hlen) {
+		num_sge++;
+	} else if (gso_hs > skb_hlen) {
+		if (net_ratelimit())
+			netdev_err(ndev,
+				   "TX nonlinear head: hs:%d, skb_hlen:%d\n",
+				   gso_hs, skb_hlen);
+
+		return -EINVAL;
+	}
+
+	return num_sge;
+}
+
+/* Get the GSO packet's header size */
+static int mana_get_gso_hs(struct sk_buff *skb)
+{
+	int gso_hs;
+
+	if (skb->encapsulation) {
+		gso_hs = skb_inner_tcp_all_headers(skb);
+	} else {
+		if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+			gso_hs = skb_transport_offset(skb) +
+				 sizeof(struct udphdr);
+		} else {
+			gso_hs = skb_tcp_all_headers(skb);
+		}
+	}
+
+	return gso_hs;
+}
+
 netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 {
 	enum mana_tx_pkt_format pkt_fmt = MANA_SHORT_PKT_FMT;
 	struct mana_port_context *apc = netdev_priv(ndev);
+	int gso_hs = 0; /* zero for non-GSO pkts */
 	u16 txq_idx = skb_get_queue_mapping(skb);
 	struct gdma_dev *gd = apc->ac->gdma_dev;
 	bool ipv4 = false, ipv6 = false;
@@ -157,7 +231,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	struct mana_txq *txq;
 	struct mana_cq *cq;
 	int err, len;
-	u16 ihs;

 	if (unlikely(!apc->port_is_up))
 		goto tx_drop;
@@ -207,19 +280,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	pkg.wqe_req.client_data_unit = 0;

 	pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags;
-	WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
-
-	if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
-		pkg.wqe_req.sgl = pkg.sgl_array;
-	} else {
-		pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
-					    sizeof(struct gdma_sge),
-					    GFP_ATOMIC);
-		if (!pkg.sgl_ptr)
-			goto tx_drop_count;
-
-		pkg.wqe_req.sgl = pkg.sgl_ptr;
-	}

 	if (skb->protocol == htons(ETH_P_IP))
 		ipv4 = true;
@@ -227,6 +287,26 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		ipv6 = true;

 	if (skb_is_gso(skb)) {
+		int num_sge;
+
+		gso_hs = mana_get_gso_hs(skb);
+
+		num_sge = mana_fix_skb_head(ndev, skb, gso_hs);
+		if (num_sge > 0)
+			pkg.wqe_req.num_sge = num_sge;
+		else
+			goto tx_drop_count;
+
+		u64_stats_update_begin(&tx_stats->syncp);
+		if (skb->encapsulation) {
+			tx_stats->tso_inner_packets++;
+			tx_stats->tso_inner_bytes += skb->len - gso_hs;
+		} else {
+			tx_stats->tso_packets++;
+			tx_stats->tso_bytes += skb->len - gso_hs;
+		}
+		u64_stats_update_end(&tx_stats->syncp);
+
 		pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4;
 		pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6;

@@ -250,26 +330,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 						 &ipv6_hdr(skb)->daddr, 0,
 						 IPPROTO_TCP, 0);
 		}
-
-		if (skb->encapsulation) {
-			ihs = skb_inner_tcp_all_headers(skb);
-			u64_stats_update_begin(&tx_stats->syncp);
-			tx_stats->tso_inner_packets++;
-			tx_stats->tso_inner_bytes += skb->len - ihs;
-			u64_stats_update_end(&tx_stats->syncp);
-		} else {
-			if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
-				ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
-			} else {
-				ihs = skb_tcp_all_headers(skb);
-			}
-
-			u64_stats_update_begin(&tx_stats->syncp);
-			tx_stats->tso_packets++;
-			tx_stats->tso_bytes += skb->len - ihs;
-			u64_stats_update_end(&tx_stats->syncp);
-		}
-
 	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
 		csum_type = mana_checksum_info(skb);

@@ -292,11 +352,25 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		} else {
 			/* Can't do offload of this type of checksum */
 			if (skb_checksum_help(skb))
-				goto free_sgl_ptr;
+				goto tx_drop_count;
 		}
 	}

-	if (mana_map_skb(skb, apc, &pkg)) {
+	WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
+
+	if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
+		pkg.wqe_req.sgl = pkg.sgl_array;
+	} else {
+		pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
+					    sizeof(struct gdma_sge),
+					    GFP_ATOMIC);
+		if (!pkg.sgl_ptr)
+			goto tx_drop_count;
+
+		pkg.wqe_req.sgl = pkg.sgl_ptr;
+	}
+
+	if (mana_map_skb(skb, apc, &pkg, gso_hs)) {
 		u64_stats_update_begin(&tx_stats->syncp);
 		tx_stats->mana_map_err++;
 		u64_stats_update_end(&tx_stats->syncp);
@@ -1254,11 +1328,16 @@ static void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc)
 	struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
 	struct device *dev = gc->dev;
-	int i;
+	int hsg, i;
+
+	/* Number of SGEs of linear part */
+	hsg = (skb_is_gso(skb) && skb_headlen(skb) > ash->size[0]) ? 2 : 1;

-	dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
+	for (i = 0; i < hsg; i++)
+		dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
+				 DMA_TO_DEVICE);

-	for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++)
+	for (i = hsg; i < skb_shinfo(skb)->nr_frags + hsg; i++)
 		dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
 			       DMA_TO_DEVICE);
 }
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index b12859511839..ac8eb69837c8 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -101,9 +101,10 @@ struct mana_txq {

 /* skb data and frags dma mappings */
 struct mana_skb_head {
-	dma_addr_t dma_handle[MAX_SKB_FRAGS + 1];
+	/* GSO pkts may have 2 SGEs for the linear part*/
+	dma_addr_t dma_handle[MAX_SKB_FRAGS + 2];

-	u32 size[MAX_SKB_FRAGS + 1];
+	u32 size[MAX_SKB_FRAGS + 2];
 };

 #define MANA_HEADROOM sizeof(struct mana_skb_head)
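
For reviewers, a minimal user-space model of the SGE layout this patch
introduces, mirroring the mana_fix_skb_head()/mana_map_skb() arithmetic: when
a GSO skb's linear part is longer than the protocol headers (gso_hs), SGE0
gets only the headers, SGE1 gets the rest of the linear data, and the page
fragments follow. This is not driver code; plan_sges, struct sge_layout, and
MAX_SKB_FRAGS_MODEL (a stand-in for the kernel's MAX_SKB_FRAGS) are invented
for illustration.

	#include <stdio.h>

	#define MAX_SKB_FRAGS_MODEL 17	/* illustrative value */

	struct sge_layout {
		int num_sge;	/* total SGEs */
		int hsg;	/* SGEs used for the linear part (1 or 2) */
		int sge0_len;	/* header-only when the linear part is split */
		int sge1_len;	/* remaining linear bytes, 0 if not split */
	};

	/* Returns -1 when the headers do not fit in the linear part
	 * (the driver drops such a packet) or the SGE count is too large.
	 */
	static int plan_sges(int gso_hs, int skb_hlen, int nr_frags,
			     struct sge_layout *out)
	{
		out->hsg = 1;
		out->sge0_len = skb_hlen;
		out->sge1_len = 0;
		out->num_sge = 1 + nr_frags;

		if (gso_hs && gso_hs < skb_hlen) {
			/* split the linear part: headers in SGE0, rest in SGE1 */
			out->hsg = 2;
			out->sge0_len = gso_hs;
			out->sge1_len = skb_hlen - gso_hs;
			out->num_sge++;
		} else if (gso_hs > skb_hlen) {
			return -1;	/* header spills into frags: drop */
		}

		/* why dma_handle[]/size[] grow to MAX_SKB_FRAGS + 2 above */
		if (out->num_sge > MAX_SKB_FRAGS_MODEL + 2)
			return -1;

		return 0;
	}

	int main(void)
	{
		struct sge_layout l;

		/* e.g. 54 header bytes plus 1400 payload bytes in the linear part */
		if (plan_sges(54, 1454, 3, &l) == 0)
			printf("num_sge=%d hsg=%d sge0=%d sge1=%d\n",
			       l.num_sge, l.hsg, l.sge0_len, l.sge1_len);
		return 0;
	}

In this example the layout comes out as num_sge=5 with hsg=2, SGE0=54 bytes
and SGE1=1400 bytes, which is the split mana_map_skb() now programs and
mana_unmap_skb() tears down symmetrically.
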