From patchwork Thu Aug 29 18:58:28 2024
X-Patchwork-Submitter: Vinicius Peixoto
X-Patchwork-Id: 1978593
From: Vinicius Peixoto
To: kernel-team@lists.ubuntu.com
Subject: [SRU][j:linux-azure][PATCH 1/1] net: mana: Fix race of
 mana_hwc_post_rx_wqe and new hwc response
Date: Thu, 29 Aug 2024 15:58:28 -0300
Message-ID: <20240829185830.76913-2-vinicius.peixoto@canonical.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240829185830.76913-1-vinicius.peixoto@canonical.com>
References: <20240829185830.76913-1-vinicius.peixoto@canonical.com>

From: Haiyang Zhang

BugLink: https://bugs.launchpad.net/bugs/2078001

The mana_hwc_rx_event_handler() / mana_hwc_handle_resp() path calls
complete(&ctx->comp_event) before posting the wqe back. It's possible
that other callers, like mana_create_txq(), start the next round of
mana_hwc_send_request() before the wqe is posted back. If the HW is
fast enough to respond, it can hit a no_wqe error on the HW channel,
and the response message is lost. The mana driver may then fail to
create queues and open the device, because the wait for the HW
response times out.

Sample dmesg:
[ 528.610840] mana 39d4:00:02.0: HWC: Request timed out!
[ 528.614452] mana 39d4:00:02.0: Failed to send mana message: -110, 0x0
[ 528.618326] mana 39d4:00:02.0 enP14804s2: Failed to create WQ object: -110

To fix it, move the posting of the rx wqe before complete(&ctx->comp_event).

Cc: stable@vger.kernel.org
Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)")
Signed-off-by: Haiyang Zhang
Reviewed-by: Long Li
Signed-off-by: David S. Miller
(cherry picked from commit 8af174ea863c72f25ce31cee3baad8a301c0cf0f netdev)
Signed-off-by: Vinicius Peixoto
---
 .../net/ethernet/microsoft/mana/hw_channel.c | 62 ++++++++++---------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
index c71cf2d22773..b35b3f135089 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -51,9 +51,33 @@ static int mana_hwc_verify_resp_msg(const struct hwc_caller_ctx *caller_ctx,
 	return 0;
 }
 
+static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
+				struct hwc_work_request *req)
+{
+	struct device *dev = hwc_rxq->hwc->dev;
+	struct gdma_sge *sge;
+	int err;
+
+	sge = &req->sge;
+	sge->address = (u64)req->buf_sge_addr;
+	sge->mem_key = hwc_rxq->msg_buf->gpa_mkey;
+	sge->size = req->buf_len;
+
+	memset(&req->wqe_req, 0, sizeof(struct gdma_wqe_request));
+	req->wqe_req.sgl = sge;
+	req->wqe_req.num_sge = 1;
+	req->wqe_req.client_data_unit = 0;
+
+	err = mana_gd_post_and_ring(hwc_rxq->gdma_wq, &req->wqe_req, NULL);
+	if (err)
+		dev_err(dev, "Failed to post WQE on HWC RQ: %d\n", err);
+	return err;
+}
+
 static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
-				 const struct gdma_resp_hdr *resp_msg)
+				 struct hwc_work_request *rx_req)
 {
+	const struct gdma_resp_hdr *resp_msg = rx_req->buf_va;
 	struct hwc_caller_ctx *ctx;
 	int err;
 
@@ -61,6 +85,7 @@ static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
 			      hwc->inflight_msg_res.map)) {
 		dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n",
 			resp_msg->response.hwc_msg_id);
+		mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
 		return;
 	}
 
@@ -74,30 +99,13 @@ static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
 	memcpy(ctx->output_buf, resp_msg, resp_len);
 out:
 	ctx->error = err;
-	complete(&ctx->comp_event);
-}
-
-static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
-				struct hwc_work_request *req)
-{
-	struct device *dev = hwc_rxq->hwc->dev;
-	struct gdma_sge *sge;
-	int err;
-
-	sge = &req->sge;
-	sge->address = (u64)req->buf_sge_addr;
-	sge->mem_key = hwc_rxq->msg_buf->gpa_mkey;
-	sge->size = req->buf_len;
 
-	memset(&req->wqe_req, 0, sizeof(struct gdma_wqe_request));
-	req->wqe_req.sgl = sge;
-	req->wqe_req.num_sge = 1;
-	req->wqe_req.client_data_unit = 0;
+	/* Must post rx wqe before complete(), otherwise the next rx may
+	 * hit no_wqe error.
+	 */
+	mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
 
-	err = mana_gd_post_and_ring(hwc_rxq->gdma_wq, &req->wqe_req, NULL);
-	if (err)
-		dev_err(dev, "Failed to post WQE on HWC RQ: %d\n", err);
-	return err;
+	complete(&ctx->comp_event);
 }
 
 static void mana_hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
@@ -216,14 +224,12 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
 		return;
 	}
 
-	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, resp);
+	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, rx_req);
 
-	/* Do no longer use 'resp', because the buffer is posted to the HW
-	 * in the below mana_hwc_post_rx_wqe().
+	/* Can no longer use 'resp', because the buffer is posted to the HW
+	 * in mana_hwc_handle_resp() above.
 	 */
 	resp = NULL;
-
-	mana_hwc_post_rx_wqe(hwc_rxq, rx_req);
 }
 
 static void mana_hwc_tx_event_handler(void *ctx, u32 gdma_txq_id,
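
Note (not part of the patch): below is a minimal user-space sketch of the
ordering the fix enforces: the receive buffer must be reposted before the
requester is woken, otherwise the requester's next request (and a fast HW
response) can find the receive queue empty. The names rx_wqe_posted,
comp_event and resp_handler are invented for this illustration only and do
not exist in the driver; this mirrors the idea, not the actual GDMA/HWC code.

/* cc -pthread ordering_sketch.c && ./a.out */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins for this illustration only. */
static atomic_bool rx_wqe_posted = false; /* "is a receive WQE posted?"     */
static atomic_bool comp_event    = false; /* stands in for ctx->comp_event  */

/* Response handler, ordered as in the fix: repost the RX buffer first, then
 * wake the requester.  Swapping these two stores reintroduces the window in
 * which the requester can observe the completion while no RX WQE is posted.
 */
static void *resp_handler(void *unused)
{
	(void)unused;
	atomic_store(&rx_wqe_posted, true); /* mana_hwc_post_rx_wqe() in the driver */
	atomic_store(&comp_event, true);    /* complete(&ctx->comp_event)           */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, resp_handler, NULL);

	/* Requester side: wait for the completion, then immediately issue the
	 * next request.  With the fixed ordering, observing comp_event == true
	 * guarantees rx_wqe_posted == true as well.
	 */
	while (!atomic_load(&comp_event))
		; /* busy-wait keeps the sketch short */

	printf("next request sent, RX WQE posted: %s\n",
	       atomic_load(&rx_wqe_posted) ? "yes (response can land)"
					   : "no (no_wqe, response lost)");

	pthread_join(t, NULL);
	return 0;
}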