From patchwork Wed Jun 22 05:17:26 2022
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1646331
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][PATCH 1/6] blk-mq: blk-mq: provide forced completion method
Date: Wed, 22 Jun 2022 17:17:26 +1200
Message-Id: <20220622051731.23563-2-matthew.ruffell@canonical.com>
In-Reply-To: <20220622051731.23563-1-matthew.ruffell@canonical.com>
References: <20220622051731.23563-1-matthew.ruffell@canonical.com>

From: Keith Busch

BugLink: https://bugs.launchpad.net/bugs/1896350

Drivers may need to bypass error injection for error recovery. Rename
__blk_mq_complete_request() to blk_mq_force_complete_rq() and export that
function so drivers may skip potential fake timeouts after they've
reclaimed lost requests.

Signed-off-by: Keith Busch
Reviewed-by: Daniel Wagner
Signed-off-by: Jens Axboe
(cherry picked from commit 7b11eab041dacfeaaa6d27d9183b247a995bc16d)
Signed-off-by: Matthew Ruffell
---
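Note for context, not part of the patch: below is a minimal sketch of how a
driver's error-recovery path could use the newly exported helper once this
change lands. After giving up on the hardware, it marks each in-flight request
as failed and completes it through blk_mq_force_complete_rq(), so that the
fail_io_timeout injection checked by blk_mq_complete_request() cannot drop the
completion. The mydrv_* names and struct mydrv_request are hypothetical;
blk_mq_tagset_busy_iter(), blk_mq_rq_to_pdu() and BLK_STS_IOERR are existing
block-layer APIs. Only the completion call differs from the usual
cancel-request pattern.

/*
 * Illustrative sketch only: a hypothetical driver's recovery path using the
 * newly exported blk_mq_force_complete_rq(). The mydrv_* names are invented.
 */
#include <linux/blk-mq.h>
#include <linux/blkdev.h>

struct mydrv_request {
	blk_status_t status;	/* consumed later by the driver's ->complete() */
};

/* Called via blk_mq_tagset_busy_iter() for every in-flight request once the
 * driver has decided the hardware will never complete them. */
static bool mydrv_cancel_request(struct request *rq, void *data, bool reserved)
{
	struct mydrv_request *mrq = blk_mq_rq_to_pdu(rq);

	mrq->status = BLK_STS_IOERR;

	/*
	 * blk_mq_complete_request() honours fail_io_timeout injection and may
	 * drop this completion to simulate a timeout; the forced variant
	 * always invokes the driver's ->complete() handler.
	 */
	blk_mq_force_complete_rq(rq);
	return true;	/* keep iterating over the tag set */
}

/* Recovery entry point: complete every request the hardware lost. */
static void mydrv_reclaim_lost_requests(struct blk_mq_tag_set *set)
{
	blk_mq_tagset_busy_iter(set, mydrv_cancel_request, NULL);
}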
 block/blk-mq.c         | 15 +++++++++++++--
 include/linux/blk-mq.h |  1 +
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 84798d09ca46..82e93cd9f60d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -579,7 +579,17 @@ static void __blk_mq_complete_request_remote(void *data)
 	q->mq_ops->complete(rq);
 }
 
-static void __blk_mq_complete_request(struct request *rq)
+/**
+ * blk_mq_force_complete_rq() - Force complete the request, bypassing any error
+ *				injection that could drop the completion.
+ * @rq: Request to be force completed
+ *
+ * Drivers should use blk_mq_complete_request() to complete requests in their
+ * normal IO path. For timeout error recovery, drivers may call this forced
+ * completion routine after they've reclaimed timed out requests to bypass
+ * potentially subsequent fake timeouts.
+ */
+void blk_mq_force_complete_rq(struct request *rq)
 {
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct request_queue *q = rq->q;
@@ -625,6 +635,7 @@ static void __blk_mq_complete_request(struct request *rq)
 	}
 	put_cpu();
 }
+EXPORT_SYMBOL_GPL(blk_mq_force_complete_rq);
 
 static void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
 	__releases(hctx->srcu)
@@ -658,7 +669,7 @@ bool blk_mq_complete_request(struct request *rq)
 {
 	if (unlikely(blk_should_fake_timeout(rq->q)))
 		return false;
-	__blk_mq_complete_request(rq);
+	blk_mq_force_complete_rq(rq);
 	return true;
 }
 EXPORT_SYMBOL(blk_mq_complete_request);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 0bf056de5cc3..92b48a8e4af3 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -312,6 +312,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
 void blk_mq_kick_requeue_list(struct request_queue *q);
 void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
 bool blk_mq_complete_request(struct request *rq);
+void blk_mq_force_complete_rq(struct request *rq);
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 		struct bio *bio, unsigned int nr_segs);
 bool blk_mq_queue_stopped(struct request_queue *q);