From patchwork Mon Jun 5 05:45:56 2023
X-Patchwork-Submitter: Gerald Yang
X-Patchwork-Id: 1790220
From: Gerald Yang
To: kernel-team@lists.ubuntu.com
Subject: [SRU][K][PATCH 3/8] sbitmap: fix batched wait_cnt accounting
Date: Mon, 5 Jun 2023 13:45:56 +0800
Message-Id: <20230605054601.1410517-4-gerald.yang@canonical.com>
In-Reply-To: <20230605054601.1410517-1-gerald.yang@canonical.com>
References: <20230605054601.1410517-1-gerald.yang@canonical.com>

From: Keith Busch

Batched completions can clear multiple bits, but we're only decrementing
the wait_cnt by one each time. This can cause waiters to never be woken,
stalling IO. Use the batched count instead.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=215679
Signed-off-by: Keith Busch
Link: https://lore.kernel.org/r/20220825145312.1217900-1-kbusch@fb.com
Signed-off-by: Jens Axboe
Signed-off-by: Gerald Yang
---
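Illustration (below the cut, so git-am ignores it): the crux of the fix is
that a batched completion of nr tags must debit ws->wait_cnt by nr, not 1,
so the single atomic_dec_return() becomes an atomic_try_cmpxchg() loop and
the wake-up becomes wake_up_nr(&ws->wait, max(wake_batch, nr)). Below is a
minimal userspace sketch of that accounting only, with C11 atomics standing
in for the kernel's atomic_t and the waitqueue elided; wait_cnt, wake_batch
and nr mirror the patch, everything else is illustrative, not kernel API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for ws->wait_cnt and sbq->wake_batch. */
static atomic_int wait_cnt;
static const int wake_batch = 8;

/* Old accounting: a batched completion of nr tags still debits only 1. */
static bool wake_old(int nr)
{
	(void)nr;	/* nr - 1 completions are lost; waiters can stall */
	return atomic_fetch_sub(&wait_cnt, 1) == 1;
}

/*
 * New accounting: debit the whole batch, as the patch does with
 * atomic_try_cmpxchg(). Returns true once a wake-up batch is owed;
 * at that point the patch calls wake_up_nr(&ws->wait, max(wake_batch, nr)).
 */
static bool wake_new(int nr)
{
	int cur = atomic_load(&wait_cnt);
	int next;

	do {
		if (cur <= 0)
			return true;	/* another caller owns the wake-up */
		next = cur - nr;
	} while (!atomic_compare_exchange_weak(&wait_cnt, &cur, next));

	return next <= 0;
}

int main(void)
{
	atomic_store(&wait_cnt, wake_batch);
	printf("old: woken=%d wait_cnt=%d (3 of 4 completions lost)\n",
	       wake_old(4), atomic_load(&wait_cnt));

	atomic_store(&wait_cnt, wake_batch);
	printf("new: woken=%d wait_cnt=%d (batch fully accounted)\n",
	       wake_new(4), atomic_load(&wait_cnt));
	return 0;
}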
 block/blk-mq-tag.c      |  2 +-
 include/linux/sbitmap.h |  3 ++-
 lib/sbitmap.c           | 31 +++++++++++++++++--------------
 3 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2dcd738c6952..7aea93047caf 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -200,7 +200,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
		 * other allocations on previous queue won't be starved.
		 */
		if (bt != bt_prev)
-			sbitmap_queue_wake_up(bt_prev);
+			sbitmap_queue_wake_up(bt_prev, 1);

		ws = bt_wait_ptr(bt, data->hctx);
	} while (1);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 8f5a86e210b9..4d2d5205ab58 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -575,8 +575,9 @@ void sbitmap_queue_wake_all(struct sbitmap_queue *sbq);
 * sbitmap_queue_wake_up() - Wake up some of waiters in one waitqueue
 * on a &struct sbitmap_queue.
 * @sbq: Bitmap queue to wake up.
+ * @nr: Number of bits cleared.
 */
-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr);

/**
 * sbitmap_queue_show() - Dump &struct sbitmap_queue information to a &struct
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index a39b1a877366..2fedf07a9db5 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -599,34 +599,38 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
	return NULL;
}

-static bool __sbq_wake_up(struct sbitmap_queue *sbq)
+static bool __sbq_wake_up(struct sbitmap_queue *sbq, int nr)
{
	struct sbq_wait_state *ws;
-	unsigned int wake_batch;
-	int wait_cnt;
+	int wake_batch, wait_cnt, cur;

	ws = sbq_wake_ptr(sbq);
-	if (!ws)
+	if (!ws || !nr)
		return false;

-	wait_cnt = atomic_dec_return(&ws->wait_cnt);
+	wake_batch = READ_ONCE(sbq->wake_batch);
+	cur = atomic_read(&ws->wait_cnt);
+	do {
+		if (cur <= 0)
+			return true;
+		wait_cnt = cur - nr;
+	} while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
+
	/*
	 * For concurrent callers of this, callers should call this function
	 * again to wakeup a new batch on a different 'ws'.
	 */
-	if (wait_cnt < 0 || !waitqueue_active(&ws->wait))
+	if (!waitqueue_active(&ws->wait))
		return true;

	if (wait_cnt > 0)
		return false;

-	wake_batch = READ_ONCE(sbq->wake_batch);
-
	/*
	 * Wake up first in case that concurrent callers decrease wait_cnt
	 * while waitqueue is empty.
	 */
-	wake_up_nr(&ws->wait, wake_batch);
+	wake_up_nr(&ws->wait, max(wake_batch, nr));

	/*
	 * Pairs with the memory barrier in sbitmap_queue_resize() to
@@ -651,12 +655,11 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
	return false;
}

-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
{
-	while (__sbq_wake_up(sbq))
+	while (__sbq_wake_up(sbq, nr))
		;
}
-EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);

static inline void sbitmap_update_cpu_hint(struct sbitmap *sb, int cpu, int tag)
{
@@ -693,7 +696,7 @@ void sbitmap_queue_clear_batch(struct sbitmap_queue *sbq, int offset,
		atomic_long_andnot(mask, (atomic_long_t *) addr);

	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq);
+	sbitmap_queue_wake_up(sbq, nr_tags);
	sbitmap_update_cpu_hint(&sbq->sb, raw_smp_processor_id(),
					tags[nr_tags - 1] - offset);
}
@@ -721,7 +724,7 @@ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
	 * waiter. See the comment on waitqueue_active().
	 */
	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq);
+	sbitmap_queue_wake_up(sbq, 1);
	sbitmap_update_cpu_hint(&sbq->sb, cpu, nr);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_clear);