From patchwork Mon Jun 5 05:46:00 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gerald Yang
X-Patchwork-Id: 1790223
From: Gerald Yang
To: kernel-team@lists.ubuntu.com
Subject: [SRU][K][PATCH 7/8] sbitmap: fix batched wait_cnt accounting
Date: Mon, 5 Jun 2023 13:46:00 +0800
Message-Id: <20230605054601.1410517-8-gerald.yang@canonical.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230605054601.1410517-1-gerald.yang@canonical.com>
References: <20230605054601.1410517-1-gerald.yang@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" <kernel-team-bounces@lists.ubuntu.com>

From: Keith Busch

Batched completions can clear multiple bits, but we're only decrementing
the wait_cnt by one each time. This can cause waiters to never be woken,
stalling IO. Use the batched count instead.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=215679
Signed-off-by: Keith Busch
Link: https://lore.kernel.org/r/20220909184022.1709476-1-kbusch@fb.com
Signed-off-by: Jens Axboe
Signed-off-by: Gerald Yang
---
 block/blk-mq-tag.c      |  2 +-
 include/linux/sbitmap.h |  3 ++-
 lib/sbitmap.c           | 37 +++++++++++++++++++++++--------------
 3 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2dcd738c6952..7aea93047caf 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -200,7 +200,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		 * other allocations on previous queue won't be starved.
 		 */
 		if (bt != bt_prev)
-			sbitmap_queue_wake_up(bt_prev);
+			sbitmap_queue_wake_up(bt_prev, 1);
 
 		ws = bt_wait_ptr(bt, data->hctx);
 	} while (1);

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 8f5a86e210b9..4d2d5205ab58 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -575,8 +575,9 @@ void sbitmap_queue_wake_all(struct sbitmap_queue *sbq);
  * sbitmap_queue_wake_up() - Wake up some of waiters in one waitqueue
  * on a &struct sbitmap_queue.
  * @sbq: Bitmap queue to wake up.
+ * @nr: Number of bits cleared.
  */
-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr);
 
 /**
  * sbitmap_queue_show() - Dump &struct sbitmap_queue information to a &struct

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index cbfd2e677d87..624fa7f118d1 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -599,24 +599,31 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 	return NULL;
 }
 
-static bool __sbq_wake_up(struct sbitmap_queue *sbq)
+static bool __sbq_wake_up(struct sbitmap_queue *sbq, int *nr)
 {
 	struct sbq_wait_state *ws;
 	unsigned int wake_batch;
-	int wait_cnt;
+	int wait_cnt, cur, sub;
 	bool ret;
 
+	if (*nr <= 0)
+		return false;
+
 	ws = sbq_wake_ptr(sbq);
 	if (!ws)
 		return false;
 
-	wait_cnt = atomic_dec_return(&ws->wait_cnt);
-	/*
-	 * For concurrent callers of this, callers should call this function
-	 * again to wakeup a new batch on a different 'ws'.
-	 */
-	if (wait_cnt < 0)
-		return true;
+	cur = atomic_read(&ws->wait_cnt);
+	do {
+		/*
+		 * For concurrent callers of this, callers should call this
+		 * function again to wakeup a new batch on a different 'ws'.
+		 */
+		if (cur == 0)
+			return true;
+		sub = min(*nr, cur);
+		wait_cnt = cur - sub;
+	} while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
 
 	/*
 	 * If we decremented queue without waiters, retry to avoid lost
@@ -625,6 +632,8 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	if (wait_cnt > 0)
 		return !waitqueue_active(&ws->wait);
 
+	*nr -= sub;
+
 	/*
 	 * When wait_cnt == 0, we have to be particularly careful as we are
 	 * responsible to reset wait_cnt regardless whether we've actually
@@ -660,12 +669,12 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	sbq_index_atomic_inc(&sbq->wake_index);
 	atomic_set(&ws->wait_cnt, wake_batch);
 
-	return ret;
+	return ret || *nr;
 }
 
-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
 {
-	while (__sbq_wake_up(sbq))
+	while (__sbq_wake_up(sbq, &nr))
 		;
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
@@ -705,7 +714,7 @@ void sbitmap_queue_clear_batch(struct sbitmap_queue *sbq, int offset,
 	atomic_long_andnot(mask, (atomic_long_t *) addr);
 
 	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq);
+	sbitmap_queue_wake_up(sbq, nr_tags);
 	sbitmap_update_cpu_hint(&sbq->sb, raw_smp_processor_id(),
 					tags[nr_tags - 1] - offset);
 }
@@ -733,7 +742,7 @@ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
 	 * waiter. See the comment on waitqueue_active().
 	 */
 	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq);
+	sbitmap_queue_wake_up(sbq, 1);
 	sbitmap_update_cpu_hint(&sbq->sb, cpu, nr);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_clear);
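
For readers following the accounting change without the surrounding sbitmap
code, below is a minimal userspace sketch of the idea; it is illustrative
only, not the kernel implementation. It assumes C11 <stdatomic.h> in place
of the kernel's atomic_try_cmpxchg(), collapses the per-waitqueue rotation
done by sbq_wake_ptr() into a single counter that is rearmed in place, and
reduces the real wakeup to a printf; wake_up_nr() and WAKE_BATCH are
invented names for the demo. What it demonstrates is the fix itself: a
batch of nr cleared bits is charged against wait_cnt in one cmpxchg loop
(sub = min(nr, cur)), instead of decrementing by one regardless of batch
size, which is what let wait_cnt drift and strand waiters.

/*
 * Illustrative userspace model of the batched wait_cnt accounting; an
 * assumption-laden sketch, not kernel code. C11 atomics stand in for the
 * kernel's atomic_t helpers, a single counter replaces the rotating
 * sbq_wait_state array, and the wakeup is reduced to a printf.
 */
#include <stdatomic.h>
#include <stdio.h>

#define WAKE_BATCH 8	/* invented stand-in for sbq->wake_batch */

static atomic_int wait_cnt = WAKE_BATCH;

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Charge a batch of 'nr' cleared bits against wait_cnt in one pass. */
static void wake_up_nr(int nr)
{
	while (nr > 0) {
		int cur = atomic_load(&wait_cnt);
		int sub, next;

		do {
			if (cur == 0) {
				/* Batch exhausted: wake waiters, rearm. */
				printf("wake batch, rearm wait_cnt to %d\n",
				       WAKE_BATCH);
				atomic_store(&wait_cnt, WAKE_BATCH);
				cur = WAKE_BATCH;
			}
			/* Take as much of the batch as wait_cnt absorbs. */
			sub = min_int(nr, cur);
			next = cur - sub;
		} while (!atomic_compare_exchange_weak(&wait_cnt, &cur,
						       next));

		/* Account the whole consumed chunk, not just one bit. */
		nr -= sub;
	}
}

int main(void)
{
	wake_up_nr(3);	/* a batched completion clearing 3 tags */
	wake_up_nr(6);	/* crosses the batch boundary: one wake fires */
	printf("wait_cnt now %d\n", atomic_load(&wait_cnt));
	return 0;
}

Running this, the second call crosses the batch boundary exactly once, so
exactly one wake fires and wait_cnt ends at 7. With the old
one-decrement-per-call behaviour, the same two calls would have consumed
only 2 of the 9 cleared bits, leaving wait_cnt at 6 and any waiters
stalled, which is the bug the patch fixes.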