From patchwork Thu Oct 29 03:07:31 2020
X-Patchwork-Submitter: Matthew Ruffell <matthew.ruffell@canonical.com>
X-Patchwork-Id: 1389816
From: Matthew Ruffell <matthew.ruffell@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][F][G][PATCH 3/7] md/raid10: pull codes that wait for blocked dev into one function
Date: Thu, 29 Oct 2020 16:07:31 +1300
Message-Id: <20201029030737.21204-5-matthew.ruffell@canonical.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

The following patch will reuse this logic, so pull the duplicated code
into one function.

Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(cherry picked from commit f046f5d0d79cdb968f219ce249e497fd1accf484)
Signed-off-by: Matthew Ruffell
---
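Note for reviewers, kept below the "---" so it stays out of the commit
message: the new wait_blocked_dev() helper is a "scan, claim, wait,
retry" loop. The minimal userspace C sketch below shows only that shape;
the names wait_blocked_devs, wait_unblocked and struct dev are
illustrative stand-ins, not the md API, and the real helper additionally
takes a pending reference on the claimed rdev under RCU and
drops/retakes the resync barrier around the sleep.

	#include <stdbool.h>
	#include <stdio.h>

	#define NCOPIES 4

	struct dev {
		int id;
		bool blocked;	/* analogue of the Blocked rdev flag */
	};

	/* Stand-in for md_wait_for_blocked_rdev(): the kernel sleeps
	 * here until the device is unblocked; the sketch just clears
	 * the flag. */
	static void wait_unblocked(struct dev *d)
	{
		printf("waiting for dev %d to unblock\n", d->id);
		d->blocked = false;
	}

	static void wait_blocked_devs(struct dev devs[], int n)
	{
		struct dev *blocked;
		int i;

	retry_wait:
		blocked = NULL;
		for (i = 0; i < n; i++) {	/* scan every copy */
			if (devs[i].blocked) {
				/* claim the first blocked device */
				blocked = &devs[i];
				break;
			}
		}
		if (blocked) {
			/* kernel: allow_barrier(); sleep; wait_barrier(); */
			wait_unblocked(blocked);
			/* rescan: another copy may have blocked meanwhile */
			goto retry_wait;
		}
	}

	int main(void)
	{
		struct dev devs[NCOPIES] = {
			{0, false}, {1, true}, {2, false}, {3, true},
		};

		wait_blocked_devs(devs, NCOPIES);
		puts("no copy blocked; safe to set up writes");
		return 0;
	}

The payoff of hoisting this loop to the top of raid10_write_request()
shows in the final hunk: the old retry path had to walk back over the
bios it had already set up and drop their nr_pending references before
sleeping; running the wait before any setup makes that unwind code
unnecessary.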
 drivers/md/raid10.c | 118 +++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 51 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d37a83fd1ccf..51c483d71562 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1292,12 +1292,75 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	}
 }
 
+static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
+{
+	int i;
+	struct r10conf *conf = mddev->private;
+	struct md_rdev *blocked_rdev;
+
+retry_wait:
+	blocked_rdev = NULL;
+	rcu_read_lock();
+	for (i = 0; i < conf->copies; i++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[i].replacement);
+		if (rdev == rrdev)
+			rrdev = NULL;
+		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+			atomic_inc(&rdev->nr_pending);
+			blocked_rdev = rdev;
+			break;
+		}
+		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
+			atomic_inc(&rrdev->nr_pending);
+			blocked_rdev = rrdev;
+			break;
+		}
+
+		if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+			sector_t first_bad;
+			sector_t dev_sector = r10_bio->devs[i].addr;
+			int bad_sectors;
+			int is_bad;
+
+			/* Discard request doesn't care the write result
+			 * so it doesn't need to wait blocked disk here.
+			 */
+			if (!r10_bio->sectors)
+				continue;
+
+			is_bad = is_badblock(rdev, dev_sector, r10_bio->sectors,
+					     &first_bad, &bad_sectors);
+			if (is_bad < 0) {
+				/* Mustn't write here until the bad block
+				 * is acknowledged
+				 */
+				atomic_inc(&rdev->nr_pending);
+				set_bit(BlockedBadBlocks, &rdev->flags);
+				blocked_rdev = rdev;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (unlikely(blocked_rdev)) {
+		/* Have to wait for this device to get unblocked, then retry */
+		allow_barrier(conf);
+		raid10_log(conf->mddev, "%s wait rdev %d blocked",
+				__func__, blocked_rdev->raid_disk);
+		md_wait_for_blocked_rdev(blocked_rdev, mddev);
+		wait_barrier(conf);
+		goto retry_wait;
+	}
+}
+
 static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 				 struct r10bio *r10_bio)
 {
 	struct r10conf *conf = mddev->private;
 	int i;
-	struct md_rdev *blocked_rdev;
 	sector_t sectors;
 	int max_sectors;
 
@@ -1355,8 +1418,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
 	raid10_find_phys(conf, r10_bio);
-retry_write:
-	blocked_rdev = NULL;
+
+	wait_blocked_dev(mddev, r10_bio);
+
 	rcu_read_lock();
 	max_sectors = r10_bio->sectors;
 
@@ -1367,16 +1431,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			conf->mirrors[d].replacement);
 		if (rdev == rrdev)
 			rrdev = NULL;
-		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
-			atomic_inc(&rdev->nr_pending);
-			blocked_rdev = rdev;
-			break;
-		}
-		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
-			atomic_inc(&rrdev->nr_pending);
-			blocked_rdev = rrdev;
-			break;
-		}
 		if (rdev && (test_bit(Faulty, &rdev->flags)))
 			rdev = NULL;
 		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
@@ -1397,15 +1451,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 
 			is_bad = is_badblock(rdev, dev_sector, max_sectors,
 					     &first_bad, &bad_sectors);
-			if (is_bad < 0) {
-				/* Mustn't write here until the bad block
-				 * is acknowledged
-				 */
-				atomic_inc(&rdev->nr_pending);
-				set_bit(BlockedBadBlocks, &rdev->flags);
-				blocked_rdev = rdev;
-				break;
-			}
 			if (is_bad && first_bad <= dev_sector) {
 				/* Cannot write here at all */
 				bad_sectors -= (dev_sector - first_bad);
@@ -1441,35 +1486,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}
 	rcu_read_unlock();
 
-	if (unlikely(blocked_rdev)) {
-		/* Have to wait for this device to get unblocked, then retry */
-		int j;
-		int d;
-
-		for (j = 0; j < i; j++) {
-			if (r10_bio->devs[j].bio) {
-				d = r10_bio->devs[j].devnum;
-				rdev_dec_pending(conf->mirrors[d].rdev, mddev);
-			}
-			if (r10_bio->devs[j].repl_bio) {
-				struct md_rdev *rdev;
-				d = r10_bio->devs[j].devnum;
-				rdev = conf->mirrors[d].replacement;
-				if (!rdev) {
-					/* Race with remove_disk */
-					smp_mb();
-					rdev = conf->mirrors[d].rdev;
-				}
-				rdev_dec_pending(rdev, mddev);
-			}
-		}
-		allow_barrier(conf);
-		raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
-		md_wait_for_blocked_rdev(blocked_rdev, mddev);
-		wait_barrier(conf);
-		goto retry_write;
-	}
-
 	if (max_sectors < r10_bio->sectors)
 		r10_bio->sectors = max_sectors;