From patchwork Thu May 6 04:04:36 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1474775
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][F][G][H][PATCH 3/6] md/raid10: pull the code that wait for blocked dev into one function
Date: Thu, 6 May 2021 16:04:36 +1200
Message-Id: <20210506040442.10877-6-matthew.ruffell@canonical.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210506040442.10877-1-matthew.ruffell@canonical.com>
References: <20210506040442.10877-1-matthew.ruffell@canonical.com>
MIME-Version: 1.0

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

The following patch will reuse this logic, so pull the common code into
one function.

Tested-by: Adrian Huang
Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(cherry picked from commit f2e7e269a7525317752d472bb48a549780e87d22)
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 120 +++++++++++++++++++++++++-------------------
 1 file changed, 69 insertions(+), 51 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 8b132ae8279c..789f575503de 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1273,12 +1273,77 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	}
 }
 
+static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
+{
+	int i;
+	struct r10conf *conf = mddev->private;
+	struct md_rdev *blocked_rdev;
+
+retry_wait:
+	blocked_rdev = NULL;
+	rcu_read_lock();
+	for (i = 0; i < conf->copies; i++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[i].replacement);
+		if (rdev == rrdev)
+			rrdev = NULL;
+		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+			atomic_inc(&rdev->nr_pending);
+			blocked_rdev = rdev;
+			break;
+		}
+		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
+			atomic_inc(&rrdev->nr_pending);
+			blocked_rdev = rrdev;
+			break;
+		}
+
+		if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+			sector_t first_bad;
+			sector_t dev_sector = r10_bio->devs[i].addr;
+			int bad_sectors;
+			int is_bad;
+
+			/*
+			 * Discard request doesn't care the write result
+			 * so it doesn't need to wait blocked disk here.
+			 */
+			if (!r10_bio->sectors)
+				continue;
+
+			is_bad = is_badblock(rdev, dev_sector, r10_bio->sectors,
+					     &first_bad, &bad_sectors);
+			if (is_bad < 0) {
+				/*
+				 * Mustn't write here until the bad block
+				 * is acknowledged
+				 */
+				atomic_inc(&rdev->nr_pending);
+				set_bit(BlockedBadBlocks, &rdev->flags);
+				blocked_rdev = rdev;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (unlikely(blocked_rdev)) {
+		/* Have to wait for this device to get unblocked, then retry */
+		allow_barrier(conf);
+		raid10_log(conf->mddev, "%s wait rdev %d blocked",
+				__func__, blocked_rdev->raid_disk);
+		md_wait_for_blocked_rdev(blocked_rdev, mddev);
+		wait_barrier(conf);
+		goto retry_wait;
+	}
+}
+
 static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 				 struct r10bio *r10_bio)
 {
 	struct r10conf *conf = mddev->private;
 	int i;
-	struct md_rdev *blocked_rdev;
 	sector_t sectors;
 	int max_sectors;
 
@@ -1336,8 +1401,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
 	raid10_find_phys(conf, r10_bio);
 
-retry_write:
-	blocked_rdev = NULL;
+
+	wait_blocked_dev(mddev, r10_bio);
+
 	rcu_read_lock();
 	max_sectors = r10_bio->sectors;
 
@@ -1348,16 +1414,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			conf->mirrors[d].replacement);
 		if (rdev == rrdev)
 			rrdev = NULL;
-		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
-			atomic_inc(&rdev->nr_pending);
-			blocked_rdev = rdev;
-			break;
-		}
-		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
-			atomic_inc(&rrdev->nr_pending);
-			blocked_rdev = rrdev;
-			break;
-		}
 		if (rdev && (test_bit(Faulty, &rdev->flags)))
 			rdev = NULL;
 		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
@@ -1378,15 +1434,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 
 			is_bad = is_badblock(rdev, dev_sector, max_sectors,
 					     &first_bad, &bad_sectors);
-			if (is_bad < 0) {
-				/* Mustn't write here until the bad block
-				 * is acknowledged
-				 */
-				atomic_inc(&rdev->nr_pending);
-				set_bit(BlockedBadBlocks, &rdev->flags);
-				blocked_rdev = rdev;
-				break;
-			}
 			if (is_bad && first_bad <= dev_sector) {
 				/* Cannot write here at all */
 				bad_sectors -= (dev_sector - first_bad);
@@ -1422,35 +1469,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}
 	rcu_read_unlock();
 
-	if (unlikely(blocked_rdev)) {
-		/* Have to wait for this device to get unblocked, then retry */
-		int j;
-		int d;
-
-		for (j = 0; j < i; j++) {
-			if (r10_bio->devs[j].bio) {
-				d = r10_bio->devs[j].devnum;
-				rdev_dec_pending(conf->mirrors[d].rdev, mddev);
-			}
-			if (r10_bio->devs[j].repl_bio) {
-				struct md_rdev *rdev;
-				d = r10_bio->devs[j].devnum;
-				rdev = conf->mirrors[d].replacement;
-				if (!rdev) {
-					/* Race with remove_disk */
-					smp_mb();
-					rdev = conf->mirrors[d].rdev;
-				}
-				rdev_dec_pending(rdev, mddev);
-			}
-		}
-		allow_barrier(conf);
-		raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
-		md_wait_for_blocked_rdev(blocked_rdev, mddev);
-		wait_barrier(conf);
-		goto retry_write;
-	}
-
 	if (max_sectors < r10_bio->sectors)
 		r10_bio->sectors = max_sectors;
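
For reviewers less familiar with md/raid10: the hunks above move the "find a
blocked rdev, drop the barrier, wait for it to be unblocked, retry the scan"
logic out of raid10_write_request() and into the new wait_blocked_dev()
helper, so that a later patch in this series can reuse it. Below is a minimal
userspace sketch of that wait-then-retry pattern, offered purely as an
illustration: pthread locking stands in for the kernel's RCU and barrier
primitives, and every name in it (struct mirror, wait_blocked_mirrors(),
unblock_after_delay()) is hypothetical, not part of the kernel code.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define MIRRORS 4

struct mirror {
	bool blocked;               /* stands in for the Blocked rdev flag */
	pthread_mutex_t lock;
	pthread_cond_t unblocked;   /* signalled when blocked clears */
};

static struct mirror mirrors[MIRRORS];

/* Scan every mirror; if one is blocked, wait for it, then rescan them all. */
static void wait_blocked_mirrors(void)
{
retry_wait:
	for (int i = 0; i < MIRRORS; i++) {
		struct mirror *m = &mirrors[i];

		pthread_mutex_lock(&m->lock);
		if (m->blocked) {
			/*
			 * Analogous to md_wait_for_blocked_rdev(): sleep
			 * until this device is usable again.
			 */
			while (m->blocked)
				pthread_cond_wait(&m->unblocked, &m->lock);
			pthread_mutex_unlock(&m->lock);
			/*
			 * Restart the whole scan, as the patch's
			 * "goto retry_wait" does: another device may have
			 * become blocked while we slept.
			 */
			goto retry_wait;
		}
		pthread_mutex_unlock(&m->lock);
	}
}

/* Simulate a device recovering after a short delay. */
static void *unblock_after_delay(void *arg)
{
	struct mirror *m = arg;

	sleep(1);
	pthread_mutex_lock(&m->lock);
	m->blocked = false;
	pthread_cond_signal(&m->unblocked);
	pthread_mutex_unlock(&m->lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	for (int i = 0; i < MIRRORS; i++) {
		pthread_mutex_init(&mirrors[i].lock, NULL);
		pthread_cond_init(&mirrors[i].unblocked, NULL);
	}

	mirrors[2].blocked = true;  /* one device starts out blocked */
	pthread_create(&t, NULL, unblock_after_delay, &mirrors[2]);

	wait_blocked_mirrors();     /* returns once no mirror is blocked */
	printf("all mirrors unblocked, safe to issue writes\n");

	pthread_join(t, NULL);
	return 0;
}

Build with "gcc -pthread" to try it. The property the sketch preserves, and
the one the patch keeps intact while consolidating the code, is that the scan
restarts from the first device after every wait, because other devices may
have become blocked while the caller slept.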