From patchwork Wed Aug 17 08:51:34 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gerald Yang
X-Patchwork-Id: 1667152
From: Gerald Yang
To: kernel-team@lists.ubuntu.com
Subject: [SRU][jammy/linux-aws][kinetic/linux-aws][PATCH 06/20] UBUNTU: SAUCE: xen-blkfront: add callbacks for PM suspend and hibernation
Date: Wed, 17 Aug 2022 16:51:34 +0800
Message-Id: <20220817085150.2078055-9-gerald.yang@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220817085150.2078055-1-gerald.yang@canonical.com>
References: <20220817085150.2078055-1-gerald.yang@canonical.com>

From: Munehisa Kamata

BugLink: https://bugs.launchpad.net/bugs/1968062

Add freeze and restore callbacks for PM suspend and hibernation support.
The freeze handler stops the block-layer queue and disconnects the
frontend from the backend while freeing ring_info and associated
resources. The restore handler re-allocates ring_info and re-connects to
the backend, so the rest of the kernel can continue to use the block
device transparently. Also, the handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks for
Xen suspend without modification.

If a backend doesn't have commit 12ea729645ac ("xen/blkback: unmap all
persistent grants when frontend gets disconnected"), the frontend may see
a massive number of grant table warnings when freeing resources:

[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!

In this case, persistent grants would need to be disabled.

Ensure there are no reqs/rsps in the rings before disconnecting. When
disconnecting the frontend from the backend in blkfront_freeze(), there
may still be unconsumed requests or responses in the rings, especially
when the backend is backed by a network-based device. If the frontend
gets disconnected with such reqs/rsps remaining there, it can cause grant
warnings and/or the loss of reqs/rsps by freeing pages afterward. This
can put the resumed kernel into an unrecoverable state, such as the
unexpected freeing of a grant page and/or a hung task due to the lost
reqs or rsps. Therefore we have to ensure that there are no unconsumed
requests or responses before disconnecting.

In practice, the frontend just needs to wait for some amount of time so
that the backend can process the requests, post the responses and notify
the frontend back. The timeout used here is heuristic. If we somehow hit
the timeout, it would mean that something serious has happened in the
backend; the frontend will simply return an error to the PM core, and PM
suspend/hibernation will be aborted. This may be something that should be
fixed on the backend side, but a frontend-side fix is probably still
worth doing so that the driver works with a broader range of backends.
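For illustration, the drain-then-timeout pattern described above can be
modeled in a few lines of plain C. This is only a userspace sketch with
hypothetical names (ring_is_busy(), wait_for_ring_idle(), inflight); the
real driver uses jiffies, msleep() and the blkif ring macros, as shown
in blkfront_freeze() in the diff below:

  #include <stdbool.h>
  #include <stdio.h>
  #include <time.h>

  /* Stand-in for blkfront_ring_is_busy(): true while work is in flight. */
  static bool ring_is_busy(int inflight)
  {
      return inflight > 0;
  }

  /* Poll the ring, sleeping 25 ms per attempt, for at most timeout_ms. */
  static int wait_for_ring_idle(int *inflight, unsigned int timeout_ms)
  {
      const struct timespec delay = { .tv_sec = 0, .tv_nsec = 25000000L };
      unsigned int waited_ms = 0;

      while (ring_is_busy(*inflight)) {
          if (waited_ms >= timeout_ms)
              return -1;               /* timed out: abort the freeze */
          nanosleep(&delay, NULL);     /* give the backend time to respond */
          waited_ms += 25;
          --*inflight;                 /* model the backend retiring one request */
      }
      return 0;
  }

  int main(void)
  {
      int inflight = 3;

      if (wait_for_ring_idle(&inflight, 25 * 32))
          fprintf(stderr, "ring still busy, aborting freeze\n");
      else
          printf("ring drained, safe to disconnect\n");
      return 0;
  }

The driver's actual per-ring budget follows the same idea, scaling a
25 ms step by RING_SIZE() so that a larger ring is given proportionally
more time to drain.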
Backport Note: Unlike the 4.9 kernel, blk-mq is the default for the 4.14
kernel, and the request-based mode code is not included in this frontend
driver.

Signed-off-by: Munehisa Kamata
Signed-off-by: Anchal Agarwal
Reviewed-by: Munehisa Kamata
Reviewed-by: Eduardo Valentin
CR: https://cr.amazon.com/r/8297625/
(cherry picked from commit e516934f6a0e1b1a641b3a62edd9345e6d222bc8 amazon-5.15.y/mainline)
Signed-off-by: Gerald Yang
Signed-off-by: Matthew Ruffell
---
 drivers/block/xen-blkfront.c | 163 +++++++++++++++++++++++++++++++++--
 1 file changed, 155 insertions(+), 8 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 390817cf1221..1efa83e11c85 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -49,6 +49,8 @@
 #include
 #include
 #include
+#include <linux/completion.h>
+#include <linux/delay.h>

 #include
 #include
@@ -82,6 +84,8 @@ enum blkif_state {
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
 	BLKIF_STATE_ERROR,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };

 struct grant {
@@ -222,6 +226,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };

 static unsigned int nr_minors;
@@ -263,6 +268,16 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
+
+static inline bool blkfront_ring_is_busy(struct blkif_front_ring *ring)
+{
+	if (RING_SIZE(ring) > RING_FREE_REQUESTS(ring) ||
+	    RING_HAS_UNCONSUMED_RESPONSES(ring))
+		return true;
+	else
+		return false;
+}

 #define for_each_rinfo(info, ptr, idx) \
 	for ((ptr) = (info)->rinfo, (idx) = 0; \
@@ -1154,6 +1169,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);

 	xlvbd_flush(info);

@@ -1178,6 +1194,8 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
+	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+		return;
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1302,9 +1320,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)

 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-	unsigned int i;
-	struct blkfront_ring_info *rinfo;
-
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1312,6 +1327,14 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);

+	__blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+	unsigned int i;
+	struct blkfront_ring_info *rinfo;
+
 	for_each_rinfo(info, rinfo, i)
 		blkif_free_ring(rinfo);

@@ -1523,8 +1546,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;

 	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
-		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
-		return IRQ_HANDLED;
+		if (info->connected != BLKIF_STATE_FREEZING) {
+			xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+			return IRQ_HANDLED;
+		}
 	}

 	spin_lock_irqsave(&rinfo->ring_lock, flags);
@@ -2026,6 +2051,7 @@ static int blkif_recover(struct blkfront_info *info)
 	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;

 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2047,6 +2073,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}

+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2345,6 +2374,7 @@ static void blkfront_connect(struct blkfront_info *info)
 		return;

 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2463,12 +2493,36 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;

 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				__blkif_free(info);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
+			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state anyway.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialised) {
+			dev_dbg(&dev->dev,
+				"ignore the backend's Closed state: %s",
+				dev->nodename);
 			break;
+		}
 		fallthrough;
 	case XenbusStateClosing:
-		blkfront_closing(info);
-		break;
+		if (info->connected == BLKIF_STATE_FREEZING)
+			xenbus_frontend_closed(dev);
+		else
+			blkfront_closing(info);
+		break;
 	}
 }
@@ -2500,6 +2554,96 @@ static int blkfront_is_ready(struct xenbus_device *dev)
 	return info->is_ready && info->xbdev;
 }

+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	struct blkif_front_ring *ring;
+	/* This would be a reasonable timeout as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_stop_hw_queues(info->rq);
+
+	for (i = 0; i < info->nr_rings; i++) {
+		rinfo = &info->rinfo[i];
+
+		gnttab_cancel_free_callback(&rinfo->callback);
+		flush_work(&rinfo->work);
+	}
+
+	for (i = 0; i < info->nr_rings; i++) {
+		spinlock_t *lock;
+		bool busy;
+		unsigned long req_timeout_ms = 25;
+		unsigned long ring_timeout;
+
+		rinfo = &info->rinfo[i];
+		ring = &rinfo->ring;
+
+		lock = &rinfo->ring_lock;
+
+		ring_timeout = jiffies +
+			msecs_to_jiffies(req_timeout_ms * RING_SIZE(ring));
+
+		do {
+			spin_lock_irq(lock);
+			busy = blkfront_ring_is_busy(ring);
+			spin_unlock_irq(lock);
+
+			if (busy)
+				msleep(req_timeout_ms);
+			else
+				break;
+		} while (time_is_after_jiffies(ring_timeout));
+
+		/* Timed out */
+		if (busy) {
+			xenbus_dev_error(dev, err, "the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			return -EBUSY;
+		}
+	}
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = negotiate_mq(info);
+	if (err)
+		goto out;
+
+	err = talk_to_blkback(dev, info);
+	if (err)
+		goto out;
+	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+
+out:
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2521,6 +2665,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };

 static void purge_persistent_grants(struct blkfront_info *info)
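
A note on how these callbacks are exercised: a hibernation cycle (for
example, `echo disk > /sys/power/state` on a Xen guest) should invoke
blkfront_freeze() before the image is written and blkfront_restore() in
the resumed kernel. The thaw path intentionally reuses blkfront_restore()
as well; in both cases the frontend only needs to re-negotiate queues and
reconnect, and blkif_recover() returns early in the frozen case rather
than requeueing requests. Dispatching the new freeze/thaw/restore fields
of struct xenbus_driver presumably relies on the matching xenbus PM
support added elsewhere in this series.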