From patchwork Wed Jun 2 06:53:41 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: Justin Sanders, Denis Efremov, Josef Bacik, Tim Waugh, Geoff Levand,
    Ilya Dryomov, Md. Haris Iqbal, Jack Wang, Michael S. Tsirkin,
    Jason Wang, Konrad Rzeszutek Wilk, Roger Pau Monné, Mike Snitzer,
    Maxim Levitsky, Alex Dubov, Miquel Raynal, Richard Weinberger,
    Vignesh Raghavendra, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, dm-devel@redhat.com,
    linux-block@vger.kernel.org, nbd@other.debian.org,
    linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
    virtualization@lists.linux-foundation.org,
    xen-devel@lists.xenproject.org, linux-mmc@vger.kernel.org,
    linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org
Subject: [PATCH 26/30] ubi: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed, 2 Jun 2021 09:53:41 +0300
Message-Id: <20210602065345.355274-27-hch@lst.de>
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig
---
 drivers/mtd/ubi/block.c | 68 ++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 39 deletions(-)

diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index e1a2ae21dfd3..e003b4b44ffa 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -394,53 +394,46 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	dev->vol_id = vi->vol_id;
 	dev->leb_size = vi->usable_leb_size;
 
+	dev->tag_set.ops = &ubiblock_mq_ops;
+	dev->tag_set.queue_depth = 64;
+	dev->tag_set.numa_node = NUMA_NO_NODE;
+	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	dev->tag_set.cmd_size = sizeof(struct ubiblock_pdu);
+	dev->tag_set.driver_data = dev;
+	dev->tag_set.nr_hw_queues = 1;
+
+	ret = blk_mq_alloc_tag_set(&dev->tag_set);
+	if (ret) {
+		dev_err(disk_to_dev(dev->gd), "blk_mq_alloc_tag_set failed");
+		goto out_free_dev;
+	}
+
+
 	/* Initialize the gendisk of this ubiblock device */
-	gd = alloc_disk(1);
-	if (!gd) {
-		pr_err("UBI: block: alloc_disk failed\n");
-		ret = -ENODEV;
-		goto out_free_dev;
+	gd = blk_mq_alloc_disk(&dev->tag_set, dev);
+	if (IS_ERR(gd)) {
+		ret = PTR_ERR(gd);
+		goto out_free_tags;
 	}
 
 	gd->fops = &ubiblock_ops;
 	gd->major = ubiblock_major;
+	gd->minors = 1;
 	gd->first_minor = idr_alloc(&ubiblock_minor_idr, dev, 0, 0, GFP_KERNEL);
 	if (gd->first_minor < 0) {
 		dev_err(disk_to_dev(gd),
			"block: dynamic minor allocation failed");
 		ret = -ENODEV;
-		goto out_put_disk;
+		goto out_cleanup_disk;
 	}
 	gd->private_data = dev;
 	sprintf(gd->disk_name, "ubiblock%d_%d", dev->ubi_num, dev->vol_id);
 	set_capacity(gd, disk_capacity);
 	dev->gd = gd;
 
-	dev->tag_set.ops = &ubiblock_mq_ops;
-	dev->tag_set.queue_depth = 64;
-	dev->tag_set.numa_node = NUMA_NO_NODE;
-	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	dev->tag_set.cmd_size = sizeof(struct ubiblock_pdu);
-	dev->tag_set.driver_data = dev;
-	dev->tag_set.nr_hw_queues = 1;
-
-	ret = blk_mq_alloc_tag_set(&dev->tag_set);
-	if (ret) {
-		dev_err(disk_to_dev(dev->gd), "blk_mq_alloc_tag_set failed");
-		goto out_remove_minor;
-	}
-
-	dev->rq = blk_mq_init_queue(&dev->tag_set);
-	if (IS_ERR(dev->rq)) {
-		dev_err(disk_to_dev(gd), "blk_mq_init_queue failed");
-		ret = PTR_ERR(dev->rq);
-		goto out_free_tags;
-	}
+	dev->rq = gd->queue;
 	blk_queue_max_segments(dev->rq, UBI_MAX_SG_COUNT);
-	dev->rq->queuedata = dev;
 
-	dev->gd->queue = dev->rq;
-
 	/*
 	 * Create one workqueue per volume (per registered block device).
 	 * Rembember workqueues are cheap, they're not threads.
@@ -448,7 +441,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	dev->wq = alloc_workqueue("%s", 0, 0, gd->disk_name);
 	if (!dev->wq) {
 		ret = -ENOMEM;
-		goto out_free_queue;
+		goto out_remove_minor;
 	}
 
 	list_add_tail(&dev->list, &ubiblock_devices);
@@ -460,14 +453,12 @@ int ubiblock_create(struct ubi_volume_info *vi)
 	mutex_unlock(&devices_mutex);
 	return 0;
 
-out_free_queue:
-	blk_cleanup_queue(dev->rq);
-out_free_tags:
-	blk_mq_free_tag_set(&dev->tag_set);
 out_remove_minor:
 	idr_remove(&ubiblock_minor_idr, gd->first_minor);
-out_put_disk:
-	put_disk(dev->gd);
+out_cleanup_disk:
+	blk_cleanup_disk(dev->gd);
+out_free_tags:
+	blk_mq_free_tag_set(&dev->tag_set);
 out_free_dev:
 	kfree(dev);
 out_unlock:
@@ -483,11 +474,10 @@ static void ubiblock_cleanup(struct ubiblock *dev)
 	/* Flush pending work */
 	destroy_workqueue(dev->wq);
 	/* Finally destroy the blk queue */
-	blk_cleanup_queue(dev->rq);
-	blk_mq_free_tag_set(&dev->tag_set);
 	dev_info(disk_to_dev(dev->gd), "released");
+	blk_cleanup_disk(dev->gd);
+	blk_mq_free_tag_set(&dev->tag_set);
 	idr_remove(&ubiblock_minor_idr, dev->gd->first_minor);
-	put_disk(dev->gd);
 }
 
 int ubiblock_remove(struct ubi_volume_info *vi)
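
For reference, the sketch below condenses the allocation/teardown pairing this
conversion relies on into a stand-alone skeleton. It is a minimal illustration,
not part of the patch: the mydrv structure, the mydrv_* helpers and
mydrv_mq_ops are hypothetical placeholders, while blk_mq_alloc_tag_set(),
blk_mq_alloc_disk(), blk_cleanup_disk() and blk_mq_free_tag_set() are the
block layer interfaces used in the diff above.

/*
 * Illustrative sketch only: the tag_set + gendisk life cycle that this
 * series converts drivers to.  "mydrv" and the mydrv_* names are made up.
 */
#include <linux/blk-mq.h>
#include <linux/blkdev.h>

struct mydrv {
	struct blk_mq_tag_set tag_set;
	struct gendisk *gd;
};

static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
{
	/* A real driver would issue bd->rq to hardware; this stub just completes it. */
	blk_mq_start_request(bd->rq);
	blk_mq_end_request(bd->rq, BLK_STS_OK);
	return BLK_STS_OK;
}

static const struct blk_mq_ops mydrv_mq_ops = {
	.queue_rq	= mydrv_queue_rq,
};

static int mydrv_create(struct mydrv *dev)
{
	struct gendisk *gd;
	int ret;

	/* Describe the hardware queues first ... */
	dev->tag_set.ops = &mydrv_mq_ops;
	dev->tag_set.nr_hw_queues = 1;
	dev->tag_set.queue_depth = 64;
	dev->tag_set.numa_node = NUMA_NO_NODE;
	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
	dev->tag_set.driver_data = dev;

	ret = blk_mq_alloc_tag_set(&dev->tag_set);
	if (ret)
		return ret;

	/*
	 * ... then allocate the gendisk and its request_queue in one step.
	 * gd->queue replaces the old separate blk_mq_init_queue() call, and
	 * the second argument ends up in gd->queue->queuedata.
	 */
	gd = blk_mq_alloc_disk(&dev->tag_set, dev);
	if (IS_ERR(gd)) {
		blk_mq_free_tag_set(&dev->tag_set);
		return PTR_ERR(gd);
	}
	dev->gd = gd;
	return 0;
}

static void mydrv_destroy(struct mydrv *dev)
{
	/* One call tears down both the request_queue and the gendisk ... */
	blk_cleanup_disk(dev->gd);
	/* ... and the tag set is freed only after the disk that used it. */
	blk_mq_free_tag_set(&dev->tag_set);
}

Because blk_mq_alloc_disk() hands back a gendisk whose request_queue is
already set up, with the driver data stored in queue->queuedata, the driver no
longer needs blk_mq_init_queue(), the explicit queuedata assignment, or a
separate blk_cleanup_queue()/put_disk() pair on the error and teardown paths,
which is what lets ubiblock_create() drop the out_free_queue and out_put_disk
labels in the hunks above.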