From patchwork Wed May 29 18:18:42 2019
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1107329
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [x/azure][PATCH 1/3] blk-mq: remove the request_list usage
Date: Wed, 29 May 2019 15:18:42 -0300
Message-Id: <20190529181844.24579-2-marcelo.cerri@canonical.com>
In-Reply-To: <20190529181844.24579-1-marcelo.cerri@canonical.com>
References: <20190529181844.24579-1-marcelo.cerri@canonical.com>
List-Id: Kernel team discussions

From: Jens Axboe

BugLink: http://bugs.launchpad.net/bugs/1830242

We don't do anything with it, that's just the legacy path.

Reviewed-by: Hannes Reinecke
Tested-by: Ming Lei
Reviewed-by: Omar Sandoval
Signed-off-by: Jens Axboe
(backported from commit 7ac257b862f2cfba3a909d1051499d390cffad6c)
[marcelo.cerri@canonical.com: fixed context]
Acked-by: Stefan Bader
Acked-by: Kleber Sacilotto de Souza
Signed-off-by: Marcelo Henrique Cerri
---
 block/blk-mq.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 67a80fd597e1..a8e530027a3e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -501,9 +501,6 @@ void blk_mq_free_request(struct request *rq)
 
 	wbt_done(q->rq_wb, &rq->issue_stat);
 
-	if (blk_rq_rl(rq))
-		blk_put_rl(blk_rq_rl(rq));
-
 	clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
 	clear_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
 	if (rq->tag != -1)
@@ -1637,8 +1634,6 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
 {
 	blk_init_request_from_bio(rq, bio);
 
-	blk_rq_set_rl(rq, blk_get_rl(rq->q, bio));
-
 	blk_account_io_start(rq, true);
 }
 
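For context: blk_get_rl()/blk_put_rl() take and drop a reference on the legacy per-queue request_list, which only the old single-queue path ever consumes; blk-mq requests come out of a preallocated tag set instead, so the reference was acquired and released without being used. Below is a minimal standalone sketch of that tag-backed lifecycle, not kernel code; names such as demo_request, tag_pool and demo_free_request are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

#define QUEUE_DEPTH 4

/* Illustrative stand-in for a blk-mq request backed by a tag. */
struct demo_request {
        int tag;        /* index into the preallocated pool */
        bool started;   /* loosely models REQ_ATOM_STARTED  */
};

/* Preallocated pool: blk-mq requests live here for the queue's lifetime. */
static struct demo_request tag_pool[QUEUE_DEPTH];
static bool tag_in_use[QUEUE_DEPTH];

static struct demo_request *demo_alloc_request(void)
{
        for (int i = 0; i < QUEUE_DEPTH; i++) {
                if (!tag_in_use[i]) {
                        tag_in_use[i] = true;
                        tag_pool[i].tag = i;
                        tag_pool[i].started = true;
                        return &tag_pool[i];
                }
        }
        return NULL; /* queue full */
}

/*
 * Freeing only clears per-request state and returns the tag to the
 * pool.  There is no per-cgroup request_list reference to drop, which
 * is the reason the blk_get_rl()/blk_put_rl() calls could be deleted.
 */
static void demo_free_request(struct demo_request *rq)
{
        rq->started = false;
        tag_in_use[rq->tag] = false;
}

int main(void)
{
        struct demo_request *rq = demo_alloc_request();

        if (!rq)
                return 1;
        printf("allocated tag %d\n", rq->tag);
        demo_free_request(rq);
        printf("tag %d returned to the pool\n", rq->tag);
        return 0;
}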
From patchwork Wed May 29 18:18:43 2019
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1107330
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [x/azure][PATCH 2/3] nvme-pci: remove cq check after submission
Date: Wed, 29 May 2019 15:18:43 -0300
Message-Id: <20190529181844.24579-3-marcelo.cerri@canonical.com>
In-Reply-To: <20190529181844.24579-1-marcelo.cerri@canonical.com>
References: <20190529181844.24579-1-marcelo.cerri@canonical.com>
List-Id: Kernel team discussions

From: Jens Axboe

BugLink: http://bugs.launchpad.net/bugs/1830242

We always check the completion queue after submitting, but in my testing
this isn't a win even on DRAM/xpoint devices. In some cases it's actually
worse. Kill it.

Signed-off-by: Jens Axboe
Signed-off-by: Keith Busch
Signed-off-by: Christoph Hellwig
(cherry picked from commit f9dde187fa921c12a8680089a77595b866e65455)
Signed-off-by: Marcelo Henrique Cerri
---
 drivers/nvme/host/pci.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 78972f85df3c..c93d6ab3f55f 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -69,7 +69,6 @@ MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
 
 struct nvme_dev;
 struct nvme_queue;
-static void nvme_process_cq(struct nvme_queue *nvmeq);
 static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
 
 /*
@@ -902,7 +901,6 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 		goto out_cleanup_iod;
 	}
 	__nvme_submit_cmd(nvmeq, &cmnd);
-	nvme_process_cq(nvmeq);
 	spin_unlock_irq(&nvmeq->q_lock);
 	return BLK_STS_OK;
 out_cleanup_iod:
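The effect of the change is easier to see outside the diff context: before the patch, nvme_queue_rq() opportunistically reaped completions right after queueing a command; afterwards, completions are consumed only from the interrupt/poll path. The following is a rough userspace model of that split under invented names (demo_queue, submit_command, reap_completions), not the driver code.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8

/* Toy submission/completion state standing in for an NVMe queue pair. */
struct demo_queue {
        uint32_t sq_tail;      /* next free submission slot           */
        uint32_t cq_head;      /* next completion entry to consume    */
        uint32_t cq_tail;      /* filled in by the simulated "device" */
        int cq[RING_SIZE];     /* completed command ids               */
};

/* Submission path after the patch: queue the command and return.
 * No opportunistic scan of the completion ring happens here. */
static void submit_command(struct demo_queue *q, int cmd_id)
{
        q->sq_tail = (q->sq_tail + 1) % RING_SIZE;
        printf("submitted command %d\n", cmd_id);
}

/* Completion path: the interrupt/poll handler is now the only
 * consumer of the completion queue. */
static void reap_completions(struct demo_queue *q)
{
        while (q->cq_head != q->cq_tail) {
                printf("completed command %d\n", q->cq[q->cq_head]);
                q->cq_head = (q->cq_head + 1) % RING_SIZE;
        }
}

int main(void)
{
        struct demo_queue q = { 0 };

        submit_command(&q, 1);
        submit_command(&q, 2);

        /* Pretend the device finished both commands. */
        q.cq[0] = 1;
        q.cq[1] = 2;
        q.cq_tail = 2;

        reap_completions(&q);  /* would run from nvme_irq() in the driver */
        return 0;
}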
From patchwork Wed May 29 18:18:44 2019
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1107331
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [x/azure][PATCH 3/3] nvme-pci: split the nvme queue lock into submission and completion locks
Date: Wed, 29 May 2019 15:18:44 -0300
Message-Id: <20190529181844.24579-4-marcelo.cerri@canonical.com>
In-Reply-To: <20190529181844.24579-1-marcelo.cerri@canonical.com>
References: <20190529181844.24579-1-marcelo.cerri@canonical.com>
List-Id: Kernel team discussions

From: Jens Axboe

BugLink: http://bugs.launchpad.net/bugs/1830242

This is now feasible. We protect the submission queue ring with
->sq_lock, and the completion side with ->cq_lock.

Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
Signed-off-by: Christoph Hellwig
(backported from commit 1ab0cd6966fc4a7e9dfbd7c6eda917ae9c977f42)
[marcelo.cerri@canonical.com: fixed context]
Signed-off-by: Marcelo Henrique Cerri
---
 drivers/nvme/host/pci.c | 46 +++++++++++++++++++++--------------------
 1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c93d6ab3f55f..8f1c8e57440e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -146,9 +146,10 @@ static inline struct nvme_dev *to_nvme_dev(struct nvme_ctrl *ctrl)
 struct nvme_queue {
 	struct device *q_dmadev;
 	struct nvme_dev *dev;
-	spinlock_t q_lock;
+	spinlock_t sq_lock;
 	struct nvme_command *sq_cmds;
 	struct nvme_command __iomem *sq_cmds_io;
+	spinlock_t cq_lock ____cacheline_aligned_in_smp;
 	volatile struct nvme_completion *cqes;
 	struct blk_mq_tags **tags;
 	dma_addr_t sq_dma_addr;
@@ -894,14 +895,14 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	blk_mq_start_request(req);
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->sq_lock);
 	if (unlikely(nvmeq->cq_vector < 0)) {
 		ret = BLK_STS_IOERR;
-		spin_unlock_irq(&nvmeq->q_lock);
+		spin_unlock_irq(&nvmeq->sq_lock);
 		goto out_cleanup_iod;
 	}
 	__nvme_submit_cmd(nvmeq, &cmnd);
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->sq_lock);
 	return BLK_STS_OK;
 out_cleanup_iod:
 	nvme_free_iod(dev, req);
@@ -1001,11 +1002,11 @@ static irqreturn_t nvme_irq(int irq, void *data)
 {
 	irqreturn_t result;
 	struct nvme_queue *nvmeq = data;
-	spin_lock(&nvmeq->q_lock);
+	spin_lock(&nvmeq->cq_lock);
 	nvme_process_cq(nvmeq);
 	result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
 	nvmeq->cqe_seen = 0;
-	spin_unlock(&nvmeq->q_lock);
+	spin_unlock(&nvmeq->cq_lock);
 	return result;
 }
 
@@ -1025,7 +1026,7 @@ static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag)
 	if (!nvme_cqe_valid(nvmeq, nvmeq->cq_head, nvmeq->cq_phase))
 		return 0;
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->cq_lock);
 	while (nvme_read_cqe(nvmeq, &cqe)) {
 		nvme_handle_cqe(nvmeq, &cqe);
 		consumed++;
@@ -1038,7 +1039,7 @@ static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag)
 
 	if (consumed)
 		nvme_ring_cq_doorbell(nvmeq);
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->cq_lock);
 
 	return found;
 }
@@ -1060,9 +1061,9 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl)
 	c.common.opcode = nvme_admin_async_event;
 	c.common.command_id = NVME_AQ_BLK_MQ_DEPTH;
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->sq_lock);
 	__nvme_submit_cmd(nvmeq, &c);
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->sq_lock);
 }
 
 static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)
@@ -1322,15 +1323,15 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 {
 	int vector;
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->cq_lock);
 	if (nvmeq->cq_vector == -1) {
-		spin_unlock_irq(&nvmeq->q_lock);
+		spin_unlock_irq(&nvmeq->cq_lock);
 		return 1;
 	}
 	vector = nvmeq->cq_vector;
 	nvmeq->dev->online_queues--;
 	nvmeq->cq_vector = -1;
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->cq_lock);
 
 	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
 		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
@@ -1354,9 +1355,9 @@ static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)
 	else
 		nvme_disable_ctrl(&dev->ctrl, dev->ctrl.cap);
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->cq_lock);
 	nvme_process_cq(nvmeq);
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->cq_lock);
 }
 
 static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues,
@@ -1417,7 +1418,8 @@ static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
 
 	nvmeq->q_dmadev = dev->dev;
 	nvmeq->dev = dev;
-	spin_lock_init(&nvmeq->q_lock);
+	spin_lock_init(&nvmeq->sq_lock);
+	spin_lock_init(&nvmeq->cq_lock);
 	nvmeq->cq_head = 0;
 	nvmeq->cq_phase = 1;
 	nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
@@ -1455,7 +1457,7 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 {
 	struct nvme_dev *dev = nvmeq->dev;
 
-	spin_lock_irq(&nvmeq->q_lock);
+	spin_lock_irq(&nvmeq->cq_lock);
 	nvmeq->sq_tail = 0;
 	nvmeq->cq_head = 0;
 	nvmeq->cq_phase = 1;
@@ -1463,7 +1465,7 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 	memset((void *)nvmeq->cqes, 0, CQ_SIZE(nvmeq->q_depth));
 	nvme_dbbuf_init(dev, nvmeq, qid);
 	dev->online_queues++;
-	spin_unlock_irq(&nvmeq->q_lock);
+	spin_unlock_irq(&nvmeq->cq_lock);
 }
 
 static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
@@ -1996,14 +1998,14 @@ static void nvme_del_cq_end(struct request *req, blk_status_t error)
 		unsigned long flags;
 
 		/*
-		 * We might be called with the AQ q_lock held
-		 * and the I/O queue q_lock should always
+		 * We might be called with the AQ cq_lock held
+		 * and the I/O queue cq_lock should always
 		 * nest inside the AQ one.
 		 */
-		spin_lock_irqsave_nested(&nvmeq->q_lock, flags,
+		spin_lock_irqsave_nested(&nvmeq->cq_lock, flags,
 					SINGLE_DEPTH_NESTING);
 		nvme_process_cq(nvmeq);
-		spin_unlock_irqrestore(&nvmeq->q_lock, flags);
+		spin_unlock_irqrestore(&nvmeq->cq_lock, flags);
 	}
 
 	nvme_del_queue_end(req, error);
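The point of the split is that submitters and the completion handler no longer serialize on a single spinlock: after this patch nvme_queue_rq() and nvme_pci_submit_async_event() take only sq_lock, while nvme_irq(), __nvme_poll() and the teardown paths take only cq_lock. Below is a minimal pthread sketch of the same pattern; demo_queue, submitter and completer are invented names, and the kernel of course uses spinlocks rather than mutexes.

#include <pthread.h>
#include <stdio.h>

/* Two locks instead of one: producers touch only sq_lock, the
 * completion side touches only cq_lock, so they don't contend. */
struct demo_queue {
        pthread_mutex_t sq_lock;
        pthread_mutex_t cq_lock;
        int sq_tail;
        int cq_head;
};

static void *submitter(void *arg)
{
        struct demo_queue *q = arg;

        for (int i = 0; i < 1000; i++) {
                pthread_mutex_lock(&q->sq_lock);   /* was the single q_lock */
                q->sq_tail++;
                pthread_mutex_unlock(&q->sq_lock);
        }
        return NULL;
}

static void *completer(void *arg)
{
        struct demo_queue *q = arg;

        for (int i = 0; i < 1000; i++) {
                pthread_mutex_lock(&q->cq_lock);   /* independent of submitters */
                q->cq_head++;
                pthread_mutex_unlock(&q->cq_lock);
        }
        return NULL;
}

int main(void)
{
        struct demo_queue q = { .sq_tail = 0, .cq_head = 0 };
        pthread_t s, c;

        pthread_mutex_init(&q.sq_lock, NULL);
        pthread_mutex_init(&q.cq_lock, NULL);

        pthread_create(&s, NULL, submitter, &q);
        pthread_create(&c, NULL, completer, &q);
        pthread_join(s, NULL);
        pthread_join(c, NULL);

        printf("sq_tail=%d cq_head=%d\n", q.sq_tail, q.cq_head);
        return 0;
}

Build with a pthread-enabled C compiler (e.g. cc demo.c -lpthread); the cq_lock's ____cacheline_aligned_in_smp annotation in the real patch additionally keeps the two locks on separate cache lines so the split pays off on SMP systems.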